If there's one thing that irritates me about the pg-led Lisp renaissance, it's the fascination with the Ur-Lisp. Rewriting that Ur-Lisp has become a pastime. Here are just some of the reasons I think that's dumb:
Lisp is a bad model if you want some kind of axioms of computing
We already have the lambda calculus for that, and you can build a real language on top of it (see Haskell). If you want to write some minimal thingy in C, consider writing a Haskell, not a Lisp.
That you can write a Lisp evaluator in Lisp was interesting in 1959 (and maybe 1960), and let's face it, that Ur-Lisp was one broken language.
Car, cdr, wtf?
Contents of the address part of register, contents of the decrement part of register, what the fuck? What are they doing in a language, even a toy, written in 2010? Please use data structures.
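(To be concrete, here's a rough sketch in Common Lisp of what "use data structures" means; the names are made up for illustration. Instead of poking at cons cells with car and cdr, give the fields names.)

;; Hypothetical example: a node with named fields instead of an
;; anonymous cons cell.
(defstruct node value next)

;; (node-value n) and (node-next n) say what they mean;
;; (car n) and (cdr n) don't.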
Where are the closures?
If you're looking for a challenge, as opposed to redoing something that's been done ad meam nauseam for half a century fachrissakes, try to find a minimal and explainable way to do closures. Bonus points for efficient flat environments. Languages without closures are so 1959.
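For the record, this is the kind of thing I mean by closures, sketched in Common Lisp (make-counter is a made-up name): a function that captures the lexical variables around it and keeps them alive.

;; The returned lambda closes over N and keeps it alive
;; across calls.
(defun make-counter (&optional (n 0))
  (lambda () (incf n)))

;; (defparameter *c* (make-counter))
;; (funcall *c*)  ; => 1
;; (funcall *c*)  ; => 2

The interesting implementation question is where N lives after make-counter returns; answering that with flat, efficiently indexable environments is the challenge.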
Where are the macros?
A Lisp without macros is meaningless.
Where's the control flow?
A Lisp without some form of continuations and a condition system is useless.
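As a reminder of the bar even a toy should aim for, here's a minimal Common Lisp sketch of the condition system; the condition and function names are invented for illustration.

;; Signal a typed condition and handle it further up the stack.
(define-condition too-big (error)
  ((value :initarg :value :reader too-big-value)))

(defun check (n)
  (if (> n 100)
      (error 'too-big :value n)
      n))

(handler-case (check 200)
  (too-big (c) (format t "too big: ~A~%" (too-big-value c))))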
Where are the semantics?
If you read the RnRS attentively, you'll see that Lisp has evolved a deep and subtle set of semantic concepts, none of which feature in the Ur-Lisp.
What's the purpose?
Surely you're not learning much by repeating JMC's flawed Ur-Lisp from 1959. If you want to learn something, implement a language with closures and macros. If you want to learn more, make it a compiler. If you want to blow your mind, implement hygienic macros or a higher-order module system or a static type checker. That's today's standard.
Look, Lisp is such a great language, but if anything we have to push it harder, not continuously go back to 1959.
Update: More fully-featured, modern Lisps, pulleezz
Saturday, August 21, 2010
Dalvik DEX
Good binary formats rock. Reading the ELF spec was quite eye-opening for me.
Dalvik's DEX format is another nice one.
Interestingly, Dalvik is the brainchild of Dan Bornstein, who was at Kaleida Labs (RIP), where he worked on ScriptX, one of the many avant-garde codes that fell victim to the WWW-induced ice age of GUI innovation.
Friday, August 20, 2010
The 2010 Linux Storage and Filesystem Summit
As usual, Jonathan Corbet does an admirable job of keeping us informed about happenings in kernel land in his summaries of The 2010 Linux Storage and Filesystem Summit, day 1 and day 2.
On testing:
It was suggested that the real test should be "put the new code on the Google cluster and see if the Internet breaks."
On Google:
There are two fundamental types of workload at Google. "Shared" workloads work like classic mainframe batch jobs, contending for resources while the system tries to isolate them from each other. "Dedicated workloads" are the ones which actually make money for Google - indexing, searching, and such - and are very sensitive to performance degradation. ...
The workloads exhibit a lot of big, sequential writes and smaller, random reads. Disk I/O latencies matter a lot for dedicated workloads; 15ms latencies can cause phone calls to the development group.
Thursday, August 19, 2010
Meta: Blogging is difficult
The Axis of Eval is my second blog, after the venerable plans within plans within plans. (Yeah, there were others.)
Both of these blogs have gone through the same stages:
- A few days of writing in solitude with a couple of friends.
- Chris Neukirchen mentions the blog on Anarchaia or Trivium.
- Al3x twitters about it.
- Attack of the unwashed HN masses (just kidding).
I'd like to keep up a certain entertaining and informative level, and in the best case, I'd also like to improve it. In the early days, aggressive diss-posts and funny flames flow freely, because the audience is small and trusted. And those posts are entertaining. But in a more public setting, I have a bit of a bad feeling writing them, because I feel they may harm the people I write about, when all I intend is to vent about some ideas I think are bad or ridiculous, or to tell a stupid joke.
So, what I want to say is that there are some difficulties to blogging that are seldom written about, and I'm still trying to figure out the boundaries of this strange new thingy, and where to draw the line between fact and fiction in blogging.
--Manuel
Monday, August 16, 2010
No Paranoia Rule
Good rules are few and far between in the programming scene.
The first time I heard of what I now call the No Paranoia Rule was in the following comment by Luke Gorrie on LtU:
"Oh my, what if Luke installed an exception handler that ROT13 encoded every string on the heap, then how would Jane debug her programs?"I hear that early on, people opposed subroutines for similar reasons. And of course, macros are frequently criticised for their potential of wreaking havoc.
But that's where the No Paranoia Rule comes into play. Stop being paranoid, and don't discount language features for their potentially devastating effect.
As Luke further states,
This [being paranoid] is not the way to illumination.
(Clemens A. Szyperski coined the term No Paranoia Rule.)
Saturday, August 7, 2010
You know you're reading LtU when...
I believe the preferable solution is typeclasses as record types, instances as records, scoped implicit parameters for propagating them around, associated types as record members, and a backtracking logic solver for instantiation. But that just solves the immediate problem; on top of that I'd like a dependent type system with staging to enable type dependency on values (e.g. arrays-with-length), and a higher-order logic solver so simple logic-programming idioms like let zip(as,bs)=f(x) in ... can be expressed and translated via instantiation into efficient runtime code. And a total (non-partial) function subset of the language for reasoning with proofs as programs. — Tim Sweeney
Monday, August 2, 2010
Three Principles of Lisp
I've been thinking about what it is that still sets Lisp apart from all other dynamic languages. I've come to three core principles that define what Lisp means for me.
Liberal use of syntactic abstraction
Lispers don't fret over whether to build a domain-specific language (DSL). When it makes sense, they build one; when it doesn't, they don't. Thanks to the trivial syntax, and the long experience with tools like pattern matching and quasiquotation, DSLs in Lisp are created rapidly.
In other languages, the creation of new language constructs or new languages requires countless hours of busy work. In Lisp, DSLs can be ready to use after a couple of lines entered at the REPL. In Lisp, creating DSLs is a no-brainer, and I think that's one of the cornerstones of Lisp.
Lisp isn't doctrinaire about creating DSLs - in fact, it is standard advice to only use macros when plain functions won't do anymore. But because creating DSLs in Lisp is so easy, they are created whenever they make sense. Which, as it turns out, is quite often. I would go so far as to say that the liberal use of syntactic abstraction makes Lisp a boilerplate-free language, but that's another topic.
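As a small illustration (with-timing is a name I just made up, not a standard macro), a throwaway "DSL" like this is a couple of minutes of work at the REPL, with quasiquotation doing the heavy lifting:

;; Time any body of code.
(defmacro with-timing (&body body)
  (let ((start (gensym "START")))
    `(let ((,start (get-internal-real-time)))
       (prog1 (progn ,@body)
         (format t "~&took ~A ticks~%"
                 (- (get-internal-real-time) ,start))))))

;; (with-timing (heavy-computation)) runs the body and reports
;; how long it took.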
There's no such thing as a constant mentality
Lisp thrives on interactivity. Newer languages like Haskell are encroaching on the natural habitat of Lisp (and surpass it in some areas), but none of them can match Lisp when it comes to no-holds-barred, nitty-gritty interactivity. Even defconstant isn't necessarily constant, and for interactive systems (Emacs!) that's exactly what you want. In an interactive system, you can't tolerate constancy - that would be like building with Lego blocks that are glued together.
I think a fundamental insight for understanding Lisp's superiority in the interactive domain is that Lisp is an asynchronous language, in a very peculiar sense: any Lisp expression may be entered at any time. In many cases, you could cut up a Lisp source file at top-level expression boundaries, rearrange them, and the resulting source file would still have exactly the same effects as the original file when loaded into a Lisp. (Just as one example, in Common Lisp it's possible to create subclasses of classes that don't exist yet.)
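A minimal sketch of that last point (the class names are invented for illustration): CLOS records the unknown superclass as a forward-referenced class, and you simply can't instantiate the subclass until the superclass has been defined.

;; CHILD can be defined while PARENT is still undefined.
(defclass child (parent) ())

;; Only once PARENT exists can CHILD be instantiated.
(defclass parent () ())
(make-instance 'child)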
This is simply a natural consequence of making the read-eval-print loop (REPL) the cornerstone of Lisp semantics. In C, the dominant paradigm is main(); in Lisp, it's (repl). And the top-level is tricky, some say hopeless. In Lisp, it is fundamental to always be able to define a function FOO that calls another, as-yet-undefined top-level function BAR - a fundamental idiom of interactivity:
(defun foo ()
  (bar))
Toolchain approach
Lispers don't need to create a new language to try out new language design ideas. Many times, new languages can be defined as macros. If macros don't suffice, and an interpreter or compiler is needed, it can still stand on the shoulders of Lisp. In the Lisp world, new languages are built by combining large, battle-tested building blocks, and polishing or updating them when needed, instead of starting over from toothpicks and double-sided duct tape. A large Lisp like Common Lisp is like a toolchain of decades-old tools that have proven their worth, and have been codified in standards, folklore, and implementations.
Now of course, that's also true of other languages. But in other languages, starting a new language is a from-scratch affair. And along the long road of parsing, analyzing, and interpreting or compiling language constructs, the creator of the new language invariably introduces a lot of things that differ from other languages. So every new non-Lisp language ends up subtly different from the rest, in areas such as control flow or scoping, and those differences take decades to fix.
In Lisp, new tools are tried out as separate functions, macros, DSLs, subsystems, or, in the extreme case, code walkers or complete new implementations. The rest of the language stays the same. Even a new Lisp implementation will often share a lot of the design space with other Lisps. That's why Lisp had proper lexical scoping for decades while it has only recently become a fixture in new languages, and that's also why Scheme has hygienic macros and phase separation today, while other languages will get them decades from now - because in Lisp, all of these new constructs can be developed within the Lisp fabric, without rebooting the process every time. Which makes for a nice metaphor:
Others reboot, Lisp keeps running.