I now have a
bit of experience with
Kernel-style fexprs (short for fun expressions,
not frankenprogramming expressions), and I must say I am amazed.
While it's true that one can express fexprs easily in plain Scheme or the lambda calculus through a little bit of encoding - an objection immediately raised by a friend - the converse is also true, as
John Shutt explains. In a language with vau (the
WOW! combinator), lambda is simply expressed as a fexpr that evaluates the operand tree, yielding a list of arguments, before entering the body.
So it seems that neither vau nor lambda is preferable from a theoretical standpoint.
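Concretely, here's lambda derived from vau, essentially as in Shutt's Kernel report (R-1RK); wrap turns a combiner into an applicative that evaluates its arguments before the combiner sees them:

    ;; $lambda as a fexpr: build an operative from the formals and
    ;; body (#ignore means it doesn't care about the caller's
    ;; environment), then wrap it so its operands are evaluated,
    ;; yielding a list of arguments, before the body is entered.
    ($define! $lambda
      ($vau (formals . body) env
        (wrap (eval (list* $vau formals #ignore body) env))))

And notice: no preprocessing phase anywhere. $lambda here is an ordinary value, defined at runtime like everything else.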
But when it comes to practical matters, IMO vau clearly wins over lambda, making lambda merely the penultimate. Why? Take the (usually smug) Haskell weenies, er, programmers. They like to wax lyrical about how laziness obviates the need for macros, but then they spend all day writing papers about parallelizing the Fibonacci Butt Sort using LaTeX, which is totally based on macros. (You can read more about the Fibonacci Butt Sort here on 4chan /prog/, the second best PLT site.) Not to speak of the use of cpp in the Haskell sources. CPP, FFS!!
So while macros are shunned for theoretical reasons of purity, they pop up everywhere somebody writes a real program. That's simply a fact, and if you disagree, please go back to writing your LaTeX paper about your cpp-processed Haskell program.
Now that we've established that macros are a necessity, we obviously want the best semantics for our macros. Conventional macros are preprocessors: they take in expressions and spit out expressions. Conceptually, and often actually, macros run in a separate phase from runtime, at compile-time. (See my cleverly titled post
What's phase separation and when do you want it? for more information on phases.) This has two drawbacks. First, because macros are compile-time entities, they don't exist as first-class values at runtime. But wait, that's the smaller drawback. The second, REAL drawback is that because macros run outside the real program, processing its representation
as an expression, they have no friggin' clue whatsoever about the structure of the program. This is the root of hygiene problems: what a hygienic macro system does is
fake lexical scope at compile-time (I'll show a concrete fexpr alternative below). This fakery is necessary because at compile-time, the wonderful machinery of lexical scope (which took us from 1959 to 1975 to discover, and which is still rediscovered by
peasants, er, ignorami every day) doesn't exist. And it comes at a huge cost in complexity. One example is that it necessitates a
partial expansion step (
R6RS,
Racket) to find all definitions in a piece of code. This greatly complicates a language. A second example is that if you want
clean compilability, you end up with things like
negative metalevels. All signs of Ptolemy's epicycles, as
Tom Lord will tell you.
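As promised, here's the hygiene point in code: the classic offender, a short-circuiting or, as a Kernel-style fexpr (a sketch; $or2? is my made-up name for a two-operand variant, since Kernel's library $or? is variadic):

    ;; As a naive macro, (or a b) expands to something like
    ;;   (let ((tmp a)) (if tmp tmp b))
    ;; and silently captures any tmp the caller happens to use.
    ;; As a fexpr there is nothing to capture: tmp is bound in the
    ;; combiner's own lexical scope, and the operands are evaluated
    ;; in the caller's environment env.
    ($define! $or2?
      ($vau (a b) env
        ($let ((tmp (eval a env)))
          ($if tmp tmp (eval b env)))))

    ;; No capture, even though the caller uses the name tmp:
    ($let ((tmp #f))
      ($or2? tmp (not? tmp)))   ; evaluates to #t

Real lexical scope at runtime, no compile-time fakery. And $or2? is itself a first-class value you can pass around, which takes care of the other drawback too.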
So there you have it. Hm, I did go off on a tangent with those epicycles. Where was I? Yes, semantics: we want macros, and we want good semantics for them, and that's exactly what fexprs give us. It's getting late, and I have some Stoli to drink. More next time, Nostrovia!