Monday, March 19, 2012

Why fexprs, part N, or lambda only the penultimate

I now have a bit of experience with Kernel-style fexprs (short for fun expressions, not frankenprogramming expressions), and I must say I am amazed.

While it's true that one can obviously express fexprs easily in plain Scheme/LC through a little bit of encoding - an objection immediately raised by a friend - the converse is also true, as John Shutt explains. In a language with vau (the WOW! combinator), lambda is simply expressed as a fexpr that evaluates the operand tree, yielding a list of arguments, before entering the body. So it seems that neither vau nor lambda is preferable from a theoretical standpoint.
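
To make that concrete, here is roughly how Kernel pulls it off. This sketch follows the definition in Shutt's Kernel report: $vau receives the operand tree unevaluated, plus the caller's dynamic environment; wrap then turns the resulting operative into an applicative, i.e. a combiner that evaluates its operands to arguments first.

    ;; lambda built from vau, essentially as in the Kernel report.
    ;; The constructed combiner ignores its dynamic environment (#ignore),
    ;; which is precisely what makes lambda lexically scoped.
    ($define! $lambda
      ($vau (formals . body) env
        (wrap (eval (list* $vau formals #ignore body)
                    env))))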

But when it comes to practical matters, IMO vau clearly wins over lambda, making lambda merely the penultimate. Why? Take the (usually smug) Haskell programmers. They like to wax lyrical about how laziness obviates the need for macros, but then they spend all day writing papers about parallelizing the Fibonacci Butt Sort using LaTeX, which is totally based on macros. (You can read more about the Fibonacci Butt Sort on 4chan's /prog/, the second best PLT site.) Not to mention the use of cpp in the Haskell sources. CPP, FFS!!

So while macros are shunned for theoretical reasons of pureness, they pop up everywhere somebody writes a real program. That's simply a fact, and if you disagree please go back to writing your LaTeX paper about your cpp-processed Haskell program.

Now that we've established that macros are a necessity, we obviously want the best semantics for our macros. Common macros are preprocessors: they take in expressions and spit out expressions. Conceptually, and often actually, macros run in a phase separate from runtime, at compile-time. (See my cleverly titled post What's phase separation and when do you want it? for more information on phases.) This has two drawbacks.

First, because macros are compile-time entities, they don't exist as first-class entities at runtime. But wait, that's the smaller drawback. The second, REAL drawback is that because macros run outside the real program, processing its representation as an expression, they have no friggin' clue whatsoever about the structure of the program. This is the root of hygiene problems: what a hygienic macro system does is fake lexical scope at compile-time. This faking is necessary because at compile-time, the wonderful machinery of lexical scope (which took us from 1959 to 1975 to discover, and which ignorami still rediscover every day) doesn't exist yet. And the faking comes at a huge cost in complexity. One example: it necessitates a partial expansion step (R6RS, Racket) just to find all the definitions in a piece of code, which greatly complicates the language. Another: if you want clean compilability, you end up with things like negative metalevels. All signs of Ptolemy's epicycles, as Tom Lord will tell you.
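
To see what "faking lexical scope" means in practice, here is the classic capture bug, sketched with a non-hygienic define-macro (assuming a Scheme that provides one; swap! is just an illustrative name):

    ;; A naive swap! macro: it pastes its operands into a template.
    (define-macro (swap! a b)
      `(let ((tmp ,a))
         (set! ,a ,b)
         (set! ,b tmp)))

    ;; Fine for (swap! x y). But if the caller's own variable happens to
    ;; be named tmp, the macro's tmp shadows it and nothing gets swapped:
    (let ((tmp 1) (x 2))
      (swap! tmp x))   ; afterwards tmp is still 1 and x is still 2

A hygienic expander repairs this by systematically renaming identifiers, which is exactly the compile-time simulation of lexical scope I'm complaining about. A fexpr needs no such simulation, because it runs at runtime, where lexical scope actually exists: its locals live in its own environment, and the caller's environment is an ordinary value passed in. For instance, a short-circuiting or, sketched in Kernel ($my-or is again just an illustrative name):

    ;; x is bound in the fexpr's private environment; the operands are
    ;; evaluated in the caller's env, so capture is impossible by construction.
    ($define! $my-or
      ($vau (a b) env
        ($let ((x (eval a env)))
          ($if x x (eval b env)))))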

Hm, I went off on a tangent there. Where was I? Yes, semantics. We want macros, and we want good semantics for them. And that's what fexprs give us. It's getting late, and I have some Stoli to drink. More next time, Nostrovia!

7 comments:

  1. We Haskellers do have Template Haskell... it's a bear to use, but we're not necessarily philosophically against macros. Yesod uses it pervasively to ensure type safety, for instance.

  2. I've had in mind for many years to redesign TeX from the ground up based on fexprs. (In my copious free time, of course, which I seem to have mislaid somewhere.)

  3. On consideration, I'd amend somewhat your statement that neither is preferable from a theoretical standpoint. Neither is preferable from a computational (i.e., computation theory) standpoint. For equational theory, I'd say fexprs win hands down, because the other approach entails quotation, and quotation is at the heart of the "theory of fexprs is trivial" phenomenon. Basing the equational theory on fexprs can avoid trivialization. Granted, you can use a fexpr-based nontrivial equational theory to describe a macro-based language; but that's gratuitous extra work, like fake-lexical-scope-at-compile-time.

  4. I don't know these fancy expressions (fexprs), only sexprs.

    Are the criticisms formulated at http://www.nhplace.com/kent/Papers/Special-Forms.html sensible?

    Read the question as: should I really take time to learn them?

  5. "we obviously want the best semantics for our macros"

    NOT so obvious is where "best semantics for our macros" should weigh in among our other language design desiderata - modularity, extensibility, performance and real-time, safety, security, and so on.

  6. I wonder, I wonder...

    John's vau combinator was achieved by decoupling operands and evaluation.

    In my own designs, I decouple staged computation from the user-defined syntax system.

    Doing so seems to offer similar benefits to abstractive and expressive power without sacrificing any security, safety, or modularity properties. But my defaults are different - with fexprs, developers must be explicit to hide structure; with my approach, developers must be explicit to expose structure.

    "what a hygienic macro system does is fake lexical scope at compile-time."

    I note that hygienic macros are a lot easier if you have a compositional programming model, i.e. where parameters only exist as syntactic sugar for staging.

    "We want macros, and we want good semantics for them."

    Don't confuse a means with an end! What you want is control over phase, expressive power, and flexible abstraction.

