Take this:
But Lisp is never content with a form alone: it always wants to evaluate the form, determine its meaning.
For instance, when you input a symbol form, Lisp will not just return the symbol object - it will return the value of the variable binding referenced by the symbol, which is the symbol's meaning per Lisp's rules of form evaluation.
No wonder then that one of the central features of Lisp is quotation - telling Lisp not to determine the meaning of a form presented to it, but rather give us the form itself.
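A minimal sketch in Scheme (quote suppresses evaluation; 'x is reader shorthand for it):

    (define x 42)
    x           ; evaluating the symbol form yields the binding's value: 42
    (quote x)   ; the form itself: the symbol x
    'x          ; reader shorthand for (quote x)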
So what's a form? A form is "any object meant to be evaluated". In other words, we may treat any object as a form: as something to evaluate, something whose meaning is to be determined.
Lisp's evaluation rules do not prescribe in any way that forms are bound to text-based representations - a form is any first-class object meant to be evaluated. (And in fact Racket, a leading Lisp implementation, already supports images as forms.)
So Lisp is chiefly concerned with determining the meaning of forms read from the user.
Macros then allow us to extend the variety of forms recognized by a Lisp, beyond what mere function calls can express.
Whereas in a function call the meaning of every operand is established automatically before the function is actually called, macros have total freedom in evaluating - determining the meaning of - their operands, or not evaluating them at all (as a short-circuiting AND operator may do, for example).
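As a sketch of that freedom, here is a two-operand short-circuiting AND written as a standard Scheme syntax-rules macro (and2 is a hypothetical name; Scheme's built-in and generalizes this):

    (define-syntax and2
      (syntax-rules ()
        ((_ a b) (if a b #f))))

    ;; The second operand is never evaluated, so no error is signaled:
    (and2 #f (error "not reached"))   ; => #f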
But Common Lisp and Scheme macros, by their preprocessing nature (a macro works at a time conceptually different from runtime, translating source expressions to other expressions, to be evaluated at a later time), cannot be used in a first-class fashion. For example, it is not possible to apply the AND macro to a list of dynamically computed values (without resorting to EVAL), as we could with a function.
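The contrast is easy to see in Scheme (a sketch):

    (apply + (list 1 2 3))     ; => 6: + is a first-class function
    (apply and (list #t #f))   ; error: and is syntax, not a first-class value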
In a sense, macros of the preprocessing kind restrict our ability of determining the meaning of forms: With preprocessing macros, we may only determine the meaning of forms available at preprocessing time.
This is where Kernel's fexprs come in. Fexprs drop the requirement that macros be expanded in a separate preprocessing step. Thus we may apply the AND fexpr to a list of dynamically computed operands, for example.
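A sketch in Kernel (using $vau, explained below; $and here is a hypothetical two-operand version):

    ($define! $and
      ($vau (a b) env
        ($if (eval a env) (eval b env) #f)))

    ;; $and exists at runtime as a first-class value, so it can be
    ;; applied to dynamically computed operands (wrap makes it applicative):
    (apply (wrap $and) (list #t #f))   ; => #f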
But fexprs go beyond being merely a more convenient replacement for macros. As John Shutt shows, a new lexically-scoped formulation of fexprs - as presented in Kernel - is even more fundamental to Lisp than lambda. In Kernel, lambda is defined in terms of vau, the ultimate abstraction:
($define! $lambda
  ($vau (formals . body) env
    (wrap (eval (list* $vau formals #ignore body)
                env))))
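The derived $lambda behaves like the familiar one (a quick sketch; square is a hypothetical name):

    ($define! square ($lambda (x) (* x x)))
    (square 7)   ; => 49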
Through the power of fexprs, Kernel needs only three built-in operatives ($define!, $vau, and $if) as well as some built-in functions. This is vastly simpler than any definition of Scheme! Scheme built-ins such as lambda, set! (!), define-syntax, and others fall directly out of fexprs, for free, radically simplifying the implementation. But I'm rambling. In essence, fexprs liberate Lisp's ability to determine the meaning of forms.
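As a taste of how such built-ins fall out, here is a sketch of a $when operative, defined in plain user code in terms of Kernel's $vau, $if, and $sequence ($when is a hypothetical addition; #inert is Kernel's "no useful value"):

    ($define! $when
      ($vau (test . body) env
        ($if (eval test env)
             (eval (cons $sequence body) env)
             #inert)))

Note that (cons $sequence body) splices the $sequence combiner itself - a first-class value, not a symbol - into the constructed form, so no name capture is possible.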
It's getting toward midnight (on a Sunday!), so I can't do fexprs or my excitement about them justice. But I hope this post has made you think about forms and their meanings, and that you'll check out Kernel and share my excitement about the brave new Lisp world of fexprs.
"The Ultimate Abstraction!" This was useful, still trying to get a grip on what fexprs enable. I can kinda see it but then it slips away....
Hi Gavin - same here. They're elusive.
I read parts of Shutt's Kernel publications again, and it seems the central issue in all of this is, in a sense, hygiene - in the sense of how different parts of the program interact, and how that interaction can be controlled and understood. That is key to what makes a language good or not: if those interrelations are harder to understand, the language will not be as good to use, and it will also be more difficult to write good compilers and interpreters for it.
So the real question for Kernel is: is its fexpr concept good to use? We just have to see...
Regarding fexpr use - yes, we'll have to see. I'm currently implementing a Kernel-like language to find out.
Regarding compilers - fexprs and 1st-class environments definitely mess up AOT compilation completely. (You can't even do basic tricks like precomputing de Bruijn indices for lexical variable lookups ahead of time.)
But fexprs and 1st-class environments really are what you want from a cleanliness-of-design perspective. In Kernel, letrec may be faithfully defined as a macro - something that's not possible in Scheme. Furthermore, 1st-class environments give a clean account of the top-level environment - Scheme has to use quite complicated rules to make it all work (see here).
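For reference, the Kernel report derives $letrec in user code roughly like this (a sketch along the lines of the report's derivation; see the report for the authoritative version):

    ($define! $letrec
      ($vau (bindings . body) env
        (eval (list* $let ()
                     (list $define!
                           (map car bindings)
                           (list* list (map cadr bindings)))
                     body)
              env)))

It works because $define! and environments are first-class: the bindings are created inside the fresh environment made by ($let () ...), giving letrec's mutual-recursion semantics directly.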