Sunday, July 31, 2011

The meaning of forms in Lisp and Kernel

There are so many nice things to be learned from Lisp - and Kernel.

Take this:

A Lisp REPL reads forms input by the user.

But Lisp is never content with a form alone: it always wants to evaluate the form, determine its meaning.

For instance, when you input a symbol form, Lisp will not just return the symbol object - it will return the value of the variable binding referenced by the symbol - which is the symbol's meaning, per Lisp's rules of form evaluation.

No wonder then that one of the central features of Lisp is quotation - telling Lisp not to determine the meaning of a form presented to it, but rather to give us the form itself.
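
A quick Scheme illustration (the name x and the value 42 are made up for this example):

(define x 42)

x           ; evaluated as a form: yields 42, the value bound to x
(quote x)   ; quotation: yields the symbol x itself, unevaluated
'x          ; the same, using the usual shorthand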

So what's a form? A form is "any object meant to be evaluated"; in other words, we may treat any object as a form - as something to evaluate, as something to determine the meaning of.

Lisp's evaluation rules do not prescribe in any way that forms are bound to text-based representations - a form is any first-class object meant to be evaluated. (And in fact, Racket, a leading Lisp implementation, already supports images as forms.)

So Lisp is chiefly concerned with determining the meaning of forms read from the user.

Macros then allow us to extend the variety of forms recognized by a Lisp beyond what mere function calls can express.

Whereas in a function call the meaning of every operand is established automatically before the function is actually called, a macro has total freedom in evaluating - determining the meaning of - its operands, or not evaluating them at all (as a short-circuiting AND operator may do, for example).
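
Here is a sketch of such a short-circuiting AND as a Scheme syntax-rules macro - essentially the standard AND of the Scheme reports, renamed my-and to keep it distinct:

(define-syntax my-and
  (syntax-rules ()
    ((_) #t)                          ; no operands: true
    ((_ e) e)                         ; a single operand: its value
    ((_ e1 e2 ...)                    ; evaluate e1; only if it is true,
     (if e1 (my-and e2 ...) #f))))    ; go on to the rest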

But Common Lisp and Scheme macros, by their preprocessing nature (a macro call works at a time conceptually different from runtime, translating source expressions to other expressions, to be evaluated at a later time), cannot be used in a first-class fashion. For example, it is not possible to apply the AND macro to a list of dynamically computed values (without resorting to EVAL), as we could with a function.
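
Concretely, sticking with the hypothetical my-and from above (R5RS/R7RS-style eval shown; details vary by implementation):

(define vals (list #t #t #f))      ; a list computed at runtime

(apply my-and vals)                ; rejected: my-and names syntax, not a procedure

(eval (cons 'my-and vals)          ; the escape hatch: build a form at runtime
      (interaction-environment))   ; and hand it to EVAL => #f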

In a sense, macros of the preprocessing kind restrict our ability to determine the meaning of forms: with preprocessing macros, we may only determine the meaning of forms available at preprocessing time.

This is where Kernel's fexprs come in. Fexprs drop the requirement that macros be expandable in a separate preprocessing step. Thus, we may apply the AND fexpr to a list of dynamically computed operands, for example.
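
A sketch of such an AND operative, in the spirit of the Kernel report's library derivations (the name $and? follows Kernel's naming convention for operatives; don't take the exact definition as gospel):

($define! $and?
  ($vau operands env
    ($cond ((null? operands)       #t)
           ((null? (cdr operands)) (eval (car operands) env))
           ((eval (car operands) env)
            (apply (wrap $and?) (cdr operands) env))
           (#t                     #f))))

(apply (wrap $and?) (list #t #t #f))   ; operands computed at runtime - no EVAL gymnastics

This works because the computed values here are booleans, which are self-evaluating, so re-evaluating them as forms inside $and? is harmless.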

But fexprs go beyond being merely a more convenient replacement for macros. As John Shutt shows, a new lexically-scoped formulation of fexprs - as presented in Kernel - is even more fundamental to Lisp than lambda. In Kernel, lambda is defined in terms of vau, the ultimate abstraction:
($define! $lambda
  ($vau (formals . body) env
    (wrap (eval (list* $vau formals #ignore body)
                env))))
Through the power of fexprs, Kernel needs only three built-in operators (define, vau, and if) as well as some built-in functions. This is vastly simpler than any definition of Scheme! Scheme built-ins such as lambda, set! (!), define-syntax and others fall directly out of fexprs, for free, radically simplifying the implementation. But I'm rambling.
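
To give just one small taste of how that falls out: quotation itself, a special form in Scheme, is a one-line library definition in Kernel (a standard example from the fexpr literature):

($define! $quote
  ($vau (x) #ignore x))   ; ignore the dynamic environment, return the operand unevaluated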

In essence, fexprs liberate Lisp's ability to determine the meaning of forms.

It's almost midnight (on a Sunday!), so I can't do fexprs or my excitement about them justice. But I hope this post has made you think about forms and their meanings, and that you'll check out Kernel and share the excitement about the brave new Lisp world of fexprs.

The War Between Developers, Designers, & Project Managers

Some history

Reading the interesting Portable and high-level access to the stack with Continuation Marks, I found the following quote about procedure calls (my emphasis):
This [not properly tail-calling] behavior of the SECD machine, and more broadly of many compilers and evaluators, is principally a consequence of the early evolution of computers and computer languages; procedure calls, and particularly recursive procedure calls, were added to many computer languages long after constructs like branches, jumps, and assignment. They were thought to be expensive, inefficient, and esoteric.

Guy Steele outlined this history and disrupted the canard of expensive procedure calls, showing how inefficient procedure call mechanisms could be replaced with simple “JUMP” instructions, making the space complexity of recursive procedures equivalent to that of any other looping construct.
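
(In Scheme terms: with proper tail calls, the self-call below compiles to a jump, so the recursion runs in the same constant space as an assignment-based loop - countdown is just a made-up example.)

(define (countdown n)
  (if (zero? n)
      'done
      (countdown (- n 1))))   ; tail call: effectively a JUMP back to the top

(countdown 1000000)           ; => done, without growing the stack
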
Then it gets highly ironic, with a quote by Alan Kay:
Though OOP came from many motivations, two were central. ... [T]he small scale one was to find a more flexible version of assignment, and then to try to eliminate it altogether.

Saturday, July 30, 2011

Some nice paperz on delimited continuations and first-class macros

My current obsessions are delimited continuations (see my intro post) and first-class macros (fexprs). Here are some nice papers related to these topics:

Three Implementation Models for Scheme is the dissertation of Kent Dybvig (of Chez Scheme fame). It contains a very nice abstract machine for (learning about) implementing a Scheme with first-class continuations. I'm currently trying to grok and implement this model, and then extend it to delimited continuations.

A Monadic Framework for Delimited Continuations contains probably the most succinct description of delimited continuations (on page 3), along with a typically hair-raising example of their use.
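
If you haven't met delimited continuations yet, here's about the smallest example I can come up with, using reset/shift from Racket's racket/control (my example, not the paper's):

(require racket/control)

(+ 1 (reset (+ 10 (shift k (k (k 100))))))
; shift captures the continuation delimited by reset, k = "add 10";
; (k (k 100)) is 120, the reset yields 120, and the whole expression is 121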

Adding Delimited and Composable Control to a Production Programming Environment. One of the authors is Matthew Flatt, so you know what to expect - a tour de force. The paper is about how Racket implements delimited control and integrates it with all the other features of Racket (dynamic-wind, exceptions, ...). Apropos, compared to Racket, Common Lisp is a lightweight language.

Subcontinuations. This is an early paper on delimited continuations. It also describes control filters, a low level facility on top of which dynamic-wind can be implemented. Control filters are also mentioned - in passing - as a nice tool in the Racket paper (above).

Fexprs as the basis of Lisp function application or $vau: the ultimate abstraction and the Revised-1 Report on the Kernel Programming Language. These are probably the two papers I would take to a desert island at the moment. I'm only a long-time apprentice of programming languages, but I know genius when I see it.

If you know of any related nice papers, let me know!


On Twitter, Paul Snively mentioned Oleg's paper Delimited Control in OCaml, Abstractly and Concretely, but probably only because he's mentioned in the Acknowledgements - kidding! It's a great paper, and I'm studying it too, but unfortunately it's not really applicable to implementing delimited control in JavaScript, which is what I'm trying to do.

Thursday, July 21, 2011

On the absence of EVAL in ClojureScript

[I got bored with the original text of this post. In short: the authors of ClojureScript have taken the pragmatic approach of reusing Clojure-on-the-JVM to generate JavaScript. This means there can be no EVAL in ClojureScript. I think that's sad because Lisp is one of the finest application extension languages, and without EVAL you can't build a truly extensible app. Apparently, the ClojureScript authors view the browser side as something of an adjunct to server apps. In my view, all the action of the future will be in the browser, not on the server.]

Thursday, July 14, 2011

Frankenprogramming

David Barbour throws down the gauntlet to John Shutt in an already interesting and entertaining ("about as coherent as a 90 day weather forecast") LtU thread:
John Shutt would like to break the chains of semantics, e.g. using fexprs to reach under-the-hood to wrangle and mutate the vital organs of a previously meaningful subprogram. Such a technique is workable, but I believe it leads to monolithic, tightly coupled applications that are not harmonious with nature. I have just now found a portmanteau that properly conveys my opinion of the subject: frankenprogramming.
I'm looking forward to John's reply. In the meantime, I have to say that I like to view this in a more relaxed way. As Ehud Lamm said:
Strict abstraction boundaries are too limiting in practice. The good news is that one man's abstraction breaking is another's language feature.
Previously, regarding fexprs: John Shutt's blog

Sunday, July 10, 2011

Atwood's Law

Any application that can be written in JavaScript, will eventually be written in JavaScript.

Friday, July 8, 2011

it is nice to have the power of all that useful but obsolete software out there

Subject: Re: Switching tasks and context
∂08-Jul-81 0232 Nowicki at PARC-MAXC Re: Switching tasks and context
Date: 6 Jul 1981 10:32 PDT
From: Nowicki at PARC-MAXC
In-reply-to: JWALKER's message of Wednesday, 1 July 1981 11:52-EDT
To: JWALKER at BBNA
cc: WorkS at AI

[...]
The important thing is that such a tool WORKS, and I find incredibly useful. The virtual terminal uses ANSI standard escape sequences, so we can use all the tools that have been around for a long time (like the SAIL Display Service, Emacs under TOPS-20 or Unix, etc.) with NO modifications. We are planning to use this to develop a more integrated system, but it is nice to have the power of all that useful but obsolete software out there.

(via Chris, who's scouring Unix history for us)