
Tuesday, August 7, 2012

Scheme's missing ingredient: first-class type tags

I've always felt that Scheme does data and types wrong, but I couldn't really say what bothered me until yesterday. After all, R7RS now has define-record-type for creating disjoint types. But that ain't enuff!

In my new language, Wat, I've found a satisfying solution, inspired by Kernel's encapsulation types, but going beyond them slightly. There are two procedures:
  • (make-type) returns a list of three elements: 1) a first-class type, 2) a tagger function for tagging a value with that type, creating a tagged object, and 3) an untagger function for extracting the value from tagged objects of that type.
  • (type-of obj) returns the first-class type of an object. The crucial point is that type-of not only works for tagged objects of user-defined types created by make-type, but also for all built-in types. E.g. (type-of 12) will return the number first-class type.
This system has the benefits of Kernel's encapsulated types: only someone with access to the tagger function (capability) may create new instances of a type. Only someone with access to the untagger function may access the contents of tagged objects. So object contents are potentially fully encapsulated.

But at the same time, the fact that every object, including built-in ones, has a first-class type makes it possible to efficiently program generically. E.g. one may create Smalltalk-like dynamic dispatch by using the first-class types of objects as indexes into a virtual lookup table of a generic function. This is not possible in either Scheme or Kernel. In both languages, programming generically requires one to use a cascade of type predicates (e.g. number?).

Example in Wat:
; destructure and bind the three elements
; returned by make-type
(def (person-type person-tagger person-untagger) (make-type))

(define (make-person name email)
  (person-tagger (list name email)))

; untagger also performs a type check that
; person is in fact of type person
(define (get-name person)
  (car (person-untagger person)))

(define (get-email person)
  (cadr (person-untagger person)))

(define p1 (make-person "Quux" "quux@example.com"))
(get-name p1) --> "Quux"
(get-email p1) --> "quux@example.com"
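Building on the person example, here's a hedged sketch (not from the original post) of the dispatch idea described above: a generic function indexes a method table by first-class type. It assumes Scheme-style list, cons, assq, cdr, and lambda are available in Wat.

; method table: association list from first-class type to method
(define greet-methods
  (list (cons person-type
              (lambda (p) (list "Hello," (get-name p))))))

; generic function: dispatch on the first-class type of the argument
(define (greet obj)
  ((cdr (assq (type-of obj) greet-methods)) obj))

(greet p1) --> ("Hello," "Quux")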

Friday, May 18, 2012

Manuel, have you lost interest in Kernel?

... I was asked twice yesterday on #interactiveprogramming.

No, absolutely not!

I simply got stopped in my tracks trying to find a satisfactory way of adding named arguments to it. I need that feature to consider it complete.

However, two new old topics will probably be raging in my skull in the future:
  • Static types, because defining red-black trees that are balanced by construction makes so much sense (and it may finally help me understand those pesky trees).
  • Interactive execution, because Roly Perera.

Tuesday, March 27, 2012

Kernel Google Group

There's now a klisp Google Group for discussion of the klisp implementation of Kernel, and of Kernel in general. Check it out; there are some interesting posts already.

Also, there are now at least 12 implementations of Kernel and related languages.

Wednesday, March 21, 2012

"should I really take the time to learn fexprs?"

You should take the time to learn fexprs if you answer yes to any of these questions:
  • You like Lisp, and want to learn even more about its rhyming scheme. Subjectively, a Lisp with fexprs feels even more Lisp than Lisp.
  • You are dissatisfied with either the need for GENSYM or with the complexity of a hygienic macro system. Fexprs require neither (see the sketch after this list).
  • You think that everything in a PL should be first-class.
  • You'd like to see how a single concept can replace both functions and hygienic macros.
  • You want to implement a PL with as little code as possible (since fexprs give you the power of both functions and macros, you don't need to implement a macro system to bootstrap your PL.)
  • You like to get your mind blown.
  • You like hygienic macros, and want to understand better how they work. Even if you don't use fexprs, understanding them can show you how hygienic macros are really a special case formulation of fexprs.
  • You think that PLs should be designed not by piling feature on top of feature, but by removing the weaknesses and restrictions that make additional features appear necessary.
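For a taste, here's a hedged Kernel-style sketch (using R-1RK names $vau, $if, $sequence, #inert; treat it as an approximation): a when-like operative that needs no GENSYM and no expansion phase, because the body is simply evaluated in the caller's environment when, and only when, the test is true.

($define! $when
  ($vau (test . body) env
    ($if (eval test env)
         (eval (cons $sequence body) env)
         #inert)))

($when #t 1 2) ==> 2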

Tuesday, March 20, 2012

"I want to understand vau"

Apparently, youse people don't know the R-1RK by heart yet.

Vau is just like lambda, except it doesn't evaluate its operands, and it receives the environment in which it is called.

Vau's syntax is (vau operands environment . body).

Lambda's syntax is (lambda arguments . body).

((vau x e x) foo) ==> (foo)
((vau x e e) foo) ==> #[environment]

You can also destructure the operands:

((vau (x y z) e z) foo bar baz) ==> baz

And if you want to get at the actual value of an operand, you use eval, which takes a form and an environment as parameters:

(def msg "Hello world!")
((vau (x) e (eval x e)) msg) ==> "Hello world!"
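Putting the pieces together, quote falls out as a one-liner: an operative that simply returns its operand unevaluated (a sketch using only the forms shown above).

(def quote (vau (x) e x))
(quote (foo bar)) ==> (foo bar)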

Note: you can try out these expressions at the Virtua REPL.

Monday, March 19, 2012

Why fexprs, part N, or lambda only the penultimate

I now have a bit of experience with Kernel-style fexprs (short for fun expressions, not frankenprogramming expressions), and I must say I am amazed.

While it's true that one can obviously express fexprs easily in plain Scheme/LC through a little bit of encoding - an objection immediately raised by a friend - the converse is also true, as John Shutt explains. In a language with vau (the WOW! combinator), lambda is simply expressed as a fexpr that evaluates the operand tree, yielding a list of arguments, before entering the body. So it seems that neither vau nor lambda is preferable from a theoretical standpoint.

But when it comes to practical matters, IMO vau clearly wins over lambda, making lambda only the penultimate. Why? Take the (usually smug) Haskell weenies, er, programmers. They like to wax lyrical about how laziness obviates the need for macros, but then they spend all day writing papers about parallelizing the Fibonacci Butt Sort using LaTeX, which is totally based on macros. (You can read more about the Fibonacci Butt Sort here on 4chan /prog/, the second best PLT site.) Not to speak of the use of cpp in the Haskell sources. CPP, FFS!!

So while macros are shunned for theoretical reasons of pureness, they pop up everywhere somebody writes a real program. That's simply a fact, and if you disagree please go back to writing your LaTeX paper about your cpp-processed Haskell program.

Now that we've established that macros are a necessity, we obviously want the best semantics for our macros. Common macros are preprocessors: they take in expressions and spit out expressions. Conceptually, and often also actually, macros are run in a separate phase from runtime, at compile-time. (See my cleverly titled post What's phase separation and when do you want it? for more information on phases.) This has two drawbacks. First, because macros are compile-time entities they don't actually exist as first-class entities at runtime. But wait, that's the smaller drawback. The second, REAL drawback is that because macros run outside the real program, processing its representation as an expression, they have no friggin' clue whatsoever about the structure of the program. This is the root of hygiene problems: what a hygienic macro system does is fake lexical scope at compile-time. This is necessary because at compile-time, the wonderful machinery of lexical scope (that it took us from 1959 to 1975 to discover, and that is still rediscovered by peasants, er, ignorami every day) doesn't exist. And this faking produces a huge cost in complexity. One example is that it necessitates a partial expansion step (R6RS, Racket) to find all definitions in a piece of code. This adds great complexity to a language. A second example is that if you want clean compilability, you end up with things like negative metalevels. All signs of Ptolemy's epicycles, as Tom Lord will tell you.
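To make the contrast concrete, here's a hedged Kernel-style sketch (R-1RK names; roughly how such an operative can be derived) of a short-circuiting or as a fexpr. No GENSYM is needed: the local binding of x lives in the operative's own environment, while the operands are evaluated explicitly in the caller's environment env, so nothing can be captured.

($define! $or?
  ($vau operands env
    ($if (null? operands)
         #f
         ($let ((x (eval (car operands) env)))
           ($if x
                x
                (eval (cons $or? (cdr operands)) env))))))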

Hm, I went off on a tangent there. Where was I? Yes, semantics. We want macros, and we want good semantics for them. And that's what fexprs give us. It's getting late, and I have some Stoli to drink. More next time, Nostrovia!

Tuesday, February 14, 2012

Yet another Kernel interpreter

I wrote another Kernel-ish interpreter in JavaScript: Virtua.

This time, no tail-call optimization or first-class continuations, just plain old pedestrian control flow: LOOP, CATCH, THROW, UNWIND-PROTECT (unexpectedly, this didn't make the interpreter much smaller than the one with TCO and continuations.)

Probably the only interesting thing in there is that everything is fully OOP: EVAL is a message, as is COMBINE, as is MATCH - which means every object can potentially control its own evaluation, act as a combiner, and be used as a left-hand-side pattern in matching.

Again: Kernel is nice and elegant, but the interpretive overhead of fexprs is a killer [Edit: haven't measured it, and see this]. Maybe it doesn't matter for what I want to do: script web GUIs.

Friday, January 20, 2012

Fun with Kernel

Smug Kernel Weenies - known to be even smugger than Smug Lisp Weenies - are advised to check out Dale Schumacher's series on Kernel:
Besides vau (known as the Wow Combinator by fans, and as Frankenexpressions by critics), which unifies functions and (hygienic) macros in a single concept (with some open questions), one of my favorite Kernel features is first-class environments. They're just cute.

Queinnec showed, in the somewhat difficult to read Sharing Code Through First-class Environments, that first-class environments have nice properties (e.g. you can write a module system in userland) and that they solve some of Scheme's problems (such as the letrec black hole). I think all designers of new dynamic languages should seriously consider them.
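A hedged sketch of the module-system-in-userland idea, using names per my reading of the R-1RK (make-environment, get-current-environment, $remote-eval); treat it as an approximation rather than report-exact code.

; a child environment used as a namespace
($define! sandbox (make-environment (get-current-environment)))

; definitions evaluated "inside" the environment stay local to it
($remote-eval ($define! square ($lambda (x) (* x x))) sandbox)
($remote-eval (square 7) sandbox) ==> 49
; square is bound inside sandbox only, not out here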

Tuesday, November 8, 2011

Open, extensible composition models


Did VPRI just enter the FEXPR game?
The evaluator corrects several semantic inadequacies of LISP (described by Stoyan [5]) and is metacircular (written in the language it evaluates). The language provides:
  • lists and atomic values, including symbols
  • primitive functions called SUBRs
  • symbolic functions (closures) called EXPRs
  • a FIXED object that encapsulates another applicable value and prevents argument evaluation
  • predicates to discriminate between the above types
  • built-in SUBRs to access the contents of these values
  • a way to call the primitive behaviour of a SUBR
  • a quotation mechanism to prevent evaluation of literals
(my emphasis)
More later. HT @CraigOverend.

Thursday, September 8, 2011

The Kernel Underground

A quick update on Kernel buzz:

There are now four, five, six, seven... 19 (I stopped counting) implementations of Kernel and Kernel-like languages:
Check them out for inspiration, and write your own. It's fun. Fexprs are fun exprs!

There are some interesting Kernel discussions going on in the Guile thread (and here) on LtU. David Barbour provides welcome criticism. (The truth can only be found in conflict. - Tarkovsky)

Oh, and I created this:

Thursday, September 1, 2011

On macros

This post by David Barbour is one of the most insightful statements on macros I have read (along with Vladimir Sedach's analysis):
It has been my impression that, no matter what specific thing you use macros to do, you can later invent a neat language feature that handles the problem in a clean and nice manner. The problem is the "later". Macros can do it "now"... in an ugly, brutish manner, true, but the job gets done. ...
Even among people who find macros smelly, you'll find many who think macros the least distasteful of these choices.
John Shutt's response ties this in with fexprs - instead of delegating the job to a separate preprocessing step, let the language itself have the power of self-extension:
Macros are an unpleasant feature to compensate, somewhat, for limitations of a language. It's better to compensate for the limitations than to leave the limitations in place without compensating at all. Better still, though, would be to eliminate the offending limitations. (Remove the weaknesses and restrictions that make additional features appear necessary.) Make no mistake, fexprs don't necessarily increase the uniform flexibility of the language; but with care in the design they can do so, and thereby, the jobs otherwise done by macros are brought within the purview of the elegant language core. So instead of an ugly brutish fix with macros now and hope for a more elegant replacement later, one can produce a clean solution now with fexprs.
PS: This point is subtle. After all, Scheme is already self-extensible. But Scheme macros are still an additional "layer" on top of, and separate from, the core language. Kernel's fexprs are not - they are the core language, thus bringing self-extension within the purview of the elegant language core.

Saturday, August 20, 2011

Praising Kernel

I love Kernel! Let's face it: While Paul Graham has reanimated Lisp, Lisp has become stuck in a rut. CL hasn't changed since '94, and it probably won't, and heck, that's even a good thing - CL is the mud ball of strength, the acting patriarch of the Lisp family. There's a better chance of CL being around in 50 years than most other programming languages. The same for Scheme, although it's in a bit of an identity crisis right now. But what's crazy about Scheme is that it has become focused on ahead-of-time compilation, in a world where people run Linux in JavaScript JIT compilers. One of the purposes of Scheme should be to push compiler writers to be smarter. Let's forget all that static, performance-oriented stuff, let's make it as dynamic and general and simple as it can get, and spend the following decades on amazing new tech to make it fast! That's what Scheme's about, right? And let's not talk about Clojure, they don't even "see the need" for EVAL [in ClojureScript]. So much for the state of Lisp. Can you see it? We need to put the fun, nasty fun, back into Lisp. Lisp has been resting on its well-deserved laurels for much too long. Heck, even Java is getting lambdas now, and there's talk of macros in JavaScript. Lisp's purpose in the programming language galaxy is to assist our most gifted fellow humans in thinking previously impossible thoughts, remember? Do you really think current Lisps are up to that task?

So where does Kernel fit in? In short, I think Kernel is as close as you can get to the essence, the rhyming scheme of Lisp. Whereas the R6RS doesn't even mention interactive development, Kernel is squarely interactive. And it provides crystal-clear semantics to go with it. LETREC black hole? Hopeless toplevel? Not in Kernel. First-class environments solve these issues. Look, Kernel has a single form, $vau, that is as powerful as lambda, define-syntax, and let-syntax combined! (Strike that, it's even more powerful - you can use Kernel's macros, fexprs, just like ordinary functions.) Need I say more? Kernel can truly bootstrap itself from its three (or fewer) built-in forms (counting them is not easy), without the need for a separate expansion process, and without the need for a hygiene system - Kernel is simply inherently hygienic (when used right). Another form, $define!, not only takes on the roles of define and set!, but also doubles, nay, triples, nay, n-tuples, as a primitive for arbitrary namespace manipulation - you can build any module system you like on top of this single primitive! Think about it: Can your language do that? Fat chance! But should your Lisp be able to do that? I'd say. Now, you say, but no one has written real applications in Kernel yet. I say: that's what's so cool about it! It's a brave new world, without trodden paths, full of new discoveries in semantics, implementation technologies, and other things we can't even see yet. In the past 50 years, Lisp has succeeded in bringing object-orientation, dynamism, functional programming, and other niceties to the philistines. Soon, they'll even have macros and EVAL. Now it's time for Lisp to move another 50 years forward. That's what Lisp is for! And as far as I can see, Kernel is the vehicle for that. So fire up your printers, print out John Shutt's amazing works Fexprs as the basis of Lisp function application; or, $vau: the ultimate abstraction and the Revised-1 Report on the Kernel Programming Language, bind them, study them, and start hacking on Lisp's future! It's easy (nah, OK, it's hard) and it starts with you!
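To make the $define! claim a bit more concrete, a hedged sketch (R-1RK behaviour as I read it; treat it as approximate): the left-hand side of $define! is a whole parameter tree, so the same form covers definition, rebinding, and destructuring.

($define! (a b) (list 1 2))                ; binds a to 1 and b to 2
($define! a 10)                            ; rebinding a in the same environment plays the role of set!
($define! (x (y . z)) (list 1 (list 2 3))) ; nested destructuring: y is 2, z is (3)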

Sunday, July 31, 2011

The meaning of forms in Lisp and Kernel

There are so many nice things to be learned from Lisp - and Kernel.

Take this:

A Lisp REPL reads forms input by the user.

But Lisp is never content with a form alone: it always wants to evaluate the form, determine its meaning.

For instance, when you input a symbol form, Lisp will not just return the symbol object - it will return the value of the variable binding referenced by the symbol - which is the symbol's meaning, per Lisp's rules of form evaluation.

No wonder then that one of the central features of Lisp is quotation - telling Lisp not to determine the meaning of a form presented to it, but rather give us the form itself.

So what's a form? A form is "any object meant to be evaluated"; in other words, we may treat any object as a form, as something to evaluate, as something to determine the meaning of.

Lisp's evaluation rules do not prescribe in any way that forms are bound to text-based representations - a form is any first-class object meant to be evaluated. (And in fact, Racket, a leading Lisp implementation, already supports images as forms.)

So Lisp is chiefly concerned with determining the meaning of forms read from the user.

Macros then allow us to extend the variety of forms recognized by a Lisp, beyond what mere function calls can express.

Whereas in a function call the meaning of every operand is established automatically before the function is actually called, macros have total freedom in evaluating - determining the meaning of - their operands, or not evaluating them at all (as a short-circuiting AND operator may do, for example).

But Common Lisp and Scheme macros, by their preprocessing nature (a macro call works at a time conceptually different from runtime, translating source expressions to other expressions, to be evaluated at a later time), cannot be used in a first-class fashion. For example, it is not possible to apply the AND macro to a list of dynamically computed values (without resorting to EVAL), like we could with a function.

In a sense, macros of the preprocessing kind restrict our ability to determine the meaning of forms: with preprocessing macros, we may only determine the meaning of forms available at preprocessing time.

This is where Kernel's fexprs come in. Fexprs drop the requirement that macros be expandable in a separate preprocessing step. Thus, we may apply the AND fexpr to a list of dynamically computed operands, for example.
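To make that concrete, here's a hedged Kernel-style sketch (names per the R-1RK; treat it as an approximation): the short-circuiting $and? is a first-class operative, so it can be handed a dynamically computed operand list - something a preprocessing AND macro cannot do.

($define! flags (list #t #t #f))
; wrap turns the operative into an applicative, so apply can hand it
; the already-computed list
(apply (wrap $and?) flags) ==> #f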

But fexprs go beyond being merely a more convenient replacement for macros. As John Shutt shows, a new lexically-scoped formulation of fexprs - as presented in Kernel - is even more fundamental to Lisp than lambda. In Kernel, lambda is defined in terms of vau, the ultimate abstraction:
($define! $lambda
  ($vau (formals . body) env
    (wrap (eval (list* $vau formals #ignore body)
                env))))
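As a quick sanity check (a sketch), the derived $lambda should behave exactly like the familiar applicative:

(($lambda (x y) (* x y)) 6 7) ==> 42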
Through the power of fexprs, Kernel needs only three built-in operatives ($define!, $vau, and $if), as well as some built-in functions. This is vastly simpler than any definition of Scheme! Scheme built-ins such as lambda, set! (!), define-syntax and others fall directly out of fexprs, for free, radically simplifying the implementation. But I'm rambling.

In essence, fexprs liberate Lisp's ability to determine the meaning of forms.

It's getting midnight (on a Sunday!), so I can't do fexprs or my excitement about them justice. But I hope this post has made you think about forms and their meanings, and to check out Kernel and share the excitement about the brave new Lisp world of fexprs.

Saturday, July 30, 2011

Some nice paperz on delimited continuations and first-class macros

My current obsessions are delimited continuations (see my intro post) and first-class macros (fexprs). Here are some nice papers related to these topics:

Three Implementation Models for Scheme is the dissertation of Kent Dybvig (of Chez Scheme fame). It contains a very nice abstract machine for (learning about) implementing a Scheme with first-class continuations. I'm currently trying to grok and implement this model, and then extend it to delimited continuations.

A Monadic Framework for Delimited Continuations contains probably the most succinct description of delimited continuations (on page 3), along with a typically hair-raising example of their use.

Adding Delimited and Composable Control to a Production Programming Environment. One of the authors is Matthew Flatt, so you know what to expect - a tour de force. The paper is about how Racket implements delimited control and integrates it with all the other features of Racket (dynamic-wind, exceptions, ...). Apropos, compared to Racket, Common Lisp is a lightweight language.

Subcontinuations. This is an early paper on delimited continuations. It also describes control filters, a low level facility on top of which dynamic-wind can be implemented. Control filters are also mentioned - in passing - as a nice tool in the Racket paper (above).

Fexprs as the basis of Lisp function application or $vau: the ultimate abstraction and the Revised-1 Report on the Kernel Programming Language. These two are probably the two papers I would take to a desert island at the moment. I'm only a long-time apprentice of programming languages, but I know genius when I see it.

If you know of any related nice papers, let me know!


On Twitter, Paul Snively mentioned Oleg's paper Delimited Control in OCaml, Abstractly and Concretely, but probably only because he's mentioned in the Acknowledgements - kidding! It's a great paper, and I'm studying it too, but unfortunately it's not really applicable to implementing delimited control in JavaScript, which is what I'm trying to do.

Wednesday, June 22, 2011

John Shutt's blog

John Shutt now has a blog: Structural insight.

John is pushing the envelope of Lisp with his Kernel programming language, a "dialect of Lisp in which everything is a first-class object". And when he says everything, he means it.

Not content with just first-class functions and continuations, John also wants first-class environments and first-class macros. Some fun LtU discussions in which John participated come to mind: