Wednesday, January 26, 2011
Proglangs are machines, not media
I've blogged about this before, but I think it's worth stressing again:
Programming languages should not be viewed as media, but as machines.
I first had this intuition when learning Common Lisp: CL is like a big program (i.e. an object, a machine) that's configured and used through Lisp code.
This has the nice side-effect of vindicating Lisp's built-in ugliness: who cares what a machine looks like, as long as it's the best for the job?
(Thanks to Harrison Ainsworth for the initial inspiration.)
[Edit: Ah well, I guess I pushed the publish button too quickly. Of course, PLs are media. But that's obvious. I think it's important to also look at their object/machine aspects.]
Sunday, January 23, 2011
Maniacal, religious dedication to code beauty
It's hard to keep source code "beautiful".
I think the only way it can work is to be dedicated like Zen monks cleaning their monastery - even one speck of dreck is too much.
The best example I know is Jonathan Bachrach's GOO (which also has a lot of other qualities). The code in there is phenomenally clean and consistent, and looks funny and artful to boot.
Thursday, January 20, 2011
The dark, dark, dark art of unwinding the stack
Ian Lance Taylor, the author of the gold linker and the GCC version of the Go language, has written a nice mini-series on how to unwind the stack.
This is wizard-level material, and it probably helps to have the ELF and DWARF specs handy.
Thursday, January 6, 2011
Curing PL Anxiety
Some years ago I was in a state that I now recognize in many other people on the web, and which I term "PL anxiety". It is characterized by a constant insecurity about which PL is the best, which one to learn, how "fast" each one is, whether PG is really right, and so on.
In retrospect, my way out of this was a four-pronged approach:
(1) The Breadwinner PL
You should know one language that's reasonably popular where you live, and which will always land you a job. In the past that meant C++, C, Java, C#, and maybe Perl and PHP, but these days you can probably also get away with Python, Ruby, JavaScript, or Scala.
What's important about the breadwinner PL is that you know its semantics and its standard library by heart.
(2) The C PL
Infinitely many good things come to you from learning C, because your OS is written in it. Studying C, you'll learn awesome stuff: how to handle SIGSEGV, how to exploit your branch predictor, how to write a language runtime, and what a linker is.
The more you learn about C, the more you'll learn about a wide array of services that are already provided to you by your OS and compiler. And C is fun and simple to boot, and will give you the warm fuzzy feeling that comes from building things (almost) from the ground up.
(3) The Romantic PLs
These are languages that you really like and that, if you're really lucky, you might one day get paid to use. You don't have to make a choice - learn them all. They all have their pros and cons, and in the end you'll have to roll your own, anyhow.
(4) The Other Interesting PLs Out There
Also keep an eye on olden golden PLs and newcomers to the scene, even if you're never going to use them: they may blow your mind, and that's what you really should be looking for. Ωmega is a good example.
The Real World
I've come to start new non-hobby projects in my breadwinner language, because it's the most convenient: I know it by heart, all my problems have already been encountered and hopefully solved by somebody else, and there are tons of libraries. Besides, PLs seem to matter very little for many projects (and if they do matter, you can always Greenspun it).
Motivation
The thing that keeps me motivated to learn more about PLs is PLs themselves. I learned C by writing Lisp->C compilers, for example. I'm learning about dependent types because I'd like to implement a PL that has them. It's weird, but it works for me.
HTH.
On Python and Ruby
I often like to make snide remarks about Python and Ruby, because ... well, because they're not Lisp.
To offset this a bit I'd like to say what I find good and impressive about them:
Python seems to be a great language for describing algorithms. For example, Kragen's hacks abound with samples of Python code that is simply wonderful to read, and seems like exactly the way to go.
Ruby is to be congratulated for demonstrating once and for all that dynamically-typed, semi-functional, object-oriented programming is a fun and useful paradigm for systems scripting.
Sunday, January 2, 2011
Why Lisp is a Big Hack (And Haskell is Doomed to Succeed)
2013 Update: I was young and stupid when I wrote this.
"I fear that Haskell is doomed to succeed." — Tony Hoare
I ♥ Lisp. I think it's the best tool we have, at the moment, for many applications.
I don't really love Haskell that much, but I track its progress with awe. (When I say Haskell, I'm not only speaking about Haskell per se, but also about all the FP languages in its halo, like Ωmega, Agda, Epigram, ...)
And when I look at Haskell, it seems obvious to me that it's out to eat Lisp's lunch. In fact, eat all other languages' lunches.
The gist of this post is: In the not-so-far future, Haskell will be able to do everything Lisp can do now (and more), but in an adjustably-safe, statically-verified manner.
What do I mean by that?
Adjustably-safe: In this mythical, not-yet-existing, but clearly on-the-horizon "Haskell", you'll be able to choose how much safety you want. You'll have "knobs" for increasing or decreasing compile-time checks for any property and invariant you desire. Or don't desire. You'll be able to switch between Lisp-style dynamic typing, Haskell-style static typing, and probably even C-style weak/no typing. In the same program.
Statically-verified: Haskell is clearly moving towards dependent typing, which in theory, allows the expression of arbitrary invariants that are maintained statically, without having to run the program. Dependent typing is the weirdest and most awesome thing to expect of programming, this side of quantum computers.
Lisp, as it stands, can't do any of that, and won't be able to do any of that. That's simply a fact. Why? Because it's coming at the problem from the wrong direction. Trying to graft an interesting type system or verification onto Lisp is simply too heroic and ill-specified a task. Lisp starts with ultimate freedom/unsafety, and you can't constrain it. Haskell starts with ultimate bondage/safety, and then, piece by piece, adds that freedom back. On a theoretically well-founded basis.
Right now, Lisp has certain advantages. As a command or scripting language where ultimate dynamism is desired (Emacs), it's still clearly superior. But Haskell is encroaching on its habitat from all sides, just like it's encroaching on the habitats of all other languages. Right now it may appear pointy-headed and harmless. But I think it's unstoppable, and doomed to succeed.
How does that make Lisp a big hack? If my theory is right, then once Haskell will be able to do everything Lisp can do now (and more), all the while maintaining adjustable safety and static verification, I think it will be justified to call Lisp a big hack - because it lacks the possibility of this safety and verification, in principle. (Of course you have to subscribe to the idea that this safety and verification is something that's good and superior. I do.)
(HN, Reddit)
random notes
- Great discussion over on LtU: The AST Typing Problem:
So to summarize, my basic argument is that the separation into distinct IRs may not necessarily reduce the overall complexity of your compiler, but it will certainly modularize it and make it easier to evolve.
As for the compilation speed target, coupled with a desire to use the static type system of the implementation language to enforce a higher dimension of correctness than seems to be borne out in existing compilers, and your overall goal of creating a new language, be careful not to ask for too many miracles :-) -- Ben L. Titzer
- Emacs 24 finally gets the much-needed create-animated-image functionality.
- Good example of collaborative software design on LKML: [concept & "good taste" review] persistent store.
More fully-featured, modern Lisps, pulleezz
"Horsey Horseless"
While I don't like the tone of my earlier post, No more "minimal early Lisps", pulleezz, I stand by its core message: implementing a "minimal early Lisp" may actually be bad for you.
Why? An analogy: if you want to learn how to build a car, learning to build a horse carriage doesn't seem right. And the difference between "LISP" and modern Lisps is similar to that between horse carriages and cars.
IMO, modern "mainstream" Lisp (Common Lisp, Scheme, Dylan, Goo, EuLisp, ISLISP) is the king of dynamic languages. All the others (PHP, Lua, Python, Ruby, ...) are merely variations on the theme.
Compared to Haskell, Lisp is a big hack. But it has its uses. And over the course of decades, Lispers have accumulated a wealth of tricks to cope with the hackish character inherent in Lisp.
So if you reimplement an early Lisp, you're missing out on a lot of stuff, just like Lisp's "successors" in the dynamic language area do. It's a shame, really. IMO, if you care about Lisp, you owe it to yourself to not only learn about "LISP", but also what happened in the 5 decades since.
And I also think you owe it to others. The world is already plagued enough by bad programming languages. If you want to put out a new one, even if it's meant just as a toy, by all means do us all a favor and study history first.
Here's a list of some of the stuff that I think every Lisp (re-)implementer should care about:
Bindings
Understand the difference between SETQ and DEFPARAMETER for global variables. (A short sketch follows this list.)
Understand PROGV and BOUNDP. Ponder their absence in Scheme.
Understand Scheme's LET, LET*, LETREC, and LETREC*. (Yeah, Schemers are funny people. But sometimes they do have a point.)
Understand the difference between Common Lisp's dynamic variables and Scheme's parameter objects (SRFI 39). Compare to EuLisp's DYNAMIC-LET.
Understand why Clozure CL offers DEFSTATIC.
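To make a few of the bindings items concrete, here is a minimal Common Lisp sketch; the variable *depth* and the helper functions are invented purely for illustration. DEFPARAMETER proclaims the variable special (dynamically scoped), which is what makes LET rebind it dynamically; a bare top-level SETQ on an undeclared variable merely assigns, proclaims nothing, and is undefined behavior in portable CL.

;; *depth* is proclaimed special, so LET rebinds it dynamically.
(defparameter *depth* 0)

(defun show-depth ()
  (format t "depth = ~a~%" *depth*))

(defun demo ()
  (show-depth)              ; depth = 0
  (let ((*depth* 1))        ; dynamic rebinding, visible to callees
    (show-depth))           ; depth = 1
  (show-depth))             ; depth = 0 again

;; PROGV binds special variables whose names are computed at run time;
;; BOUNDP asks whether a symbol is currently bound to a value.
(progv '(*depth*) '(42)
  (show-depth))             ; depth = 42
(boundp '*depth*)           ; => T

As I understand it, Clozure's DEFSTATIC exists precisely to forbid this kind of per-binding rebinding for a global, which is what makes faster access possible.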
Macros
Understand Scheme's expansion process.
Understand hygienic macros, preferably SRFI 72. (A sketch of the capture problem they solve follows this list.)
Understand why some Schemes use full phase separation, while others like to collapse meta-levels (also).
Understand negative meta-levels. (Once you do, please tell me how they work.)
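The expansion-process items above are Scheme-specific, but the core problem hygiene solves is easy to show in Common Lisp, which leaves it to the macro writer. A rough sketch, with an invented timing macro for illustration:

;; Unhygienic: the macro's internal variable START shadows the caller's
;; START inside the body it splices in.
(defmacro with-timing-bad (&body body)
  `(let ((start (get-internal-real-time)))
     ,@body
     (format t "elapsed: ~a ticks~%" (- (get-internal-real-time) start))))

;; (let ((start 5)) (with-timing-bad (print start)))
;; prints the macro's START, not 5 -- the caller's binding is captured.

;; The standard CL workaround: a fresh, uninterned symbol per expansion.
(defmacro with-timing (&body body)
  (let ((start (gensym "START")))
    `(let ((,start (get-internal-real-time)))
       ,@body
       (format t "elapsed: ~a ticks~%" (- (get-internal-real-time) ,start)))))

;; (let ((start 5)) (with-timing (print start)))   ; prints 5, as expected

Hygienic macro systems, SRFI 72 included, perform this renaming automatically for every identifier the macro introduces.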
Control Flow
Understand CATCH/THROW, TAGBODY/GO, and BLOCK/RETURN-FROM, and how they can be implemented in terms of each other. (A sketch follows this list.)
Understand UNWIND-PROTECT vs DYNAMIC-WIND.
Understand the condition firewall.
Contrast Dylan's, Goo's, and CL's condition systems.
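Here is a rough Common Lisp sketch of the first two items, with all function names invented for illustration: BLOCK/RETURN-FROM is a lexical exit, CATCH/THROW a dynamic one that even a callee can trigger, TAGBODY/GO the raw goto that the looping macros are defined in terms of, and UNWIND-PROTECT the cleanup that runs however control leaves.

;; Lexical exit: RETURN-FROM names a statically visible BLOCK.
(defun find-first-even (list)
  (block found
    (dolist (x list)
      (when (evenp x)
        (return-from found x)))
    nil))

;; Dynamic exit: THROW unwinds to the nearest matching CATCH in the
;; dynamic extent, even from inside a callee.
(defun walk (tree fn)
  (cond ((null tree) nil)
        ((consp tree) (walk (car tree) fn) (walk (cdr tree) fn))
        (t (funcall fn tree))))

(defun find-leaf (tree pred)
  (catch 'leaf
    (walk tree (lambda (x) (when (funcall pred x) (throw 'leaf x))))
    nil))
;; (find-leaf '(1 (2 3) 4) #'evenp) => 2

;; TAGBODY/GO: the low-level goto.
(defun count-to (n)
  (let ((i 0))
    (tagbody
     again
       (when (< i n)
         (incf i)
         (go again)))
    i))

;; UNWIND-PROTECT runs its cleanup forms no matter how control leaves,
;; including via THROW or RETURN-FROM.
(defun call-with-cleanup (thunk)
  (unwind-protect (funcall thunk)
    (format t "cleaning up~%")))

DYNAMIC-WIND differs from UNWIND-PROTECT in that Scheme's re-entrant continuations can also enter the protected extent again, which is why it takes a "before" thunk as well as an "after" one.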
Conclusion
These are just some of the ingredients shared by "mainstream" modern Lisps. They have been developed and refined over the course of 5 decades by some of our best minds. You cannot understand modern Lisp or its history, or contribute to its glorious future, without knowing about them.
If you want to learn how to build horse carriages, implement a "LISP". If you want to learn how to build cars, study the items in this list.
</soapbox>
(Update: See the HN discussion for some clarifications.)