Paul made his point clear:
My thesis is that programmers' ability to be logically inconsistent in a formal sense at any stage of development is overrated.
To which I replied, equally grandly:
Without the ability to be inconsistent, you can't support the Lisp Experience™
My point is this: current type systems treat type checking as a batch job. But:
Intermittently, during editing, programs are nonsense; then we bring them back into consistency. Type systems should support this
I'd really love to treat type system outputs not as one-off errors that prevent running the program, but as a dynamic console of warnings that updates as I change the (running) program. Every warning corresponds to a point in the program where I might get a runtime error if I ran the program at this moment.
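To make the idea concrete, here is a deliberately tiny sketch (all names hypothetical, in Python for brevity) of such a warnings console: checking never refuses to proceed, it just returns a list of warnings, each tied to the point where a runtime error could occur if execution reached it right now.

```python
# Hypothetical sketch of "type errors as a live warnings console":
# checking an inconsistent program yields warnings instead of aborting.

def infer(expr):
    """Toy checker: int/str literals and '+' over same-typed operands."""
    if isinstance(expr, (int, str)):
        return type(expr).__name__
    op, left, right = expr
    lt, rt = infer(left), infer(right)
    if op == "+" and lt != rt:
        return ("warning", f"cannot add {lt} and {rt}")
    return lt

def check(bindings):
    """Return a warning per ill-typed binding; never block the program."""
    warnings = []
    for name, expr in bindings.items():
        t = infer(expr)
        if isinstance(t, tuple) and t[0] == "warning":
            warnings.append((name, t[1]))
    return warnings

program = {
    "ok":     ("+", 1, 2),
    "broken": ("+", 1, "two"),   # inconsistent right now; still runnable
}
print(check(program))  # [('broken', 'cannot add int and str')]
```

An editor could re-run `check` after every edit and keep the console in sync with the (possibly running) program; the warning list shrinking to empty is what "back into consistency" means here.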
Update: James Iry blogged about the same conversation in Type Errors as Warnings.
Update 2: Paul Snively added another nice quote:
Update 3: Great minds think alike. Brian McKenna reports: Simon Peyton Jones talked about this a couple of times with me. It's something he's going to try with GHC
8 comments:
Maybe this should be on your list of predictions for 2012. Flexible, progressive type systems have been discussed for some time now.
As background, it seems fundamental that inconsistency is the price of flexibility. We have to allow *some*. We cannot have any reusability without allowing the possibility of inconsistency.
'Dynamic' type warnings seem like an advance we really should have. But it wouldn't be the end of the matter, of course. You could still either warn about inconsistent edits, or forbid program transforms that would create inconsistency (to some degree -- you know, like a type system). And everyone would then argue about *that* choice!
Interestingly, a fair number of Lisp programmers I know use "paredit" mode for Emacs. This keeps the intermediate stages of development *syntactically* valid at all times, even when the code is not type-correct.
I've never heard of a curly-brace-language programmer using something like paredit. (It'd probably have to be a lot more sophisticated, since the syntax for those languages is much more complex.) You can break things as badly as you want, as long as everything is absolutely perfect at compile time, when everything is frozen.
This seems closely related to the work that I saw Mike Ernst present very recently. He made a really compelling argument: in a compiled setting, we have a 'run' button that is 'greyed out' until the *whole program* type checks. In an interpreted setting, we can press the run button whenever we want. I believe his system (which also has an interesting implementation based on cute uses of reflection) is an extension to Java.
Thanks for the pointer -
http://code.google.com/p/ductilej/
To Kristopher's point, I'd say: in an environment which allows both compilation and interpretation, would you even use compiled mode?
CPUs are plenty fast these days. Between the interpreted and compiled languages I use, I rarely think "I wish I had more runtime speed" in the former (while I frequently think "I wish I had a way out of this type system" in several of the latter). Compilation these days, for me, is simply mandatory busywork, when it's an explicit step.
The compiler also happens to do static type checking, but I would opt out of that, if I could. That's a side effect of being a compiler, not something I want. In daily use, I rarely have a program where the speed to be gained from compilation outweighs the time lost by running the compiler.
Why is it not enough to have a value like Haskell's "undefined"?
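Part of the answer is that `undefined` defers failure from check time to evaluation time for a hole you write by hand, whereas the proposal is for the system to do this at every ill-typed point automatically. A rough dynamic analogue of that `undefined` behavior, sketched in Python (all names here are hypothetical):

```python
# Hypothetical analogue of Haskell's `undefined`: a placeholder value
# that can sit anywhere in the program, but raises only when evaluated.

class Undefined:
    def _boom(self, *args, **kwargs):
        raise RuntimeError("evaluated an undefined placeholder")
    # Using the value in (almost) any expression fails at run time.
    __add__ = __radd__ = __mul__ = __rmul__ = _boom
    __call__ = __iter__ = __bool__ = _boom

undefined = Undefined()

def total_price(quantity):
    tax_rate = undefined              # hole: not written yet
    return quantity * 10 * (1 + tax_rate)

# The program loads and other code runs; only reaching the hole fails.
print(total_price.__name__)
```

The difference from a warnings console is visibility: with `undefined` you must plant each hole yourself and get no standing list of where the holes are, while the proposal keeps every such point surfaced as a live warning.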
Anyone who argues for consistency is actually arguing for islands of local consistency (sometimes called `my program`) in an indeterministic inconsisten-sea (sometimes called `the world`). I believe such consistency barriers should be acknowledged in our language design - i.e. by abstractions or systematic mechanisms to prevent, isolate, tolerate, and gracefully recover from inconsistency.
I suspect the experience in such a system becomes more modular and atomic. I.e. an IDE could recognize checkpoints where new code can be thunked in to replace the old stuff. Usefully, developers would have much control over their own development experience.