Tuesday, November 30, 2010

Wow

A Turing Machine in the classic style:



Here's the awesome video:



(via LtU)

Systems programmers, relax: most advice isn't for you

I'm an incredibly slow programmer (in the hobby projects I do for the love of it, that is). My ratio of thinking to typing must be around 10,000 : 1. Just to give you an indication of how slow I am: there's this one nagging project on which I've been working more or less continuously since... 2001.

Thus, programming advice of the DTSTTCPW ("do the simplest thing that could possibly work") variety - i.e. 99% of programming advice - always makes me a bit uneasy. Instead of thinking, shouldn't I just be crankin' out code, putting it online, and blogging about it on LJ?

But a couple of days ago I made an observation: almost all programming advice applies to applications, but what I'm mostly working on are systems. Since then, I'm sleeping better.

As Dr. Theodor Holm Nelson says, you can't get to the moon by piling up chairs. You may be able to learn something about an application by cranking out a quick and dirty prototype. But for systems, prototyping seems futile. Either your ideas work, or they don't, and a quick gedankenexperiment will usually be enough to find out.

Of course, I don't want to denounce experimentation as worthless. You have to know whether your ideas will be useful and work on really existing computers. But if your system is so removed from reality, or so complex, that you're unsure about that, you're probably doing it wrong.

Take Unix pipes. There's an early memo about the need for pipes from 1964. And in 1972, pipes appeared in Unix. Do you think they spent 8 years DTSTTCPWing? I don't think so. I think that in these 8 years they spent a lot of quality time thinking about the issue, and when they had the right idea, they just knew it was right, and went ahead and implemented it.

So, to all systems programmers out there: most programming advice isn't for you. Relax, and take the time to find the right ideas.

That looks like one fine book

(via proggit)

Thursday, November 18, 2010

Numbers Everybody Should Know

Julian Hyde graciously transcribed the following table from Jeff Dean's Stanford talk:

L1 cache reference                             0.5 ns
Branch mispredict                                5 ns
L2 cache reference                               7 ns
Mutex lock/unlock                               25 ns
Main memory reference                          100 ns
Compress 1K bytes w/ cheap algorithm         3,000 ns
Send 2K bytes over 1 Gbps network           20,000 ns
Read 1 MB sequentially from memory         250,000 ns
Round trip within same datacenter          500,000 ns
Disk seek                               10,000,000 ns
Read 1 MB sequentially from disk        20,000,000 ns
Send packet CA->Netherlands->CA        150,000,000 ns


For more details: Agner's Software optimization resources.

Foundations for Structured Programming with GADTs

Makes me wish I understood category theory: Foundations for Structured Programming with GADTs (neelk's LtU post) by Patricia Johann and Neil Ghani @ POPL 2008.
GADTs are at the cutting edge of functional programming and become more widely used every day. Nevertheless, the semantic foundations underlying GADTs are not well understood. In this paper we solve this problem by showing that the standard theory of datatypes as carriers of initial algebras of functors can be extended from algebraic and nested data types to GADTs. We then use this observation to derive an initial algebra semantics for GADTs, thus ensuring that all of the accumulated knowledge about initial algebras can be brought to bear on them. Next, we use our initial algebra semantics for GADTs to derive expressive and principled tools -- analogous to the well-known and widely-used ones for algebraic and nested data types -- for reasoning about, programming with, and improving the performance of programs involving, GADTs; we christen such a collection of tools for a GADT an initial algebra package. Along the way, we give a constructive demonstration that every GADT can be reduced to one which uses only the equality GADT and existential quantification. Although other such reductions exist in the literature, ours is entirely local, is independent of any particular syntactic presentation of GADTs, and can be implemented in the host language, rather than existing solely as a metatheoretical artifact. The main technical ideas underlying our approach are (i) to modify the notion of a higher-order functor so that GADTs can be seen as carriers of initial algebras of higher-order functors, and (ii) to use left Kan extensions to trade arbitrary GADTs for simpler-but-equivalent ones for which initial algebra semantics can be derived.
Mentioned by Oleg in GADTs in OCaml.
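
The reduction to the equality GADT is easy to get a taste of in Haskell. A minimal sketch (my toy example, not the paper's general construction):

{-# LANGUAGE GADTs #-}

-- The equality GADT: Refl is the only constructor, so a value of
-- type Equal a b is a proof that a and b are the same type.
data Equal a b where
  Refl :: Equal a a

-- A toy GADT of typed literals...
data Expr a where
  IntE  :: Int  -> Expr Int
  BoolE :: Bool -> Expr Bool

-- ...reduced to an ordinary parameterized datatype whose constructors
-- carry equality witnesses instead of refined result types.
-- (Constructors that hide type variables would additionally need
-- existential quantification.)
data Expr' a
  = IntE'  (Equal a Int)  Int
  | BoolE' (Equal a Bool) Bool

eval :: Expr' a -> a
eval (IntE'  Refl n) = n  -- matching Refl reveals a ~ Int
eval (BoolE' Refl b) = b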

Wednesday, November 17, 2010

Boltzmann Samplers for the Random Generation of Combinatorial Structures

To create complex random data objects and shapes like these, use Boltzmann sampling.


From the paper Boltzmann Samplers for the Random Generation of Combinatorial Structures.
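
The trick, roughly: instead of building a structure of an exact size, you flip suitably biased coins at each constructor; any two objects of the same size come out equally likely, and rejection targets a size range. A toy sketch for plane binary trees (my example, not from the paper):

import System.Random (randomRIO)

data Tree = Leaf | Node Tree Tree

size :: Tree -> Int
size Leaf       = 1
size (Node l r) = 1 + size l + size r

-- Boltzmann sampler for binary trees at the critical parameter:
-- each position becomes a Leaf or an internal Node with probability
-- 1/2. Terminates with probability 1, but sizes vary wildly -
-- that's what the rejection loop below is for.
sample :: IO Tree
sample = do
  u <- randomRIO (0, 1 :: Double)
  if u < 0.5
    then return Leaf
    else do l <- sample
            r <- sample
            return (Node l r)

-- Resample until the size lands in a target window; conditioned on
-- its size, the result is uniformly distributed.
sampleSized :: Int -> Int -> IO Tree
sampleSized lo hi = do
  t <- sample
  let n = size t
  if lo <= n && n <= hi then return t else sampleSized lo hi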

While we're at it, Z-order space-filling curves are hot:



In 3D:

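The locality in these pictures comes from plain bit interleaving: a point's Z-order (Morton) index interleaves the bits of its coordinates, so points close in 2D tend to stay close in the 1D order; in 3D you interleave three coordinates. A small sketch:

import Data.Bits (testBit, setBit)
import Data.Word (Word32, Word64)

-- Morton (Z-order) index: interleave the bits of x and y.
morton :: Word32 -> Word32 -> Word64
morton x y = foldl set 0 [0 .. 31]
  where
    set acc i =
      let acc' = if testBit x i then setBit acc  (2 * i)     else acc
      in         if testBit y i then setBit acc' (2 * i + 1) else acc'
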
Why are objects so unintuitive

In the LtU permathread Why are objects so unintuitive?, Jonathan Shapiro adds another piece to the puzzle:
All of the mappings we have so far from real-world problems to software realizations are either bad (in the sense of imprecise and convoluted) mappings or map to pretty bad models (in the sense that the models are either devoid of semantics or have semantics too complex for the working programmer to really understand). OO "works" because it is the least-bad and least-incomprehensible mapping that has been identified to date.
The thread is also notable for David Barbour's treatise Being poor at modeling is essential to OOP:
The poor modeling facilities of OOP are caused by essential properties of OOP. These essential properties include encapsulation and message-passing polymorphism. These are the same properties that make OOP powerful for configurable modularity, scalability, and object capability security. ...

I do not believe that modeling 'tax reports' in OOP is something that we would 'want' to do. Ever. We might need a tax-report value object as an artifact of the dataflow system (e.g. if tax-reports are dumped into the system as complex form objects), but that's just plain-old-data and could be done just as easily in functional or procedural, and is really not 'OOP'. ...

Your comments (modeling the situation, wanting to model the 'tax reports' explicitly even when it is unnecessary, etc.) and those like them (being extremely prevalent in OO books and the education), are what lead people to start each program by writing a business simulator when what they want are business data processors.

Thursday, November 11, 2010

Dependent types linkdump

This dependent typing thing has me. For the first time in years I find I'm actually learning something about programming. There is life after the Observer pattern.

Here's a list of some papers I've found worthwhile or just found, and not read yet...

Fun with Type Functions (slides). Oleg Kiselyov, Simon Peyton Jones, Chung-chieh Shan. 2010.
How to fake it with Haskell's recently added type families (for a taste, see the vector-append sketch at the end of this list). But Haskell increasingly looks horribly impure: it's not just that every type is inhabited by bottom - type-level programming is untyped! I think they really should work on their purity! ;) Mentions Ωmega's kind system. My prediction: typed type-level programming is coming to Haskell soon.
Proof-assistants using dependent type systems. Henk Barendregt, Herman Geuvers. 2001.
Recommended as a great intro to all things dependent typing. Not by me though, as I still have to read it.
ΠΣ: Dependent Types without the Sugar. Thorsten Altenkirch, Nils Anders Danielsson, Andres Löh, Nicolas Oury. 2009ish.
What's under the sugary hood of dependent programming languages. For when I develop a dependent language - because I'll have to.
A Tutorial Implementation of a Dependently Typed Lambda Calculus. Andres Löh, Conor McBride and Wouter Swierstra. 2009.
Also frequently mentioned as an intro to dependent types.
Verified Programming in GURU. Aaron Stump. 2009.
A book on dependent programming with a focus on verification.
Generic Programming with Dependent Types (part II, part III). Stephanie Weirich. 2010.
Slides focusing on the extra expressivity afforded by dependent types. Agda.
An Epigram Implementation. Edwin Brady, James Chapman, Pierre-Evariste Dagand, Adam Gundry, Conor McBride, Peter Morris, Ulf Norell, Nicolas Oury. 2010.
The 300-page literate Epigram (2?) implementation.
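
And here's the promised vector-append sketch: length-indexed vectors in plain GHC Haskell, pre-DataKinds style. Note that nothing stops you from writing Plus Bool Char - the type level really is untyped:

{-# LANGUAGE GADTs, TypeFamilies, EmptyDataDecls #-}

-- Type-level naturals as empty data types (no values, just indices).
data Z
data S n

-- Vectors indexed by their length.
data Vec n a where
  Nil  :: Vec Z a
  Cons :: a -> Vec n a -> Vec (S n) a

-- Type-level addition as a type family...
type family Plus m n
type instance Plus Z     n = n
type instance Plus (S m) n = S (Plus m n)

-- ...lets the type of append state that the lengths add up.
append :: Vec m a -> Vec n a -> Vec (Plus m n) a
append Nil         ys = ys
append (Cons x xs) ys = Cons x (append xs ys)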

Sunday, October 31, 2010

Concurrency as Basis for Scalable Parallelism

David Barbour's Concurrency as basis for Scalable Parallelism may change the way you think about programming:

It has been observed that many 'concurrent' applications fail to scale beyond a few parallel components. In many cases, some sort of data-parallelism is feasible (e.g. parmap, SIMD, GPU) and we should certainly be able to leverage those opportunities! But I'd like to address just the concurrency aspect - the coordination of diverse systems and interests - and argue that even that is a sufficient basis for scalable parallelism, assuming we leverage it properly.

The scalability of concurrency as a basis for parallelism comes in realizing that the number of relationships in a system can rise O(N^2) with the number of components. Thus, we need to embed most significant computation into the relationships, rather than the components. ...

When most computation is moved to the relationships, the resources used by a service will scale commensurately with the number of clients it directly services - and vice versa; resources used by a client will be commensurate with the number of services it directly uses.
It changed my thinking: from "ouch, this is a lot of data to process" to "wow, lots of opportunities for concurrency, and thus parallelism".

Monday, October 25, 2010

Notes on Dependent Types

Since discovering Ωmega, I've been drawn into the fantastic world of dependent typing. Dependent types allow the expression of static invariants that your program is then guaranteed to respect dynamically. This works up to computation: proofs of static invariants are programs, so arbitrary invariants may be expressed by the programmer, and verified by the type system. I repeat, arbitrary invariants. If you're not salivating yet, you might want to skip this post.

A couple of systems I've (superficially, so far) looked at are Ωmega, Epigram, Coq, and Agda.

Ωmega is closest to existing FP languages like Haskell, which makes it easier to grok than the others for newbies - it's not a full-spectrum dependent type system by design. Technically speaking, Ωmega is "faking it" by maintaining a strict phase distinction between the value and the type level (and the kind level ...). Proofs are represented as runtime instances of GADT datatypes.
GADTs, or generalized algebraic datatypes, are a more powerful (generalized) form of ordinary ADTs, such as ML's 'a list, and are used as the datatype facility (or "object system") of many dependently typed systems. GADTs are very closely related to generics in Java and C#; understanding generics and their proposed higher-level extensions thus provides a gentle route into dependent types.

Java and C#'s object systems already have all of the power of GADTs, only they're not as easy to use, and they are not as typesafe. What separates GADTs from ADTs can be seen in this Java code:

class Foo {}                 // an arbitrary type
class A<T> {}
class B extends A<Foo> {}    // B fixes A's parameter to Foo
B extends A with T fixed to Foo, so instances of B have type A<Foo>, by subtyping. With ordinary ADTs, B would also need to be parameterized by T - individual type constructors of an ADT, which we've expressed here as Java classes, cannot fix a type parameter to a specific type. GADTs are attractive because they're a modest extension of existing ADT theory, and yet enable a wide range of data structuring styles used in fancypants typeful programming.
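
In Haskell notation, the same distinction looks like this (a sketch; Foo stands for an arbitrary class):

{-# LANGUAGE GADTs #-}

data Foo = Foo

-- Ordinary ADT: every constructor builds an A t for a generic t.
data A t = MkA

-- GADT: a constructor may pin the parameter to a concrete type,
-- just like class B extends A<Foo> above.
data B t where
  MkB :: B Foo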
Epigram is programmatically aligned with Ωmega - both are passionate about the advances in programming made possible by dependent types, and vocally call on programmers and researchers to join them on their quest. Epigram, in contrast to Ωmega, goes whole hog on dependent typing. This includes arbitrary type-level computation, a succinct 2D-layout syntax, and interactive support from the development environment that goes beyond what IDEs for ordinary languages offer (and can offer).

Coq is different from Ωmega and Epigram in that it's designed as a proof assistant, and not a dependent programming language. Adam Chlipala maintains that Coq is the most useful implementation to date for expressing complex proofs, and that systems like Epigram are more useful when proofs arise very naturally from the structure of the program and/or its data. Coq programs, such as Xavier Leroy's verified C compiler, have a different flavor from ordinary and dependent functional programs. Concoqtion has an O'Caml value level, and a Coq type level.

One problem all dependently typed systems share is that you have to deal with non-terminating type-level programs somehow, otherwise your logic becomes unsound. There are different strategies: Ωmega type-level programs are written as term rewriting systems that have to terminate; Epigram and Coq programs are written in total languages - every program terminates; in Agda, programs are interrupted when they get stuck in a loop - soundness is regained through a separate termination check (I think). In practice, nontermination doesn't seem to be a big problem.
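
GHC makes a similar trade-off, by the way: type-family instances must pass a termination check unless you explicitly switch it off, at which point you can send the type checker itself into a loop (it eventually aborts at a reduction-depth limit rather than hanging forever). A tiny illustration:

{-# LANGUAGE TypeFamilies, UndecidableInstances #-}

type family Loop :: *
type instance Loop = Loop  -- RHS is no smaller: rejected without
                           -- UndecidableInstances

-- Uncommenting this forces GHC to decide Loop ~ Int, a reduction
-- that never finishes:
--
-- bad :: Loop
-- bad = (42 :: Int)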

Some of the most interesting applications of dependent types are in the area of metaprogramming by program generation: you prove interesting static properties of your program, and then spit out a program in another language that is guaranteed to obey them dynamically. This means you can throw away and erase a lot of stuff at compile time that other languages have to remember and/or check at runtime. As the Epigram implementers state, they have more information available about their programs statically than they know what to do with. Consider: in a dependent language, once a program typechecks, you may not ever need to run it!

The existing systems for doing dependent programming all seem very closely related, and "conceptual epiphanies" can be translated between them. Dependent types are an amazing area of research & development, and I urge you to join it.

Update:

Stephanie Weirich maintains that dependent types not only aid verification, but they also increase expressiveness of a language, and that generic programming is a killer-app for dependently-typed languages.

Also: what are dependent types actually good for? I think a good example is found in Why Dependent Types Matter: it shows how to write a list-sorting function, whose type proves that the output lists are always correctly sorted. I think a lot of software will be written this way in the near future.

Wednesday, October 20, 2010

Keeping up with the GNUses

Just wanted to mention that for keeping up with GNU/Linux progress, the following two sources are very good:

Tuesday, October 19, 2010

Split stacks!

This could be big - in the October 2010 GNU Toolchain Update (subscribe!), Nick Clifton reports that split stacks have been added to GCC. With split stacks, thread stacks can grow (and shrink (?)) automatically.

Monday, October 11, 2010

al-Khwārizmī



I think that's a great statue showing the act of algorithm design.

Monday, September 27, 2010

Program Your Size

When programming, I often find myself striving for minimalism. Trying to make the program as succinct as possible, sweating over details like how to arrange the sections in a file, or the number of files to create.

But minimalism in programming may cause programs to appear cramped, may persuade you to choose forms of expression that are needlessly terse, and may also lead to programs that are hard to refactor or extend.

Point is, when you start programming, you don't know how big the program's gonna be, so starting with minimalism may be a bad choice. Looking at the sources of some huge programs, I noticed that some of them have found very relaxed ways of dealing with their sprawling size. Gnus, for example, comprises dozens of large source files, containing hundreds or thousands of operational definitions, and yet remains very readable.

I don't want to get into the details of architecting large programs - I'd rather focus on a way to write programs, from the start, so that the source has a "density" adequate to its size. Gnus, for example, is meandering, and that's perfectly adequate - Gnus being a mail reader for the most customizable OS in existence. That's a job that's never done, so the source should reflect that - you can't write a cute blog post about a program like Gnus; there's no single overarching, minimalist design principle.

Maybe, as Perlis said, every program needs to be written at least twice. After the first run, we know the approximate size of the program, and can then find a code density/terseness that's adequate to that size.

I find that minimalism in source code is often limiting. My next program I'll write, from the start, as if it were really huge.

Linus on transactional memory

I want transactional memory like the next guy, but comments like this one by Linus indicate that we still have a long way to go:

So that's what it boils down to: transactions are "free" and a wonderful way to elide those horrible expensive locks.

But only if you never make a mistake.

They are expensive as hell even for very low rates of transaction failures. And you really cannot know statically (even if you don't end up reaching some transaction limit, you may easily end up just having heavy contention on the data structures in question).

So I claim that anybody who does transactional memory without having a very good dynamic fallback is basically totally incompetent. And so far I haven't seen anything that convinces me that competence even exists in this area.

Monday, September 20, 2010

The ghosts of blog posts past

Ah, blogging. On the one hand there's value to reading someone's opinion unfiltered and raw. On the other hand it creates a lot of friction, and pisses people off.

This is a blog. It is a special medium. I'm not writing articles that I expect to stand the test of time. I don't let my friends preview the posts I write and point out their problems. Because I think there's a difference between writing articles (as, say, Paul Graham does) and blogging. And both have merits. This is a blog.

It presents my opinions and mood at a specific point in time. If you don't like it, argue with me, but don't take it too seriously. Take my posts like snarky or humorous remarks over a beer. That's how I write them. Because that's what I think is one of the things blogs are good for, and that's how I'm running this particular blog.

-- Manuel

Monday, September 13, 2010

The far side of the lambda cube

Of course, once you investigate higher-order types, you begin to think about unifying the type and term levels.

Reading a bit about this I found some interesting links:

Sage: Unified Hybrid Checking for First-Class Types, General Refinement Types, and Dynamic. A recent paper I found via Ahmed, Findler, Siek, and Wadler's new Blame for All. I find the use of a counter-example database in a compiler a bit frightening, though.

Henk: A Typed Intermediate Language, from 1997, by SPJ and Erik Meijer:
There is growing interest in the use of richly-typed intermediate languages in sophisticated compilers for higher-order, typed source languages. These intermediate languages are typically stratified, involving terms, types, and kinds. As the sophistication of the type system increases, these three levels begin to look more and more similar, so an attractive approach is to use a single syntax, and a single data type in the compiler, to represent all three.

The theory of so-called pure type systems makes precisely such an identification. This paper describes Henk, a new typed intermediate language based closely on a particular pure type system, the lambda cube. On the way we give a tutorial introduction to the lambda cube.
Languages of the Future, by Tim Sheard, on Ωmega - about which I'll post something soon.

And, Your lambda-cube is puny.
A moment of respectful silence, please.

Sunday, September 12, 2010

Higher kinds are sexier

In the process of formalizing generics for my Lisp, I'm also studying FGJω again, the extension of Featherweight Generic Java with higher kinds. Instead of just parameterizing types with first-order types, such as T, you can parameterize types with type-level "functions", such as F<T>. One example of such a function is the type List<T> - you give it a T, and it "returns" the concrete type of lists containing that type.

And it turns out, the theory behind that looks simpler, and thus sexier:
(types)              T ::= (K K*)
(type constructors)  K ::= X | C | (P* => T)
(type param defs)    P ::= (X P*) | (<: (X P*) (C K*))
(class defs)         D ::= DEFCLASS (C P*) ((C K*)*)
(method defs)        M ::= DEFMETHOD (m P*) ((param T)* -> T)
I like the definition of types: a type's the application of a constructor to zero or more other constructors. Constructors may be variables (X), classes (C), or functions taking zero or more parameters and returning a type. (So you basically have a lambda calculus at the type level.) A type parameter is a variable constructor with zero or more parameters, optionally bounded by a class type with zero or more type arguments. Class definitions are parameterized over zero or more type parameters, and extend zero or more existing classes. (I'm taking some liberties with adding multiple inheritance, hoping it doesn't mess up the theory. :P) Method definitions are, as usual, parameterized at both the type and term level, and have a result type.
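
This is exactly what Haskell folks know as higher kinds: a parameter of kind * -> * is a type-level function, like FGJω's F<T>. A minimal sketch of abstracting over one:

-- f is not a type but a type constructor (kind * -> *), i.e. a
-- type-level function in the above sense.
class Container f where
  empty  :: f a
  insert :: a -> f a -> f a

instance Container [] where
  empty  = []
  insert = (:)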

It's kinda embarrassing to post this stuff here, since I'm so bad at type theory, but I think I can make it work. One insight giving me this faith is that none of this stuff survives till runtime: at runtime, only concrete instantiations, such as List<String>, remain. Or so I hope.

Friday, September 10, 2010

Formalizing generics

In the previous post, Minimal generics for an untyped language, I've discussed some basic requirements for the implementation of parametric polymorphism in the Lisp I'm working on.

I've now made some progress in formalizing this, based on Featherweight Generic Java. Here's a basic overview of the type system in the FGJ style:
(types)        T ::= C | X | Top
(classes)      C ::= (klass T*)
(class defs)   D ::= DEFCLASS (klass (<: X C)*) (C*) ((slot T)*)
(method defs)  M ::= DEFMETHOD (method (<: X C)*) ((param T)* -> T)
Types are either class types (C), type parameters (X), or Top. Class types have a name (klass) and zero or more types as parameters. A class definition introduces a named class, parameterized over zero or more type parameters that have class types as bounds, zero or more superclass types, and zero or more named and typed slots. A method definition introduces a named method, parameterized over zero or more type parameters that have class types as bounds, with zero or more named and typed parameters and a result type.

Type parameters in a class definition or method definition (X) are scoped over the entire definition, including the bounds (in the style of F-bounded polymorphism).

All of this is pretty much equivalent to FGJ. The only difference is that classes do not contain methods.

[Updates: I've added type parameterization to method definitions. Added Top. Added some parentheses to make the syntax unambiguous.]

Wednesday, September 8, 2010

Minimal generics for an untyped language

So, I'm developing a Lisp, which means I have to support untyped, dynamically type-checked code in any case. But I still like typeful programming.

Generics, or parametric polymorphism, are jolly useful, even on a totally limited scale: all I want from the first round of this endeavor is to support something like the following:

(defclass (list T))
(defmethod add ((list (list T)) (element T))
  ...)

Here, I define the class (list T) - a list parameterized over the type of its elements, T. That's the same as Java's List<T>. The method ADD takes such a list and an element, and adds it to the list.

To create an instance of a parameterized class, you have to pass a concrete type argument to use for the element type, as in:

(defvar *my-list* (make (list string)))

This creates a list that accepts strings (and instances of subclasses of string) as elements.

The instance has to remember that it's a list of strings for type safety. When ADD is called on a list, the runtime needs to check that the element type matches:

(add *my-list* "foo") ;; OK
(add *my-list* 12) ;; runtime error
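
To get a feel for what the runtime must do, here's a sketch in Haskell (my illustration; it checks for an exact type match, whereas the Lisp should also accept instances of subclasses):

import Data.Dynamic (Dynamic, toDyn)
import Data.IORef (IORef, newIORef, modifyIORef)
import Data.Typeable (TypeRep, Typeable, typeOf)

-- A list that remembers its element type, like (make (list string)).
data DynList = DynList { elemType :: TypeRep
                       , items    :: IORef [Dynamic] }

newListOf :: TypeRep -> IO DynList
newListOf t = fmap (DynList t) (newIORef [])

-- Every add checks the element against the stored type.
add :: Typeable a => DynList -> a -> IO ()
add l x
  | typeOf x == elemType l = modifyIORef (items l) (toDyn x :)
  | otherwise              = error "runtime type error"

main :: IO ()
main = do
  l <- newListOf (typeOf (undefined :: String))
  add l "foo"        -- OK
  add l (12 :: Int)  -- runtime error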

One more thing I'd like is the following:

(defclass (foo (<: X bar)))

That's a class foo with one parameter, which must be a subtype of bar - analogous to Java's Foo<X extends Bar>. Being able to give such a bound to a type parameter seems quite essential.

Furthermore, it has to be possible to pass on type parameters to superclasses, as in:

(defclass (super-1 T))
(defclass (super-2 T))
(defclass (klass X Y) ((super-1 X) (super-2 Y)))

(klass X Y) is parameterized over two types, X and Y, and passes them on to its two superclasses, each of which is parameterized over one type.

Of course, it also has to be possible to remove parameterization, as in:

(defclass string-list ((list string)))

This defines an unparameterized class string-list, which is a subclass of (list string).

Another requirement is that it has to be possible to create instances of type arguments, as in:

(defclass (klass T))
(defmethod make-a-new-t ((k (klass T)))
  (make T))

That's just cute, and may have some applications to dependency injection.

I'd also like to be able to write polymorphic functions:

(defun identity ((a T) -> T) a)

The arrow indicates the result type of the function. In this case it takes an instance of any type and returns it.

It should also be possible to give slots (member variables) of classes the types of type parameters:

(defclass (klass X Y) ()
  ((slot-1 X)
   (slot-2 Y)))

This defines a class with two type parameters X and Y, (no superclasses,) and two slots slot-1 and slot-2 with the types X and Y, respectively.

These are the basic requirements. Now I need a plan! ;)

Tuesday, September 7, 2010

Fast dynamic casting

Fast dynamic casting, a cute paper by Michael Gibbs and Bjarne Stroustrup:
We have demonstrated that it is possible for a linker to generate integer type IDs for classes such that it may be verified by a simple integer modulo computation that one class derives from another in an object-oriented language. When combined with a suitable way of adjusting offsets, this method provides for a fast, constant-time dynamic casting algorithm. A 64-bit type ID is capable of representing quite large class hierarchies containing thousands of classes and at least nine levels deep.
I knew that prime numbers could be used for this somehow, but these guys have already worked it out.
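
The gist, as I understand it (a back-of-the-envelope reconstruction, not the paper's linker-based scheme): give every class its own prime, let a class's type ID be the product of the primes of itself and all its ancestors, and the subtype test becomes a divisibility check:

import Data.List (nub)

-- Hypothetical class descriptors: each class knows its own prime
-- and its direct superclasses.
data Class = Class { prime :: Integer, supers :: [Class] }

-- Type ID: product of the (distinct) primes of all ancestors.
-- A real implementation computes this once, at link time.
typeId :: Class -> Integer
typeId c = product (nub (collect c))
  where collect k = prime k : concatMap collect (supers k)

-- c derives from d iff all of d's primes divide c's ID.
derivesFrom :: Class -> Class -> Bool
c `derivesFrom` d = typeId c `mod` typeId d == 0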

Relatedly, I've also been thinking about using perfect hashing to implement multiple dispatch. It should work just fine, and actually reduce the number of type tests, compared to single dispatch. Unforch, I'm still in the make-it-work phase, which is followed by the make-it-correct phase. Only then does the glorious make-it-fast phase start.
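
The skeleton of that idea, with an ordinary Data.Map standing in for the perfect hash (which would be precomputed from the closed set of method entries):

import Data.Typeable (TypeRep)
import qualified Data.Map as Map

-- A binary-dispatch table keyed by the runtime types of both
-- arguments; perfect hashing would make the lookup constant-time
-- and collision-free.
type Table r = Map.Map (TypeRep, TypeRep) r

dispatch2 :: Table r -> TypeRep -> TypeRep -> r
dispatch2 table t1 t2 =
  case Map.lookup (t1, t2) table of
    Just method -> method
    Nothing     -> error "no applicable method"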

Monday, September 6, 2010

Programmer feel-good quote

From a bit to a few hundred megabytes, from a microsecond to a half an hour of computing confronts us with the completely baffling ratio of 10^9! The programmer is in the unique position that his is the only discipline and profession in which such a gigantic ratio, which totally baffles our imagination, has to be bridged by a single technology. He has to be able to think in terms of conceptual hierarchies that are much deeper than a single mind ever needed to face before.

-- E.W. Dijkstra

Sunday, September 5, 2010

The three great virtues of programming language designers

Diligence, Patience, Humility.