Here's a fun idea that should work in a language with fexprs, or with an incremental/reactive macro expander. It relies on the property that EVAL evaluates its argument form in tail position. Tail-call optimization could then be switched off and on during debugging for more useful stack traces, and even arbitrarily at runtime. The loss of useful stack traces under TCO is a real issue, as witnessed by the attempts at more useful stack traces in MIT Scheme and in SISC.
How it works: all expressions in tail position (e.g. the last expression of a BEGIN, the last expression of a LAMBDA body, the branches of an IF, ...) are automatically wrapped (by the parser or something) in a call to the operator TCOIFY:
(lambda () (foo) (bar))
becomes
(lambda () (foo) (tcoify (bar)))

and

(if (quux) (do-something) (do-something-else))
becomes
(if (quux) (tcoify (do-something)) (tcoify (do-something-else)))

and so on.
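To make the wrapping concrete, here is a minimal sketch of such a walker in plain Scheme, assuming code is represented as lists; the names TCOIFY-TAILS, WRAP-TAIL, and WRAP-LAST are made up for this sketch, and it handles only LAMBDA, BEGIN, and IF, without recursing into non-tail subforms:

;; Wrap the tail positions of EXPR in calls to TCOIFY.
(define (tcoify-tails expr)
  (cond
    ;; the last body form of a LAMBDA is in tail position
    ((and (pair? expr) (eq? (car expr) 'lambda))
     `(lambda ,(cadr expr) ,@(wrap-last (cddr expr))))
    ;; the last form of a BEGIN is in tail position
    ((and (pair? expr) (eq? (car expr) 'begin))
     `(begin ,@(wrap-last (cdr expr))))
    ;; both branches of an IF are in tail position
    ((and (pair? expr) (eq? (car expr) 'if))
     `(if ,(cadr expr)
          ,(wrap-tail (caddr expr))
          ,(wrap-tail (cadddr expr))))
    (else expr)))

;; IF and BEGIN propagate tailness inward, and a LAMBDA needs no
;; wrapping itself; anything else gets wrapped in TCOIFY.
(define (wrap-tail form)
  (if (and (pair? form) (memq (car form) '(if begin lambda)))
      (tcoify-tails form)
      `(tcoify ,(tcoify-tails form))))

;; Wrap only the last form of a body, leaving the others alone.
(define (wrap-last forms)
  (if (null? (cdr forms))
      (list (wrap-tail (car forms)))
      (cons (car forms) (wrap-last (cdr forms)))))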
Ordinarily, TCOIFY would be a fexpr that directly evaluates its argument, in tail position:
(def tcoify (vau (x) env (eval x env)))
When TCOIFY is defined this way, (tcoify expr) behaves exactly like expr for the purposes of TCO, because EVAL evaluates X in tail position.
For useful stack traces, one would instead define TCOIFY as an ordinary function. Since arguments are evaluated before the function is applied, the wrapped expression no longer runs in tail position, so TCOIFY prevents TCO:
(def tcoify (lambda (x) x))
By switching between the two definitions of TCOIFY at runtime, tail-call optimization can be turned on and off.
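To see the difference, consider a hypothetical infinite loop written in the post's notation against the definitions above:

(def loop (lambda () (tcoify (loop))))

Under the fexpr definition, each (tcoify (loop)) evaluates (loop) via EVAL in tail position, so the loop runs in constant space. Under the function definition, (loop) must be evaluated as an argument before TCOIFY is applied, so every iteration leaves a frame on the stack, and a stack trace taken at any point shows the full call history (until the stack overflows).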
Tuesday, August 14, 2012
8 comments:
"automatically wrapped by the parser or something"
That something could simply be an expansion-passing-style (EPS) macro processor.
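For illustration, a miniature EPS expander in Scheme, in which each expander receives the expansion function E and threads it through its subforms; the table *EXPANDERS* and its single IF entry are hypothetical:

;; Expansion-passing style in miniature: expanders take the form
;; and the expansion function, so a TCOIFY-wrapping pass composes
;; with ordinary macro expansion.
(define *expanders*
  (list (cons 'if
              (lambda (x e)
                `(if ,(e (cadr x) e)
                     (tcoify ,(e (caddr x) e))
                     (tcoify ,(e (cadddr x) e)))))))

(define (expand expr e)
  (let ((entry (and (pair? expr) (assq (car expr) *expanders*))))
    (if entry ((cdr entry) expr e) expr)))

(expand '(if (quux) (do-something) (do-something-else)) expand)
;; => (if (quux) (tcoify (do-something)) (tcoify (do-something-else)))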
Inlining and partial evaluation would defeat this. Also, TCO is a misnomer: it's not an optimisation but a proper implementation strategy. Guy Steele, Will Clinger, and Matthias Felleisen call it "proper tail call implementation." You can get stack traces in a tail-recursive machine using continuation marks.
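Continuation marks exist in Racket, for instance; a minimal sketch (the key 'trace is arbitrary):

#lang racket
;; Tail calls reuse the frame, so its mark is overwritten rather
;; than accumulated, and space stays constant.
(define (loop n)
  (with-continuation-mark 'trace `(loop ,n)
    (if (zero? n)
        (continuation-mark-set->list (current-continuation-marks) 'trace)
        (loop (sub1 n)))))

(loop 3) ; => '((loop 0))

Replace the tail call with a non-tail one, e.g. (car (list (loop (sub1 n)))), and the result becomes '((loop 0) (loop 1) (loop 2) (loop 3)): every frame keeps its mark, which is exactly a stack trace, without giving up proper tail calls elsewhere.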
Yeah, but those stack traces are pretty useless in general.
Oh, and if a compiler optimization changes space complexity, it should be considered buggy, no?
The beauty of expansion-passing style is that it can be used to perform inlining and partial evaluation as well (at expansion time, at least), or it can insert markers to perform such things later at some effort.
A compiler changing space complexity should not be considered a bug if it is spec'd and conforms to spec, no?
This seems like a bad idea to me.
Thing is, "stack traces" are not an end goal. Debugging is the goal. Developers want to diagnose where things went wrong, which means having context, history.
Stack traces are an approximation of history, but a rather poor one. They won't tell you where a NULL value came from, if it was obtained by a query, for example. Stack traces are especially awful for software architectures that use queues of messages or events.
MIT Scheme records the history directly. This is much better than a stack trace, closer to the goal for debugging. The weakness of MIT Scheme's approach is that the ring buffer can easily fill with fine-grained local operations and lose important, coarse-grained information about the larger context.
What you really want is something like exponential decay, where we have progressively less information the further into the past we look. This could be modeled with multiple ring buffers: e.g. every time the inner ring buffer wraps around to element 0, the prior element 0 is promoted to the outer ring buffer. Sort of an odometer for debugging; a sketch follows.
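A minimal sketch of that two-level odometer in Scheme; all names are hypothetical, and events are assumed to be non-#f:

;; A fine-grained inner ring plus an outer ring that receives one
;; event per revolution of the inner ring, so coarse history
;; survives at lower resolution.
(define (make-odometer n)
  (vector (make-vector n #f) 0    ; inner ring, inner index
          (make-vector n #f) 0))  ; outer ring, outer index

(define (odometer-record! o event)
  (let ((inner (vector-ref o 0)) (i (vector-ref o 1))
        (outer (vector-ref o 2)) (j (vector-ref o 3)))
    ;; About to overwrite slot 0 again? Promote the old event to
    ;; the outer ring first, so one in every N events survives.
    (when (and (= i 0) (vector-ref inner 0))
      (vector-set! outer j (vector-ref inner 0))
      (vector-set! o 3 (modulo (+ j 1) (vector-length outer))))
    (vector-set! inner i event)
    (vector-set! o 1 (modulo (+ i 1) (vector-length inner)))))

More rings would give a longer, progressively coarser tail of history.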
Don't get distracted by "stack traces" just because they're traditional. Keep your eyes on the prize.
'Don't get distracted by "stack traces" just because they're traditional. Keep your eyes on the prize.'
That is an excellent reminder.
TCO is an optimization in that it uses a less general but more efficient implementation for some calls, even though the more general but less efficient implementation would also work.
If you define the semantics of calls to 1) specify space complexity and 2) do so differently depending on whether they're in tail context, then you can make tail calls officially Not Just An Optimization But A Matter Of Correctness. Of course this is a less modular equivalent to describing TCO as a guaranteed optimization. This lack of modularity helps to justify some arbitrary terminology, but I don't think it has any other advantages. That explains why no other optimization is commonly described this way.