Sunday, June 22, 2014

Attack of the Monadic Morons

[Deleted. To be replaced with a proper rant at another time.]


Michael said...

As far as I can tell (being somewhat new to statically typed FP) monads are important because they're how you assign a type to what would otherwise be a side-effect. If the language doesn't enforce strict typing, or FP, you don't need monads (or, at least, you don't need to care). It's only at the intersection of the two paradigms that monads become something the programmer actually has to understand. So they're a requirement for enabling two other things that are independently useful tools for programming.
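[For readers new to this: a minimal Haskell sketch of the point above, that "assigning a type to a side-effect" means the effect shows up in the value's type. The names `readName` and `nameLen` are made up for illustration.]

```haskell
-- Sketch: in Haskell, a side effect is visible in a value's type.
readName :: IO String      -- the IO marks this as effectful
readName = getLine

nameLen :: String -> Int   -- no IO in the type: guaranteed effect-free
nameLen = length

main :: IO ()
main = print (nameLen "monad")
```

The type checker will reject code that uses `readName` as if it were an ordinary `String`, which is exactly the enforcement being described.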

John Shutt said...

I'm skeptical of your premise that types are a useful tool for programming. A little typing can seem useful, but tempts you into more; by the time you get to monads, I'd say you're well past the point where types have turned into red tape and become more of a liability than an asset. I hadn't thought of my discontent with monads in terms of my discontent with types, but now that you mention it they do seem connected.

Confusion said...

I am so with you on this. I've stopped listening to several interesting people because I could no longer stand the deluge of snide and derogatory remarks about anything not deemed sufficiently like Haskell.

Mark Needham said...

Monads are useful for more than side effects. In general, a monad is simply a specific way of connecting computations. In a lazy language like Haskell, evaluation order is unpredictable, so the IO monad is needed to connect effectful computations. However, there are monads for things other than IO, and some of them are potentially useful even in non-lazy or non-statically typed languages. For example - you can connect potentially failing computations with the 'Maybe' monad such that if any one computation fails, the entire thing fails. Or you can connect nondeterministic computations a la 'amb' with the List monad. Or there are monads for parsing, logging, probabilistic computations, and more.
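[A short sketch of the two examples above. `safeDiv` and `safeSqrt` are made-up helpers standing in for "potentially failing computations".]

```haskell
-- Chaining potentially failing computations with the Maybe monad.
safeDiv :: Double -> Double -> Maybe Double
safeDiv _ 0 = Nothing
safeDiv x y = Just (x / y)

safeSqrt :: Double -> Maybe Double
safeSqrt x
  | x < 0     = Nothing
  | otherwise = Just (sqrt x)

-- If any step yields Nothing, the whole chain yields Nothing.
chained, failed :: Maybe Double
chained = safeDiv 10 2 >>= safeSqrt   -- succeeds
failed  = safeDiv 10 0 >>= safeSqrt   -- division fails, so all of it fails

-- The List monad as 'amb': each bind explores every alternative.
pairs :: [(Int, Int)]
pairs = [1, 2, 3] >>= \x -> [10, 20] >>= \y -> return (x, y)

main :: IO ()
main = mapM_ print [chained, failed] >> print (length pairs)
```

The same `>>=` operator drives both examples; only the monad (and hence the meaning of "connect") changes.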

Monads are a fairly rich paradigm for composing computations, and frankly, I don't think you understand what you're criticizing.

Roly Perera said...

Programming without types? What next? Breathing without air?

Ramakrishnan VU3RDD said...

Roly: Well, I thought the comment was not against types, but against the FP folks who are taking it to the extreme and portraying it as the *only* way to write correct programs. The "Bible salesman" quote is so apt here. If you follow some of the FP folks on twitter, I think they are doing more evangelism than actually writing code in whatever FP language they are evangelising.

John Shutt said...

Yeah, I understand Manuel as objecting to FP evangelists' attitude toward monads, rather than necessarily types as such. Though I was criticizing types; Michael had pointed out the monad–type connection, evidently intending monads to borrow some desirability from types, but I saw the connection as types borrowing undesirability from monads. (I've also had doubts about types, separately from monads.)

Though I'd be more inclined to compare programming without types to flying without water.

Michael said...

John, I have a hard time imagining programming entirely without types. Do you really mean "programming without types" altogether? Can you give an example? Or do you just mean fairly relaxed dynamic typing like you find in PHP and JavaScript?

In my limited experience with Haskell I found it pretty hard to work with, but I find the "if it compiles it will probably work" claim to be compelling.

Anyway, the idea that if you constrain each unit of code (read: function) tightly enough, the analyzer can figure out what fits together from the semantics alone, is interesting from a CS perspective, even if it has diminishing returns in the real world.

patrickdlogan said...

Smalltalk does not have "types", dynamic or otherwise. Programming without types in Smalltalk is easy... it's sending messages to objects, and objects handling those messages. Which messages any given object responds to is up to that object, decided at runtime.

Contrast this with Scheme, which by and large has a fixed set of types and procedures that check those types dynamically at runtime. I've done a lot of Scheme programming going back to the early 1980s. (Although one of the first things I do is determine which OOP extension I will use, typically TinyCLOS.)

Powerful, though "type safety" folks generally do not understand this way of programming and run from it. I'm happy for them to be happy over in their world. I keep track of it, try it out once in a while to see whether I find it more appealing than my previous experiments. Someday, maybe. Doubtful it is soon though.

Will said...

" A little typing can seem useful, but tempts you into more; by the time you get to monads, I'd say you're well past the point where types have turned into red tape and become more of a liability than an asset."
-John Schutt

When I think of "a little typing", I think of Java or Go, which I feel give the worst of both worlds (that is, of dynamic typing and strong static typing). If your type system is too simplistic, then in order to express many useful things you must work outside the type system (e.g. in Java, passing 'Object' everywhere and using reflection, or using XML "config" files), which lacks both the safety of a typed language and the succinctness of a dynamic language (and has horrible performance to boot). Pluggable types may be the best overall solution, but I don't know enough to be certain.

Furthermore, this is really separate from the notion of whether concepts from category theory are useful enough to add to the language of software engineering. Because all of this, after all, is really about developing a language (not just a "programming language") for engineers to articulate a collective understanding of problems and their solutions. Being a Java developer by trade with no formal education in category theory/abstract algebra, my answer is "Yes sir, may I please have some more!"

patrickdlogan said...

I agree we need more models, more tools, and better ways to talk about programming and software. Category theory has been, and should be, a theory underlying some of that. But when it becomes the predominant theory shaping everything, we run the great risk of not looking for something better, even when we already have strong evidence of something better. We've only just started, and we should not be too certain where this could all be in a hundred years or so. These are uncharted frontiers.

John Shutt said...

By the time you reach Java's type system, I suspect, you're already getting tangled up. The first two languages I learned (not counting assembly languages) were BASIC and Pascal; those had simple type systems. Scheme, likewise — though when deriving Kernel from Scheme I dabbled a bit in user-defined types.

The thing is, when you start getting simple enough with your types, you start to lose track of some of the characteristics that make them "types". Smalltalk may be a good example, because whether or not it has "types" is pretty clearly becoming a matter of just how you choose to define "type".

In my semi-rant on this subject (on my blog), my main point was that in my math classes I never encountered types as experienced in programming. Sets, yes; algebraic structures of various kinds, yes; but they lacked some qualitative type-ness. And in math, when you built one kind of structure from another, the abstraction from one to the other was perfect. If just at the moment you're not treating functions as subsets of the Cartesian product of domain and codomain, then just at the moment they aren't such subsets. Another time they might be. If you start with some structure A and use it to define another structure B, then use B to define C, and C to define A, you really get back to A, not some imperfect simulation/approximation of it.

This is related to recent remarks on LtU (somewhere that I can't seem to find atm) that the reason abstractions fail is that in programming everything has a cost. Sets and their mathematical ilk are somehow lightweight, whereas types in programming are heavy; and when trying to make the types "light" while still dealing with the stuff of programming, one ends up with type infrastructure so elaborate it becomes a new problem to replace the one it's supposed to solve.

It's a clue, I think, that when you back up enough to extract yourself from the typing tarpit, the definition of "type" starts to wobble. I'm not just looking to program "without types"; I think one ends up using something subtly and profoundly different from types, something hard to even envision from within the typing conceptual framework. As I've remarked, we may not be asking the right questions yet.

Zankoku Okuno said...

Drat, I've missed the rant. Well, I can always add thoughts to the next one:

I've programmed in both dynamically- and statically-typed languages at length (starting w/ Scheme at the impressionable age of 10, keep that in mind).

A Java-esque type system (even discounting workarounds for initial design flaws) is in the uncanny valley of pain and uselessness. We shouldn't even think about these systems in language design, except to laugh at the flaws and ensure we never make the same mistakes.

The Haskell type system is expressive enough for roughly 95% of the code I write. There are definitely some flaws, about a third of which comes from using typeclasses instead of ML-style modules, another third of which is solved by the rank-n type extension, and the final third I would normally implement in a dynamic type system, except that usually some overly-clever person has made a library to hide the mess (looking at you, Data.Vault).
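[For anyone unfamiliar with the rank-n extension mentioned above, a tiny sketch of what it buys: an argument that must itself be polymorphic, so it can be applied at two different types inside one function. This is something plain Hindley–Milner types cannot express.]

```haskell
{-# LANGUAGE RankNTypes #-}

-- Rank-2 type: the caller must supply a genuinely polymorphic function,
-- because f is used at both Int and String inside the body.
applyBoth :: (forall a. a -> a) -> (Int, String) -> (Int, String)
applyBoth f (n, s) = (f n, f s)

main :: IO ()
main = print (applyBoth id (1, "x"))
```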

That said, I rarely reach for metaprogramming in my code, but if that's your cup of tea, then I would go for a dynamic language in a heartbeat. I understand Template Haskell is cool, but the error messages are crap and it suffers to a massive extent from not being homoiconic.

The thing that draws me to system-F derivatives is that I can explicitly write down a ton of architecture in the types and have it checked without wasting time writing additional testing code. This gets me two things: 1) I can examine how the entire system fits together without writing the entire implementation, and 2) I can rip out the architecture and replace it with a different one with astonishing ease.
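[A small sketch of the "rip out and replace" point, using a made-up `Logger` boundary: the architecture is written down as a type, callers are checked against it, and implementations can be swapped without touching the callers.]

```haskell
-- Hypothetical sketch: an architectural seam captured as a type.
data Logger = Logger { logMsg :: String -> IO () }

consoleLogger, quietLogger :: Logger
consoleLogger = Logger putStrLn
quietLogger   = Logger (\_ -> return ())

-- runJob is written against Logger, not any particular implementation,
-- so either logger above can be plugged in.
runJob :: Logger -> IO Int
runJob lg = do
  logMsg lg "starting"
  let result = sum [1 .. 10]
  logMsg lg ("result: " ++ show result)
  return result

main :: IO ()
main = runJob quietLogger >>= print
```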

In contrast, in a dynamically-typed language, I need to have the entire system architecture built up front. If I make a mistake, the language doesn't help me, so it's best to hand-check it up front, but the more mistakes I want to catch, the more painstaking the process. This point was driven home for me recently when I attempted to build a web framework, at first in a dynamic language. As my requirements came into better focus, it became apparent that a rewrite was more economical than a refactor, and more insight was likely on the way, so I ported to a static language.

Dynamically-typed languages are necessary for those coding edge-cases, but for average code, odd as it sounds, type systems benefit experimentation and evolution. That said, this is all just my personal feeling based on experience. Perhaps it's just that I tend to think top-down, or just my level of sloppiness. In any case, it's not the blub paradox, since I not only use lisps, I'm writing one (based on Kernel and your xons idea among other things, btw, both of which I found off your blog, so thanks).

Monads aren't inherently a problem for a lazy language like Haskell. The problem is that they're explained very poorly, so it takes too damn long to grok them (months >.<). I've seen considered opinions that monads aren't good in ML derivatives, but haven't examined it myself. The idea that "Monads are the One True Way, all others be cast into the Pit" is a straw man. But if you are using Haskell, then it's not too far off the truth, but also not a big deal, because once you're comfortable, they're quite convenient.

Nightstudies said...

I have to admit that I lack any experience in Haskell and I came in after the original post was deleted but I'll mention a few ideas I have on monads:

1) they're necessary in Haskell for sequences of commands - lazy eval needs a way to give commands a dependency on their antecedent. It needs a root to the program.
2) they've been described as a programmable ";" (C's end-of-statement operator) - and that is very useful. The List monad gives an example of how far that can be pushed... though there may be a better way to do this. I'm interested in metaprogramming that is more explicitly programming and more expressive than type-matching schemes, even with very powerful types...
3) The fact that it's called a "monad" seems a bit silly. A monad is a binary operation that has an identity element. If you have a way of processing lists of statements, then if there were no "identity" element to start with, the loop which processes statements would have to be built with a few checks to handle the first statement differently from the others. Taking an "if" out of a loop hardly seems like the essence of what you're doing here.
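[The "programmable ;" in point 2 can be made concrete: Haskell's do-notation is sugar for a chain of (>>=), and changing the monad changes what the "semicolon" does. A sketch using the List monad, since its results are easy to inspect:]

```haskell
-- do-notation (sugared) and the explicit (>>=) chain it desugars to.
sugared :: [Int]
sugared = do
  x <- [1, 2]
  y <- [10, 20]
  return (x + y)

desugared :: [Int]
desugared = [1, 2] >>= \x -> [10, 20] >>= \y -> return (x + y)

main :: IO ()
main = print (sugared == desugared) >> print sugared
```

In the List monad each "semicolon" branches over every alternative; in IO it sequences effects; in Maybe it short-circuits on failure.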

John Shutt said...

@Nightstudies, there are (at least) two different meanings of "monad" in current use in math. (It seems to me a big chunk of the study of mathematics is a specialized branch of linguistics; but I digress.) A tech report I wrote about the Haskell-related kind of monads some time back: pdf.

Unknown said...

It's a shame this has disappeared. There is so much drivel spoken about what is such a simple and useful concept. Was it the terribly expressed attempts at analogies that enraged you? Or the mechanism itself? Huge sympathy for the former, bemused by the latter.