Sunday, October 31, 2010

Concurrency as Basis for Scalable Parallelism

David Barbour's Concurrency as basis for Scalable Parallelism may change the way you think about programming:

It has been observed that many 'concurrent' applications fail to scale beyond a few parallel components. In many cases, some sort of data-parallelism is feasible (e.g. parmap, SIMD, GPU) and we should certainly be able to leverage those opportunities! But I'd like to address just the concurrency aspect - the coordination of diverse systems and interests - and argue that even that is a sufficient basis for scalable parallelism, assuming we leverage it properly.

The scalability of concurrency as a basis for parallelism comes in realizing that the number of relationships in a system can rise O(N^2) with the number of components. Thus, we need to embed most significant computation into the relationships, rather than the components. ...

When most computation is moved to the relationships, the resources used by a service will scale commensurately with the number of clients it directly services - and vice versa; resources used by a client will be commensurate with the number of services it directly uses.

It changed my thinking: from "ouch, this is a lot of data to process" to "wow, lots of opportunities for concurrency, and thus parallelism".
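To make the idea concrete: with N components there can be up to N(N-1)/2, i.e. O(N^2), pairwise relationships, so putting the significant computation into the relationships is what creates the parallelism to exploit. Here is a minimal sketch in Go (my choice of language; Barbour's post contains no code, and the names `request`, `service`, and `link` are all hypothetical). Each client-service relationship gets its own channel and its own worker goroutine, so the component itself holds no per-client state and adding clients adds independent, parallelizable work rather than contention on a single component.

```go
package main

import (
	"fmt"
	"sync"
)

// A request flowing across one client-service relationship.
type request struct {
	payload int
	reply   chan int
}

// service owns no per-client state: each relationship gets its own
// instance of this loop, so the computation is embedded in the
// relationship rather than in a shared component.
func service(in <-chan request) {
	for req := range in {
		req.reply <- req.payload * 2 // stand-in for the "significant computation"
	}
}

func main() {
	const numClients = 8
	var wg sync.WaitGroup

	for c := 0; c < numClients; c++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			// One channel per relationship: the client's resources are
			// commensurate with the one service it uses, and vice versa.
			link := make(chan request)
			go service(link)

			reply := make(chan int)
			link <- request{payload: id, reply: reply}
			fmt.Printf("client %d got %d\n", id, <-reply)
			close(link) // relationship ends; its goroutine exits
		}(c)
	}
	wg.Wait()
}
```

With N clients there are N live relationships, each one a candidate for a separate core, while the component itself stays cheap.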

1 comment:

  1. I think the idea of thinking in terms of connections and not nodes is a very powerful one. However, I would add that one of the reasons we seem to have so much trouble with parallelism is that we're still thinking sequentially, with concurrency as an afterthought, rather than the other way around. I think we need to start structuring programs as concurrent processes from the ground up, with sequentiality as a special case of that.

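The commenter's point, sequentiality as a special case of concurrency, can be sketched in the same hedged spirit (again in Go, and again the `stage` helper is my invention, not anything from the post): every step of the program is its own concurrent process, and "sequential" execution falls out as the degenerate case of chaining those processes over channels.

```go
package main

import "fmt"

// stage wires a function into the pipeline as its own concurrent process.
func stage(f func(int) int, in <-chan int) <-chan int {
	out := make(chan int)
	go func() {
		defer close(out)
		for v := range in {
			out <- f(v)
		}
	}()
	return out
}

func main() {
	src := make(chan int)
	go func() {
		defer close(src)
		for i := 1; i <= 3; i++ {
			src <- i
		}
	}()

	// "Sequential" composition is just chaining concurrent stages:
	// each value flows double -> increment in order, but the stages
	// themselves run as independent processes.
	out := stage(func(x int) int { return x + 1 },
		stage(func(x int) int { return x * 2 }, src))

	for v := range out {
		fmt.Println(v) // 3, 5, 7
	}
}
```

The results arrive in order because each channel is a queue, but nothing stops the stages from running on separate cores; the sequential reading is just one schedule among many.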
