A New Law For Programming Languages?

I recently had a debate with a colleague about whether we should be investigating new programming languages for parallel computing given all the languages that have been developed in the last few decades. In the course of this, I made the following arguments (paraphrased):

“In the last 14 years, we’ve seen modest to (very) aggressive commercial adoption of Python, Ruby, Java, JavaScript, C#, HLSL, Cg, Erlang, and OCaml. That’s just what I can think of off the top of my head. That’s a new language about every 18 months. Can you think of a computer-related field of endeavor with similar fluidity in commercial adoption? Moreover, many of the ideas promoted in “failed” (from a commercial perspective) languages end up in mainstream languages…just go look at the next-gen C++ draft specification if you don’t believe that.

Not to mention that, in research, a proliferation of ideas and technologies is considered a sign of a vibrant and healthy community.”

I’ll make a stronger argument here: we have been living with relatively primitive languages because the art of programming language design is still in its infancy. What we are seeing in modern languages are early evolutionary branches: vehicles for developing the technology essential to languages designed for parallelism (and safety, reusability, fault tolerance, adaptivity, etc.). What might seem to a C or C++ developer like an esoteric field could profoundly affect how she programs in two years.

For example, type theory work often seems consumed with comprehending or modeling features that already exist in mainstream programming languages. To an outsider (like myself), this can occasionally look like minutiae. However, understanding the theoretical foundations of widely used language features is essential to understanding their semantics and corner cases (where such understanding in the mainstream is usually ad hoc). With parallelism, this will be profoundly important in the coming years.
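
For a concrete taste of the kind of corner case I mean, take covariant arrays in Java. Here is a minimal sketch (the class name is arbitrary, purely for illustration): the compiler happily accepts this program, but it fails at runtime, which is exactly the sort of unsoundness that variance rules from type theory make explicit.

    // Java arrays are covariant: String[] is treated as a subtype of
    // Object[]. Type theory shows this is unsound for mutable
    // containers, so the JVM must compensate with a runtime check.
    public class ArrayCovariance {
        public static void main(String[] args) {
            String[] strings = { "hello" };
            Object[] objects = strings;        // legal: covariant arrays
            objects[0] = Integer.valueOf(42);  // type-checks at compile time...
            // ...but throws java.lang.ArrayStoreException when run,
            // because the array really holds Strings.
        }
    }

(Java’s later generics learned this lesson: they are invariant by default, pushing the check back to compile time.)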

Why are we stuck with these little streams of language development instead of a convergence of these features in mainstream languages? The sequential execution model set the bar very low in terms of language design…far too low to expect opportunistic development of sustainable languages for productive parallel programming. So, for parallelism, we’ve ended up feeling kind of unsatisfied. But we’re about to turn the corner on parallelism…

Given the convergence(s) of these technologies and parallel programming: rather than the new-language-every-18-months rule slowing or merely holding steady, we might see it accelerate significantly. If you broaden “language” to include extensions, libraries, and frameworks, it certainly already has. So perhaps it’s too early to declare a definitive pattern or a software analogue to Moore’s law.

To put it simply (and plagiarize one of my favorite songs from the 80s): Sequential programming technologies were rivers; parallel programming is the sea.

6 Responses to A New Law For Programming Languages?

  1. Mark Hoemmen says:

    I think something more useful than proliferating parallel _languages_ is proliferating the means to generate languages: good runtimes / just-in-time bytecode compilers, and making it easier to write domain-specific languages and to extend the syntax of existing languages.
    The lack of such tools is perhaps why it took so long to get advanced programming language ideas into the mainstream: writing a compiler is really hard and takes a long time, and writing a good runtime is almost as hard.

  2. I don’t disagree with you. In fact, I think this is happening and will only accelerate the proliferation of useful tools (aka languages of various flavors and purposes).
    When I get back to talking about our work on Ct, one of the side benefits of how we’ve architected things is that we can use the components in various other settings. Hopefully, I can get back to discussing that soon. :)
    Bill, Python is an interesting case…there are so many implementations (though CPython seems to be the most widely used).

  3. Thanks for the pointer…I’ve started watching it already (I just wish it were downloadable to an iPod, like videos.google.com). It’s very interesting and in some ways similar to what we’ve done with Ct: we have a fairly general runtime and compilation framework for fine-grained (dataflow) parallelism, and we’re basically exposing a subset of it declaratively through Ct.
    This will be really useful when I talk more about the Ct implementation.