I have been pleased by the attention my blog on choice overload has received. I must admit, I overstated things a bit just to get people talking … and on that count, I clearly succeeded. In my present blog, I’d like to respond to comments on my choice overload blog … and then discuss how dumb I might be.

To start with, it’s important to note that this parallel programming problem is not an Intel problem. Every major vendor of general-purpose CPUs has fully committed to the multi-core path. Hence, all of us are in the same boat … if our processors are to be of any value as the core counts climb, we need parallel workloads. This isn’t whining; it’s reality, and the entire industry must adapt.

Also, I can’t stress enough that my “choice overload” blog was not merely theoretical musing. It is based on experience. Parallel computing is not new. I’ve been involved with parallel computing since the mid-1980s. I have been directly involved with serious attempts to establish four new parallel languages and peripherally involved with several others. Some (such as OpenMP) have met with some level of success. Most have failed. I’ve worked with software engineers to help them transition to these new languages. I’ve shown them the clear superiority and ease of use of these new parallel languages … and then watched those languages go down in flames. I’m telling you … based on my experience living through the early decades of the parallel computing era, choice overload is real, and we as an industry ignore it at our peril.

At the same time, I need to be clear on who exactly is doing the choosing. I spoke of the “masses of programmers” without being clear on who these “masses” were. My focus is on the countless hordes of professional software engineers who create the software we use on our computers. These are the people motivated by hitting deadlines, shipping new features, and earning their living with their software.
I am much less concerned with computer scientists doing research on languages or parallel computing. They are interested in research agendas, exploring new territory, and fundamentally changing the direction of computing. For them, new languages are fine. It’s what they like to do for fun, and we need them to keep doing this research.

I am also less concerned with “bleeding edge” HPC programmers. They are very important, since they are the pioneers in parallel computing, and they have an important role to play in helping to educate the new generation of parallel programmers. But they also like to whine about parallel programming languages when MPI gives them just about everything they need. More importantly, they usually work from source code and therefore have a host of options real software engineers don’t have.

With this context in mind, think of my central point about choice overload. Software engineers selecting a language are faced with choices. Numerous studies have shown that choice overload is a general feature of being a human being. Hence, until proven otherwise, I assume software engineers are human … and hence subject to choice overload.

I really appreciate the comments several people made building on the nature of choice and taking things to the next level of detail. Reading between the lines and pulling together several conversations I’ve had since my blog came out, the crucial point is that the number of options should be small at the moment a choice is being made. If the initial set of choices is huge, but the consumer is guided through well-organized sets of options so the final choice is between a few items, then choice overload can be managed. As someone pointed out, automobile manufacturers have become masters of this technique. There is no reason we can’t do something similar for parallel programming languages.

I want to come back to the question of new languages and the role they can play in addressing the parallel programming problem.
I have been accused of being against new languages. Just to be absolutely clear: research on new languages is fine. In fact, we need it. But when you move from research to deployment and reach out to the huddled masses of overworked software engineers, that’s when you really need to be careful. Because choice overload is real.

Language researchers need to appreciate that much of what passes as new is really a “been there, done that” situation. Data parallel languages, guarded horn clauses, functional languages … come on, can anyone be so ignorant as to miss the fact that all of these have been tried in the past? And they largely failed (as measured by adoption among software engineers producing commercial products). I’m not saying new languages can’t work, but before wasting anyone’s time with these old solutions, you had better figure out what has changed that will make them work this time around.

One person mentioned the importance of research on runtimes. I was pleased to read that comment. This has been a constant mantra in my work at Intel. We have several parallel languages, each with its own runtime. And these runtimes, of course, are ignorant of each other. The result is that it can be difficult if not impossible to build programs from libraries written in multiple languages, or even in multiple instances of the same language. And if you look at how modern applications are created, they are built from many disparate modules usually not available as source code (hence recompilation is not an option). We need a massive research effort in the parallel programming community to resolve this issue and define a common runtime infrastructure that can be used across parallel languages. This is a far more important problem than new parallel programming languages or transactional memory or most of the parallel programming research I see going on around me.
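The composability problem is easy to demonstrate in code. Below is a minimal C++ sketch — `libA_parallel_work`, `libB_parallel_work`, and the fixed pool size are all invented for illustration, and neither function models any real library’s API — of two independent runtimes that each assume they own the whole machine. Compose them, and the machine is oversubscribed:

```cpp
#include <atomic>
#include <thread>
#include <vector>

// A stand-in for std::thread::hardware_concurrency(), fixed so the
// arithmetic below is deterministic.
constexpr unsigned kPoolSize = 4;

// "Library A": runs its work on a private thread pool sized to the machine.
void libA_parallel_work(std::atomic<int>& threads_spawned) {
    std::vector<std::thread> pool;
    for (unsigned i = 0; i < kPoolSize; ++i)
        pool.emplace_back([&] { threads_spawned.fetch_add(1); });
    for (auto& t : pool) t.join();
}

// "Library B": also sized to the machine -- and each of its tasks happens
// to call into library A. Because the two runtimes know nothing of each
// other, the composed program creates kPoolSize * kPoolSize threads
// instead of sharing one pool of kPoolSize workers.
void libB_parallel_work(std::atomic<int>& threads_spawned) {
    std::vector<std::thread> pool;
    for (unsigned i = 0; i < kPoolSize; ++i)
        pool.emplace_back([&] { libA_parallel_work(threads_spawned); });
    for (auto& t : pool) t.join();
}
```

On a 4-way machine the composed call spawns 16 workers fighting over 4 cores. A common runtime infrastructure would let both libraries draw from one shared pool.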
Much of my frustration with new languages stems from the fact that the parallel programming community is largely ignoring these huge runtime issues. I understand why … runtimes aren’t “sexy.” It just sounds better, when bragging about our work to friends and family, to describe the new programming languages we’re creating rather than a runtime environment. But in terms of impact, the runtime is where we need to be focused right now.

But I have gone on way too long … and long blogs are boring. I want to return to my opening question: “Is anyone dumb enough to think we can solve the parallel programming problem with a new language?” Well, I might be that dumb. In my warning about choice overload, I was urging us to deploy fewer languages. I was asking that we spend more time figuring out how to make current languages actually work instead of creating new ones. But over time, I truly hope new languages will emerge. History has shown us how they will emerge … and intelligent researchers working on new languages had better pay attention to these lessons.

1. Successful new languages build on existing languages and, where possible, support legacy software. C++ grew out of C. Java grew out of C++. To the programmer, they are all one continuous family of C languages. So I urge language designers to develop their new abstractions and then deploy them as extensions of existing languages. This is why I follow IBM’s X10 project so closely: by building on Java, they’ve decreased the “new language gradient.” TBB is another example of “getting this right” by building off common practice in generic programming with C++.

2. Successful new languages emerge from a “pull” model, with the pull coming from applications communities; not from a “push” model driven by computer scientists. Java rose to dominance because internet programmers found it solved many of their problems. “Ruby on Rails” pushed Ruby onto center stage because programmers of web applications found it so useful.
Erlang might just take off, since it was created by telecommunications users for telecommunications applications. Throughout history, the most successful programming technologies succeeded due to pull from an applications community. Exceptions to this rule exist, but they are rare.

3. Building on point 2, this suggests that the place to look for new languages is in specific application domains. Domain-specific languages may be our best hope. And I mean “domain” in the classic sense of an “application domain,” not an algorithm domain (i.e., data parallel languages are NOT domain-specific languages).

So yes … I might just be dumb enough to think a new language can help us solve the parallel programming problem. But only if we do it right … and that means we avoid scaring off the software developers.
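The TBB point above — that a parallel abstraction can ride on idioms C++ programmers already know — is easiest to see in code. The toy `parallel_for` below imitates that generic-programming style using only the standard library; the helper, its chunking scheme, and the default worker count are invented for illustration and are not TBB’s actual implementation:

```cpp
#include <algorithm>
#include <cstddef>
#include <thread>
#include <vector>

// A TBB-flavored parallel loop: split [first, last) into contiguous
// chunks and hand each chunk to a worker thread. The interface is plain
// generic C++ -- a range plus any callable -- so existing loop bodies
// drop in with no new syntax. (A real runtime would use a shared,
// work-stealing pool rather than spawning threads per call.)
template <typename Body>
void parallel_for(std::size_t first, std::size_t last, const Body& body,
                  unsigned workers = 4) {
    const std::size_t n = last - first;
    const std::size_t chunk = (n + workers - 1) / workers;
    std::vector<std::thread> pool;
    for (unsigned w = 0; w < workers; ++w) {
        const std::size_t lo = first + w * chunk;
        const std::size_t hi = std::min(last, lo + chunk);
        if (lo >= hi) break;
        pool.emplace_back([lo, hi, &body] {
            for (std::size_t i = lo; i < hi; ++i) body(i);
        });
    }
    for (auto& t : pool) t.join();
}
```

Because the loop body is an ordinary callable and the interface follows familiar generic-programming conventions, the “new language gradient” for a working C++ programmer is nearly zero — which is exactly the property point 1 argues for.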