The single most important paper for programming language designers to read came out in 2000. It wasn’t written by a computer scientist, mathematician, or physical scientist. It was written by a couple of professors studying social psychology:
Iyengar, S. S., & Lepper, M. R. (2000). “When Choice is Demotivating: Can One Desire Too Much of a Good Thing?” Journal of Personality and Social Psychology, 79, 995–1006.
This paper explored the phenomenon of “choice overload.” Here is what they did.
They created two displays of gourmet jams. One display had 24 jars. The other had 6. Each display invited people to try the jams and offered them a discount coupon to buy the jam. They alternated these displays in a grocery store and tracked how many people passed the displays, how many people stopped and sampled the jams, and how many subsequently used the offered coupon to buy the jam.
The results were surprising.
- 24-jar display: 60% of the people passing the display stopped to sample the jam; 3% of those who sampled went on to purchase.
- 6-jar display: 40% of the people passing the display stopped to sample the jam; 30% of those who sampled went on to purchase.
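Those numbers are worth pausing on. Assuming (consistent with the figures the paper reports) that the purchase percentages are among shoppers who sampled rather than among all passersby, a quick sketch of the per-passerby conversion:

```python
# Back-of-the-envelope math for the jam study. Assumption (matching the
# figures Iyengar & Lepper report): the purchase percentages are among
# shoppers who sampled, not among everyone who walked past.

def purchases_per_100_passersby(sample_rate: float, purchase_rate: float) -> float:
    """Expected purchases per 100 people walking past the display."""
    return 100 * sample_rate * purchase_rate

large = purchases_per_100_passersby(0.60, 0.03)   # 24-jar display
small = purchases_per_100_passersby(0.40, 0.30)   # 6-jar display
print(round(small / large, 1))  # the small display outsold the big one, roughly 6.7x
```

Per 100 passersby, the big display yields roughly 1.8 sales and the small one about 12. The attention-grabbing display loses by a wide margin.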
The larger display was better at getting people’s attention. But the number of choices overwhelmed them, and they walked away without deciding to purchase a jam. In other words, if the goal is to turn shoppers into buyers, less is more. Too much choice is demotivating.
Admittedly, selecting a gourmet jam is a low-stakes decision. Maybe “choice overload” doesn’t apply to more important decisions? The authors, however, went on to study weightier choices, such as 401(k) plans, and once again found a clear choice-overload effect. Choice overload is real. When people face too many choices, the natural tendency is to not make a choice at all and just walk away (probably in frustration).
Why is this relevant to parallel programming?
Think about it. We (that is, computer companies) want to sell hardware. To do that, we need software. We put our platforms on display and hope software developers will spend their valuable development dollars porting to them.
So what is the situation today with multi-core processors? A software vendor walks up to “our display.” We show them our nice hardware with its many cores, and we tell them they will need to convert their software so that it will scale. And then we show them the parallel programming environments they can work with: MPI, OpenMP, Ct, HPF, TBB, Erlang, SHMEM, Portals, ZPL, BSP, Charm++, Cilk, Co-Array Fortran, PVM, Pthreads, Windows threads, TStreams, GA, Java, UPC, Titanium, Parlog, NESL, Split-C … and the list goes on and on. If we aren’t careful, the result could very well be a “choice overload” experience, with software vendors running away in frustration.
Think about the impression this glut of choices creates. If we “experts” can’t agree on how to write a parallel program, what makes us believe parallel programming is ready for the masses? In our quest to find that perfect language to make parallel programming easy, we actually harm our agenda and scare away the software developers we need.
We need to spend less time creating new languages and more time making the languages we have work. This is why any time I hear someone talk about their great new language, I pretty much ignore them. Tell me how to make OpenMP work. Tell me how to fix MPI so it runs with equal efficiency on shared-memory and distributed-memory systems. Help me figure out how to get Pthreads and OpenMP components to work together. Help me understand solution frameworks so high-level programmers can create the software they need without becoming parallel algorithm experts. But don’t waste my time with new languages. With hundreds of languages and APIs out there, is anyone really dumb enough to think “yet another one” will fix our parallel programming problems?
[Ed. Note: Tim answers this question, responds to comments below, and says that he might even be "dumb enough," in his follow-up blog post.]