Q4 ITJ: The Velvet Revolution of Multi-core Software

Physics is driving a revolution in software development. I’m sure it’s odd for software developers to think about it this way, but the evolving trends in semiconductor manufacturing are going to have a profound impact on how applications, tools, and software design methodologies take shape in the coming years. In a nutshell, the combination of Moore’s Law scaling and power-efficient architectural designs is leading us down a path that will greatly increase the amount of parallelism visible to the software developer. As a result, what was once the niche discipline of parallel programming is now going mainstream in a huge way. With this transition, however, comes an enormous challenge…

I’ve used the word “niche” glibly…parallel programming has been whatever you’d call the next thing smaller than a niche. Until recently, this meant that students graduating from undergraduate programs were unprepared for multi-core programming. A more pressing near-term problem is the relative lack of experienced parallel programmers in industry. The estimates vary depending on whom you ask, but a reasonable estimate is that 1% of programmers (yes, 1 in 100), at best, have “some” experience with parallel programming. The number who are experienced enough to identify performance pitfalls across a range of parallel programming styles is probably an order of magnitude or two smaller. As we (and our competitors) produce microprocessors with ever-increasing core counts, we have to make sure the applications and software are there for the end user to realize the enormous performance potential of these devices. So, how does this happen?

While the butterfly effect of semiconductor physics driving a revolution in software development has become increasingly obvious, it won’t be a sudden, violent change. At Intel, we’re pretty lucky in that we count among our ranks a disproportionate number of software developers with vast amounts of experience in parallelism, largely a result of our investments in high-performance computing products and research over the last couple of decades. Our goal is to use this expertise to enable our developer community to make the transition to multi-core painlessly. The latest issue of the Intel Technology Journal describes the varied approaches we’re taking to tackle this problem.

Even the briefest scan of this edition will highlight something we are coming to understand (really, relearn) about parallelism and multi-core: no one size fits all. Just look at the (relatively recent) proliferation of programming languages (Java, C#, C, C++, Fortran, Ruby, Python, JavaScript, Perl, Erlang, OCaml, Haskell, etc.), and now multiply that by the space of possible parallel programming idioms. We are committed to providing a range of tools across a range of abstraction levels and parallelism styles that accommodate all present and future programmers. A lot of folks within (and outside of) Intel would like to find the holy grail of programming models that encompasses everything, but that’s really a job for us researchers…and it won’t help programmers in the near term, meaning the next 5-10 years of accelerated parallelism in mainstream parts.
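
To make “abstraction levels and parallelism styles” concrete, here is a minimal sketch, my own illustration rather than anything prescribed by the journal issue, of the same reduction written in two idioms: a one-pragma OpenMP loop, where the runtime picks thread counts and scheduling, and an explicit std::thread version, where the programmer partitions the data and merges partial results by hand.

    // Same computation, two parallelism styles at different abstraction levels.
    #include <algorithm>
    #include <cstddef>
    #include <cstdio>
    #include <numeric>
    #include <thread>
    #include <vector>

    // Style 1: loop-level data parallelism via OpenMP -- one pragma, and the
    // runtime chooses thread count and scheduling. Compile with -fopenmp;
    // without it the pragma is ignored and the loop simply runs serially.
    double sum_squares_openmp(const std::vector<double>& x) {
        double total = 0.0;
        #pragma omp parallel for reduction(+:total)
        for (long i = 0; i < static_cast<long>(x.size()); ++i)
            total += x[i] * x[i];
        return total;
    }

    // Style 2: explicit threading -- the programmer partitions the data,
    // spawns workers, and combines the partial results by hand.
    double sum_squares_threads(const std::vector<double>& x, unsigned nthreads) {
        std::vector<double> partial(nthreads, 0.0);
        std::vector<std::thread> workers;
        const std::size_t chunk = (x.size() + nthreads - 1) / nthreads;
        for (unsigned t = 0; t < nthreads; ++t) {
            workers.emplace_back([&, t] {
                std::size_t lo = t * chunk;
                std::size_t hi = std::min(x.size(), lo + chunk);
                for (std::size_t i = lo; i < hi; ++i)
                    partial[t] += x[i] * x[i];   // each worker owns its own slot
            });
        }
        for (auto& w : workers) w.join();
        return std::accumulate(partial.begin(), partial.end(), 0.0);
    }

    int main() {
        std::vector<double> x(1000000, 0.5);
        std::printf("openmp:  %f\n", sum_squares_openmp(x));
        std::printf("threads: %f\n", sum_squares_threads(x, 4));
    }

Neither style is “right”: the first trades control for productivity, the second trades productivity for control, which is exactly why a range of tools is needed.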

For the developer going multi-core, there is no better place to start than this issue. What you’ll find here is a healthy mix of high-productivity tools available today, cutting-edge tools of tomorrow, and applications research that will drive tools research in the next decade. Given this excellent suite of articles and the high level of parallel programming expertise at Intel, I’m particularly proud that our paper on Ct was accepted. While Ct was born out of the research labs, an increasing number of developers within and outside Intel are finding compelling value in the notion of an easy-to-use programming tool that not only provides high performance today, but will scale applications forward to future multi-core tera-scale architectures. The value of forward-scaling multi-core applications is consistent with one of the key benefits that developing on Intel has given programmers: your code will run (and run well) on future Intel Architectures.
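
For a flavor of what forward-scaling code looks like, here is a minimal sketch of the data-parallel style that Ct embodies. To be clear, this is not the actual Ct API: the vec type and its operators below are hypothetical stand-ins, with serial loops marking the spots where a Ct-like runtime would fan the work out across however many cores the machine has.

    // Hypothetical sketch of a Ct-like data-parallel style (not the Ct API).
    // The programmer writes whole-collection operations and never mentions
    // threads or core counts, so the same source can be remapped by a
    // runtime onto wider future machines.
    #include <cstddef>
    #include <cstdio>
    #include <vector>

    template <typename T>
    struct vec {                       // hypothetical flat data-parallel vector
        std::vector<T> data;
        explicit vec(std::size_t n, T v = T{}) : data(n, v) {}
    };

    // Element-wise multiply: a serial stand-in for where a Ct-like runtime
    // would compile the operation and spread it across all available cores.
    template <typename T>
    vec<T> operator*(const vec<T>& a, const vec<T>& b) {
        vec<T> out(a.data.size());
        for (std::size_t i = 0; i < a.data.size(); ++i)
            out.data[i] = a.data[i] * b.data[i];
        return out;
    }

    // Reduction: likewise a parallel tree reduction in a real runtime.
    template <typename T>
    T sum(const vec<T>& a) {
        T total{};
        for (T v : a.data) total += v;
        return total;
    }

    int main() {
        vec<double> prices(1000, 2.0), volumes(1000, 3.0);
        // A dot product as two collection ops -- no thread count anywhere,
        // which is what lets such code "forward-scale" to future core counts.
        double notional = sum(prices * volumes);
        std::printf("%f\n", notional);   // 6000.0
    }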

So, read up, enjoy, ask a lot of questions, and get coding!

8 Responses to Q4 ITJ: The Velvet Revolution of Multi-core Software

  1. Lord Volton says:

    It seems to me that when single-core chips are discontinued, all programmers will be forced by the market to learn parallel programming.

    There will be a market reason to learn: all future systems will have multiple (more than one) cores. So the faster Intel discontinues single-core chips, the better.

  2. jagadish says:

    Nice and detailed article.
    The issue is real, and sometimes I wonder if we are running faster than the industry; the pace at which the industry adopts multi-core seems slow and the journey difficult.
    It may leave good openings for our competition to pitch in with short-term solutions to appease the software industry.
    I wish there were a bit of reference to the MRTE world, which is an emerging deployment model.

  3. Blair McBride says:

    I’m a recent Computer Science graduate (from the University of Otago, in New Zealand), and have done some post-graduate study. I would have loved being able to dig into parallel programming at university – but was just never given the chance.
    A group of us even tried to get the department to set something up, but we were met with fairly strong resistance. Anything else would have been fine, just not parallel programming. And anything relating to parallel programming is done purely as academic research – as soon as the word ‘practical’ was said, the doors were slammed shut in our faces.

  4. Jacob says:

    New Era
    The latest press releases about miniaturization and fast computing developments are amazing.
    But the question should be: is this the best way to fulfill the dream of robotics and super computing?
    In my opinion, the most important way of developing computers for humanity lies in watching and learning from the best inventor: nature.
    Brain circuits and the nervous system work on an analog scale, which in effect means an unlimited digit base.
    We have not reached such high technology yet, but a good step forward would be moving to a higher level than the binary used today: base 4, base 8, or more, while the recommended one is base 10 (decimal), the most natural choice for human beings.
    Ten binary lines (or chip pins) are needed to encode 1,000 distinct values (2^10 = 1,024), but only 3 lines in decimal (base 10) representation (10^3 = 1,000).
    Just as a nerve line can signal pain, stress, or heat to the brain, so can a decimal-based line carry more information than in the existing binary mode.
    The computing ability and the amount of information processed in the new type of CPU would grow immensely.
    Even if the evolution is step by step (i.e., first to base 4 and later to higher levels), a communication protocol would allow mutual understanding, so that the newer systems would know how to “speak” and cooperate with older ones.
    Optical and optical-fiber computing and communication are ready to work at higher levels (10 colors in a base 10 – decimal – system).
    New high-density memory methods & devices will be developed, and many other new features are ahead.
    The dreamer and the developer are invited to come through this portal: the inventors of a new generation of computers.
    Jacob Davidi
    POB 1304
    Kfar Saba 44113 ISRAEL
    Mail : davidi1304@gmail.com

  5. Alex says:

    I did 8 years of post-doc work in parallel programming at the Queen Mary College Centre for Parallel Computing. Finally it looks like this is beginning to interest people, now that the technology has left the UK, where it was invented, and been taken up by the US.
    I think there will be a lot of work at the system level to make the parallelism transparent to users.
    In parallel computing, data routing rather than data structure is the vital thing, but users don’t want to have to think about things like perfect-shuffle routing, crinkling, deadlock, etc., so much of this will be hidden even from the majority of developers.
    That was the superficial response; now to go away and think a little more carefully about how to reinvent myself as a parallel computing expert.

  6. Howard Young says:

    Multiprocessor architectures are decades old but were phased out by lower-power, faster single processors. Thus, the knowledge base for programming these beasts has not been adopted or practiced by the new software development generation. It’s not hard to see why only 1% of developers have applicable experience.