What would you do with 80 cores?

When talking to folks about tera-scale computing research or the 80-core research chip, the question inevitably arises as to what general users would really be able to do with “supercomputer-level” performance in a desktop, let alone a mobile device. And it’s a fair question. Gaming aside, if I think about the apps I use day to day at work — office apps, a five-year-old photo editing program, web browsers, and so on — it’s hard to imagine needing much more performance than I have.

Of course, I’m falling into the common habit of assuming that the future is simply a linear projection of what we have today, when we know that technology moves in leaps and bounds. The GUI was such a leap in the 80s, and the Internet in the 90s. I believe we’re in the middle of such a leap right now in terms of mobility, and I’d say more but I’d have to talk at length about the iPhone, which 10,000 bloggers before me have already covered. (Though I did finally try one this weekend. Nice…..)

The leap I’m looking forward to is what we call in the labs “model-based computing.” It means giving computers real analytical capabilities and the ability to accurately represent reality. Rather than blog on about it, here’s an animation to show what I mean (click here for the high-quality Flash version or the video window below).

So, I know this sounds a lot like AI, and we’ve heard about AI for decades now with limited results. Why would it be different now? The answer is multi-core. Like the human brain itself, these intelligent apps lend themselves to parallel processing. Eighty cores would change the game in a broader sense than just hardware — they would enable new application possibilities.

Model-based computing is a broad topic. For us, it covers things from financial modeling to sorting photos to virtual reality. The point is, we are doing the architecture research to enable a new leap in technology capability. We are looking into some specific applications ourselves, but in the end we want to enable the apps that you want.

So, what would you do with 80 cores?

-Sean

14 Responses to What would you do with 80 cores?

  1. Hello Sean,
    You mentioned AI – that really is a task for a multicore computer. Artificial intelligence is a mirror of human intelligence. The thing is, you can’t describe it using only one language – that knowledge is passive, like books, until you start to read them. It takes at least two languages for the knowledge to become an active tool for creating new meaning, as a paradigm of human nature, and that can’t be done without the involvement of an expert. Language is created by people, not machines. Rule-based machine translation is just an unnecessary waste of time until an expert participates in the translation process by selecting the right meanings of words and combining them into phrases with all possible translations. Only after this processing can the machine apply grammar rules, by recognizing which phrases have been combined and which meanings the expert has selected for post-processing of the sentence. The goal is to describe as many meanings of words and translations of phrases as possible, and to give the machine a simple binary algorithm: during preprocessing it assembles the sentence from the general meanings of words and phrases first, even when other optional meanings exist. The more ready-made phrases are described, the more accurate a pretranslation the expert will get. So a database that mirrors two languages is a real thing – no more than several dozen gigabytes.
    Michael
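    A minimal sketch of this kind of phrase-table pretranslation might look like the following. The tiny phrase table, the word-meaning dictionary, and the greedy longest-match strategy are illustrative assumptions for the example only, not a description of any actual system:

        # Hypothetical sketch: greedy longest-match pretranslation from a phrase table,
        # falling back to the most general (first-listed) meaning of each word.
        # A human expert would then review the output and choose among alternatives.
        PHRASE_TABLE = {
            ("kick", "the", "bucket"): ["die", "kick the bucket (literal)"],
            ("machine", "translation"): ["machine translation"],
        }
        WORD_MEANINGS = {
            "bank": ["financial institution", "river bank"],
            "kick": ["strike with the foot"],
        }

        def pretranslate(tokens):
            """Assemble a pretranslation, preferring the longest known phrase,
            otherwise the first (most general) meaning of each word."""
            out, i = [], 0
            while i < len(tokens):
                match = None
                for length in range(len(tokens) - i, 0, -1):   # longest match first
                    candidate = tuple(tokens[i:i + length])
                    if candidate in PHRASE_TABLE:
                        match = (PHRASE_TABLE[candidate][0], length)
                        break
                if match:
                    out.append(match[0])
                    i += match[1]
                else:
                    word = tokens[i]
                    out.append(WORD_MEANINGS.get(word, [word])[0])
                    i += 1
            return " ".join(out)

        print(pretranslate(["kick", "the", "bucket"]))   # -> "die"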

  2. I work with evolutionary computation, which is definitely a killer app for massively parallel processors.
    So what would I do with 80 cores? Evolve software.
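    As a rough illustration of why evolutionary computation maps so well onto many cores, here is a minimal sketch of a genetic algorithm whose fitness evaluations run in a pool of worker processes. The toy bit-counting objective, the population size, and the mutation rate are assumptions chosen for the example, not anything from the comment above:

        # Hypothetical sketch: each individual's fitness is independent of the others,
        # so a pool of worker processes (ideally one per core) can score the whole
        # population at once.
        import random
        from multiprocessing import Pool

        def fitness(genome):
            # Toy objective: maximize the number of 1-bits in the genome.
            return sum(genome)

        def mutate(genome, rate=0.02):
            return [1 - g if random.random() < rate else g for g in genome]

        def evolve(pop_size=160, genome_len=64, generations=50, workers=8):
            population = [[random.randint(0, 1) for _ in range(genome_len)]
                          for _ in range(pop_size)]
            with Pool(processes=workers) as pool:
                for _ in range(generations):
                    scores = pool.map(fitness, population)   # embarrassingly parallel
                    ranked = [g for _, g in sorted(zip(scores, population),
                                                   key=lambda pair: pair[0],
                                                   reverse=True)]
                    parents = ranked[:pop_size // 2]
                    population = parents + [mutate(random.choice(parents))
                                            for _ in range(pop_size - len(parents))]
            return max(population, key=fitness)

        if __name__ == "__main__":
            best = evolve()
            print(sum(best), "of 64 bits set")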

  3. Lord Volton says:

    Simulated reality will require all the cores you can throw at it. In fact, the latest ZBrush photoreal models are already ahead of today’s chips by at least a few years.
    The very best photoreal models have millions of polygons and are very difficult to differentiate from actual humans.
    Here is a link:
    http://dev.highend3d.com/gallery/characters/3d_photoreal/Richard3-762.html
    Just getting this model to move across the screen would probably require vast amounts of resources. And when you add in realistic facial movements (an area that is far, far from perfect) and then fluid-based physics, you start running out of resources fast.
    For an interesting discussion of simulated reality visit:
    http://en.wikipedia.org/wiki/Simulated_reality

  4. Jim Fedchin says:

    Back when the Intel 80286 was the top-of-the-line desktop processor and the installed base of desktop image compression software was limited, most of the commercial efforts in compression were focused on JPEG, even though it had yet to become a standard. However, at least one company was promoting a method of image compression based upon fractal mathematics. From a number of perspectives, fractal mathematics was intriguing as a technological basis for the high-resolution, distributed document management system I was working on at the time: file sizes are small, the compression is lossless, and resolution is virtually limitless. Characters barely visible on paper A-size sheets, because they had been reduced from E-size engineering drawings, were clearly legible when zoomed on a monitor from files compressed using fractal mathematics. All that resolution came from tiny files, which placed small burdens on the network. Sadly, the Intel 80286, even with the help of an 80287 math coprocessor, was not up to the task of decompressing fractal images with acceptable response time. JPEG took off, MPEG followed, and desktop CPUs still can’t decompress fractal-encoded images fast enough for stills, let alone a version of fractal compression for full-motion video. But I have never lost hope that the CPUs would someday exist to make that practical.
    Affordable teraflop desktops sound like they could make full-motion video from fractal-compressed files possible. Imagine what it would mean for video-on-demand if high-definition movie files that currently just squeeze onto a 50GB Blu-ray disc were less, possibly a lot less, than 1GB in size. Affordable teraflop processors teamed with fractal compression technology could be a more affordable way to deliver these movies than upgrading the network. When can we expect to see our first hundred million of these processors?
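    For readers wondering why fractal decompression is so CPU-hungry, here is a minimal sketch of the iterative decoding step of a partitioned iterated function system (PIFS). The block sizes, the toy transform table, and the numpy layout are illustrative assumptions, not the codec described above:

        # Hypothetical sketch: fractal (PIFS) decoding. Each small "range" block is
        # rebuilt from a larger "domain" block via a contractive affine map
        # (downsample, scale, offset). Iterating from any starting image converges
        # toward the encoded picture, and every block is independent, which suits
        # many cores.
        import numpy as np

        R, D = 8, 16      # range and domain block sizes (domain is 2x the range)

        def decode(transforms, size=64, iterations=12):
            """transforms maps (range_i, range_j) -> (dom_i, dom_j, scale, offset)."""
            img = np.zeros((size, size))
            for _ in range(iterations):
                new = np.empty_like(img)
                for (ri, rj), (di, dj, s, o) in transforms.items():
                    dom = img[di:di + D, dj:dj + D]
                    # average 2x2 neighborhoods to shrink the domain to range size
                    shrunk = dom.reshape(R, 2, R, 2).mean(axis=(1, 3))
                    new[ri * R:(ri + 1) * R, rj * R:(rj + 1) * R] = s * shrunk + o
                img = new
            return img

        # Trivial example: every range block copies from the top-left domain block.
        transforms = {(i, j): (0, 0, 0.5, 16.0) for i in range(8) for j in range(8)}
        print(decode(transforms).mean())   # converges toward 32.0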

  5. Personally, the thought of 80 cores makes me drool, but I’m not convinced that the link to A.I. is as clear as you make it out to be. Realistically, computational bottlenecks are definitely a problem in A.I., but they are not the show-stoppers. Frankly, we still have extremely poor knowledge representation methods, and most of our algorithms are too domain-specific. As much as I like to think that 80 cores will give us better A.I., I think the real improvements should and will come from other areas of research.

  6. Jim Craig says:

    How about curing every known disease, including aging, by building in silico biological laboratories? We will have many petaflops of horsepower at our fingertips within a decade. Future physics engines for molecular dynamics, combined with genetic algorithms and machine learning, will allow us to fully simulate biological function and engineer better proteins. Look at what is going on in the protein folding space. What I propose might seem like sci-fi, but it is right around the corner. The last hurdle is complexity, as many of the challenges are NP-complete, but don’t discount human ingenuity when it comes to doing the impossible :)
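    As a small illustration of the kind of inner loop such in silico work leans on, here is a minimal sketch of a Lennard-Jones force calculation, the pairwise computation at the heart of classical molecular dynamics. The particle count, units, and parameters are arbitrary assumptions; production MD codes decompose exactly this work across many cores:

        # Hypothetical sketch: pairwise Lennard-Jones forces for N particles.
        # The O(N^2) pair loop is what real MD engines split across cores.
        import numpy as np

        def lj_forces(pos, epsilon=1.0, sigma=1.0):
            """pos: (N, 3) array of particle positions. Returns (N, 3) forces."""
            diff = pos[:, None, :] - pos[None, :, :]            # pairwise displacements
            r2 = (diff ** 2).sum(-1)
            np.fill_diagonal(r2, np.inf)                        # ignore self-interaction
            inv_r2 = sigma ** 2 / r2
            inv_r6 = inv_r2 ** 3
            # force magnitude over r, for U = 4*eps*((s/r)^12 - (s/r)^6)
            f_over_r = 24.0 * epsilon * (2.0 * inv_r6 ** 2 - inv_r6) / r2
            return (f_over_r[:, :, None] * diff).sum(axis=1)

        rng = np.random.default_rng(0)
        positions = rng.uniform(0.0, 10.0, size=(64, 3))
        print(lj_forces(positions).shape)                       # (64, 3)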

  7. Dan Frederiksen says:

    AI is a software issue. If the software were there, the hardware for it could be made with ease. But as long as we still have concepts like a military, I won’t develop that software for you (although it might well already exist somewhere).
    The 80 cores can be used for the existing applications of offline 3D rendering and physics simulation (both scientific and gaming), as well as for the emerging game-world complexity applications, of which non-player-character ability could be a modest but growing part: from pseudo-AI into maybe something real, which would certainly require ever-increasing computational power.
    A somewhat unintelligent but generally appealing demo application could be a chess computer. A laptop beating Deep Blue will no doubt sell, even if it has no real merit.
    A more intelligent demo application, one that could grow into general demand for polycore, could be a CPU-based dynamic global illumination (GI) lighting solution feeding the shallower GPU in live graphics. The next step in gaming graphics towards photorealism will be that need for live global illumination; something practical can probably be done with 40-80 cores, and that would push demand for far more cores.
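    A minimal sketch of the flavor of CPU-side lighting work mentioned above: Monte Carlo ambient occlusion for a single sphere resting on a ground plane, with image rows farmed out to a pool of worker processes. The scene, camera, sample count, and use of Python’s multiprocessing module are all assumptions made for the example; real-time global illumination engines are far more sophisticated:

        # Hypothetical sketch: each pixel fires many random hemisphere rays to estimate
        # how "open" its surroundings are; rows are rendered by parallel worker processes.
        import math, random
        from multiprocessing import Pool

        SPHERE_C, SPHERE_R = (0.0, 1.0, 3.0), 1.0   # unit sphere resting on the plane y = 0
        WIDTH, HEIGHT, SAMPLES = 64, 48, 32

        def hit_sphere(o, d):
            oc = [o[i] - SPHERE_C[i] for i in range(3)]
            b = sum(oc[i] * d[i] for i in range(3))
            c = sum(x * x for x in oc) - SPHERE_R ** 2
            disc = b * b - c
            if disc < 0:
                return None
            t = -b - math.sqrt(disc)
            return t if t > 1e-4 else None

        def hit_plane(o, d):
            if abs(d[1]) < 1e-6:
                return None
            t = -o[1] / d[1]
            return t if t > 1e-4 else None

        def trace(o, d):
            """Return (distance, surface normal) of the nearest hit, or None."""
            hits = []
            ts = hit_sphere(o, d)
            if ts is not None:
                p = [o[i] + ts * d[i] for i in range(3)]
                hits.append((ts, [(p[i] - SPHERE_C[i]) / SPHERE_R for i in range(3)]))
            tp = hit_plane(o, d)
            if tp is not None:
                hits.append((tp, [0.0, 1.0, 0.0]))
            return min(hits, key=lambda h: h[0]) if hits else None

        def random_hemisphere(n):
            while True:                      # rejection-sample a direction in the unit ball
                v = [random.uniform(-1, 1) for _ in range(3)]
                if 1e-9 < sum(x * x for x in v) <= 1.0:
                    break
            if sum(v[i] * n[i] for i in range(3)) < 0:
                v = [-x for x in v]          # flip into the hemisphere around the normal
            length = math.sqrt(sum(x * x for x in v))
            return [x / length for x in v]

        def render_row(y):
            eye = [0.0, 1.0, 0.0]            # pinhole camera looking down +z
            row = []
            for x in range(WIDTH):
                d = [(x / WIDTH - 0.5) * 1.3, 0.5 - y / HEIGHT, 1.0]
                norm = math.sqrt(sum(c * c for c in d))
                d = [c / norm for c in d]
                hit = trace(eye, d)
                if hit is None:
                    row.append(1.0)          # background: fully open sky
                    continue
                t, n = hit
                p = [eye[i] + t * d[i] + 1e-3 * n[i] for i in range(3)]   # avoid self-hits
                open_rays = sum(1 for _ in range(SAMPLES)
                                if trace(p, random_hemisphere(n)) is None)
                row.append(open_rays / SAMPLES)   # 1.0 = fully unoccluded
            return row

        if __name__ == "__main__":
            with Pool() as pool:             # one worker per available core by default
                image = pool.map(render_row, range(HEIGHT))
            print("mean openness:", sum(map(sum, image)) / (WIDTH * HEIGHT))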

  8. Smokinn says:

    This may be more of a server context, but basically what I’d do first is swap work’s server with my desktop and write some monitoring software, so I could simply alt-tab into it and check how the CPU-intensive tasks are going. Since I have so many cores, dedicating one to watching all the others wouldn’t be a big deal.
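    A minimal sketch of that “one core watches the rest” idea might look like the following. It assumes Linux (for os.sched_setaffinity) and the third-party psutil library, plus an arbitrary 90% “hot core” threshold; all of these are choices for the example rather than anything from the comment:

        # Hypothetical sketch: pin this monitor process to the last core, then poll
        # the utilization of every other core once a second.
        import os
        import psutil   # third-party package: pip install psutil

        def monitor(poll_seconds=1.0, hot_threshold=90.0):
            last_core = os.cpu_count() - 1
            os.sched_setaffinity(0, {last_core})   # Linux-only: reserve one core for watching
            while True:
                loads = psutil.cpu_percent(interval=poll_seconds, percpu=True)
                hot = [(core, pct) for core, pct in enumerate(loads)
                       if core != last_core and pct >= hot_threshold]
                if hot:
                    print("busy cores:", hot)

        if __name__ == "__main__":
            monitor()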

  9. With all that processor power, one could have the computer perform in advance all the time-consuming tasks the user has performed in the past, so that should the user wish to perform them again, they have already been done.
    My master’s thesis was about a bus for a multiprocessor system. After analysing various ways of organising communication between the processors, usually with some sort of hand-shaking, it seemed best (and I believe this may have subsequently become a standard) to shuffle memory around between processors whether it was needed or not, in anticipation of the need.
    Are multiprocessors themselves used in an “anticipatory” way yet?
    If one uses multiple processors in a conventional way, waiting for a task and then attempting to perform that task in parallel, it becomes difficult to split the task into parallelisable sub-tasks. This approach sees time as a single line and then attempts to make that line shorter by parallelising it. But there is often a critical path, so the advantages of parallelisation are low.
    If one instead sees the timeline as a tree (as is popular among philosophers of time these days), and uses multiprocessing to process in advance as many branches of the tree as possible before any of them has been chosen, then the problem shifts from ‘how to parallelise a single task’ to ‘how to predict which tasks may be selected.’ The latter is perhaps more tractable to artificial intelligence.
    I think that perhaps human brains are always doing this sort of anticipatory processing. It is not that we are very fast at reacting to things, or very fast at solving problems, but that we have already solved the problems, in parallel, before we are called upon to solve them.
    If you use this idea, I would love it if you quoted me.
    Tim Takemoto
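    A minimal sketch of that anticipatory style: speculatively compute every branch the user might choose next on spare cores, then return whichever result is actually requested. The candidate task list, the stand-in workload, and the use of a process pool are illustrative assumptions only:

        # Hypothetical sketch: launch all plausible next tasks before the user decides,
        # so the chosen one is (usually) already finished when it is asked for.
        from concurrent.futures import ProcessPoolExecutor

        def expensive_task(name):
            # Stand-in for any slow, pre-computable piece of work.
            return name, sum(i * i for i in range(2_000_000))

        CANDIDATE_BRANCHES = ["open report", "render preview", "recompute totals"]

        def speculate_then_choose(chosen):
            with ProcessPoolExecutor() as pool:
                futures = {name: pool.submit(expensive_task, name)
                           for name in CANDIDATE_BRANCHES}
                # ... time passes; the user finally picks one branch ...
                return futures[chosen].result()    # typically already computed

        if __name__ == "__main__":
            print(speculate_then_choose("render preview"))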

  10. Leslie Smith says:

    The human brain is indeed extremely parallel, and this applies to virtually all animal (not just mammalian) brains. Much of the ongoing processing is “greedy”: the sensory processing that you are conscious of is just the tip of an iceberg of features being continuously computed, whether actually required or not. (And I’d note here that artificial sensory processing (vision, audition, somatosensory processing, …) is one huge unsolved problem, and one worthy of more research.) Cheap parallel processing means that when data is urgently needed, it’s already there – an important factor for fast reactions (= survival, in an animal context). What multicore machines mean is that we can take the same attitude to sensory processing in artificial systems. (The difficulty then becomes one of deciding which elements to actually utilise for the specific tasks at hand – but that’s another story for another blog.)

  11. And it has multiple cores for different sensory tasks, as was predicted by Immanuel Kant – the multimodality of the open mind. As for the Matrix – a neural interactive simulation – it works in *exactly* the same way for the transformation between the two worlds (real and imaginary in that movie), or, in reality, between two languages.

  12. Tech Talk Terascale: TG Daily chats with Intel CTO Justin Rattner
    “And I think in the previous era, the rule-based IA era, that stuff just proved to be impossible because nobody could write enough rules. But, machine learning gives you the power to train by experience. Google had this recent success on language translation, and that’s a perfect example. And the amazing thing was it had no idea it was translating languages! It was just looking at the patterns.”
    Right on. And these ‘unique’ patterns can be assembled into a pretranslation. The database containing them is called a “Translation Matrix”. That’s what I’ve been doing for five years already: example-based machine translation at the terminology level.