The title may sound rather provocative, but PC Perspective seems to think that this is a definite possibility. But is it…? I'd like to explore the current state of the art in real-time ray-tracing, based on what was shown at last month's Intel Developer Forum, where ray-tracing expert Daniel Pohl showed off his port of the Quake IV video game to Intel's real-time ray-tracer. I'll also explore some future applications of ray-tracing that may make it a very compelling alternative to today's raster pipeline. For reference, here is the PC Perspective article.

For those who don't already know, here's a short description of how ray-tracing differs from rasterization. Imagine a scene in 3D. All the objects are made from geometric structures whose basic building block is the triangle. By stringing together vast chains of triangles, you can build spheres, cylinders, blocks, and just about any other structure, and with the tools available to game artists today, you can use triangles to build very detailed objects, including people. In the raster pipeline, these triangles go through a number of steps in which each triangle, one at a time, is analyzed, plotted, colored, lit, textured, and painted on the screen. The end result is a fully realized 3D scene, and today some very convincing special effects can be added through the use of "shaders", which are special programs written to change the way the render pipeline draws particular pieces of the scene. Rasterized video games are everywhere, and almost all of them offload some of the computational work onto Graphics Processing Units, or GPUs.

Ray-tracing, on the other hand, models a scene in terms of the rays of light that pass through each pixel into the eye of the viewer, rather than triangle by triangle. The scene still contains many triangles, but this "geometry" is abstracted into data structures that resemble trees: you travel along the trunk, onto smaller and smaller branches, until finally arriving at the "leaves", so the overall complexity of the scene is broken down into simpler and simpler pieces. Because each ray only visits the branches it actually passes through, most of the scene can be skipped entirely, which makes the rendering mechanism very efficient.

Consider, for example, the performance that Daniel Pohl was able to get in his Quake IV port to the Intel ray-tracing engine. While some hard-core gamers may consider Quake IV to be a bit dated by today's standards (realizing, of course, that game development happens at near the speed of light), it was still considered the state of the art in video games a couple of years ago, and required the fastest video cards on the market to render. However, Daniel's Quake IV demonstration performed no rendering on the GPU; the video card was used only to send the finished image to the monitor. This is because Daniel's demo system had eight x86 cores, a configuration that is destined to become mainstream in a few years. And because the ray-tracing algorithm scales so well with CPU cores, it doesn't need the GPU's assistance to reach playable performance. If you refer to the PC Perspective article, you will see that Daniel's game reached almost 100 frames per second at 1024×1024 resolution. Note that as the resolution increases, the computation will spend more time tracing light rays for those additional pixels, and the frame rate will go down.
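To make that description concrete, here is a minimal sketch of the per-pixel loop at the heart of any ray-tracer. To be clear, this is my own toy illustration, not Intel's engine: the scene is a single hard-coded sphere, and the flat scan over the scene stands in for the trunk-to-leaves tree traversal described above.

```cpp
// Toy ray-caster: one ray per pixel, nearest-hit test, ASCII output.
// Illustrative only -- a real tracer replaces the linear scene scan
// with a tree (BVH/kd-tree) traversal so each ray touches only
// nearby geometry.
#include <cmath>
#include <cstdio>
#include <vector>

struct Vec3 { double x, y, z; };
Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

struct Sphere { Vec3 center; double radius; };

// Distance along the ray to the nearest hit, or -1 on a miss.
// Assumes dir is normalized.
double intersect(Vec3 origin, Vec3 dir, const Sphere& s) {
    Vec3 oc = origin - s.center;
    double b = dot(oc, dir);
    double disc = b * b - (dot(oc, oc) - s.radius * s.radius);
    if (disc < 0) return -1;
    double t = -b - std::sqrt(disc);
    return t > 0 ? t : -1;
}

int main() {
    const int width = 64, height = 64;
    std::vector<Sphere> scene = {{{0, 0, -3}, 1.0}};
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            // Fire a primary ray from the eye through this pixel --
            // the core idea of ray-tracing.
            double u = (x + 0.5) / width - 0.5;
            double v = (y + 0.5) / height - 0.5;
            double len = std::sqrt(u * u + v * v + 1);
            Vec3 dir = {u / len, v / len, -1 / len};
            double nearest = -1;
            for (const Sphere& s : scene)  // stand-in for tree traversal
                if (double t = intersect({0, 0, 0}, dir, s);
                    t > 0 && (nearest < 0 || t < nearest))
                    nearest = t;
            std::putchar(nearest > 0 ? '#' : '.');
        }
        std::putchar('\n');
    }
}
```

Running it prints a crude ASCII image of the sphere. The point is simply that every pixel's color comes from following a ray into the scene, which is also why the workload grows directly with pixel count and divides so cleanly across cores.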
Back to the numbers: we can extrapolate performance at 1080p high-definition resolution (1920×1080 widescreen) by noting that it has about twice as many pixels (1920×1080 ≈ 2.07 million versus 1024×1024 ≈ 1.05 million). With twice as many pixels, the frame rate would be nearly cut in half. Even so, ~50 frames per second is, for practical purposes, flawless animation. To think that a PC with 8 cores can run a game like Quake IV without the use of a GPU, at high-definition resolution and fluid frame rates, is impressive to say the least. Let me point out that some elements of the game, such as sound and some special effects, were not running due to the experimental nature of the game engine, but all of the geometry of the game level loaded, and Daniel was able to traverse the entire level.

At this point, some people will wonder, "Can I run it with only 4 cores on my PC?" Sure, but at half the speed. In other words, you can get the same fluid 50 frames per second with only 4 cores, but at the previous 1024×1024 resolution. In a few years' time, 4-core systems may be considered a quaint, low-end alternative. Either way, you get the idea: ray-tracing is a workload that gets near-perfect scaling the more cores you add. In fact, we have simulated with up to 16 cores, and we've already seen more than 15x scaling. With future platforms and additional optimizations, this may scale even better.

So, besides being able to play games without a GPU, you might be wondering what else ray-tracing can do. The answer is that ray-tracing enables certain kinds of special effects that are too complex, too time-consuming, or too computationally expensive to implement in a rasterized environment. This is because ray-tracing physically models the correct way that light rays travel through a scene, while raster environments simply shade triangles based on approximations derived from vertex, pixel, and texture properties. To be sure, rasterization has come a long way, and with today's shaders, programmers are enabling many new and convincing special effects. The same level of industry investment has not yet gone into ray-tracing, but because it models light the physically correct way, I expect that it will one day exceed the quality of special effects being done on today's GPUs. Take a look, for example, at what Hollywood uses when making special effects for films and in-game movies. They already use ray-tracing engines, but it takes hours to fully compute each individual frame. These offline computations are very exact and time-consuming, but breakthroughs in the way Intel has designed our ray-tracing engine may allow some of these special effects to happen in real time, while playing a video game.

Effects such as reflections, refractions, and shadows render much better in the ray-tracing pipeline. For example, it's possible to make shadows more realistic by "softening" them as the shadowed surface moves farther away from the object casting the shadow. In a ray-tracing environment, the shadow is correctly modeled, but in a raster environment, the shader program needs to take on a lot of additional complexity. If not done correctly, as in many current games, this can lead to "artifacts", which is another way of describing pieces of an image that don't look right. For instance, many games cast shadows with jagged or blocky borders. In a ray-tracing environment, shadows are rendered physically correctly every time, without complex shader programming.

Another example is reflections, which can improve in quality by taking on "glossy" elements. If you looked at yourself in a mirror, you would see a perfect reflection. But if you looked at yourself in polished wood flooring, you would see a very glossy reflection. The latter is much more computationally expensive, and nearly impossible to duplicate in a raster pipeline without blurring the reflection map of the object (note: a reflection map is a non-physically-based approach to doing reflections in rasterization). In a ray-tracer, it's just a matter of casting more light rays and seeing how they diffuse in a physically modeled system. Those may sound like equivalent approaches, but I believe our own human experience can tell the difference between an object that has been "blurred" and an object that has been physically modeled to have a realistic glossy appearance. Either way, when we have these effects ready to show, you (the viewer) can be the judge! Both techniques, soft shadows and glossy reflections, are sketched in the code below.
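Here is a hedged sketch of how a ray-tracer gets both effects through the same trick, commonly called distribution ray tracing: cast many slightly different rays and average the results. Every name in it (`occluded`, `traceRay`, the stub bodies, the parameters) is a placeholder of my own invention, not an API from Intel's engine.

```cpp
// Sketch of distribution ray tracing: soft shadows and glossy
// reflections both come from averaging many jittered rays.
#include <cstdio>
#include <cstdlib>

struct Vec3 { double x, y, z; };
Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 operator*(Vec3 a, double s) { return {a.x * s, a.y * s, a.z * s}; }

double rnd() { return std::rand() / (RAND_MAX + 1.0); }  // uniform in [0,1)

// Stand-in stubs for the tracer's real queries: a shadow-ray
// visibility test and a full recursive trace of a secondary ray.
bool occluded(Vec3 /*from*/, Vec3 /*to*/) { return false; }
Vec3 traceRay(Vec3 /*origin*/, Vec3 /*dir*/) { return {0.5, 0.5, 0.5}; }

// Soft shadow: rather than one ray to a point light, cast rays to many
// random points on an area light. The fraction that reach the light
// unblocked gives a penumbra that softens naturally with distance.
double softShadow(Vec3 surfacePoint, Vec3 lightCorner,
                  Vec3 lightEdgeU, Vec3 lightEdgeV, int samples) {
    int unblocked = 0;
    for (int i = 0; i < samples; ++i) {
        Vec3 lightPoint = lightCorner + lightEdgeU * rnd() + lightEdgeV * rnd();
        if (!occluded(surfacePoint, lightPoint)) ++unblocked;
    }
    return double(unblocked) / samples;  // 0 = fully shadowed, 1 = fully lit
}

// Glossy reflection: jitter the perfect mirror direction by the surface
// roughness and average what the scattered rays see -- casting more
// rays and watching them diffuse, instead of blurring a reflection map.
Vec3 glossyReflection(Vec3 hitPoint, Vec3 mirrorDir,
                      double roughness, int samples) {
    Vec3 sum = {0, 0, 0};
    for (int i = 0; i < samples; ++i) {
        Vec3 jitter = {(rnd() - 0.5) * roughness,
                       (rnd() - 0.5) * roughness,
                       (rnd() - 0.5) * roughness};
        // Jittered direction left unnormalized; fine for a sketch.
        sum = sum + traceRay(hitPoint, mirrorDir + jitter);
    }
    return sum * (1.0 / samples);
}

int main() {
    Vec3 p{0, 0, 0};
    double s = softShadow(p, {2, 4, 2}, {1, 0, 0}, {0, 0, 1}, 64);
    Vec3 r = glossyReflection(p, {0, 1, 0}, 0.2, 64);
    std::printf("shadow %.2f  gloss %.2f %.2f %.2f\n", s, r.x, r.y, r.z);
}
```

Notice the cost model this implies: each effect simply multiplies the number of rays, which is exactly the kind of embarrassingly parallel work that scales across CPU cores.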
Research is going on today that will enable all of these special effects in real time. And in a few years, CPUs may even have the core counts and capabilities to enable effects such as global illumination, long sought as the Holy Grail of real-time rendering. We think we can make it happen soon, and anyone who is interested should pay close attention to future Intel Developer Forums, where we intend to keep the public aware of our progress. Also, please stay tuned for future discussions on this topic, where I will discuss additional reasons why I think ray-tracing will eventually become a viable, and even preferable, alternative to the existing raster pipeline.

[Update (10/15/07): I wanted to add an author's note about the visual aid included with this article. Some people have commented that the above image under-represents what the raster pipeline is capable of delivering in terms of quality. As it happens, this is a true statement. As I also mentioned above, rasterization has come a long way, and today's shader programs are very capable of delivering great special effects, assuming that a programmer takes the time to do them right. As it turns out, the reasons why someone would prefer ray-tracing over rasterization are more complex than simply comparing like images, and that's something I am hoping to address in future blogs. While I would like to offer a head-to-head bake-off, I should point out that it is difficult to make such a comparison today. Given the relative immaturity of our ray-tracing engine compared to the great progress in today's rasterizers, it would hardly leave our research teams with a level playing field. I will be the first to admit that we have a lot of work to do before our real-time ray-tracing research becomes an obvious choice for professional artists and game designers. However, since it is research, and this is the Research@Intel blog, I am proud to discuss the progress we have made so far. I do not think that it will take ray-tracing 10+ years to be a viable alternative; in fact, I think it is much closer than people realize. And when we are ready to show true head-to-head comparisons, we will make sure the industry knows about it. So stay tuned for future Intel Developer Forums, and in the meantime, please check back with our Research@Intel blogs, as I will address more of your concerns in future ray-tracing articles.]

[Ed. note (10/19/07): The discussion continues in Jeffrey's next blog.]