Real Time Ray-Tracing: The End of Rasterization?

The title seems rather provocative, but PC Perspective seems to think that this is a definite possibility. But is it…? I’d like to explore the current state of the art in real time ray-tracing, based on what was shown at last month’s Intel Developer Forum, where ray-tracing expert Daniel Pohl showed off his port of the Quake IV video game to Intel’s real time ray-tracer. I’ll also explore some future applications of ray-tracing that may make it a very compelling alternative to today’s raster pipeline.

For reference, here is the PC Perspective Article.

For those who don’t already know, here’s a short description of how ray-tracing differs from rasterization:

Imagine a scene in 3D. All the objects are made from geometric structures whose basic building block is the triangle. By stringing together vast meshes of triangles, you can build spheres, cylinders, blocks, and just about any other structure. And with the tools available to game artists today, you can use triangles to build very detailed objects, including people.

[Image: side-by-side comparison of a teapot scene rendered with ray tracing and with basic rasterization]

In the raster pipeline, these triangles go through a number of steps in which each triangle – one at a time – is analyzed, plotted, colored, lit, textured, and painted on the screen. The end result is a fully realized 3D scene, and today some very convincing special effects can be added through the use of “shaders”, which are special programs that change the way the render pipeline draws particular pieces of the scene. Today, rasterized video games are everywhere, and almost all of them offload some of the computational work onto Graphics Processing Units, or GPUs.

Ray-tracing, on the other hand, models a scene in terms of the rays of light that travel through each pixel to the eye of the viewer, rather than on the basis of individual triangles. The scene still contains many triangles, but this “geometry” is abstracted into data structures that resemble “trees”: you can travel along the trunk of the tree, onto smaller and smaller branches, until finally arriving at the “leaves”. This allows the overall complexity of the scene to be broken down into simpler and simpler pieces.
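
To make that concrete, here is a minimal sketch of such a tree, assuming a simple bounding-volume hierarchy (the node layout and function names here are illustrative only; Intel’s actual engine uses far more sophisticated acceleration structures). The key idea is that a whole branch is skipped whenever a ray misses its bounding box:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Vec3 { float v[3]; };
static Vec3  sub(Vec3 a, Vec3 b) { return {{a.v[0]-b.v[0], a.v[1]-b.v[1], a.v[2]-b.v[2]}}; }
static float dot(Vec3 a, Vec3 b) { return a.v[0]*b.v[0] + a.v[1]*b.v[1] + a.v[2]*b.v[2]; }
static Vec3  cross(Vec3 a, Vec3 b) {
    return {{a.v[1]*b.v[2]-a.v[2]*b.v[1], a.v[2]*b.v[0]-a.v[0]*b.v[2], a.v[0]*b.v[1]-a.v[1]*b.v[0]}};
}

struct Ray { Vec3 org, dir; };   // sketch assumes non-zero direction components
struct Tri { Vec3 a, b, c; };

// "Slab" test: does the ray pass through an axis-aligned bounding box?
struct AABB {
    Vec3 lo, hi;
    bool hit(const Ray& r) const {
        float tmin = 0.0f, tmax = 1e30f;
        for (int i = 0; i < 3; ++i) {
            float inv = 1.0f / r.dir.v[i];
            float t0 = (lo.v[i] - r.org.v[i]) * inv;
            float t1 = (hi.v[i] - r.org.v[i]) * inv;
            if (t0 > t1) std::swap(t0, t1);
            tmin = std::max(tmin, t0);
            tmax = std::min(tmax, t1);
        }
        return tmin <= tmax;
    }
};

// Moller-Trumbore ray/triangle test: the real work, done only at the "leaves".
static bool hitTri(const Ray& r, const Tri& t) {
    Vec3 e1 = sub(t.b, t.a), e2 = sub(t.c, t.a), p = cross(r.dir, e2);
    float det = dot(e1, p);
    if (std::fabs(det) < 1e-8f) return false;      // ray parallel to triangle
    float inv = 1.0f / det;
    Vec3 s = sub(r.org, t.a);
    float u = dot(s, p) * inv;
    if (u < 0.0f || u > 1.0f) return false;
    Vec3 q = cross(s, e1);
    float w = dot(r.dir, q) * inv;
    if (w < 0.0f || u + w > 1.0f) return false;
    return dot(e2, q) * inv > 1e-4f;               // hit in front of the ray
}

// One node of the tree: inner nodes have children, leaves hold triangles.
struct Node {
    AABB bounds;
    Node *left = nullptr, *right = nullptr;
    std::vector<Tri> tris;                         // non-empty only at a leaf
};

// Travel from the trunk toward the leaves, discarding any branch whose
// bounding box the ray misses.
bool traverse(const Node* n, const Ray& r) {
    if (!n || !n->bounds.hit(r)) return false;     // whole subtree culled
    if (!n->left && !n->right) {                   // leaf: test its triangles
        for (const Tri& t : n->tris)
            if (hitTri(r, t)) return true;
        return false;
    }
    return traverse(n->left, r) || traverse(n->right, r);
}
```

Because most rays miss most boxes, each ray ends up visiting only a tiny fraction of the scene’s triangles.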

This culling makes the rendering mechanism very efficient. Consider, for example, the performance that Daniel Pohl was able to get from his Quake IV port to the Intel ray-tracing engine.

While some hardcore gamers may consider Quake IV a bit dated by today’s standards (realizing, of course, that game development happens at near the speed of light ;-) ), it was still considered the state of the art in video games a couple of years ago, and required the fastest video cards on the market to render.

However, Daniel’s Quake IV demonstration required no rendering work from the GPU; the video card was used only to send the finished image to the monitor. This was possible because Daniel’s demo system had eight x86 cores, a configuration that is destined to become mainstream in a few years. And because the ray-tracing algorithm scales so well with CPU cores, it doesn’t need the assistance of the GPU to reach the same performance.

And if you refer to the PC Perspective article, you will see that Daniel’s game reached almost 100 frames per second at 1024×1024 resolution. Note that as the resolution increases, the computation will spend more time tracing light rays for the additional pixels, and the frame rate will go down. However, we can extrapolate performance at 1080p High Definition resolution (1920×1080 wide screen) by noting that it has about twice as many pixels: roughly 2.07 million versus roughly 1.05 million. With twice as many pixels, the frame rate would be nearly cut in half. Even so, ~50 frames per second is generally considered fluid animation. To think that a PC with 8 cores can run a game like Quake IV without the use of a GPU, at high definition resolution and fluid frame rates, is impressive to say the least.

Let me point out that some elements of the game, such as sound and some special effects, were not running due to the experimental nature of the engine, but all of the level geometry loaded, and Daniel was able to traverse the entire level.

At this point, some people will wonder, “Can I run it with only 4 cores on my PC?”

Sure, but at half the speed. In other words, with only 4 cores you can get the same fluid 50 frames per second at the previous 1024×1024 resolution. In a few years’ time, 4-core systems may be considered a quaint low-end alternative.

Either way, you get the idea. Ray-tracing is a workload that gets near-perfect scaling as you add cores. In fact, we have simulated with up to 16 cores, and we’ve already seen more than 15x scaling. With future platforms and additional optimizations, this may scale even better.
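
Why does it scale so well? Every pixel is an independent ray (or set of rays), so a frame can simply be carved up among cores with almost no communication between them. Here is a minimal sketch of the idea – illustrative only, using C++11 threads for brevity, with tracePixel as a placeholder for the real per-pixel work rather than anything from our engine:

```cpp
#include <atomic>
#include <cstdint>
#include <thread>
#include <vector>

// Placeholder for the real work: cast a ray through pixel (x, y) and shade it.
static uint32_t tracePixel(int x, int y) {
    return static_cast<uint32_t>((x ^ y) & 0xFF);    // dummy pattern
}

// Render one frame with N workers. The caller sizes fb to width * height.
void renderFrame(std::vector<uint32_t>& fb, int width, int height, int numCores) {
    std::atomic<int> nextRow{0};                     // dynamic load balancing
    auto worker = [&]() {
        for (int y; (y = nextRow.fetch_add(1)) < height; )
            for (int x = 0; x < width; ++x)
                fb[y * width + x] = tracePixel(x, y);  // each pixel independent
    };
    std::vector<std::thread> pool;
    for (int i = 0; i < numCores; ++i)
        pool.emplace_back(worker);
    for (std::thread& t : pool)
        t.join();                                    // frame done when all rows are
}
```

Because no row depends on any other, each added core contributes nearly its full share of throughput, which matches the better-than-15x scaling on 16 cores mentioned above.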

So, besides being able to play games without a GPU, you might be wondering what else ray-tracing can do.

The answer is that ray-tracing enables certain kinds of special effects that are too complex, too time consuming, or too computationally expensive to implement in a rasterized environment. This is because ray-tracing physically models the way light rays actually travel through a scene, while raster environments simply shade triangles based on approximations derived from vertex, pixel, and texture properties.
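
The structure of a classic Whitted-style ray tracer shows what “physically modeling” means in practice: the color of a surface point is computed by recursively following the light paths that arrive there. The sketch below is schematic, with stub types and hooks standing in for a real renderer; none of these names come from Intel’s engine:

```cpp
struct Color { float r, g, b; };
static Color operator+(Color a, Color b) { return {a.r + b.r, a.g + b.g, a.b + b.b}; }
static Color operator*(float s, Color c) { return {s * c.r, s * c.g, s * c.b}; }

struct Ray { float org[3], dir[3]; };
struct Material { bool reflective = false, transparent = false; float kr = 0, kt = 0; };
struct Hit { Material material; };

static const int   MAX_DEPTH  = 4;
static const Color BACKGROUND = {0, 0, 0};

// Stub hooks so the sketch is self-contained; a real tracer fills these in.
static bool  intersectScene(const Ray&, Hit&)     { return false; }
static Color directLight(const Hit&)              { return {0, 0, 0}; } // shadow rays
static Ray   reflectRay(const Ray& r, const Hit&) { return r; }
static Ray   refractRay(const Ray& r, const Hit&) { return r; }

// The color at a point is its direct lighting plus whatever arrives along
// the physically reflected and refracted paths, found by tracing them.
static Color shade(const Ray& r, int depth) {
    Hit h;
    if (depth > MAX_DEPTH || !intersectScene(r, h)) return BACKGROUND;
    Color c = directLight(h);
    if (h.material.reflective)
        c = c + h.material.kr * shade(reflectRay(r, h), depth + 1);
    if (h.material.transparent)
        c = c + h.material.kt * shade(refractRay(r, h), depth + 1);
    return c;
}
```

A rasterizer, by contrast, never follows these paths; each effect has to be approximated separately in shader code.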

To be sure, though, rasterization has come a long way, and with today’s shaders, programmers are enabling many new and convincing special effects. The same level of industry investment has not yet gone into ray-tracing, but because ray-tracing models light in the physically correct way, I expect that it will one day exceed the quality of the special effects being done on today’s GPUs.

Take a look, for example, at what Hollywood uses when making special effects for films and in-game movies. They already use ray-tracing engines, but it takes hours to fully compute each individual frame. These offline computations are very exact and time consuming, but breakthroughs in the way we have designed Intel’s ray-tracing engine may allow some of these special effects to happen in real time, while playing a video game.

Effects such as reflections, refractions, and shadows render much better in the ray-tracing pipeline. For example, it’s possible to make shadows more realistic by “softening” them as the shadowed surface moves farther away from the object casting the shadow. In a ray-tracing environment, the shadow is correctly modeled, but in a raster environment, the shader program needs to take on a lot of additional complexity. If not done correctly, as in many current games, this can lead to “artifacts”, which is another way of describing pieces of an image that don’t look right. For instance, many games cast shadows with jagged or blocky borders. In a ray-tracing environment, however, shadows are rendered physically correctly every time, without complex shader programming.
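
Soft shadows fall out of the ray-tracing model almost for free: instead of casting one shadow ray toward a point light, you cast several toward random points on an area light and average the results. The sketch below is illustrative only; the occluded hook is a stub standing in for a real shadow-ray query, not an API of our engine:

```cpp
#include <cstdlib>

struct Vec3f { float x, y, z; };
static Vec3f add(Vec3f a, Vec3f b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3f mul(Vec3f a, float s) { return {a.x * s, a.y * s, a.z * s}; }

// Assumed renderer hook (stub so the sketch compiles): trace a shadow ray
// from p toward q and report whether any geometry blocks it.
static bool occluded(Vec3f p, Vec3f q) { (void)p; (void)q; return false; }

static float frand() { return (float)std::rand() / (float)RAND_MAX; }

// Fraction of a rectangular area light visible from p: 1 = fully lit,
// 0 = umbra, and anything in between is the soft penumbra, with no
// special-case shader code at all.
float softShadow(Vec3f p, Vec3f lightCorner, Vec3f edgeU, Vec3f edgeV, int samples) {
    int visible = 0;
    for (int i = 0; i < samples; ++i) {
        // Pick a random point on the light's surface and test visibility.
        Vec3f q = add(lightCorner, add(mul(edgeU, frand()), mul(edgeV, frand())));
        if (!occluded(p, q)) ++visible;
    }
    return (float)visible / (float)samples;
}
```

The penumbra widens naturally with distance from the occluder, because distant surfaces see more of the light around the object’s silhouette.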

Another example is reflections, which can improve in quality by taking on “glossy” elements. If you looked at yourself in a mirror, you would see a perfect reflection. But if you looked at yourself in a polished wood floor, you would see a very glossy reflection. The latter is much more computationally expensive, and nearly impossible to duplicate in a raster pipeline without blurring the reflection map of the object (note: a reflection map is a non-physically-based approach to doing reflections in rasterization). In a ray-tracer, it’s just a matter of casting more light rays and seeing how they diffuse in a physically accurately modeled system. These may sound like equivalent approaches, but I believe our own human experience can tell the difference between an object that has been “blurred” and an object that has been physically modeled to have a realistic glossy appearance. Either way, when we have these effects ready to show, then you (the viewer) can be the judge! :-)
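
Here is what “casting more light rays” can look like for a glossy surface: jitter the perfect mirror direction inside a small cone and average what the jittered rays see. Again, this is a hedged sketch; traceRay is a stub for the renderer’s real ray cast, and a production implementation would also reject directions that dip below the surface:

```cpp
#include <cmath>
#include <cstdlib>

struct V3 { float x, y, z; };
static V3 add3(V3 a, V3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static V3 sub3(V3 a, V3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static V3 scale3(V3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
static float dot3(V3 a, V3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static V3 norm3(V3 a) { float l = std::sqrt(dot3(a, a)); return scale3(a, 1.0f / l); }

static float srand1() { return 2.0f * std::rand() / RAND_MAX - 1.0f; }  // [-1, 1)

// Assumed renderer hook (stub so the sketch compiles): color seen along a ray.
static V3 traceRay(V3 origin, V3 dir) { (void)origin; (void)dir; return {0, 0, 0}; }

// Mirror reflection of direction d about unit normal n.
static V3 reflect(V3 d, V3 n) { return sub3(d, scale3(n, 2.0f * dot3(d, n))); }

// Glossy reflection: average several rays jittered around the mirror
// direction. roughness = 0 is a perfect mirror; larger values give the
// blurred sheen of a polished wood floor.
V3 glossyReflect(V3 point, V3 incoming, V3 normal, float roughness, int samples) {
    V3 mirror = reflect(incoming, normal);
    V3 sum = {0, 0, 0};
    for (int i = 0; i < samples; ++i) {
        V3 jitter = {srand1() * roughness, srand1() * roughness, srand1() * roughness};
        V3 dir = norm3(add3(mirror, jitter));
        sum = add3(sum, traceRay(point, dir));   // each sample is physically traced
    }
    return scale3(sum, 1.0f / samples);
}
```

Unlike blurring a reflection map, every sample here is a real ray traced through the scene, so the gloss responds correctly to nearby geometry.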

Research is going on today that will enable all of these special effects in real time. And in a few years, CPUs may even have the core counts and capabilities to enable effects such as Global Illumination – long sought as the Holy Grail of real time rendering. We think we can make it happen soon, and anyone who is interested should pay close attention to future Intel Developer Forums, where we intend to keep the public aware of our progress.

Also, please stay tuned for future discussions on this topic, where I will discuss additional reasons why I think ray-tracing will eventually become a viable – and even preferable – alternative to the existing raster pipeline.

[Update (10/15/07): I wanted to add an author's note about the visual aid included with this article. Some people have commented that the above image under-represents what the raster pipeline is capable of delivering in terms of quality. As it happens, this is a true statement. As I also mentioned above, rasterization has come a long way, and today's shader programs are very capable of delivering great special effects, assuming that a programmer takes the time to do them right. As it turns out, the reasons why someone would prefer ray-tracing over rasterization are more complex than simply comparing like images, and that's something I am hoping to address in future blogs. While I would like to offer a head-to-head bake-off, I should point out that it is difficult to make such a comparison today. Given the relative immaturity of our ray-tracing engine compared with the great progress in today's rasterizers, it would hardly leave our research teams with a level playing field. I will be the first to admit that we have a lot of work to do before our real time ray-tracing research becomes an obvious choice for professional artists and game designers. However, since it is research, and this is the Research@Intel blog, I am proud to discuss the progress we have made so far. I do not think that it will take ray-tracing 10+ years to be a viable alternative; in fact, I think it is much closer than people realize. And when we are ready to show true head-to-head comparisons, we will make sure the industry knows about it. So stay tuned for future Intel Developer Forums. In the meantime, please check back with our Research@Intel blogs, as I will address more of your concerns in future ray-tracing articles.]

[Ed. note (10/19/07): The discussion continues in Jeffrey's next blog.]

44 Responses to Real Time Ray-Tracing: The End of Rasterization?

  1. Christopher P says:

    The image examples are very telling. The ray-traced image is photorealistic. I can only imagine the impact this will have on the entertainment and gaming industry. Very cool!

  2. Zee Naushad says:

    Hats off to CPU makers. Just a couple of years ago Dual Core was a big thing. Now we’re talking quad and octa-core :). I just can’t wait to see what the future holds for all of us. It’s good to see that Intel is firmly in the lead at this present moment. Hopefully this breakneck pace will keep on going and we will see more and more.
    By the way, I’m looking forward to seeing how the successor of the Core CPU will perform. I really want it to be the same kind of leap as from NetBurst to the Core architecture.

  3. Tom says:

    Interesting article, but the picture is really not an honest example of current state-of-the-art rasterization. No reflections? No shadows? This is how it was done at least 4 years ago. Contemporary games are much better than that!

  4. Magnus Wendt says:

    On the contrary…
    The rasterized image doesn’t even use the most basic of image-enhancing techniques. The lack of shadows makes the objects appear completely disconnected from the table, no reflection maps are used, etc. Real-time raytracing is cool, but these images do not represent a fair comparison…

  5. Ahti says:

    Do not get carried away, guys. The pictures above are a blatant attempt at misleading the reader into thinking that ray tracing is noticeably better than rasterization.
    The video cards of today have a nice thing called shaders, which specifically deal with realistic light and reflections. The newest cards can output 400 – 500 GFLOPS of rendering power, while multicore CPUs (let’s say 8 Core 2 cores @ 3.0GHz) are yet to surpass the 100 GFLOPS mark.
    This is what shading looked like in the year 2002: http://www.shadermark.de/web/pic_9.jpg
    I’m not saying that raytracing is not the ticket to photorealistic rendering, I’m just saying that it’s not the cheapest, the fastest, or the only ticket ;)

  6. Eric Bron says:

    It looks like “ray tracing” is too restrictive a term for what you are about; for example, you talk about global illumination at the end of your post, when in fact diffuse inter-reflections are the exact opposite of classical ray tracing, which deals merely with perfect reflection & refraction.
    Anyway, I’m a fervent advocate of pure software realtime renderers; have a look here
    http://www.anandtech.com/IT/showdoc.aspx?i=3091 for an example of *today’s commercial application* where multicore CPUs allow realtime rendering of very complex models.

  7. Thinus says:

    Mmm… Let’s start with the comparison picture.
    What engine did they use for the rasterization?
    The effect on the kettle is called environment mapping and has been possible since the GeForce 2. That is back in 1999.
    idTech 4 (the Doom code on which Quake 4 is based) is 3 years old, and 8-core CPUs will only hit in 2 years’ time.
    Sorry, but I am not convinced.
    Considering the amount of detail that today’s engines push, I don’t see this equation working out for Intel. Crysis probably pushes 10x the amount of detail that Quake 4 did 2 years ago. By the time 8-core CPUs are out, games will push another 10x factor of detail.
    I cannot see Intel pushing enough cores to keep up with the game engines.
    Unless they can clean up the code or develop a core that is built as a pure ray-tracer, like today’s rasterization engines.
    Rasterization, I am afraid, will be with us for a while.

  8. Marco Salvi says:

    Quite an interesting topic and a well-written article, though accompanied by somewhat misleading pictures (it’s not as if we can’t properly render reflections and shadows with rasterization).
    Back to the main topic: the end of rasterization is imo not going to happen very soon (if ever), so don’t hold your breath for it.
    It’s true that ray tracing enables us to render ‘correct’ reflections, refractions, soft shadows, ambient occlusion, etc., but this correctness comes at a huge computational cost. Moreover, rasterization is already blazing fast now, and by the time realtime ray tracing is a viable option for your average consumer, rasterization will be even faster (it’s not as if those guys who design GPUs will wait for ray tracing to catch up!).
    At this time I’m not even sure if computing a first hit with ray tracing makes any sense at all, as rasterization is going to be faster at that any time of the day.
    I can certainly foresee a future where we will be able to do real time rendering using rasterization and ray tracing, with the latter used where it makes more sense (incoherent rays…).
    I recently had a conversation with a very smart chap who works on ray tracing research: he reckoned we need to throw at least 10 rays per pixel to render decent quality images (though I think we need many more rays than that…), which means we need to push over half a billion rays per second for a 1080p image.
    And this is just for intersection computations; what about shading those pixels as well? RT also requires maintaining/updating spatial subdivision structures for non-static objects, which comes at an additional cost (generally not required by rasterization) too.
    Anyway, this is a really interesting topic, and since I’m not a ray tracing expert I can’t wait to learn in which contexts ray tracing may emerge as a preferable alternative to rasterization. Keep up the good work, hope to read a new article about this subject soon :)

  9. Thomas says:

    While this development is interesting, the photo quality is noticeably bad on the raster image. It looks like The Sims 2 with horrible image maps. Perhaps you could do a better comparison? Include full-detail texture maps on the rasterized image and apply those complex things you left out. Visibly, the ray-tracing will be better quality (in certain areas), but also note that the rasterized image will have full effects. How would this scale with the Intel TeraScale (the 80-core processor)?
    Also, how hard is it to write for a ray-tracing game engine, versus a rasterizing engine? Just because ray-tracing is new doesn’t mean it’ll take hold in the market. It could end up being that our GPUs are used to accelerate the ray-tracing – 100 GFLOPS vs. 500+ GFLOPS should improve frame rates.

  10. Holger Gruen says:

    I saw Daniel’s talk at GCDC in Leipzig. He always compares very old rasterized game images with ray-traced images and completely ignores advances in rasterization algorithms. I wonder if he knows about the randomized z-buffer, for example. He fails to answer the question about dynamic scenes and how that scales to, e.g., a really big number of characters or particles. Also, reflections from curved objects destroy your ray and memory coherence badly, and you will quickly become memory bound. Ray tracing is already used inside shaders in rasterized graphics. One way to deal with dynamic scenes is to rasterize the polygons into 3D grids. The future is clearly some hybrid solution. Just consider how a recent paper from NVIDIA about stencil-based subsample routing could be used to accelerate ray tracing on GPUs to form hybrid solutions.

  11. Bruce Walter says:

    Wow, I believe that I actually generated those example images and shared them with Intel during a visit sometime around 2003. Don’t get hung up on whether a particular image could have been generated by shaders or not. Photography did not replace painting for creating portraits because it was impossible to paint realistic portraits; rather, the skill and expertise needed to create realistic portraits via painting was much higher. Shaders and rasterization can simulate any effect for any particular image, given sufficient time and skill on the part of the creator/artist. The difference is that ray-based methods can simulate natural effects far more easily and robustly, without requiring any great skill or effort from the creator/artist. That is why ray-tracing will ultimately supplant rasterization for most realistic image generation, though there will always be applications for more stylized representations such as you get with rasterization.

  12. anon says:

    All the critics appear to assume that rasterizing techniques will continue to get faster. That may not be so, or they may not improve as much as ray-tracing techniques.
    If less research has been done into ray-tracing techniques, it’s reasonable to suppose ray-tracing will get a greater speed-up, as more improvements seem yet to be discovered.
    Also, there is a limit where rasterizing’s complexity ceases to allow further optimization. I suspect rasterizing will hit that limit before ray-tracing does.
    Finally, if rasterizing doesn’t scale, ray-tracing may well overtake it in practical terms if large numbers of cores become the norm.

  13. Dr. Kenneth Noisewater says:

    Even if raytracing is the wave of the future, who’s to say that GPU makers won’t just introduce products to speed that up too?

  14. Sean Koehl says:

    I’ll take some credit/blame for the pic here. I provided it for Jeffrey during editing (it’s from our recent IDF class on raytracing).
    Granted, the rasterization image was created using very basic techniques. The point we’re trying to make here, though, is that adding the lighting effects via rasterization takes multiple passes and clever techniques that require a non-trivial number of man-hours to develop and produce. Ray tracing can produce this automatically and accurately. This will allow more resources to be shifted to scene designers and artists, and give them more freedom to represent what they want to show. The crossover point where this will become more efficient to do on a CPU than a GPU is not as far away as you might think.

  15. Ben says:

    lol. They should compare what is comparable. To me, this is more like propaganda/marketing stuff, and this kind of practice should not be allowed. Let me explain. The rasterized image is not making use of the advanced features available on graphics cards.
    Soft shadows have not been generated on the rasterized image, but using methods like shadow mapping would work just fine here. Cube mapping can also produce biased reflections, and there is ongoing research that tries to unbias this.

  16. Marco Salvi says:

    While it’s true that RT can produce reflections and shadows ‘naturally’, it’s certainly not true that getting reflections and shadows with rasterization takes amazingly clever techniques or a lot of work.
    Games have been using shadow maps and cube maps to render reflections for many years now; it’s not really ground-breaking technology/rocket science.
    The crossover point won’t come until we can throw some thousands of millions of rays per second. Are you telling me that that day is coming soon? And as I stated before, what about also shading these nice images? GPUs devote the vast majority of their area to shading, not to rasterization, while it seems to me that a future hardware-based ray tracer would need to devote a lot more area to speeding up RT than GPUs do to speeding up rasterization.
    This is not a ‘war’ between CPU and GPU architectures, as both are (at least currently) designed to solve different problems; it’s more about using the right tool for the right task.
    Why would we want to throw rasterization out of the window? Does it make any sense?

  17. Thomas says:

    Another question – how hard is it to create a ray-tracing scene, versus using a creator program to create a rasterized image? I’m not a programmer, but it seems that once you implement the correct code and set it up to execute properly, it’ll run smoothly. Also, if this isn’t -that- far off, how will this scale from your 8-core system to my 2-core E2140 system? I’m not buying a new motherboard, processor, and memory to take advantage of the small possibility that a handful of games in 3 years will be using ray-tracing.
    How hard would it be to write hardware acceleration? I could see an RPU (ray-tracing processing unit) taking over the position of a GPU, but I would think that it would take at least a decade for this change to really take place. I’d expect a 50/50 split between ray and raster games in about a decade.
    Final thing: do you really think this will take off? We’ve already tried hardware-accelerated physics (Ageia PhysX), but that’s still not caught on. How is nVidia (your partner in this particular area of the industry) going to respond to this? Their SLI 2.0 still hasn’t been launched, and the extra PCI Express slot on SLI motherboards (nForce 650i, 680i) still isn’t being used for an additional graphics card (to accelerate physics). Same deal with ATI.
    Thanks,
    Thomas

  18. Shedletsky says:

    I want to see a video of Quake IV being raytraced in realtime, or it didn’t happen. Seriously, you can’t just make claims like that.
    [Ed: our current agreement with id Software prevents us from releasing a video of the demonstration, but you can see still frames in the PC Perspective article, and also on Daniel's website: http://www.q4rt.de/ ]

  19. Paul says:

    I certainly learned more about ray-tracing; thanks for the eye-opener about the real-time direction and the great real-world proof-of-concept work (Daniel Pohl’s Quake IV). Looking forward to future installments on this topic!

  20. Carl Ryder says:

    While this may very well be possible within the next few years, especially with new processor technology and core stacking, I cannot see this being adopted in the mainstream for at least 10 years. The main reason will be cost – cost for the end user. I would say 5+ years before a great enough percentage of mainstream users possess an 8-core system. Now compare the games of today with the processor capabilities of 5 years ago: they may very well run, but it won’t be too pretty. That’s the situation Intel will find itself in, in the future. Processing power will have to quadruple rather than double each generation.
    The second problem will be migrating the gaming industry and the gaming population of the world away from an entrenched system. Look how “slowly” the adoption of DX10 software and hardware has been – slow and with no real benefit over existing DX9, and there won’t be any for a year or so yet. And that’s a very simple migration, really, for both the end user and the developer. This is not the case with moving from rasterized to ray-traced.
    Raytracing may very well be the future of gaming and rendering in general, but the future never comes as fast as we think. :)

  21. Alexei Soupikov says:

    Indeed, the rasterization folks have done such a great job that, looking at the images/tech demos, it might appear that effects like shadows and reflections (even refractions :)) are easily achievable on GPUs. For some reason the algorithmic limitations of what people see are not discussed widely in public. The limitations are in reality very strict, and affect both gameplay and content creation. Any algorithm expert could easily tell that:
    a) Shadow maps have an inherent sampling problem causing too many artifacts; any cure for this is a heuristic, so generally shadow maps are not robust (a robust solution is something like IZB, which is a ray tracing algorithm in nature, and BTW not the fastest one, and it is not handled by GPUs so far).
    b) Reflections on GPUs are fakes that work correctly for flat surfaces only. There is an algorithm presented in GPU Gems 3 that handles reflections/refractions better; it is based on a multi-pass technique that creates a sort of volumetric representation by sampling geometry and then RAY TRACING that representation. That technique suffers from many sampling problems and quickly becomes expensive as scene complexity increases.
    c) There is an example where ray tracing just the eye rays is superior to rasterization on a GPU even now: a static model with a large polygon count, as in this setup RT has O(log(N)) complexity while rasterization has O(N).
    d) It looks like for the completely dynamic case the algorithmic complexity of RT is also O(N), due to the acceleration structure construction step. But don’t forget that in the rasterization case O(N) is multiplied by (avg polygon pixel count)*(shader complexity).
    Well, this is just to illustrate that things are not as simple as they might seem, and that the rasterization vs. RT question should be discussed in terms of algorithms and numbers (complexity, FLOPS, BW, etc.) rather than propaganda or marketing claims.

  22. Marco Salvi says:

    You are right, no one should use propaganda or marketing claims, but I’m not sure what you wrote makes sense.
    RT algorithmic complexity is not logarithmic per se; it is that way only when RT is implemented (implicitly or explicitly, see IZB) with spatial subdivision structures.
    If we process an indistinct soup of a billion triangles with RT, how can the algorithm have O(log(N)) complexity?
    The same holds for rasterization: only naive implementations are O(N), and nothing stops you from applying the same strategies to rasterization as well; in fact, games do this all the time.
    Moreover, modern GPUs employ a lot of tricks to reduce the rendering time even in the linear complexity case (hierarchical z-culling…).
    So again, you’re right, things are not simple, but they are not how you depicted them either :)

  23. Clark Brooks says:

    It has been a long time since I last coded raytracers or rasterizers. But, there is nothing I recall about the raytracing code that makes it particularly more suited to x86 than to special-purpose (GPU) hardware. Although today’s GPUs are not built with ray tracing hardware, it is typical for small loops like this to get over 10x improvement in performance/watt with dedicated functional units.

  24. dentaku says:

    Well, be realistic: as soon as the first ray tracing games come out and show effects that can’t be done with rasterization, hardware manufacturers will build ray tracing enhancements. You will probably see GPUs slowly converting into ray tracing accelerators. In the end, everything will be ray traced – even the desktop. And as soon as CPUs have dozens of cores, no GPU will be needed anymore. The future is ray tracing – but rather later than sooner…

  25. anon says:

    Ray tracing falls flat with much more than a teapot. For *spaces* with light (e.g. inside a cathedral), you want radiosity rendering. That’s well outside the realm of mass-market shared-memory systems for quite some years yet. And most approximations lead to texturization techniques plus a bit of ray casting… So the “best” architecture for the foreseeable future will combine both.

  26. SonK says:

    Well, if everything goes as planned, Intel should release Larrabee in 2009. It’s supposed to have 16 cores; it would be great for ray tracing… or maybe some of you guys forgot about that?
    An Intel Larrabee GPU (16 cores) + a native quad-core Nehalem CPU should be enough power to create a commercial game?
    Though by 2009, 8-core Nehalem CPUs should be standard for gamers. So add an additional four cores to that equation!

  27. Blissex says:

    «I cannot see this being adopted in the mainstream for at least 10 years. The main reason will be cost – cost for the end user. I would say 5+ years before a great enough percentage of mainstream users possess an 8-core system.»
    Intel has publicly discussed a low-cost 16-core chip for graphics and physics coprocessing, code-named “Larrabee”; check Wikipedia etc.
    Nvidia currently makes more money on many PCs with their graphics and motherboard chips than Intel does with its CPUs. Intel is surely going to try to fix that.

  28. Dark_Wr4ith says:

    Well, at the moment graphics cards are slowly eliminating the CPU from the graphics processing side, because it’s time consuming to have to relay all the information across the motherboard. Especially with DX10 and the addition of geometry shaders, the graphics card no longer has to wait for the CPU to create the shapes. That’s where the flaw comes in here.

  29. Alexei Soupikov says:

    The good point made earlier is that RT performance heavily relies on acceleration structures, so RT naturally uses them. It is also true that rasterization needs acceleration structures in the case of large scenery. So if an acceleration structure is created anyway, why not just use it for RT?
    This is one of the natural migration steps.
    I also believe there is going to be a hybrid Rast-RT solution. It is going to be a more complex and less obvious one than just casting eye rays with Rast and doing reflections with RT. Here is why: if an RT system has decent performance tracing secondary rays, primary (eye) rays are a piece of cake, taking a small percentage of execution time.
    I don’t believe, though, that it is going to be easily possible to add RT extensions to existing GPUs. GPUs are very wide streaming architectures with enormous BW that can be used efficiently only with certain access patterns.
    Creating acceleration structures, which is ~50% of execution time for RT of dynamic models, is a CPU-oriented task; it is not possible to implement it efficiently on GPUs.

  30. Adrian says:

    Why does everyone describe the scene model as a collection of triangles?
    Sure, that’s the easiest way to do it in rasterland. But with ray tracing, it’s easy to represent curved surfaces mathematically so that you don’t get polygonal artifacts from approximating them with triangles. Curved surfaces require much less memory to describe, which offsets some of the non-coherence you might suffer using general ray tracing.
    It’s not just about better and simpler shading effects. Ray tracing will bring about the death of the triangle.

  31. cNutt says:

    @SonK
    You all forget what a great job Intel did ($$many bucks earned$$) when it decided not to integrate the memory controller into the core, and yet you suppose that Nehalem – which for the mainstream/gamer market will presumably be a crippled part without an integrated MC – could have EIGHT cores at 45nm. Yeah, right. Nehalem should have at least 12MB of L2 like Penryn; they are being vague about that because they will surely cut the cache down once the MC moves on-die. In that case, even if that turns out to be too small (for an 8-way SMT chip), they’ll be stuck with it, because more L2 makes for a more expensive chip. So in fact, here we are with 8 cores virtually offered – maybe the Intel guys have SMT cores in mind, not real ones. And you would double that at 45nm (24MB)? Not even to mention how much power they’ll consume under that kind of load. Even at 32nm it’s enormous, and they know it. But then maybe they’ll Bulldoze it with some Fusion-like technique and give gamers what you said – a GPU (well, it’s still a GPU, so why do they need one?) + MC + Nehalem.
    Back on track. I must say that those 3D videos are too “fluid” for my taste (and I don’t mean MPEG video artifacts). Maybe they didn’t spend too much time adapting the games for RT, but it’s poor – not nearly as impressive as that fake rendered image on this blog. We’re long past SM1.2 or even the surface bump mapping of DX7+, so please don’t try to sell us another NetFade-architected idea. All I see in q4d is a none-too-impressively rendered space station environment (and 25% of the movie is pitch black or nearly so), while the creature looks like a stretch-squeeze balloon creature made by a kid’s party clown. Anyway, if you start there and produce a more advanced, realistic engine/code, it could be worthwhile, especially if you are aiming at the integrated segment of the market – some kind of low-power alternative to today’s GPU monsters. But somehow it all seems like a cheap pocket trick, just to have another alternative and a bandwagon of your own supporters (read: programmers), which seems like a “must have” these days.

  32. Nick says:

    I don’t know why Intel is still using the Quake 4 abomination to show off realtime ray tracing. Quake 4 is one of the ugliest games of the past 7 years. The engine was built specifically to render confined spaces and normal-mapped surfaces. When you ray trace a scene from Quake 4, you lose all the normal-mapped detail, which blatantly reveals the polygon-starved environments and low-res textures. The ray-traced version of Quake 4 looks like a 1997 game. If Intel really wants to promote RT, they had better make some scenes of their own that can compete with a modern-looking game.

  33. Jeffrey Howard says:

    Hi Nick,
    I understand your objection, but often in the world of research, when you demonstrate your technology, you are best served by taking advantage of the “low-hanging fruit”. Daniel’s experience with Quake IV made it an obvious choice to port to our ray-tracing engine as an initial proof point for its robustness with (somewhat) modern gaming. There are obvious visual hurdles to overcome, but we were hoping that experienced game designers would look past what you call the “polygon starved environments and low res textures” and recognize some of the fantastic techniques that ray-tracing enables, such as perfectly rendered shadows, reflections, and lighting effects. Those were really the crux of the demonstration. Going forward, we hope to show even better examples, and also to make our ray-tracing engine more extensible and easy to use. It takes a lot of work, which is why we are not advocating that our engine is ready for gaming use today. It is in fact a research project, but we are very optimistic that it could take root in a few years, given the huge strides we’ve made. As Daniel has found out, Intel has a sizable brain trust of ray-tracing experts, as well as the fastest ray-tracing engine in the industry, by an order of magnitude. So as we work on enabling more features and get more elaborate demonstrations online, I think the advantages of ray-tracing will become even more obvious.

  34. Callum Brown says:

    Hi all,
    Firstly, great article Mr. Howard. Very informative and interesting, and easy to understand even for someone like myself who is better at playing games than understanding exactly how they came to be.
    I’m just wondering if what’s being implied here is that, in the not-too-distant future, super-powerful GPUs may become redundant? Am I right in saying that? If so, how do you think the computer industry will be affected by this excellent technology, with particular reference to the major GPU manufacturers?

  35. Jeffrey Howard says:

    Hi Callum, the answer to your question is pretty complex, and the short answer is that I don’t know how GPUs will evolve in the future, or whether they will be as necessary then as they are today. However, the point of my blog wasn’t to disparage GPUs in general, but to point out that graphics may have an opportunity to evolve beyond the current rasterized algorithms. Besides being a benefit to the end-user (something I attempted to illustrate in the blog), ray-tracing is also a benefit to the developer. Raster graphics today is very well established, and is extensible to include very complex shader programs that deliver effects that can be very convincing. But in the end, these effects are only as good as the artist and the programmer. Shadow maps, for example, have evolved to be very good quality using raster engines, but rendering per-pixel correct shadows by tracing rays of light gets it right every time, no matter what else is interacting in the scene. It becomes easier for the developer to depend on the physically correct modeling than to program the effects to be convincing. Very brilliant artists and programmers can create very convincing effects, but not all of us can be brilliant artists and programmers. That’s really where ray-tracing comes in. If you continue to keep an eye on Intel’s progress with this technology, you will find some very interesting proof points in time. The technology is still pretty immature, but it’s making great progress. And it may be nearer-term than people think.

  36. mantas says:

    Hello there, that was a very interesting article. I come from a 3D animation/film background, so the realtime approach sounds very crazy and controversial – I mean, it’s OK for me to render a 640×480 frame in 50 hours; well, that was a couple of years ago :) Anyway, very interesting. I will try to use this opportunity to meet ray-tracing experts. I had this idea, and I wonder if it’s possible to do: while ray tracing an image we cast primary rays and secondary rays, and I just wonder if it would be possible to cast those rays not in a straight line – what I mean is, what if the “ray” were curved? Here is my idea in a bit more detail:
    http://3dideas.wordpress.com/2008/01/31/hello-world/
    along with my reasons for even thinking about such stuff…
    Anyway, best of luck with development!
    m.

  37. pro_optimizer says:

    As I see it, your article shows a somewhat biased perspective on the rendering problem.
    First off, neither ray tracing nor rasterization is inherently “physically correct”. That would mean executing a detailed (molecular or even quantum) simulation of the world (and of its interaction with your visual system) in order to calculate the visual perception, and then imitating that using the capabilities of the display (i.e. emitting a series of frames etc.). We won’t be able to render images using that method in the coming one hundred years, if computer technology progresses at its current pace. So we have to fake reality as well as we can, which requires lots of tricks and hackery in order to run in real time. And contrary to your article, neither ray tracing nor rasterization saves you from that trickery when you want to get photorealism.
    E.g., even if you had some 16-core processor by 2010 that does 300 million rays per second, you wouldn’t be able to render the 2007 computer game Crysis in real time. Actually, in order to reproduce only its soft shadows + atmospheric scattering + sky/ambient illumination + depth of field, motion blur, and anti-aliasing, you’d wait half a day for a single frame using the “it’s just a matter of casting more light rays” approach. So if you want to bring that down to 30 frames per second, there won’t be much left of the perceived elegance of ray tracing, assuming it’s possible at all. And note that there are a ton more effects in modern games (water, caustics, clouds (!), sub-surface scattering, just to name a few), which would multiply the number of hours per frame even further. That may be the reason why everyone is looking at the *simple* case of ray tracing (Whitted RT), which consists of hard-edged shadows + non-diffuse reflections & non-diffuse refractions (with reflections/refractions on curved bodies being the only thing that pure rasterization cannot do both physically accurately and efficiently). But even there is a problem in the reasoning: namely, that for all pixels which are not shiny (think forest-/mountain-/water-/city-scapes, or typical indoor environments such as your living room), ray tracing is the computationally more complex solution, because it traverses the whole scene acceleration structure per pixel, while rasterization does that only once per triangle, or even once per object. Also note that hard-edged shadows have been a solved problem for consumer rasterization graphics since 2001.

  38. Radnor says:

    For me this was the best read of the day – for now, of course. I do disagree with your Intel developers for once. The basic concept you are trying to “demonstrate” is a bit flawed in my PoV: you can’t make a generalist processor do the job of a specialized one. It won’t pull it off. That has been proved over and over in the IT business. I’m not a hardware techie like most of you guys posting here; I’m just saying what I’ve seen, from know-how and experience: that kind of approach never went too far, even if you try to muscle it up with a load of cores. Honestly, I think you might have a shot in rendering for professional applications/workstations; on that point I think the GPU makers have left users/professionals a bit out to dry. In gaming itself, I seriously don’t know. The CPU world may be fast, but the GPU world (albeit a bit slow last year) is one of the most aggressive and fast-paced in the industry (I mean, the life of a GPU is really short). Honestly, I would see this coming more from AMD, with a low-cost solution (GPU+North+CPU) that really works – they have already done half of it; let’s just add CPU support there. But I mean low cost. I think you are aiming too high on this one, although I would love to see it (and use it), and then we could get rid of DirectX.

  39. FireShadow says:

    Real time ray tracing may be good …
    but that comparison image is laughable
    I can write the shaders to render the bottom image in about 20 minutes.

  40. Grover says:

    Raytracing is simpler to implement in code. I am surprised at people’s responses to this – it shows quite obviously that they have never actually written either a raytrace renderer or a traditional poly renderer.
    It’s all about state. In a raytracer you handle everything in the scene management with the same state information, i.e. the ray cast interacts very uniformly with all objects. With ANY poly renderer you need to have _specific_ poly render passes to determine the appropriateness of the state; for example, alpha passes need to be carefully interleaved in a poly renderer. Most realtime raytracing uses a reverse raycasting method, so that one ray maps to one pixel. You end up with minimal overdraw as well (something poly renderers never achieve – most suffer up to 80% overdraw, and I know of some cases that go to 300% overdraw due to sub-pixel-sized polys). Additionally, ray tracing gives you ‘more information’ about your scene, and you can combine collision and physics systems with relative ease.
    Finally, let’s be honest about the use of raytracing. Nearly ALL movie production work with 3D rendering uses raytracing and NOT traditional renderers. The main reason for this is lighting _correctness_. Shaders are a ‘fudge’ to make something that looks close to the proper result; raytracing can mathematically give a far more correct result, and it can do it with a single, uniform application to the whole scene – coding is FAR easier in raytracers.
    It’s only a matter of time before we can raytrace almost any scene in real time. I think people also misunderstand that pixel shaders are specifically a form of raycast (polygon ray cast), and the next evolutionary step is to move up to raytraced scene rendering. The results speak for themselves… do a render in Maya or Max with V-Ray – they use raytracing – and I doubt you could make an equivalent renderer of such quality output with a poly renderer :)
    Its only a matter of time before we can realtime raytrace almost any scene. I think people also mistunderstand too, that pixel shaders specifically are a form of raycast (polygon ray cast) and the next evolutionary step is to move up to the raytrace scene rendering. The results speak for themselves.. do a render in Maya or Max with VRay.. they use raytracing.. and I doubt you could make an equivalent renderer in a poly renderer for such quality output :)