More on the Future of Ray-Tracing – from Alesh Jancarik

After our first ray-tracing article, we received numerous comments from consumers and graphics experts alike. One of these mavens (who happens to be an Intel employee from another group, and also an active SIGGRAPH participant) surprised us with an interesting perspective, and we decided to publish it as a separate article. As always, the Intel @ Research blog values the comments made by our readers. We hope to create a rich discussion forum where research topics can be discussed by industry experts and consumers alike. So please join in and contribute!

Alesh Jancarik wrote the following in response to our ray-tracing blog:

This article is interesting.

However, it misses some points about what is going on in industry.

I hope these comments will help explain why the good work our ray-tracing group is doing is so important.

Ray-tracing is NOT used to produce all special effects.

Ray-tracing and global illumination are becoming more popular with special effects houses, but neither will ever be used for all frames. The reason is very simple: the thing artists demand the most is control. Ray-tracing produces photo-realistic images, but nobody has yet created a ray-tracer which allows the artist much control. The classic example is a ray-traced scene in which the ray-tracer produces realistic shadows, but the art director says “remove this shadow.” That is something ray-tracers have a hard time with. Many special effects houses, including Industrial Light and Magic (ILM) and Pixar, use the RenderMan software. RenderMan does have both ray-tracing and global illumination, but many scenes still don’t use them. RenderMan uses the “REYES” algorithm, which does some ray-tracing but is not a ray-tracer.
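As a toy illustration of the control problem, here is a minimal sketch of one common workaround (an assumed miniature renderer, not RenderMan’s or any studio’s actual mechanism): give each object a `casts_shadow` flag that shadow rays respect, so the art director’s “remove this shadow” becomes a one-line change.

```python
import math

class Sphere:
    def __init__(self, center, radius, casts_shadow=True):
        self.center = center
        self.radius = radius
        self.casts_shadow = casts_shadow  # per-object art-director override

def hit_t(origin, direction, s):
    # Closed-form ray/sphere intersection; `direction` must be unit-length.
    oc = [origin[k] - s.center[k] for k in range(3)]
    b = 2.0 * sum(oc[k] * direction[k] for k in range(3))
    c = sum(v * v for v in oc) - s.radius * s.radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None                      # ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0.0 else None

def in_shadow(point, light, objects):
    # Shadow ray from the shaded point toward the light.
    d = [light[k] - point[k] for k in range(3)]
    dist = math.sqrt(sum(v * v for v in d))
    d = [v / dist for v in d]
    for obj in objects:
        if not obj.casts_shadow:
            continue                     # "remove this shadow": skip it
        t = hit_t(point, d, obj)
        if t is not None and t < dist:
            return True
    return False

blocker = Sphere((0.0, 0.0, 5.0), 1.0)
light = (0.0, 0.0, 10.0)
print(in_shadow((0.0, 0.0, 0.0), light, [blocker]))  # True
blocker.casts_shadow = False
print(in_shadow((0.0, 0.0, 0.0), light, [blocker]))  # False
```

The flag removes one object’s shadow while leaving every other shadow intact, which is exactly the kind of local, non-physical override a purely physically-based renderer resists.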

When using RenderMan, lighting is done with the RenderMan shading language. RenderMan shaders can be compiled to real-time shaders (GPU Gems, Chapter 33, pg. 551). Both ILM and Pixar have also developed systems for previewing RenderMan scenes using the GPU. Recently, at SIGGRAPH 07, there was a publication on one such system, called “Lightspeed.”

Note that this system is not used for final output and still uses the CPU for much of the work. Final output is still RenderMan running on the CPU. However, the points are:

– Many special effects are created without ray-tracing.

– It’s possible to get close to movie-quality graphics using the GPU.

– Using the GPU it’s possible to preview the final output much faster than on the CPU.

– For ray-tracing to be used in movie production it’s necessary to combine shader languages with ray-tracing. This is still a research area.

The problem with this article is that people assume that if/when ray-tracing replaces the z-buffer, it will be the end of the GPU. This is not necessarily the case.

Nvidia is working on ray-tracing too.

David Kirk, Nvidia’s chief scientist, was a panelist on a panel called “When Will Ray-Tracing Replace Rasterization” at SIGGRAPH 02, where he said:

“I’ll be interested in discussing a bigger question, though: ‘When will hardware graphics pipelines become sufficiently programmable to efficiently implement ray tracing and other global illumination techniques?’ I believe that the answer is now, and more so from now on! As GPUs become increasingly programmable, the variety of algorithms that can be mapped onto the computing substrate of a GPU becomes ever broader. As part of this quest, I routinely ask artists and programmers at movie and special effects studios what features and flexibility they will need to do their rendering on GPUs, and they say that they could never render on hardware! What do they use now: crayons? Actually, they use hardware now, in the form of programmable general-purpose CPUs. I believe that the future convergence of realistic and real-time rendering lies in highly programmable special-purpose GPUs.”

– David Kirk, Nvidia.

This was five years ago!

A reprint of this article can be found on

Interestingly, at the same conference, Tim Purcell presented the first GPU-based ray-tracer.

One year later, Tim Purcell also published a paper on GPU-based photon mapping.

Nvidia already has a photo-realistic renderer that does ray-tracing and uses the GPU to speed it up. It’s a hybrid CPU/GPU ray-tracer, and it’s already a product called “Gelato.”

Take a look at the G80 architecture, which is the basis for the entire GeForce 8xxx line and for Tesla.

The G80 still has hardware dedicated to implementing the z-buffer algorithm.

However, it’s also very programmable and was designed for general-purpose programming. The G80 is actually a collection of multiprocessors, each similar to what we call a “core”; there are 16 of them in the GeForce 8800 GTX. Using Nvidia’s CUDA programming language, it’s possible to program the G80 using standard C. You can even use Microsoft Visual Studio just as you would to write CPU programs. One of the difficult problems of the future will be making use of large SIMD machines. CUDA simplifies this by making SIMD programming look similar to programming scalar machines. So actually, programming the GPU is easier than programming the CPU in some ways, although I have to point out that a good understanding of GPU architecture is still necessary if you want to get performance from the GPU. The point, however, is that the G80 is a far better machine for ray-tracing than the GPU Tim Purcell implemented ray-tracing on.
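To make that concrete, here is a loose sketch of the CUDA execution model (in Python, purely for illustration; a real kernel would be written in C with CUDA’s extensions and launched over a grid of threads). The kernel body is ordinary scalar code for one element, and the launch maps it over every index, which is what makes SIMD programming feel like scalar programming:

```python
# A "kernel" is written for ONE element, with no explicit SIMD lanes.
def saxpy_kernel(i, a, x, y, out):
    out[i] = a * x[i] + y[i]

# On the GPU these logical threads run in parallel across the
# multiprocessors; a serial loop emulates the same semantics here.
def launch(kernel, n, *args):
    for i in range(n):
        kernel(i, *args)

x = [1.0, 2.0, 3.0]
y = [10.0, 20.0, 30.0]
out = [0.0] * 3
launch(saxpy_kernel, 3, 2.0, x, y, out)
print(out)  # [12.0, 24.0, 36.0]
```

The hardware, not the programmer, decides how these logical threads are packed onto SIMD units, which is the simplification the paragraph above describes.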

Nvidia knows ray-tracing will replace the z-buffer. This is why they are going into GPGPU computing with products like Tesla and CUDA.

Ray-tracing will replace the z-buffer for many rendering applications. If you don’t believe me just drive down the street and ask Nvidia’s chief scientist David Kirk. Is this a news article or a history lesson?

But don’t expect the GPU to die when this happens. When ray-tracing replaces the z-buffer, Nvidia will be there with a GPU-based ray-tracer. We are in a race: who will replace the z-buffer first? Will it be Intel/CPU/derivatives or Nvidia/GPU? That’s the real question!

This is why the good work our ray-tracing team is doing is so important.

5 Responses to More on the Future of Ray-Tracing – from Alesh Jancarik

  1. J.Chinniah says:

    Curved triangles are natural primitives for modelling computer images. Unfortunately, ray/curved-triangle intersection needs an iterative algorithm, which is time consuming. What if new algorithms were available for curved-triangle/ray intersections? It would revolutionize ray tracing and raster graphics. Such an algorithm has now been developed, which uses a very fast explicit intersection technique similar to explicit sphere-ray intersection computation. Quadratic, cubic and quartic triangles can be used with this algorithm. Interested people can contact me.

  2. J.Chinniah says:

    Update on my previous comment.
    I added the quadratic curved-triangle primitive to Jacco Bikker’s ray-tracing code (part 2 of the series). No optimization was added, except that the current curved-triangle/ray intersection algorithm is a one-step explicit method without any numerical iteration. I traced a sphere with two light sources; Jacco’s program includes shadows and reflections. One picture uses the original sphere primitive used by Jacco Bikker, the next picture uses 16 quadratic triangles with the current algorithm for the same sphere, and the third picture shows just two curved triangles for the top half of the sphere and eight triangles for the bottom. The first two pictures look the same. I couldn’t add those pictures here since this blog accepts only text. I can send those pictures to interested people. My contact address is
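For context on the explicit sphere-ray intersection the comments above use as their benchmark, here is the standard closed-form test (a generic textbook sketch, not Mr. Chinniah’s unpublished curved-triangle algorithm): substituting the ray p(t) = o + t·d into the implicit sphere equation |p − c|² = r² yields a quadratic in t that is solved directly, with no iteration.

```python
import math

def intersect_sphere(origin, direction, center, radius):
    # Build the quadratic a*t^2 + b*t + c = 0 from the substitution.
    ox, oy, oz = (origin[k] - center[k] for k in range(3))
    dx, dy, dz = direction
    a = dx * dx + dy * dy + dz * dz
    b = 2.0 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None                       # no real root: the ray misses
    t = (-b - math.sqrt(disc)) / (2.0 * a)
    if t < 0.0:                           # nearest hit is behind the origin
        t = (-b + math.sqrt(disc)) / (2.0 * a)
    return t if t >= 0.0 else None

# Ray from the origin along +z toward a unit sphere centered at z = 5:
print(intersect_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # 4.0
print(intersect_sphere((0, 0, 0), (0, 1, 0), (0, 0, 5), 1.0))  # None
```

An iterative curved-triangle test would replace this one-shot quadratic solve with a root-finding loop, which is the cost the comment is trying to eliminate.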

  3. L. Frontino says:

    To Mr. Howard:
    You didn’t mention AMD/ATI. Does it mean they have no future?
    Will raytracing GPUs be capable of rendering the actual 3D engines?
    Will DirectX still be needed?
    Thanks for your article.

  4. Jordan says:

    I’m not sure telling a ray-tracer that a certain object shouldn’t cast a shadow is particularly difficult. I remember fiddling around with trueSpace years ago, and it had the option to turn off shadow casting for any given object. Just set a flag on those triangles so rays traveling to lights don’t check for intersection against them. You could do the same for rays that have been reflected, to make objects that cast no reflections.

  5. Arno says:

    To get rid of unwanted shadows, and to get photo-realistic effects for things like water and other materials whose refractive qualities differ from glass, I would think the obvious solution is to allow the event of a ray hitting an object to trigger not just a reflective transform, but whole sets of transforms.
    Then you could have one object that reflects light 100%, refracts light from one source as if it were water, colors the refracted light blue, colors the reflected light yellow, brightens refracted light from light source A, dims reflected light from light source B, and lets light from light source C go on as if nothing ever happened.
    Sounds kinda trippy, but it sounds possible to me. A leaf must reflect light as a normal object would, but it must also let some light through after coloring it green. Of course you could make the leaf a really murky green liquid, but then you would be able to see through it to some degree, I think. The refractive qualities of water bend some light down to the bottom, while some rays re-enter the atmosphere depending on the angle of entry and the shape of the wave.
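The “sets of transforms” described in the comment above are close to what Whitted-style ray tracers already do at a hit point: spawn a reflected ray and a refracted ray, each of which can then carry its own color filter. A minimal sketch of the two direction transforms (standard vector formulas, not any particular renderer’s code):

```python
import math

def reflect(d, n):
    # r = d - 2(d . n)n, with d the incoming direction and n the unit normal.
    dn = sum(d[k] * n[k] for k in range(3))
    return [d[k] - 2.0 * dn * n[k] for k in range(3)]

def refract(d, n, eta):
    # Snell's law with eta = n1/n2; returns None on total internal reflection.
    cos_i = -sum(d[k] * n[k] for k in range(3))
    sin2_t = eta * eta * (1.0 - cos_i * cos_i)
    if sin2_t > 1.0:
        return None                       # no transmitted ray
    cos_t = math.sqrt(1.0 - sin2_t)
    return [eta * d[k] + (eta * cos_i - cos_t) * n[k] for k in range(3)]

down = [0.0, -1.0, 0.0]                   # ray heading straight down
up = [0.0, 1.0, 0.0]                      # surface normal of a water plane
print(reflect(down, up))                  # [0.0, 1.0, 0.0]: bounces straight up
print(refract(down, up, 1.0 / 1.33))      # normal incidence passes through unbent
```

Per-light coloring, dimming, or pass-through of the kind the comment imagines would then be applied to the colors these secondary rays return, rather than to the directions themselves.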