After our first ray-tracing article, we received numerous comments from consumers and graphics experts alike. One of these mavens (who happens to be an Intel employee from another group, and also an active SIGGRAPH participant) surprised us with an interesting perspective, and we decided to publish it as a separate article. As always, the Intel @ Research blog values the comments made by our readers. We hope to create a rich discussion forum where research topics can be discussed by industry experts and consumers alike. So please join in and contribute!

Alesh Jancarik wrote the following in response to our ray-tracing blog:

This article is interesting. However, it misses some of what is going on in industry. I hope these comments will help explain why the good work our ray-tracing group is doing is very important.

Ray-tracing is NOT used to produce all special effects. Ray-tracing and global illumination are becoming more popular with special effects houses, but neither will ever be used for all the frames. The reason is very simple: the thing artists demand the most is control. Ray-tracing produces photo-realistic images, but nobody has yet created a ray-tracer which allows the artist much control. The classic example is a ray-traced scene in which the ray-tracer produces realistic shadows, but the art director says "remove this shadow." That is something ray-tracers have a hard time with.

Many special effects houses, including Industrial Light and Magic (ILM) and Pixar, use the RenderMan software. RenderMan does have both ray-tracing and global illumination, however many scenes still don't use them. RenderMan uses the "REYES" algorithm, which does some ray-tracing but is not a ray-tracer. When using RenderMan, lighting is done with the RenderMan shading language, and RenderMan shaders can be compiled to real-time shaders (GPU Gems, Chapter 33, pg. 551). Also, both ILM and Pixar have developed systems for previewing RenderMan scenes using the GPU.
Recently at SIGGRAPH 07 there was a publication on a system called "Lightspeed." Note that this system is not used for final output and still uses the CPU for much of the work; final output is still RenderMan running on the CPU. However, the points are:

- Many special effects are created without ray-tracing.
- It's possible to get close to movie-quality graphics using the GPU.
- Using the GPU, it's possible to preview the final output much faster than on the CPU.
- For ray-tracing to be used in movie production it's necessary to combine shader languages with ray-tracing. This is still a research area.

The problem with this article is that people assume that if/when ray-tracing replaces the z-buffer it will be the end of the GPU. This is not necessarily the case. Nvidia is working on ray-tracing too. David Kirk, Nvidia's chief scientist, was a panelist on a panel called "When Will Ray-Tracing Replace Rasterization" at SIGGRAPH 02. There he said,
"I'll be interested in discussing a bigger question, though: 'When will hardware graphics pipelines become sufficiently programmable to efficiently implement ray tracing and other global illumination techniques?'. I believe that the answer is now, and more so from now on! As GPUs become increasingly programmable, the variety of algorithms that can be mapped onto the computing substrate of a GPU becomes ever broader. As part of this quest, I routinely ask artists and programmers at movie and special effects studios what features and flexibility they will need to do their rendering on GPUs, and they say that they could never render on hardware! What do they use now: crayons? Actually, they use hardware now, in the form of programmable general-purpose CPUs. I believe that the future convergence of realistic and real-time rendering lies in highly programmable special-purpose GPUs." - David Kirk, Nvidia.

This was five years ago! A reprint of this panel can be found on siggraph.org. Interestingly, at the same conference Tim Purcell presented the first GPU-based ray-tracer, and one year later he published a paper on GPU-based photon mapping.

Nvidia already has a photo-realistic renderer that does ray-tracing and uses the GPU to speed it up. It's a hybrid CPU/GPU ray-tracer, and it's already a product called "Gelato."

Take a look at the G80 architecture, which is the basis for the entire GeForce 8xxx line and Tesla. The G80 still has hardware dedicated to implementing the z-buffer algorithm, but it's also very programmable and was designed for general-purpose programming. The G80 is actually a collection of multiprocessors, each similar to what we call a "core"; there are 16 of them in the GeForce 8800 GTX. Using Nvidia's CUDA programming language it's possible to program the G80 using standard C. You can even use Microsoft Visual Studio just as you would to write CPU programs.
One of the difficult problems of the future will be making use of large SIMD machines. CUDA simplifies this by making SIMD programming look similar to programming scalar machines, so in some ways programming the GPU is actually easier than programming the CPU. I have to point out, though, that a good understanding of GPU architecture is necessary if you want to get performance from the GPU. The point, however, is that the G80 is a far better machine for ray-tracing than the GPU Tim Purcell implemented ray-tracing on.

Nvidia knows ray-tracing will replace the z-buffer. This is why they are going into GPGPU computing with products like Tesla and CUDA. Ray-tracing will replace the z-buffer for many rendering applications; if you don't believe me, just drive down the street and ask Nvidia's chief scientist David Kirk. Is this a news article or a history lesson? But don't expect the GPU to die when this happens. When ray-tracing replaces the z-buffer, Nvidia will be there with a GPU-based ray-tracer.

We are in a race. Who will replace the z-buffer first: Intel/CPU/derivatives or Nvidia/GPU? That's the real question! This is why the good work our ray-tracing team is doing is very important.