Another IDF has started and we are excited to show our latest progress. Since the previous demos we have enhanced our cloud-based setup, which used four Knights Ferry (Intel MIC) cards as the “cloud”, to now run Wolfenstein: Ray Traced on eight cards in a single machine. To utilize this huge amount of horsepower we are running our demo in 1080p for the first time.
As additional eye candy we included several post-processing special effects (thanks to Ben Segovia). Just to clarify: those are not specific to ray tracing and have been seen in some games already. They operate on the pixels of the rendered image (not on the 3D scene) – in our case directly on the Knights Ferry card. They can improve the perception of the rendered scene dramatically.
- Depth of Field: This effect is well known to photographers. If we want the viewer to focus on a certain area in the picture, the less relevant parts can be blurred. The object of interest stays sharp and attracts the main attention.
Depth of field on/off (3% performance difference)
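To illustrate the idea, here is a minimal Python sketch of a depth-of-field pass. It assumes the renderer provides per-pixel color and depth buffers plus a pre-blurred copy of the frame; all names and parameters are illustrative, not our actual Knights Ferry code:

```python
# Hypothetical depth-of-field blend: pixels near the focal plane stay
# sharp, pixels far from it fade toward a pre-blurred copy of the image.
def depth_of_field(color, depth, blurred, focal_depth, focal_range):
    out = []
    for c, z, b in zip(color, depth, blurred):
        # Blend weight: 0.0 at the focal plane, ramping up to 1.0
        # once the pixel is focal_range away from it.
        w = min(1.0, abs(z - focal_depth) / focal_range)
        out.append(c * (1.0 - w) + b * w)
    return out

# A pixel at the focal plane keeps its sharp color; a distant one blurs.
sharp   = [1.0, 0.5, 0.2]
depth   = [5.0, 5.0, 20.0]
blurred = [0.6, 0.6, 0.6]
result = depth_of_field(sharp, depth, blurred, focal_depth=5.0, focal_range=3.0)
```

A real implementation would compute the blurred copy with a proper circle-of-confusion kernel; the clamped linear ramp above only demonstrates the blending.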
- HDR Bloom: When we step out of a dark room into bright daylight, our eyes adjust to the new brightness over a few seconds. The same can be observed with digital (video) cameras, which mimic this behavior and adjust the brightness spectrum to produce a pleasant-looking image. While doing so, cameras might produce a bloom around overbright areas that can also “bleed” into neighboring objects.
Overbright scene with HDR on/off (2% performance difference)
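A common way to get this bloom effect is a bright pass followed by a blur; the following Python sketch shows the principle on a single row of HDR pixel intensities (the threshold and the tiny box kernel are assumptions for illustration):

```python
# Sketch of an HDR bloom pass: extract overbright energy, blur it so it
# "bleeds" into neighboring pixels, then add it back onto the image.
def bloom(pixels, threshold=1.0):
    # Bright pass: keep only the energy above the threshold.
    bright = [max(0.0, p - threshold) for p in pixels]
    # Cheap 3-tap box blur spreads the highlights sideways.
    n = len(bright)
    blurred = [
        (bright[max(i - 1, 0)] + bright[i] + bright[min(i + 1, n - 1)]) / 3.0
        for i in range(n)
    ]
    # Composite the blurred highlights over the original image.
    return [p + b for p, b in zip(pixels, blurred)]

row = [0.2, 0.2, 3.2, 0.2, 0.2]  # one overbright pixel in the middle
result = bloom(row)              # neighbors of the bright pixel gain energy
```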
- Inter-lens reflections: While camera manufacturers try to avoid lens flares, computer games and movies often add them as an artistic element. In this implementation several smaller-sized versions of the image, each shifted to a specific color (e.g. green, blue and orange), are blended into the original image.
Subtle (image-based) inter-lens reflections (0.1% performance difference)
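The blending of the tinted copies can be sketched like this in Python, using a 1-D grayscale row as a stand-in for the frame (ghost scales and tint strengths are made-up values, not the demo's settings):

```python
# Sketch of image-based "ghosts": shrunken, flipped, tinted copies of
# the frame are added back in, centered in the image.
def lens_reflections(row, ghosts=((0.5, 0.3), (0.25, 0.15))):
    n = len(row)
    out = list(row)
    for scale, tint in ghosts:          # (size factor, tint strength)
        m = max(1, int(n * scale))
        start = (n - m) // 2            # center the ghost in the frame
        for i in range(m):
            src = int(i / scale)        # nearest-neighbor downscale
            ghost = row[n - 1 - src]    # flip around the image center
            out[start + i] += tint * ghost
    return out

# A bright spot at the right edge produces faint ghosts near the center.
row = [0.0] * 7 + [1.0]
result = lens_reflections(row)
```

In the real effect each ghost would be tinted toward a different color (green, blue, orange) instead of the single scalar weight used here.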
Another step we are taking for the first time is a smart way of anti-aliasing (thanks to Ingo Wald and Ben Segovia). There are different ways to do anti-aliasing. Most of them work pretty much brute-force and therefore invest additional calculations in areas where the improvement might not be noticeable. Our implementation is applied after the image has been rendered. As ray tracing easily allows shooting just a few extra rays for refinement, we analyze two factors for each pixel to decide whether it requires more anti-aliasing:
- The angle of the polygon that got hit at that pixel
- The polygon mesh ID of that object
If there is a high enough variation in the angle, or a different mesh ID is found, we shoot 16 more rays (supersampling) for that specific pixel and average the resulting colors into it. (Please note that the difference can be seen best in the full-sized images that appear after clicking the thumbnails.)
Courtyard view: Smart Anti-Aliasing off
Courtyard view: Smart Anti-Aliasing on (59% performance difference)
Close-up on cable: Smart Anti-Aliasing off
Close-up on cable: Smart Anti-Aliasing on (32% performance difference)
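The per-pixel decision described above can be sketched as follows. The Python below assumes the primary-ray pass stores the hit normal and mesh ID for every pixel; the buffer layout and the angle threshold are illustrative assumptions, not the demo's actual code:

```python
import math

# Sketch: flag a pixel for 16-ray supersampling if any 4-connected
# neighbor hit a different mesh, or a surface whose normal diverges by
# more than a threshold angle (an edge or a hard crease).
def needs_refinement(normals, mesh_ids, x, y, angle_threshold_deg=15.0):
    n0, id0 = normals[y][x], mesh_ids[y][x]
    cos_limit = math.cos(math.radians(angle_threshold_deg))
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx, ny = x + dx, y + dy
        if 0 <= ny < len(normals) and 0 <= nx < len(normals[0]):
            if mesh_ids[ny][nx] != id0:
                return True          # silhouette between two objects
            n1 = normals[ny][nx]
            dot = sum(a * b for a, b in zip(n0, n1))
            if dot < cos_limit:      # normals diverge: geometric edge
                return True
    return False
```

Only the pixels this test flags would get the 16 extra rays, which is what keeps the cost far below full-frame supersampling.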
For future implementations, more criteria could be added, such as the color of the pixel (e.g. imagine an almost black spot in the picture – aliasing will not be noticeable there) or the color difference between neighboring pixels. Furthermore, before comparing those colors, MLAA (morphological anti-aliasing) could be used to reduce aliasing first, and then only certain areas could be refined by shooting new rays. Tweaking the number of additional rays to 4 or 8 might lead to better performance/quality tradeoffs.
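The first of these proposed criteria is simple to sketch: skip refinement where the pixel is nearly black, since aliasing is invisible there (the threshold value below is an assumption for illustration):

```python
# Hypothetical extra criterion: a nearly black pixel never needs extra
# anti-aliasing rays, because aliasing is invisible in the dark.
def dark_enough_to_skip(r, g, b, threshold=0.02):
    # Rec. 709 luma weights approximate perceived brightness.
    luma = 0.2126 * r + 0.7152 * g + 0.0722 * b
    return luma < threshold
```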
We would be happy to show you the real-time demo at IDF at the Intel Labs pavilion, booth 5080 in the exhibition hall.
Additional thanks to Ram Nalla, Nathaniel Hitchborn, Sven Woop, Alexey Soupikov, Alexander Reshetov.
The system that can hold the eight double-sized PCI-Express cards was provided to us by Colfax International. Thanks!