NVIDIA explains Pascal’s ‘Lens Matched Shading’ to enhance VR

NVIDIA have taken to the stage to talk about the company’s new Pascal GPUs and what they mean for virtual reality. The new graphics card has been re-architected with VR in mind, and the result is a set of new GPU features that can significantly enhance VR rendering performance, namely Simultaneous Multi-Projection and Lens Matched Shading.

Simultaneous Multi-Projection, or SMP, allows Pascal-based GPUs to render multiple views from the same origin point with just one geometry calculation, making rendering substantially faster. Before Pascal, rendering the same scene for multiple views required a separate geometry pass for each projection; with SMP, up to 16 views can be rendered in a single pass.
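The data flow SMP implements in hardware can be sketched in plain Python (this is an illustration of the idea, not the VRWorks API): each vertex is transformed into view space once, and that single result is then fanned out through several projection matrices.

```python
# Illustrative sketch of Simultaneous Multi-Projection's data flow:
# one shared geometry result, several projections applied to it.

def mat_vec(m, v):
    """Multiply a 4x4 matrix (list of rows) by a 4-component vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def project_views(view_space_vertex, projections):
    """Project one view-space vertex into every configured view.

    SMP hardware fans a single geometry result out to up to 16
    projections; here a simple loop stands in for that step.
    """
    return [mat_vec(p, view_space_vertex) for p in projections]

# Two toy projection matrices that differ only in horizontal scale.
proj_a = [[1.0, 0, 0, 0], [0, 1.0, 0, 0], [0, 0, 1.0, 0], [0, 0, 0, 1.0]]
proj_b = [[0.5, 0, 0, 0], [0, 1.0, 0, 0], [0, 0, 1.0, 0], [0, 0, 0, 1.0]]

clip_positions = project_views([2.0, 1.0, -5.0, 1.0], [proj_a, proj_b])
print(clip_positions)  # two clip-space results from one vertex transform
```

The key saving is that the expensive per-vertex work happens once, no matter how many views consume it.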

Traditionally, VR applications have to draw geometry twice — once for the left eye, and once for the right eye. Single Pass Stereo uses the new Simultaneous Multi-Projection architecture of NVIDIA Pascal-based GPUs to draw geometry only once, then simultaneously project both right-eye and left-eye views of the geometry. This allows developers to effectively double the geometric complexity of VR applications, increasing the richness and detail of their virtual world.
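To make the "draw once, project twice" idea concrete, here is a hedged sketch in plain Python (not shader code, and the half-IPD value is an assumption for illustration): the shared view-space position is computed once, and the per-eye difference reduces to a small horizontal offset.

```python
# Sketch of Single Pass Stereo's core observation: both eye views share one
# geometry transform and differ only by a horizontal eye offset.

HALF_IPD = 0.032  # metres; an assumed half eye separation, for illustration

def stereo_positions(view_x, view_y, view_z):
    """Return (left, right) eye-space positions from one shared transform.

    The view-space position is computed once; each eye only shifts it
    along x by +/- half the interpupillary distance before projection.
    """
    left = (view_x + HALF_IPD, view_y, view_z)
    right = (view_x - HALF_IPD, view_y, view_z)
    return left, right

left, right = stereo_positions(0.0, 1.6, -3.0)
print(left, right)
```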

This technology can be leveraged in a technique NVIDIA call Lens Matched Shading (LMS). The aim of LMS is to avoid rendering pixels that the lens-distortion process will discard before they ever reach the final view.

Lens Matched Shading uses the new Simultaneous Multi-Projection architecture of NVIDIA Pascal-based GPUs to provide substantial performance improvements in pixel shading. The feature improves upon Multi-res Shading by rendering to a surface that more closely approximates the lens corrected image that is output to the headset display. This avoids rendering many pixels that would otherwise be discarded before the image is output to the VR headset.
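Conceptually, the lens-matched warp scales clip-space w so that, after the perspective divide, pixels near the edge of the image are packed more tightly and fewer of them are shaded. The sketch below shows that effect with illustrative coefficients (the warp strengths are assumptions for this example, not NVIDIA's values):

```python
# Conceptual sketch of a lens-matched warp: increasing w towards the image
# periphery compresses edge pixels after the perspective divide, so fewer
# samples are shaded where the lens would discard or squeeze them anyway.

A, B = 0.25, 0.25  # assumed warp strengths, for illustration only

def lens_matched(x, y, w=1.0):
    """Apply the warp and return post-divide screen coordinates."""
    w_warped = w + A * abs(x) + B * abs(y)
    return x / w_warped, y / w_warped

print(lens_matched(0.0, 0.0))  # centre of the image is untouched
print(lens_matched(1.0, 1.0))  # the corner is pulled inward
```

The centre of the image, where the lens magnifies detail, keeps full resolution, while the periphery is rendered at effectively lower density.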

[Image: Simultaneous Multi-Projection on the NVIDIA Pascal GTX 1080]

The distortion process cuts a significant number of pixels from the initial render, so the processing power spent shading those pixels is wasted. LMS aims to predict which pixels aren’t needed and focus the GPU’s effort on the ones that are. NVIDIA say that LMS can deliver up to a 50% increase in pixel-shading throughput, but to achieve results like that, applications need to be built with NVIDIA’s VRWorks SDK.
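A back-of-the-envelope estimate (not NVIDIA's measurement) gives a feel for the waste: if the lens only samples a circular region inscribed in a square render target, everything outside that circle is shaded and then thrown away. Real savings can be larger still, because VR render targets are typically oversized relative to the display to survive the distortion pass.

```python
import math

# Rough geometric estimate of pixels shaded then discarded: the area of a
# square render target that lies outside an inscribed circular lens region.
wasted = 1 - math.pi / 4      # fraction of shaded pixels that are discarded
speedup = 1 / (1 - wasted)    # idealised throughput gain from skipping them
print(f"wasted: {wasted:.1%}, ideal speedup: {speedup:.2f}x")
```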

Source: RoadtoVR
