
Ovum view

Summary

Nvidia is targeting hardware and techniques that can deliver the next generation of virtual reality (VR) environments and self-driving cars. At its GPU Tech Conference in San Jose, California, in April 2016, the company announced Iray, its new photorealistic rendering technology for VR, as well as Drive-PX, a new autonomous car platform incorporating artificial intelligence–based deep learning and surround vision to improve the self-driving experience. While Nvidia's main focus is on deep learning and the accompanying increase in computing horsepower that this will require, VR and autonomous cars will also benefit.

Even non-autonomous cars are getting smarter

With its announcements relating to VR and autonomous cars, as well as the launch of the Nvidia DGX-1 deep learning supercomputer, Nvidia is positioning itself as the partner of choice for processor-heavy applications. The keynote speech at its GPU Tech Conference, for example, featured photorealistic VR environments of Mount Everest and the surface of Mars – uses that are unlikely to be relevant for most consumers in the short term but that show what the technology could achieve.

For regular consumers, these computer vision capabilities will be more relevant as applied to autonomous car technology, a field that Nvidia entered in 2015 and in which its customers include OEMs such as Volvo, Ford, and Audi. To that end, Nvidia announced Drive-PX, which it describes as the first deep learning–powered car computing platform. In addition to autonomous driving, Nvidia's platform provides the processing architecture to deliver high-definition video and audio interfaces, including improved mapping for navigation, voice-controlled messaging and navigation, and surround vision to see what's happening in the car's blind spots. These features are designed for closer integration with Android Auto, meaning users with Android smartphones will benefit from improved hands-free calling, navigation, and mapping.

As autonomous cars become smarter, there will also be implications for the Internet of Things – for instance, using the sensors embedded in autonomous cars to sense and communicate with other autonomous cars in their vicinity, or using Nvidia’s end-to-end mapping capabilities to avoid heavy traffic or locate parking.

Computer vision capabilities can also be relevant to non-autonomous connected cars – auto OEM Tesla, for instance, offers an Autopilot mode for several of its models, allowing them to parallel park on command. The cars' sensors also feed data back to the instrument panel for use in functions such as lane departure warning, collision warning, and auto-steer.

Appendix

Further reading

PrimeSense Is the Key to Apple’s Long Game in Augmented Reality, ME0002-000637 (January 2016)

Author

Francesco Radicati, Senior Analyst, Consumer Technology

francesco.radicati@ovum.com
