
Ovum view

Summary

Intel announced its acquisition of Nervana Systems on August 9, 2016. Ovum recently published an On the Radar assessment of Nervana, recognizing the company as offering an advanced platform for deep learning, the area of artificial intelligence that has made huge strides in recent years. Nervana offers an advanced deep learning library on the Nervana Cloud, alongside the Nervana Platform, and has been developing the Nervana Engine, a custom-designed application-specific integrated circuit (ASIC) optimized for deep learning; the chip was planned for release in early 2017. For Intel, the acquisition is a chance to leapfrog its current capabilities for supporting deep learning, which have not so far matched those of the market leader, Nvidia, with its Pascal GPU and DGX-1 Deep Learning System. Competition between Intel and Nvidia for a share of the artificial intelligence market, which is predicted to grow significantly in the years ahead, will only intensify.

Hardware accelerators are vital in making deep learning possible

While Intel continues to dominate the CPU market with its chips, the current generation being the Intel Xeon, it developed the Xeon Phi co-processors to support numerically intensive processing. Xeon Phi underpins Intel-based supercomputers, but it has not shone at running deep learning systems, whether for inference (deploying trained neural networks) or for the greater computational demands of training them. Deep neural networks are the success story in artificial intelligence today, and they have far greater computational needs than the "traditional" neural networks of a decade ago. It was discovered around 2011 that porting deep learning systems onto general-purpose computing on graphics processing units (GPGPU) resulted in massive acceleration, and Nvidia made this hardware market its own with its top-end GPUs.
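Why GPUs (and deep-learning ASICs such as the Nervana Engine) accelerate this workload so dramatically can be sketched in a few lines. The core computation of a neural-network layer is a dense matrix multiplication: millions of independent multiply-add operations that map naturally onto thousands of parallel cores. The sketch below is illustrative only and uses NumPy on the CPU; the layer sizes are hypothetical, chosen simply to show the scale of the arithmetic involved.

```python
import numpy as np

# Illustrative sketch: one forward pass through a single dense layer.
# Hypothetical shapes: a batch of 64 input vectors, a 1024 -> 512 layer.
rng = np.random.default_rng(0)
batch = rng.standard_normal((64, 1024))      # 64 inputs of dimension 1024
weights = rng.standard_normal((1024, 512))   # one layer's parameters

# The matrix multiply below performs roughly 64 * 1024 * 512 (~33 million)
# multiply-adds, all mutually independent -- exactly the kind of parallel
# arithmetic that GPU cores, and purpose-built ASICs, execute concurrently.
activations = np.maximum(batch @ weights, 0) # matrix multiply + ReLU

print(activations.shape)                     # (64, 512)
```

Training multiplies this cost again: every layer's forward pass is repeated, gradients require comparable backward-pass arithmetic, and the whole loop runs over millions of examples, which is why hardware acceleration is decisive for deep learning.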

Ovum predicted that Intel and AMD had not lost sight of the huge potential market for powering machine learning applications and would at some point re-enter the race with Nvidia. For Intel, the Nervana acquisition is that bid, and we expect the synergies between Intel and Nervana technologies to yield impressive results. Nervana's ambition for the Nervana Engine was an advanced ASIC that would significantly outstrip current processor capabilities for deep learning; with Intel now powering the venture, the chip's eventual release promises to be one to watch.

Appendix

Further Reading

"The next chip arms race will be to power machine learning," IT0022-000725 (June 2016)

"On the Radar: Nervana offers the means to design, develop, and deploy deep learning models," IT0022-000708 (June 2016)

Author

Michael Azoff, Principal Analyst, IT Infrastructure Solutions

michael.azoff@ovum.com
