Graphcore, a Bristol, UK-based start-up, went live today, announcing a successful and oversubscribed series A venture funding round of more than $30m, a record in the UK. The names now backing the new venture have stellar backgrounds, with key funders linked to high-technology firms such as Bosch and Samsung, and VC firms with a history of backing successful ventures (Apple and Google, to name two). This means that Graphcore can call on first-class expertise and a deep network in the high-technology industry to help steer the company as it prepares to launch its flagship product in 2017: an artificial intelligence (AI) accelerator microprocessor to be known as the Intelligence Processing Unit (IPU). The company's location in Bristol is no accident: CEO Nigel Toon and CTO Simon Knowles have backgrounds in custom processors, and many of their team go back to the Inmos Transputer, which was developed in Bristol.

The AI accelerator market, particularly for training and inference on deep neural networks, is forecast to grow from its current million-dollar bracket to a billion-dollar industry within the next three to five years. The success of general-purpose computing on GPUs for deep learning neural networks has made Nvidia the dominant player in AI acceleration, and it is now the turn of new hardware dedicated to AI to up the game, something Graphcore promises to deliver. Competition in the accelerator market bodes well for AI system designers, and will drive the next round of innovation.
The deep learning accelerator market will be competitive in 2017
Graphcore’s key differentiator is that its technology will offer not only data parallelism but also instruction-level parallelism. This should boost the performance of massively parallel networks both in how they handle training and how efficiently they perform inference on data, and in how algorithms are processed at the neuron layers. The company name includes the word “graph”: the mathematical concept of a graph is a network of vertices linked by edges. Data parallelism boosts activity on the edges, while instruction-level parallelism boosts activity at the vertices (the neurons). This twin-parallelism approach will differentiate Graphcore from Nvidia, the dominant player in the market, whose GPU-based accelerators mainly provide data parallelism.
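As a loose illustration of this vertex/edge split (a toy sketch only, not Graphcore’s programming model; the graph, weights, and function names here are all invented for the example):

```python
# Toy illustration of the vertex/edge division of work in a neural-network
# graph. Edges carry weighted data (where data parallelism applies); vertices
# run their own activation code (where per-vertex instruction streams apply).
import math

vertices = ["a", "b", "c"]                    # neuron identifiers
edges = {("a", "c"): 0.5, ("b", "c"): -1.0}   # (src, dst) -> edge weight
inputs = {"a": 2.0, "b": 3.0}                 # activations entering the graph

# Edge work: every weighted product is independent of the others, so
# hardware could compute them in parallel (a plain loop stands in here).
messages = {}
for (src, dst), w in edges.items():
    messages[dst] = messages.get(dst, 0.0) + w * inputs[src]

def activate(x):
    """Sigmoid activation applied at each vertex."""
    return 1.0 / (1.0 + math.exp(-x))

# Vertex work: each neuron applies its activation independently, so
# different vertices could run different instruction streams concurrently.
outputs = {v: activate(total) for v, total in messages.items()}
print(outputs)  # activation of vertex "c" on its summed weighted inputs
```

Here vertex "c" receives 0.5 × 2.0 + (−1.0) × 3.0 = −2.0 and applies its sigmoid, so the edge stage and the vertex stage are cleanly separable units of parallel work.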
With machine learning systems leaving the research laboratories and entering real-world enterprise applications, products, and services, the market for AI accelerators is predicted to grow rapidly. The three application areas to benefit most immediately from this migration will be automotive, healthcare, and any vertical industry that requires intelligent automation to mine big data.
The end game is for inference accelerator processors to be inexpensive enough to embed in everyday products, creating a mass market for this type of device. AI accelerators will be used in a combination of modes: running in the cloud with IoT connectivity, running training algorithms in real-time learning scenarios, and running pure inference systems that can be regularly updated over the air. When one robot learns to ride a bicycle, all robots (by that manufacturer) can ride a bicycle.
Michael Azoff, Principal Analyst, IT Infrastructure Solutions