
Ovum view


For the past two years AMD has been quietly working on Radeon Instinct, a new series of high-end GPU accelerators targeted at the artificial intelligence (AI) accelerator market. Until recently, AI was a niche application area for GPUs, but the continuing success of deep learning rests significantly on the availability of dedicated hardware accelerators, and with AI/machine learning increasingly moving from research labs into real-world applications, this is turning into a fast-growing market forecast to be worth billions of dollars. No major microprocessor player can afford to be absent from this market, which to date has been largely dominated by Nvidia. With Intel making recent acquisitions to strengthen its position for the future, the focus has turned to AMD. With its public statements on Radeon Instinct, and first availability in the second half of the year, AMD is very much part of this race.

AMD is positioning itself as the open alternative in AI accelerators

Clearly not content to be a “me too”, AMD is positioning itself as an open alternative by open sourcing the entire software stack that runs on Radeon Instinct. This stack plays an important role in interfacing machine learning applications to the hardware accelerators. AMD has also created middleware called MIOpen, which provides GPU-tuned implementations of standard machine learning routines, including convolution, pooling, activation functions, normalization, and tensor format operations. MIOpen runs on the Radeon Open Compute platform (ROCm), which interfaces directly with the GPUs. The stack integrates with popular deep learning frameworks, including TensorFlow, Torch7, and Caffe.
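To make concrete what routines such as these do, the sketch below reimplements two of them (a ReLU activation and 1D max pooling) in plain, CPU-only Python. This is purely illustrative: libraries such as MIOpen supply GPU-tuned versions of these primitives, called through a framework rather than written by hand, and the function names here are our own, not MIOpen's API.

```python
# Toy, CPU-only illustrations of two standard machine learning primitives;
# accelerator libraries such as MIOpen provide GPU-tuned equivalents.

def relu(xs):
    """Element-wise rectified linear activation: max(0, x)."""
    return [max(0.0, x) for x in xs]

def max_pool_1d(xs, window=2):
    """Non-overlapping 1D max pooling with the given window size."""
    return [max(xs[i:i + window]) for i in range(0, len(xs) - window + 1, window)]

activations = relu([-1.5, 0.5, 2.0, -0.25, 3.0, 1.0])
pooled = max_pool_1d(activations, window=2)
print(activations)  # [0.0, 0.5, 2.0, 0.0, 3.0, 1.0]
print(pooled)       # [0.5, 2.0, 3.0]
```

The value of middleware like MIOpen is precisely that application developers never write loops like these: the framework dispatches to hand-tuned GPU kernels instead.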

Opting for open standards such as ISO C++ and OpenCL positions AMD as a counter to vendor lock-in, most notably in the case of Nvidia’s proprietary CUDA platform. Customers prefer to have choice and to avoid being tied to one supplier. However, vendors are sometimes forced to create proprietary solutions where no alternative exists, and this applies to the PCIe link between CPU and GPU, and between GPUs. PCIe, a standard largely controlled by Intel, has become a bottleneck that forced Nvidia to create NVLink as a higher-bandwidth alternative. AMD has also created its own enhanced link but has not yet released further details publicly.

Radeon Instinct and Ryzen (Zen “Naples”) platform for enhanced GPU and CPU performance

AMD is building a next-generation series of CPUs based on its latest Zen architecture, with the first releases branded Ryzen. Zen “Naples” combined with Radeon Instinct GPUs is designed to provide an enhanced platform for AI acceleration, offering low latency and enhanced peer-to-peer communications at lower cost. These workloads are aimed at data centers; while AMD is conversant with the needs of the embedded market, it has yet to announce AI accelerators for that sector.

According to the information released, the Radeon Instinct series includes three GPUs: the MI6 and MI8, which are designed for AI inference applications (running pre-trained neural networks in production use), and the MI25, which is based on a new GPU architecture called Vega and is designed for training neural networks as well as inference. Specific details of the MI25 are yet to be released.
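The training/inference distinction above can be made concrete with a toy example. In inference, the weights of a network are already fixed by an earlier training run, and production use is just a forward pass. The sketch below shows this with a single-neuron classifier in plain Python; the weights and inputs are hypothetical values chosen only for illustration.

```python
import math

# Toy illustration of "inference": running a model whose weights were
# fixed during training. These weights are hypothetical, not from any
# real trained network.
WEIGHTS = [0.8, -0.4]  # would normally come from a training run
BIAS = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def infer(features):
    """Forward pass only: no gradients, no weight updates."""
    z = sum(w * x for w, x in zip(WEIGHTS, features)) + BIAS
    return sigmoid(z)

print(round(infer([1.0, 2.0]), 3))  # 0.525
```

Training, by contrast, repeatedly runs passes like this over large datasets while also computing gradients and updating the weights, which is why training-oriented parts like the MI25 demand substantially more compute and memory bandwidth than inference-oriented parts like the MI6 and MI8.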

AMD’s entry into the deep learning/AI accelerator market will be welcomed by prospective customers in the automotive, financial, health, and other sectors, and will add to what already promises to be a burgeoning space, with many new architectures appearing for the first time. This is also good news for AI researchers.


Further Reading

Trends to Watch 2017: Artificial Intelligence, IT0014-003200 (Jan 2017).

How to Get Started with Artificial Intelligence: A Guide for Enterprises, IT0014-003161 (Nov 2016).

Making artificial intelligence applications safe for humans, IT0022-000801 (Oct 2016).

The next chip arms race will be to power machine learning, IT0022-000725 (June 2016).

DeepMind AlphaGo and general artificial intelligence: are we there yet? IT0022-000653 (March 2016).


Michael Azoff, Principal Analyst, IT Infrastructure Solutions
