The battle for the public cloud has been raised a notch, with AWS enhancing services across its broad portfolio. At the recent AWS re:Invent 2016 conference in Las Vegas, AWS announced a number of new AI services that embed deep learning (DL), as well as enhanced GPU support such as elastic GPUs, which can be attached to EC2 instances in a range of sizes. These new AI offerings place AWS at the forefront of providing developers with advanced AI solutions, enabling existing and new applications to benefit from the latest DL technology. Overall, the message from re:Invent was that large enterprise accounts are moving to Amazon’s cloud.
Amazon AI services will accelerate the use of DL technology
At the conference, AWS announced a number of new AI services powered by deep learning (DL) neural network technology. AWS declared active support for MXNet, an open source DL framework, having hired a number of engineers from Carnegie Mellon University who were involved in the project. Amazon’s home-grown DL library, DSSTNE, was designed for personalization workloads; the refocus on MXNet brings computational and memory efficiency across heterogeneous devices, from mobile handsets to distributed GPU clusters. MXNet is now used both internally within AWS and in its AI service offerings, and AWS will further support it with new tools for developers. AWS will also continue to support a range of leading DL libraries, including TensorFlow, Theano, Caffe, and Torch.
AWS offers GPU support for DL with one click, and has created P2, an EC2 instance type designed for DL workloads. AWS is also supporting FPGAs, which offer a middle ground between running software algorithms on CPUs or GPUs and hardwiring AI systems in silicon: hardwiring an algorithm in an FPGA accelerates its execution, but designing and testing such systems is not as rapid as working with pure software algorithms.
At re:Invent, Amazon announced three new Amazon AI services (with more planned for next year): Amazon Rekognition, Amazon Polly, and Amazon Lex. The services are powered by DL for real-time and batch analysis, and are designed to be easy to use and low cost.
Amazon Rekognition offers image recognition and analysis, such as facial analysis and categorization, and returns a confidence score (a percentage probability) for each result; the user sets the confidence threshold at which a recognition decision is accepted. Recognition extends to the emotional state conveyed by facial expressions, as well as features such as beards, glasses, and so on.
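The threshold-based decision pattern can be sketched in a few lines. The sample response below mimics the general shape of a Rekognition label-detection result (a list of labels, each with a name and confidence score); the specific values are invented for illustration.

```python
# Sample response in the general shape of a Rekognition label-detection result.
# The labels and confidence values here are invented for illustration.
sample_response = {
    "Labels": [
        {"Name": "Person", "Confidence": 99.2},
        {"Name": "Beard", "Confidence": 91.5},
        {"Name": "Glasses", "Confidence": 62.3},
    ]
}

def labels_above(response, threshold):
    """Keep only labels whose confidence score meets the caller's threshold."""
    return [label["Name"] for label in response["Labels"]
            if label["Confidence"] >= threshold]

# A stricter threshold accepts fewer recognitions.
print(labels_above(sample_response, 90.0))  # ['Person', 'Beard']
```

The key design point is that Rekognition does not make the accept/reject decision itself; it reports a confidence score and leaves the cut-off to the application.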
Amazon Polly converts text to lifelike speech (delivered as an MP3 audio stream) with a choice of 47 voices across 24 languages. It has low enough latency for real-time response, and has advanced features such as speech markup.
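As a minimal sketch, a Polly text-to-speech request boils down to the text, a voice, and an output format. The helper below simply assembles those parameters; the voice name used is one of Polly's English voices, and actually sending the request (shown in the comment) would require boto3 and configured AWS credentials.

```python
def build_polly_request(text, voice_id="Joanna", output_format="mp3"):
    """Assemble the parameters for a Polly text-to-speech request.

    'Joanna' is one of Polly's English voices; 47 voices across
    24 languages were available at launch.
    """
    return {"Text": text, "VoiceId": voice_id, "OutputFormat": output_format}

request = build_polly_request("Hello from Amazon Polly.")

# With boto3 and AWS credentials configured, this would be sent as:
#   import boto3
#   polly = boto3.client("polly")
#   audio = polly.synthesize_speech(**request)["AudioStream"].read()
print(request["OutputFormat"])  # mp3
```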
Amazon Lex is the conversation engine also used in Amazon Alexa. For Amazon, Lex represents a third generation of conversation technology: the first generation was machine-oriented, the second control-oriented, and the third is intent-oriented. So, for example, a person wanting to book a flight has that booking as the intent, and Lex will engage in a sequence of questions and statements in order to complete the intent. In a flight-assistant chatbot, the designer fills out the fields necessary for the flight (origin, destination, flight date, and so on) and Lex uses these to guide the conversation. The designer only needs to complete a select set of key fields; Lex automatically expands them with many more related fields. When linked to AWS Lambda, Lex can also automate the purchase of tickets and other related tasks, and it can use Amazon Polly to respond to the user by voice.
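The slot-filling flow described above can be modeled in a toy sketch: given the fields a flight-booking intent requires, the bot prompts for the first one still missing, and fulfills the intent once all are collected. This illustrates the conversational pattern Lex automates; the field names and prompts are hypothetical, and this is not the Lex API.

```python
# Toy model of intent-oriented slot filling. Field names and prompts are
# hypothetical examples, not Lex's actual schema.
REQUIRED_SLOTS = ["origin", "destination", "flight_date"]
PROMPTS = {
    "origin": "Which city are you flying from?",
    "destination": "Where would you like to fly to?",
    "flight_date": "What date do you want to travel?",
}

def next_action(collected):
    """Return the bot's next utterance for the flight-booking intent."""
    for slot in REQUIRED_SLOTS:
        if slot not in collected:
            return PROMPTS[slot]  # elicit the first missing slot
    # All slots filled: hand off to fulfillment (in Lex, e.g. an AWS Lambda function).
    return "Booking flight from {origin} to {destination} on {flight_date}.".format(**collected)

print(next_action({"origin": "SFO"}))
print(next_action({"origin": "SFO", "destination": "HNL", "flight_date": "2016-12-24"}))
```

The point of the pattern is that the conversation is driven by what the intent still needs, not by a fixed script; the user can supply slots in any order and the bot asks only for what is missing.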
Amazon’s internal use of AI technology, on which it has over a thousand developers working, includes applications such as discovery and search recommendations, fulfillment and logistics, and Alexa and Echo; AI/machine learning also enhances many of its other features and products. These internal AI capabilities are what AWS has now exposed to its customers as external services.
AWS is attracting large enterprises to the cloud
At AWS re:Invent 2016, Peter Weis, VP and CIO of Matson, a multibillion-dollar enterprise, spoke about why Matson went all in with AWS. Matson had been with another major cloud provider and had tried to make the relationship work, but faced a number of obstacles. Foremost was a culture gap: the provider did not have cloud-native in its DNA, a large part of its business was still legacy/monolith-based, and some of its divisions saw cloud as cannibalizing their business. This resulted in cultural and technological friction. Matson had committed to moving to cloud-native technologies because it saw them as the future, and its systems had already made the transition to a modular architecture. A key factor in moving to AWS and successfully adopting its cloud-native technologies was therefore Matson’s readiness to make the transition to microservices and containers. Not least of the advantages was being able to hire grade-A developers, who were unlikely to want to continue building monolithic systems.
With its new cloud-native database offerings, Amazon is aiming to lure large enterprise accounts. Its relationships with the vendors it is targeting in the enterprise market, such as Oracle, Salesforce, and SAP, are complex, because these same vendors are also partners. To stay ahead of the other cloud providers, AWS builds its own digital hardware and runs its own global private network to connect its data centers: relying on existing equipment providers and the public Internet simply does not scale well enough for AWS’s huge capacity needs and low-latency requirements. At the scale AWS operates, building and owning its own equipment reduces costs as well as speeding up maintenance and updates.
Michael Azoff, Principal Analyst, IT Infrastructure Solutions