
Ovum view


Artificial intelligence (AI) has the potential to help firms stay relevant to their customers, but it also carries potentially catastrophic risks. Elon Musk donated $10m to the Future of Life Institute to fund research into safe AI and into preventing it from reaching a stage where it runs amok and destroys humankind.

Pega's T-Switch provides control over AI

Yet AI in the guise of machine learning provides the key to dynamically orchestrating customer experience and the realization of true omnichannel customer engagement. Dr. Rob Walker, vice president of decision management at Pegasystems, explained the risks associated with AI at PegaWorld 2017. AI is safe when it is transparent, he said. By that he means that AI decisions can be unraveled and explained.

The potential dark side of AI emerges when the logic behind its output cannot be explained and it is coupled with unfettered automation. This is opaque AI, and deep learning is opaque AI. It has the potential to generate aberrant outputs if not controlled. Walker cited Microsoft's Twitter experiment with the tweetbot TayTweets: within a short span of time, TayTweets evolved into a genocide supporter, and Microsoft immediately shut it down.

Deep learning uses neural networks, loosely modeled on the way neuroscientists believe the human brain works. Each neuron in the network ingests data and performs a calculation, and its outputs become inputs to successive layers of neurons in the network. During training, the error in the final output is fed back through the network to adjust the connection weights, creating the self-learning capability. The more complex neural networks can contain thousands of neurons connected across hundreds of layers, and no single weight corresponds to a human-readable rule. This makes deep learning fundamentally opaque.
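The mechanics above can be illustrated with a toy network. This is a minimal sketch for illustration only, not Pega's or any production implementation: two inputs feed two hidden neurons, whose outputs feed one output neuron, and one backpropagation step nudges every connection weight based on the output error. Even at this scale, the learned weights are just numbers with no self-evident meaning, which is the root of the opacity problem.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy network: 2 inputs -> 2 hidden neurons -> 1 output neuron.
# The weights start random and, after training, remain opaque numbers.
w_hidden = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
w_out = [random.uniform(-1, 1) for _ in range(2)]

def forward(x):
    # Each neuron sums its weighted inputs and applies an activation.
    hidden = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in w_hidden]
    out = sigmoid(sum(w * h for w, h in zip(w_out, hidden)))
    return hidden, out

def train_step(x, target, lr=0.5):
    """One backpropagation step: the output error is fed back
    through the network to adjust every connection weight."""
    hidden, out = forward(x)
    delta_out = (out - target) * out * (1 - out)
    for j, h in enumerate(hidden):
        grad_hidden = delta_out * w_out[j] * h * (1 - h)
        w_out[j] -= lr * delta_out * h
        for i, xi in enumerate(x):
            w_hidden[j][i] -= lr * grad_hidden * xi
    return out

# Repeated steps shrink the prediction error on this one example,
# yet the resulting weights explain nothing about *why*.
errors = [abs(train_step([1.0, 0.0], 1.0) - 1.0) for _ in range(200)]
print(errors[0] > errors[-1])
```

After 200 steps the error has clearly fallen, but inspecting `w_hidden` and `w_out` gives no human-interpretable account of the decision; scale this to millions of weights and the transparency problem Walker describes follows directly.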

The outputs can be extraordinarily beneficial, such as the search for personalized cancer treatments. However, the lack of transparency means there is risk attached to automated actions arising from the outputs. In the case of driverless automobiles, aberrant behavior may go unnoticed until it is too late – after a fatal crash.

Pega takes the moral high ground with T-Switch

In business, use of AI, however aberrant, is unlikely to lead to death, but if a TayTweets-like scenario were to emerge from an automated marketing campaign system, the reputational damage could destroy the business. Pegasystems has developed a transparency control, the T-Switch, in Pega 7's Customer Decision Hub. It allows marketers and others to turn off opaque AI, or to graduate it on a sliding scale, and to review a next-best-action recommendation before allowing it to be automated. This is a first in customer engagement systems.
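Pega has not published the T-Switch internals, so the following is a hypothetical sketch of the pattern described: each model carries a transparency score on a sliding scale, and a policy threshold decides whether a model's next-best-action recommendation is automated or held for human review. All names here (`Recommendation`, `route`, the 1-5 scale) are illustrative assumptions, not Pega's API.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str
    model: str
    transparency: int  # assumed scale: 1 (opaque, e.g. deep learning) .. 5 (fully explainable)

def route(rec: Recommendation, threshold: int) -> str:
    """Hypothetical T-Switch-style gate: recommendations from models
    below the transparency threshold are queued for human review
    instead of being automated."""
    return "automate" if rec.transparency >= threshold else "review"

recs = [
    Recommendation("retention offer", "scorecard", transparency=5),
    Recommendation("upsell premium", "deep-net", transparency=1),
]
decisions = {r.model: route(r, threshold=3) for r in recs}
print(decisions)  # {'scorecard': 'automate', 'deep-net': 'review'}
```

Raising the threshold effectively "turns off" opaque AI for automated actions, which mirrors the sliding-scale control the article describes.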


Further reading

Pega through the Omnichannel Lens, IT0020-000258 (April 2017)


Jeremy Cox, Principal Analyst, Customer Engagement Practice
