
Ovum view


Across the broad spectrum of AI usage, deployment will both reduce and add to the workload of enterprise risk managers and practitioners.

Examples of positive, efficiency-gaining AI-related outcomes for risk professionals include technology applied to provide early warning in operational systems: analyzing threats in real time and prompting a human response, or triggering automated action, could prevent negative events from causing loss or damage. Machine and deep learning can also help analyze an operation’s characteristics over time and provide input for more effective and efficient risk management. A specific example is contract analysis using natural language processing techniques, with the objectives of improving the quality of individual agreements and providing an overview of the contract-related risks across all of an organization’s contractual relationships.
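To make the contract-analysis idea concrete, here is a minimal illustrative sketch in Python. It is not any specific product's approach: it stands in for fuller NLP by flagging clauses that contain risk-related terms, then aggregating the flags across a (hypothetical) contract portfolio to give the portfolio-wide overview described above. All names and the term list are assumptions for illustration.

```python
# Minimal sketch: flag risk-related clauses per contract, then build a
# portfolio-wide overview. Keyword matching stands in for richer NLP.

RISK_TERMS = {"indemnify", "penalty", "termination", "liability"}

def flag_clauses(contract_text):
    """Return the clauses that contain any risk-related term."""
    clauses = [c.strip() for c in contract_text.split(".") if c.strip()]
    return [c for c in clauses if any(t in c.lower() for t in RISK_TERMS)]

# Hypothetical portfolio of contract texts.
portfolio = {
    "supplier_a": "The supplier shall indemnify the buyer. Delivery is monthly.",
    "supplier_b": "Late delivery incurs a penalty. Invoices are net 30.",
}

# Overview of contract-related risk exposure across the portfolio.
overview = {name: flag_clauses(text) for name, text in portfolio.items()}
```

In practice the keyword set would be replaced by a trained clause classifier, but the shape of the output, per-contract flags rolled up into one portfolio view, is the same.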

However, organizational risk practices will have to consider a number of new areas of risk that can arise from AI implementations, and find ways to counter or mitigate specific instances. General categories of new AI-related risk include bias, overconfidence in AI, cyberattacks, and legal and reputational risks.

Even though an AI engine may be impartial, existing biases can be reinforced in two major ways: via humans introducing their own elements of bias as part of helping AI to understand what is relevant and what is not while the AI models are being “trained”; and via the data the AI consumes incorporating behavior patterns of the past rather than being a neutral source of all possible behaviors and their outcomes. Poorly trained AI may not make the right connections to correctly identify risks or issues.
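The second bias channel, historical data encoding past behavior, can be shown with a small illustrative sketch. The data and group labels below are invented for the example; per-group frequency counting stands in for model training.

```python
# Sketch: a policy "trained" purely on historical decisions reproduces
# the bias in those decisions, even though the counting logic is neutral.
from collections import defaultdict

# Hypothetical past decisions: (group, approved) pairs.
history = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def learned_approval_rate(records):
    """Learn per-group approval rates from past outcomes."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for group, approved in records:
        counts[group][0] += int(approved)
        counts[group][1] += 1
    return {g: a / t for g, (a, t) in counts.items()}

rates = learned_approval_rate(history)
# A policy mimicking the past favors group A over group B purely
# because the historical data did, not because the code is partial.
```

The point mirrors the paragraph above: the algorithm itself is impartial, yet the learned behavior inherits whatever skew the historical record contains.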

There is also a risk of overconfidence in AI: unwarranted assumptions about AI’s ability to provide “true” insights from incomplete or poor-quality data, faulty training, or defective programming. Relying on AI as the sole source of risk insight poses the same kind of risk as any narrowly focused approach to risk management.

AI systems have direct access to large quantities of corporate data. They will be a natural and increasingly attractive target for cyberattacks, so security must be built in to a high standard.

Data privacy regulation is still catching up with AI, and there is little existing precedent to guide appropriate use, particularly with the advent of GDPR. Any system that is biased, error-prone, or subject to attack constitutes a reputational risk to the organization that operates it, and indeed a risk of regulatory non-compliance if the system is not managed appropriately.

Without doubt, attractive forecasts of financial gains will feature in many business cases for investment in AI. However, Ovum strongly advises all organizations to ensure that the potential impact of any AI-related risks is featured alongside the benefits, and that accountability is defined.



Alan Rodger, Senior Analyst, Infrastructure Solutions
