
Straight Talk Technology

Ovum view

Lady Justice, the blindfolded statue bearing scales found outside many courthouses, is the personification of justice, modeled on the Roman goddess Iustitia. The principles she represents are simple: the blindfold (a later addition) signifies impartiality, so that justice is dispensed regardless of status, while the scales hang free because the weight of evidence may change over the course of a trial. This statue strikes me as an excellent metaphor for how machine learning, and other artificial intelligence (AI) capabilities, should be built.

Bias can be found in data as well as people

The topic of bias in machine learning, and of ethics in AI more generally, is a hot one – and rightly so. Simply put, as more tasks are automated with these technologies, more of the decisions that directly affect people are delegated to them. Obvious examples include screening resumes for a job opening, or deciding which customer receives which offer in an e-commerce setting. The potential problem arises in the data science effort used to create the algorithms that power those capabilities. If, for example, the data scientist holds a conscious or unconscious bias toward or against a particular group of people, it may steer their model development, producing an algorithm with that bias baked in. Algorithmic bias can be harder still to detect when it is hidden away in historical data.

As the data scientist works through the creation and testing of a model, they use data to power that effort. That data is historical – a record of what has happened in the past. It is the product of human activity and can therefore carry a record of human biases. In the resume-screening example, the historical data may reflect a bias against a group of people – again, whether deliberate or coincidental. If this is the data used to train the model, the model may well embed that bias and express it once deployed in the real world. Internet-scale data sources pose a further risk: huge data sets that are anonymous on their own can yield sensitive data when combined – for example, personal information can be inferred by connecting web-browsing data.

Back to Lady Justice, my suggestion is this: think of the model as the statue. It needs a blindfold to ensure that its decisions are taken without regard for status, and scales that weigh the evidence on its own merits. In practical terms, this means denying machine learning data that should have no bearing on the decision in hand – for example, age, ethnicity, gender, and sexuality. Bear in mind that it may not be as simple as removing fields from existing data; the risk that combining large data sets may reveal such sensitive attributes remains. It also requires that everyone with an interest in that capability – not just data scientists – be involved in deciding what the pertinent data-based evidence is, underlining the importance of business context when it comes to data science. The ethics of AI is an emerging area of growing importance as we automate an ever-expanding amount of work. Being aware of the inherent risks, and ensuring that these capabilities are not built in isolation from the rest of the business and its values, is a vital first step.
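As an illustrative sketch only – all field names here are hypothetical, not drawn from any real system – the "blindfold" idea above might look like this in practice: strip sensitive fields before training, then run a crude check for remaining fields that act as proxies for a sensitive attribute.

```python
from collections import Counter, defaultdict

# Fields that should have no bearing on the decision (hypothetical list).
SENSITIVE_FIELDS = {"age", "ethnicity", "gender", "sexuality"}

def blindfold(records, sensitive=SENSITIVE_FIELDS):
    """Return copies of the records with sensitive fields removed."""
    return [{k: v for k, v in r.items() if k not in sensitive} for r in records]

def proxy_candidates(records, sensitive_field, threshold=0.9):
    """Flag fields whose values largely predict a sensitive attribute.

    Removing a column is not enough: a remaining field (e.g. postcode)
    can encode a sensitive one. For each candidate field, count how many
    records are matched by the majority sensitive value within their
    group, and flag the field if that fraction exceeds the threshold.
    """
    flagged = []
    for field in records[0]:
        if field == sensitive_field:
            continue
        groups = defaultdict(Counter)
        for r in records:
            groups[r[field]][r[sensitive_field]] += 1
        hits = sum(c.most_common(1)[0][1] for c in groups.values())
        if hits / len(records) >= threshold:
            flagged.append(field)
    return flagged

records = [
    {"postcode": "A", "score": 7, "gender": "f"},
    {"postcode": "A", "score": 6, "gender": "f"},
    {"postcode": "B", "score": 7, "m" if False else "gender": "m"},
    {"postcode": "B", "score": 5, "gender": "m"},
][:2] + [
    {"postcode": "B", "score": 7, "gender": "m"},
    {"postcode": "B", "score": 5, "gender": "m"},
]

cleaned = blindfold(records)       # gender removed from every record
proxies = proxy_candidates(records, "gender")  # postcode flagged as a proxy
```

The proxy check is deliberately naive; in a real project this is where the broader business conversation the article calls for – which fields count as pertinent evidence – has to happen before any modeling starts.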

Straight Talk is a weekly briefing from the desk of the Chief Research Officer. To receive this newsletter by email, please contact us.

