An artificial intelligence that dislikes women or people of color sounds like a concept straight out of a Twilight Zone episode. But sadly, it’s reality. How did we get to this situation? And is it possible to build an AI that won’t reflect the bone-deep prejudices that are – knowingly or unknowingly – built into our social systems? To answer that second question, it’s crucial to address the first one.

How a Sexist AI Happens

Okay, the first point to make is that sexist or racist AI doesn’t emerge from nowhere. Instead, it reflects the prejudices already deeply held within both society at large and the tech industry specifically. Don’t believe us about sexism in tech? One study from earlier this year found that 57 out of 58 major U.S. cities paid women in tech less than men. Last year, two female tech cofounders demonstrated tech sexism at work by proving they could make better connections once they invented a fictional male cofounder. And as long as tech companies continue overlooking sexism, they’ll keep perpetuating a system that prioritizes male applicants and promotes male staff.

Sexist AIs Start With a Blinkered Industry…

The tech world loves rapid growth above all else. But this year, it’s finally begun to come to terms with the impact its culture can have, and a sense of responsibility is finally taking root. Few sum up that growth-at-all-costs attitude better than former Reddit product head Dan McComas, whose recent New York Magazine interview is bluntly titled ‘I Fundamentally Believe That My Time at Reddit Made the World a Worse Place’.

…And Machine Learning Perpetuates Them

It’s this attitude that’s at the core of prejudiced AI, which perpetuates the system just as clearly, if a little more mechanically. As Lin Classon, director of public cloud strategy at Ensono, puts it, the process of machine learning is the issue. Ben Dolmar, director of software development at the Nerdery, backs her up. Machine learning begins by providing a model with a core data set. The model trains on this before producing its own outputs. Any historical issues in the core data are then reproduced. Translation? Sexist data turns into sexist outputs. “It’s not unlike painting with watercolors, where the brush has to be clean or it taints the colors,” says Classon. And, in modern society, sexism turns up everywhere, Classon says, whether it’s in “recruiting, loan applications or stock photos.” Or even in the emptiness of women’s restrooms at major tech conferences, as Classon has pointed out to Tech.Co before.
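To make that pipeline concrete, here’s a minimal, hypothetical sketch of our own (not Classon’s or Dolmar’s, and built on purely synthetic data): a model trained on fabricated “historical hiring” records in which women were systematically penalized learns that penalty right back, as a strongly negative weight on the gender feature.

```python
# A minimal sketch of how historical bias in training data is reproduced
# by a model. All data here is synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Feature 0: candidate skill score; feature 1: gender (0 = male, 1 = female).
skill = rng.normal(size=n)
gender = rng.integers(0, 2, size=n)

# Hypothetical historical hiring decisions: skill matters, but past decisions
# also penalized women -- this is the bias baked into the "core data set".
hired = (skill - 0.8 * gender + rng.normal(scale=0.5, size=n)) > 0

X = np.column_stack([skill, gender])
model = LogisticRegression().fit(X, hired)

# The trained model faithfully reproduces the historical penalty:
# the learned coefficient on the gender feature comes out strongly negative.
print(dict(zip(["skill", "gender"], model.coef_[0].round(2))))
```

Nothing in that code is malicious. The model is simply doing its job of matching the patterns in its training data, and that is exactly the problem.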

How to Combat AI Prejudice

How do we solve a problem like AI prejudice? Classon boils it down to a key guiding principle: conscientious and collective vigilance. And that begins with ensuring the community behind AI development is equipped to spot the issues. Which takes us right back to the core problem of ensuring that a diverse developer community is in place, one that can find issues sooner and address them faster. Practically speaking, Classon has further suggestions:

Increased Transparency

Right now, machine learning algorithms function like black boxes: Data goes in, trained models come out. Instead, the AI itself should be required to explain its decisions, opening up a conversation about how it arrived at them.
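As a rough illustration of what that could look like (ours, not anything Classon prescribes): for simple models, every decision can be decomposed into per-feature contributions, so a reviewer can see which inputs pushed the outcome and by how much. The feature names below are invented for the example.

```python
# A hypothetical sketch of model transparency: for a linear model, each
# decision can be broken down into per-feature contributions instead of
# being treated as a black box. Data and feature names are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(5_000, 3))
# Synthetic labels that depend mostly on the first two features.
y = (X @ np.array([1.5, -1.0, 0.1]) + rng.normal(scale=0.5, size=5_000)) > 0

feature_names = ["income", "years_employed", "zip_code_index"]  # hypothetical
model = LogisticRegression().fit(X, y)

applicant = X[0]
contributions = model.coef_[0] * applicant  # each feature's pull on the score
for name, value in zip(feature_names, contributions):
    print(f"{name:>16}: {value:+.2f}")
print(f"{'intercept':>16}: {model.intercept_[0]:+.2f}")
# The sum of these terms is the model's decision score (log-odds), so a
# reviewer can see exactly which inputs pushed the decision either way.
```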

Routine Audits

It’s not just a tech problem, but innovative AI companies are responsible for auditing their work and developing standards that can improve the entire industry. One example is Microsoft’s multidisciplinary FATE group. The acronym stands for “fairness, accountability, transparency and ethics,” and it’s examining how an ethical AI could work.

A Feedback Loop

Instituting audits and transparency is a step, but we’ll need to keep taking more and more steps to reduce a problem that will likely never be fully solved.

Track Down the Subtleties

So, find the issues and you’ll solve the problem, right? Not so fast. Even “highly reliable algorithms,” Dolmar notes, can turn up the wrong answer. Picture a model that’s 99 percent accurate reviewing one million records. It’ll get 10,000 incorrect results, and it won’t be clear which ones those are.

One method to address bias is to adjust a model’s goals rather than the data. Data scientists and platform vendors, Dolmar says, are actively working to fight fire with fire. One demo from a major machine learning vendor at the Gartner ITXPO earlier this month debuted a model designed to monitor other machine-learning models, identifying their biases. “In a bit of irony,” Dolmar says, “the platform uses its own machine learning to generate the changes for the biased machine-learning models. Tools like this platform are what will ultimately help us address the bias that shows up in data when we accurately represent our current society and history. The problems here are hard, but they are tractable.”
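As a rough, hypothetical illustration of what an automated bias check might do (not the Gartner demo itself, which the article doesn’t detail): a small audit script can compare a model’s approval rates across groups and flag large gaps for human review. The 80 percent “four-fifths” threshold in the comments is a common rule of thumb, not something prescribed here.

```python
# A hypothetical sketch of auditing another model's predictions by
# comparing outcomes across groups. All data below is synthetic.
import numpy as np

def selection_rates(predictions, groups):
    """Fraction of positive predictions for each group."""
    return {g: float(predictions[groups == g].mean()) for g in np.unique(groups)}

def disparate_impact(predictions, groups):
    """Ratio of the lowest group's selection rate to the highest group's."""
    rates = selection_rates(predictions, groups)
    return min(rates.values()) / max(rates.values())

# Synthetic example: a screening model that approves men more often than women.
rng = np.random.default_rng(2)
groups = rng.choice(["men", "women"], size=1_000_000)
approve_prob = np.where(groups == "men", 0.30, 0.18)
predictions = rng.random(1_000_000) < approve_prob

print(selection_rates(predictions, groups))            # ~0.30 vs ~0.18
print(round(disparate_impact(predictions, groups), 2)) # ~0.6, below the
# common 0.8 "four-fifths" rule of thumb, so this model gets flagged.
# Note Dolmar's arithmetic: even a 99%-accurate model applied to these
# one million records would still mislabel roughly 10,000 of them.
```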

Where Next For AI?

The bottom line? AI’s reflection of our biases can be addressed, but only through a slow process that emphasizes feedback, audits, fact-checks, and transparency. It’s not unlike how prejudices must be addressed across the rest of society, too.