AI Bias and You

Bias is prejudice: the unfair weighing in favor of or against a person or idea. Think of how law enforcement tends to be more aggressive toward people of color in the U.S., or how locals often discriminate against refugees and immigrants in Europe. Bias is a part of our world. It’s a sad fact we have to accept, and then work to prevent.

With the advent of artificial intelligence, we are now seeing the full extent of how biased we really are. AI algorithms and tools are showing clear signs of racial, gender, and ethnic bias. If we do not take action now, these biases will only get worse.

We understand what bias is in people, but what is AI bias? It’s when AI tools, like chatbots or facial recognition systems, clearly discriminate against certain groups. The phenomenon can be subtle, as when facial recognition systems trained to flag potential thieves misidentify people of color at higher rates. Or it can be explicit, as when AI chatbots turn openly racist.

How does this bias arise? Is it an intrinsic problem with AI algorithms, or the fault of the programmers? As appealing as blaming the machines or the people behind them can be, most of the problem actually lies in our data.

AI models need huge amounts of data to be trained, and because of this they end up reproducing whatever biases are present in that data. You can think of an AI model as a kind of mimicking machine. It isn’t aware of what we are showing it, but it tries to capture the patterns nonetheless; if the things we show it are themselves biased, whether in a racist or xenophobic way, for example, it will inevitably reproduce that behavior.
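To make the mimicking-machine idea concrete, here is a minimal, hypothetical sketch in Python (using numpy and scikit-learn on entirely synthetic data): a model trained on biased historical decisions learns a strong weight on a protected attribute, even though nothing in the algorithm itself is prejudiced.

```python
# Toy illustration with synthetic data: a model trained on biased
# decisions learns to reproduce them.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# One feature that should matter (qualification) and one that should not
# (membership in a protected group, encoded 0/1).
qualification = rng.normal(0, 1, n)
group = rng.integers(0, 2, n)

# Biased historical labels: group 1 was systematically penalized,
# independent of qualification.
hired = qualification + 1.5 * (group == 0) + rng.normal(0, 0.5, n) > 1.0

X = np.column_stack([qualification, group])
model = LogisticRegression().fit(X, hired)

# The learned weight on `group` comes out strongly negative: the model
# has absorbed the bias present in its training data.
print("weight on qualification:", round(model.coef_[0][0], 2))
print("weight on group:", round(model.coef_[0][1], 2))
```

The point of the sketch is that the bias enters through the labels, not the learning algorithm: swap in unbiased labels and the same code produces a model with essentially no weight on the group feature.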

One real-life example of this emerged when scientists analyzed some of the most widely used datasets for training language-based AI and found that they contained multiple signs of gender and racial bias. Models trained on the biased data classified the sentence “He is a doctor” as much more likely to occur than “She is a doctor.” The same models also affirmed offensive comparisons like “Black is to criminal as Caucasian is to police” and “lawful is to Christianity as terrorist is to Islamic.” AI trained on such datasets exhibited multiple kinds of bias against various ethnicities and social groups.
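As an illustration of how such likelihood gaps can be measured, here is a hedged sketch using the Hugging Face transformers library and the public GPT-2 checkpoint. The studies described above may have used different models and datasets, so treat this as a demonstration of the method, not a reproduction of their results.

```python
# Compare how likely a pretrained language model finds two sentences.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def sentence_loss(sentence: str) -> float:
    """Average per-token negative log-likelihood; lower = more likely."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        # Passing labels makes the model return its own cross-entropy loss.
        outputs = model(**inputs, labels=inputs["input_ids"])
    return outputs.loss.item()

for s in ["He is a doctor.", "She is a doctor."]:
    print(s, sentence_loss(s))
```

If the model assigns a noticeably lower loss to one variant, it considers that phrasing more probable, which is exactly the kind of gap researchers use as evidence of learned bias.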

But the same AI models, when trained on data that was carefully filtered for bias, showed no such racial and gender discrimination. This goes to show that the biggest part of the problem really is in our data. AI is merely reproducing the many biases we as a society already hold; in a way, bias in AI is just a reflection of the bigger picture, and a sign that we live in a racist society.

Fortunately, there are many clever ways to deal with biased datasets, and a lot of research goes into finding new ways of training unbiased AI (one simple example is sketched below). The bigger question is whether big tech companies will take such precautions when building their technologies, especially when those technologies are deployed in the real world, like the facial recognition systems mentioned earlier.
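One such technique from the research literature is counterfactual data augmentation: balancing a training corpus by adding a copy of each sentence with gendered words swapped. Below is a minimal, hypothetical sketch; real pipelines use much larger word lists and handle grammar, names, and ambiguous words (for instance, “her” can map to either “him” or “his”).

```python
# A minimal sketch of counterfactual data augmentation: for every
# sentence in the corpus, add a copy with gendered words swapped, so
# the model sees "She is a doctor" as often as "He is a doctor".
SWAPS = {"he": "she", "she": "he", "him": "her", "her": "him",
         "his": "her", "man": "woman", "woman": "man"}

def swap_gendered_words(sentence: str) -> str:
    out = []
    for word in sentence.split():
        core = word.strip(".,!?")          # keep punctuation intact
        repl = SWAPS.get(core.lower())
        if repl is None:
            out.append(word)
        else:
            if core[0].isupper():          # preserve capitalization
                repl = repl.capitalize()
            out.append(word.replace(core, repl, 1))
    return " ".join(out)

corpus = ["He is a doctor.", "She stayed home with the kids."]
augmented = corpus + [swap_gendered_words(s) for s in corpus]
print(augmented)
# ['He is a doctor.', 'She stayed home with the kids.',
#  'She is a doctor.', 'He stayed home with the kids.']
```

Training on the augmented corpus pushes the model toward assigning similar likelihoods to both variants of each sentence, directly attacking the “He is a doctor” gap described above.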

Bias is one of the biggest problems in AI. It’s up to us to inform ourselves about its dangers and hold companies and governments accountable for their actions.