
AI Bias

AI Bias and You

Bias is prejudice: the unfair weighing in favor of or against a person or idea. Think of how law enforcement tends to be more aggressive toward people of color in the U.S., or how locals tend to discriminate against refugees and immigrants in Europe. Bias is part of our world. It's a sad fact we have to confront, and we must find ways to identify it and prevent it from taking hold.

Read More

AI can perpetuate racial bias in insurance underwriting

Yahoo! Money | Nov 1, 2022

While artificial intelligence (AI) technology has made mortgage underwriting and insurance claims faster and easier, it could also unintentionally discriminate against protected classes.

Advocating for the LGBTQ+ community in AI research

DeepMind | Jun 1, 2022

Research scientist Kevin McKee tells how his early love of science fiction and social psychology inspired his career, and how he's helping advance research in "queer fairness," support human-AI collaboration, and study the effects of AI on the LGBTQ+ community.

A Move for 'Algorithmic Reparation' Calls for Racial Justice in AI

Wired | Dec 23, 2021

Researchers are encouraging those who work in AI to explicitly consider racism, gender, and other structural inequalities.

Who Is Making Sure the A.I. Machines Aren’t Racist?

New York Times | Mar 16, 2021

Hundreds of people gathered for the first lecture at what had become the world’s most important conference on artificial intelligence — row after row of faces. Some were East Asian, a few were Indian, and a few were women. But the vast majority were white men. More than 5,500 people attended the meeting, five years ago in Barcelona, Spain.

Clearview AI sued in California over ‘most dangerous’ facial recognition database

Chicago Sun-Times | Mar 11, 2021

Civil liberties activists are suing a company that provides facial recognition services to law enforcement agencies and private companies around the world, contending that Clearview AI illegally stockpiled data on 3 billion people without their knowledge or permission.

Facial recognition can help restart post-pandemic travel. Here's how to limit the risks.

World Economic Forum | Dec 16, 2020

Facial recognition technology is increasingly being used by transportation companies to mitigate passenger fears about COVID-19 transmission by offering contactless identity verification. Though the potential benefits of this technology are robust, its implementation comes with some significant worries. The widespread use of facial recognition tech raises concerns over privacy, runaway surveillance, and racial profiling. In this article, the World Economic Forum outlines a framework for how to regulate this technology, including self-assessment questionnaires and the creation of an audit framework that would help validate the effectiveness of the process.

The Coded Gaze with Joy Buolamwini

Stanford HAI 2019 Fall Conference | Dec 7, 2020

In this video, AI ethics researcher Joy Buolamwini speaks at the 2019 fall conference of the Stanford Institute for Human-Centered Artificial Intelligence (HAI).

Coded Bias: A Documentary

AJL | Dec 7, 2020

When MIT researcher, poet, and computer scientist Joy Buolamwini uncovers racial and gender bias in artificial intelligence systems sold by big tech companies, she embarks on a journey alongside pioneering women sounding the alarm about the dangers of unchecked artificial intelligence that impacts us all. Through Joy’s transformation from scientist to steadfast advocate and the stories of everyday people experiencing technical harms, Coded Bias sheds light on the threats A.I. poses to civil rights and democracy.

A Simple Tactic That Could Help Reduce Bias in AI

Harvard Business Review | Dec 6, 2020

It's well established that AI-driven systems are subject to the biases of their human creators: we unwittingly write biases into systems by training them on biased data or with rules created by experts with implicit biases. The good news is that more strategic use of AI systems can give us a fresh chance to identify and remove decision biases from the underlying algorithms, even if we can't remove them completely from our own habits of mind.

How the 2020 election could impact US initiatives to address algorithmic bias

Business Insider | Dec 6, 2020

Although AI algorithms seem objective, they may contain biases. The biases can be inadvertent, such as a computer vision program that works less well at identifying minorities or women due to skewed sets of training photos. They can also be insidious, such as programs that reinforce racist home lending practices based on redlined historic data.

Dealing With Bias in Artificial Intelligence

New York Times | Dec 6, 2020

In this article, the New York Times speaks to three prominent women in the artificial intelligence industry to hear how they approach bias in their work. Daphne Koller is a co-founder of the online education company Coursera, and the founder and chief executive of Insitro, a company using machine learning to develop new drugs. Olga Russakovsky is an assistant professor in the Department of Computer Science at Princeton University who specializes in computer vision and a co-founder of the AI4ALL foundation, which works to increase diversity and inclusion within AI. Timnit Gebru is a former research scientist on Google's ethical AI team and a co-founder of Black in AI, which promotes people of color in the field.

Can We Make Our Robots Less Biased Than We Are?

New York Times | Dec 6, 2020

One of the ways bias in artificial intelligence manifests itself is in racial profiling. Facial-recognition systems have been shown to be more accurate at identifying white faces than those of other people. Not only is this an issue in criminal justice, where AI systems incorrectly accuse people of color of crimes, but it can be an issue of safety, too. Georgia Tech researchers recently found that eight self-driving car systems were worse at recognizing people with darker skin tones than paler ones.

Join us.

To stay informed about the ways AI and new technologies are affecting you and your community, sign up for our newsletter. Now is the time to stay up to date, in the interest of our communities.


World Economic Forum
National Urban League
Hispanic Federation
National Fair Housing Alliance
Black in AI
Queer in AI
Latinx in AI
Women in AI
Women in Machine Learning


Chan Zuckerberg Initiative

We are proud to be sponsored by some of the world's leaders in AI and AI-related fields. These organizations are drawing the maps for an unknown world. By recognizing the need to engage communities of color, these partners are ensuring a more equitable AI future for everyone.

Become a Sponsor