AI Bias

AI Bias and You

Bias in artificial intelligence arises in different ways. It can be written into an algorithm unknowingly, the product of a programmer's unconscious biases, or it can emerge from training data that reflects historical or societal inequalities.

Facial recognition can help restart post-pandemic travel. Here's how to limit the risks.

World Economic Forum | Dec 16, 2020

Facial recognition technology is increasingly being used by transportation companies to mitigate passenger fears about COVID-19 transmission by offering contactless identity verification. Though the potential benefits of this technology are robust, its implementation comes with some significant worries. The widespread use of facial recognition tech raises concerns over privacy, runaway surveillance, and racial profiling. In this article, the World Economic Forum outlines a framework for how to regulate this technology, including self-assessment questionnaires and the creation of an audit framework that would help validate the effectiveness of the process.
The Coded Gaze with Joy Buolamwini

Stanford HAI 2019 Fall Conference | Dec 7, 2020

In this video, AI ethics researcher Joy Buolamwini speaks at the Stanford Institute for Human-Centered Artificial Intelligence's 2019 fall conference.
Coded Bias: A Documentary

AJL | Dec 7, 2020

When MIT researcher, poet, and computer scientist Joy Buolamwini uncovers racial and gender bias in artificial intelligence systems sold by big tech companies, she embarks on a journey alongside pioneering women sounding the alarm about the dangers of unchecked artificial intelligence that impacts us all. Through Joy’s transformation from scientist to steadfast advocate and the stories of everyday people experiencing technical harms, Coded Bias sheds light on the threats A.I. poses to civil rights and democracy.
A Simple Tactic That Could Help Reduce Bias in AI

Harvard Business Review | Dec 6, 2020

It's well established that AI-driven systems are subject to the biases of their human creators: we unwittingly write biases into systems by training them on biased data or with rules created by experts with implicit biases. The good news is that more strategic use of AI systems can give us a fresh chance to identify and remove decision biases from the underlying algorithms, even if we can't remove them completely from our own habits of mind.
How the 2020 election could impact US initiatives to address algorithmic bias

Business Insider | Dec 6, 2020

Although AI algorithms seem objective, they may contain biases. These biases can be inadvertent, such as a computer vision program that is less accurate at identifying minorities or women because its training photos were skewed. They can also be insidious, such as programs that reinforce racist home lending practices based on historical data shaped by redlining.
Dealing With Bias in Artificial Intelligence

NY Times | Dec 6, 2020

In this article, the New York Times speaks to three prominent women in the artificial intelligence industry to hear how they approach bias in their work. Daphne Koller is a co-founder of the online education company Coursera, and the founder and chief executive of Insitro, a company using machine learning to develop new drugs. Olga Russakovsky is an assistant professor in the Department of Computer Science at Princeton University who specializes in computer vision and a co-founder of the AI4ALL foundation, which works to increase diversity and inclusion within AI. Timnit Gebru is a former research scientist on Google's ethical AI team and a co-founder of Black in AI, which promotes people of color in the field.
Can We Make Our Robots Less Biased Than We Are?

NY Times | Dec 6, 2020

One of the ways in which bias in artificial intelligence manifests itself is in racial profiling. Facial-recognition systems have been shown to be more accurate at identifying white faces than those of other people. Not only is this an issue in criminal justice, where AI systems incorrectly accuse people of color of crimes, but it can be an issue of safety, too. Georgia Tech researchers recently found that eight self-driving car systems were worse at recognizing people with darker skin tones than paler ones.

Join us.

To stay informed about the ways in which AI is affecting you and your community, sign up for our newsletter. We can't have the conversation without you.

Our Partners:

AIandYou proudly partners with a diverse group of scientists, researchers and engineers to amplify their work, support their programs and create a platform for them to discuss AI with the community.

Black in AI
Latinx in AI
Queer in AI
Women in Machine Learning
Women in AI

Proudly Supported By:

Amazon
Chan Zuckerberg Initiative
Microsoft

We are proud to be sponsored by some of the world's leaders in AI and AI-related fields. These organizations are drawing the maps for an unknown world. By recognizing the need to engage communities of color, these partners are ensuring a more equitable AI future for everyone.

Become a Sponsor