Can We Make Our Robots Less Biased Than We Are?

David Berreby

NY Times

Dec 6, 2020

One of the ways in which bias in artificial intelligence manifests itself is in racial profiling. Facial-recognition systems have been shown to be more accurate at identifying white faces than the faces of other people. Not only is this an issue in criminal justice, where AI systems incorrectly accuse people of color of crimes, but it can be a safety issue, too. Georgia Tech researchers recently found that eight self-driving car detection systems were worse at recognizing pedestrians with darker skin tones than those with paler ones.

Join us.

To stay informed about the ways in which AI and new technologies are affecting you and your community, sign up for our newsletter. Now is the time to keep up to date on AI and new technologies in the interest of our communities.

Partners

World Economic Forum
National Urban League
Hispanic Federation
NAMIC
National Fair Housing Alliance
Black in AI
Queer in AI
Latinx in AI
Women in AI
Women in Machine Learning

Supporters

Amazon
Meta
Chan Zuckerberg Initiative
Microsoft
Airbnb

We are proud to be sponsored by some of the world's leaders in AI and AI-related fields. These organizations are drawing the maps for an unknown world. By recognizing the need to engage communities of color, these sponsors are helping to ensure a more equitable AI future for everyone.

Become a Sponsor