Can We Make Our Robots Less Biased Than We Are?

David Berreby

NY Times

Dec 6, 2020

One of the ways in which bias in artificial intelligence manifests itself is in racial profiling. Facial-recognition systems have been shown to identify white faces more accurately than the faces of other people. Not only is this an issue in criminal justice, where AI systems have incorrectly accused people of color of crimes, but it can be an issue of safety, too. Georgia Tech researchers recently found that eight self-driving car systems were worse at recognizing people with darker skin tones than those with paler ones.