
Context reduces racial bias in hate speech detection algorithms

Caitlin Dawson

Science Daily

Nov 25, 2020

Hate speech detection algorithms, designed to stop the spread of hateful speech on social media, can actually amplify racial bias by flagging inoffensive tweets from Black people or other minority group members. One study found that AI models were 1.5 times more likely to flag tweets written by African Americans as "offensive" than tweets by other users. A team of University of Southern California researchers has created a hate speech classifier that is more context-sensitive and less likely to mistake a post containing a group identifier for hate speech. The researchers programmed the algorithm to consider two additional factors: the context in which the group identifier is used, and whether specific features of hate speech are also present, such as dehumanizing or insulting language.
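To make the idea concrete, here is a minimal sketch of those two checks in Python. The USC team's actual classifier is a trained neural model; this rule-based illustration, and every word list and function name in it, is a hypothetical stand-in meant only to show why requiring hate-speech features alongside a group identifier avoids flagging the identifier alone.

# Hypothetical sketch of the two contextual checks described above.
# The word lists below are assumed examples, not the paper's features.

GROUP_IDENTIFIERS = {"black", "gay", "muslim", "jewish", "women"}   # assumed examples
HATE_FEATURES = {"vermin", "subhuman", "filthy", "worthless"}       # dehumanizing/insulting terms (assumed)

def contains_group_identifier(tokens):
    return any(t in GROUP_IDENTIFIERS for t in tokens)

def contains_hate_features(tokens):
    return any(t in HATE_FEATURES for t in tokens)

def classify(text):
    """Flag a post only when a group identifier co-occurs with explicit
    hate-speech features, instead of flagging on the identifier alone
    (the failure mode that penalizes minority speech)."""
    tokens = [t.strip(".,!?").lower() for t in text.split()]
    # Check 1: is a group identifier present at all?
    if not contains_group_identifier(tokens):
        return "not_hate"
    # Check 2: require dehumanizing or insulting language in the same context.
    if contains_hate_features(tokens):
        return "hate"
    # A group identifier without hateful features is likely self-reference
    # or neutral discussion, not hate speech.
    return "not_hate"

if __name__ == "__main__":
    print(classify("Proud to be a Black woman in tech"))    # not_hate: identifier, no hate features
    print(classify("Muslim people are filthy vermin"))      # hate: identifier plus dehumanizing terms

A naive keyword classifier would flag both example sentences because each contains a group identifier; gating the decision on the second check is what lets the first one through.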

