Context reduces racial bias in hate speech detection algorithms
Nov 25, 2020
Social media hate speech detection algorithms, designed to stop the spread of hateful content, can actually amplify racial bias by blocking inoffensive tweets by Black people or other minority group members. One study showed that AI models were 1.5 times more likely to flag tweets written by African Americans as "offensive" compared to other tweets. A team of University of Southern California researchers has created a hate speech classifier that is more context-sensitive and less likely to misclassify a post as hate speech simply because it contains a group identifier. The researchers programmed the algorithm to consider two additional factors: the context in which the group identifier is used, and whether specific features of hate speech are also present, such as dehumanizing and insulting language.
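The intuition can be illustrated with a minimal, purely hypothetical sketch; this is not the USC team's model, which relies on learned context rather than word lists, and the word sets and function names below are placeholders for illustration only. The point is simply that a group identifier by itself does not trigger a hate speech label unless features of hateful language also appear.

```python
# Hypothetical illustration of the two-factor idea described in the article.
# Word lists and logic are placeholders, not the researchers' actual method.

GROUP_IDENTIFIERS = {"black", "white", "muslim", "jewish", "gay", "women"}
HATE_FEATURES = {"vermin", "subhuman", "filthy", "worthless"}  # dehumanizing/insulting terms


def is_hate_speech(post: str) -> bool:
    tokens = {t.strip(".,!?").lower() for t in post.split()}
    has_identifier = bool(tokens & GROUP_IDENTIFIERS)
    has_hate_features = bool(tokens & HATE_FEATURES)

    # A group identifier alone is not sufficient evidence of hate speech;
    # it must co-occur with features characteristic of hateful language.
    if has_identifier and not has_hate_features:
        return False
    return has_hate_features


if __name__ == "__main__":
    print(is_hate_speech("Proud to be a Black woman today"))       # False
    print(is_hate_speech("Those people are vermin and subhuman"))  # True
```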
Read the full article here.