The Biden Deep Fake Underscores the Risk Artificial Intelligence Poses in an Election Year
Feb 1, 2024
It's bound to be an election year like no other. Thanks to advances in artificial intelligence, the volume of misinformation aimed at the electorate is expected to escalate.
Two days before voters headed to the polls to cast their ballots in the New Hampshire primaries, a 39-second robocall masquerading as President Joe Biden made the rounds, urging voters to stay home. Of course, it wasn't actually the president, and the damage wasn't fatal to Biden's showing: he emerged victorious despite not even appearing on the ballot.
So who sent the Biden call?
That's a question authorities are still investigating. The call's origins aren't yet known, though the text-to-speech (TTS) system used to mimic Biden's voice was traced back to ElevenLabs, an AI voice-generation startup based in London.
Atlanta-based security and authentication company Pindrop was the outfit that pinpointed the system using its deepfake detection engine. Pindrop broke the audio clip down into 155 segments of 250 milliseconds each and analyzed them against other TTS systems until it found a match.
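To make the segment-and-compare idea concrete, here is a minimal sketch in Python. It only illustrates the general approach described above (chopping audio into 250-millisecond windows and scoring each window against a set of candidate TTS systems); the scoring function, model names, and sample rate are hypothetical placeholders, not Pindrop's actual engine or API.

```python
# Hypothetical illustration of segment-level TTS attribution.
# Everything below is a sketch: score_against_tts_model is a dummy stand-in,
# and the candidate model names are placeholders.
import numpy as np

SEGMENT_MS = 250        # segment length cited in the article
SAMPLE_RATE = 16_000    # assumed sample rate for the audio clip


def split_into_segments(samples: np.ndarray,
                        sample_rate: int = SAMPLE_RATE,
                        segment_ms: int = SEGMENT_MS) -> list[np.ndarray]:
    """Chop a mono waveform into fixed-length segments (trailing partial chunk dropped)."""
    seg_len = int(sample_rate * segment_ms / 1000)
    n_full = len(samples) // seg_len
    return [samples[i * seg_len:(i + 1) * seg_len] for i in range(n_full)]


def score_against_tts_model(segment: np.ndarray, model_name: str) -> float:
    """Dummy stand-in: returns the segment's energy and ignores the model name.
    A real detector would compare learned spectral/prosodic fingerprints per TTS engine."""
    return float(np.mean(segment ** 2))


def attribute_clip(samples: np.ndarray, candidate_models: list[str]) -> str:
    """Score every segment against every candidate TTS system and return the
    system with the highest average score across segments."""
    segments = split_into_segments(samples)
    avg_scores = {
        model: float(np.mean([score_against_tts_model(seg, model) for seg in segments]))
        for model in candidate_models
    }
    return max(avg_scores, key=avg_scores.get)


if __name__ == "__main__":
    # Stand-in for the 39-second robocall audio (silence here; real audio in practice).
    clip = np.zeros(39 * SAMPLE_RATE, dtype=np.float32)
    print(attribute_clip(clip, ["tts_system_a", "tts_system_b", "tts_system_c"]))
```

Segment-level scoring is what makes this kind of attribution workable on short clips: even if no single window is conclusive, aggregating scores across dozens of windows can point to a likely source system.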
While it's not clear how many voters received the call, the episode underscores a deeper problem: AI's potential to disrupt elections and amplify voter suppression. Congress is acting. Last year, the Senate held a series of AI learning sessions on Capitol Hill, and Congress is widely expected to release a clearer framework for regulating the technology sometime this year.
AI companies are already working ahead of Congress's timeline. Just days before the Biden robocall went out, OpenAI, the company behind ChatGPT, laid out fresh groundwork for how it intends to combat election misinformation.
OpenAI plans to label AI-generated content and is working to clamp down on deepfakes so its tools can't be used to impersonate presidential candidates. ChatGPT users who turn to the chatbot with election-related questions may be nudged toward more authoritative sources of information, such as CanIVote.org.
While OpenAI's timing was fortuitous in getting ahead of the robocall, its framework might not be enough in the current threat climate. There are no federal repercussions for those who create deepfakes, AIandYou CEO Susan Gonzales points out, which is why she argues that AI literacy (recognizing AI and understanding its context) will be a foundation of this election, so that "people understand how to protect their vote."