
Researchers, activists try to get ahead of AI-driven election misinformation

Ryan Heath

Axios

Oct 10, 2023

New products, guides and accountability initiatives are flooding the inboxes of election authorities and participants in response to the wave of generative AI tools that have been released in 2023.

Why it matters: Major tech companies have been cutting back their internal investments in election integrity work, and the newest AI companies lack the resources and relationships to effectively manage the risks their tools pose to elections.

  • AI deepfakes moved from a curiosity to a serious problem in Slovakia's Sept. 30 election, including a fake video that purported to show the defeated candidate buying votes.
  • Audio deepfakes became a flashpoint at the U.K. Labour Party's annual conference, when fake audio of Keir Starmer, the poll favorite to become Britain's next prime minister, circulated purporting to capture him bullying staff and criticizing the conference's host city.

What's happening: Columbia University and Sciences Po in Paris have launched an innovation lab to monitor AI influence on elections and "design and test interventions that strengthen democratic societies."

  • Led by Rappler CEO and Nobel Peace Prize winner Maria Ressa and Camille François, a researcher known for her work uncovering Russia's 2016 election disinformation campaign, the lab is part of a digital literacy project backed by $3 million from the French government.
  • The Integrity Institute, led by former Meta elections staff, has expanded its election integrity best practices guide — urging platforms and new AI players alike to set public benchmarks for their efforts.
  • AIandYou, founded by National AI Advisory Committee member Susan Gonzales, plans a campaign showing viewers what AI-generated deepfake election ads look like. The end goal, Gonzales told Axios, is to educate the target audience of young people of color, who can then educate older family members.
  • Services like Nooz.ai have debuted features that perform language analysis of news stories and official documents to help users spot manipulation efforts (a rough sketch of the idea follows below).
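
The article doesn't describe how Nooz.ai's analysis works under the hood, but a minimal sketch of one ingredient such tools often use, flagging emotionally loaded wording, might look like this in Python (the lexicon and scoring rule here are entirely hypothetical):

    import re

    # Hypothetical lexicon of emotionally loaded terms; Nooz.ai's actual
    # methods are not described in this article.
    LOADED_TERMS = {"outrage", "shocking", "disaster", "corrupt", "rigged", "betrayal"}

    def loaded_language_score(text: str) -> float:
        """Return the fraction of words drawn from the loaded-terms lexicon."""
        words = re.findall(r"[a-z']+", text.lower())
        if not words:
            return 0.0
        hits = sum(1 for word in words if word in LOADED_TERMS)
        return hits / len(words)

    headline = "Shocking betrayal: corrupt officials rigged the vote"
    print(f"Loaded-language score: {loaded_language_score(headline):.2f}")  # 0.57

A real service would go well beyond keyword counts, but the score above shows the general shape: turn a document into measurable features and surface the ones associated with manipulation.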

Be smart: The economics of misinformation favor AI.

  • AI companies and platforms could try to make it costlier for malicious actors to generate fake posts than to have people write them.
  • But Georgetown researcher Micah Musser calculates that this would require AI companies and platforms to detect and impose penalties (such as blocking model access) on around 10% of the fake posts (a sketch of the arithmetic follows below).
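
To see where a figure like 10% can come from, consider the break-even arithmetic: AI-generated posts stop being the cheaper option only once their expected cost, generation plus the chance of a penalty, matches the cost of paying a person to write them. A minimal sketch in Python, with all dollar figures as hypothetical placeholders rather than Musser's actual inputs:

    def breakeven_detection_rate(cost_ai: float, cost_human: float, penalty: float) -> float:
        """Detection rate at which an AI-generated post costs as much,
        in expectation, as a human-written one.

        Expected cost per AI post = cost_ai + rate * penalty.
        Setting that equal to cost_human and solving for rate gives:
            rate = (cost_human - cost_ai) / penalty
        """
        return (cost_human - cost_ai) / penalty

    # Hypothetical inputs: ~$0.01 to generate a post, ~$1 to hire a writer,
    # and ~$10 of cost to the operator per detected post (e.g., replacing
    # blocked model access).
    rate = breakeven_detection_rate(cost_ai=0.01, cost_human=1.00, penalty=10.00)
    print(f"Break-even detection rate: {rate:.1%}")  # 9.9%, i.e. roughly 10%

Under these illustrative numbers, platforms would need to catch and penalize roughly one in ten fake posts before generating them by hand becomes the rational choice.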

What they're saying: François told Axios she'll be investigating "actual risks and harms LLMs pose," aiming to "bridge the knowledge gap between experts on democratic theory and developers" of AI.

  • "If you are at a platform, you need to have metrics and should be prepared to show your work publicly," Integrity Institute's Katie Harbath said.