OpenAI unveils plans for tackling abuse ahead of 2024 elections

Sara Fischer

Axios

Jan 15, 2024

Photo: the Character.ai, OpenAI ChatGPT and Runway apps displayed on a screen.

ChatGPT maker OpenAI says it's rolling out new policies and tools meant to combat misinformation and abuse ahead of 2024 elections worldwide.

Why it matters: 2024 is one of the biggest election years in history — with high-stakes races in over 50 countries globally. It's also the first major election cycle where generative AI tools will be widely available to voters, governments and political campaigns.

What's happening: In a statement published Monday, OpenAI said it will lean into verified news and image authenticity programs to ensure users get access to high-quality information throughout elections.

  • The company will add digital credentials from the Coalition for Content Provenance and Authenticity (C2PA), a third-party standards body, that encode details about the origin of images created using its image generator tool, DALL-E 3 (see the sketch after this list).
  • The firm says it's experimenting with a new "provenance classifier" tool that can detect images generated with DALL-E. It hopes to make the tool available soon to an initial group of testers, including journalists, researchers and other tech platforms, for feedback.
  • OpenAI will continue integrating its ChatGPT platform with real-time news reporting globally, "including attribution and links," it said. That effort builds on a first-of-its-kind deal announced with German media giant Axel Springer last year that offers ChatGPT users summaries of select global news content from the company's outlets.
  • In the U.S., OpenAI says it's working with the nonpartisan National Association of Secretaries of State to direct ChatGPT users to CanIVote.org for authoritative information on U.S. voting and elections.
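
For readers curious what those image credentials look like under the hood: C2PA "Content Credentials" travel inside the image file itself (in JPEGs, as JUMBF boxes embedded in APP11 marker segments). The Python sketch below is a minimal, illustrative presence check for such a manifest, not OpenAI's tooling; it only scans marker segments for the "c2pa" JUMBF label, and actually verifying a manifest's cryptographic signature requires a full C2PA implementation such as the open-source c2pa libraries.

    # Heuristic check for an embedded C2PA manifest in a JPEG.
    # Illustrative only: looks for the "c2pa" JUMBF label inside APP11
    # (0xFFEB) segments; it does NOT verify the manifest's signature.
    import struct
    import sys

    def has_c2pa_manifest(path: str) -> bool:
        with open(path, "rb") as f:
            data = f.read()
        if data[:2] != b"\xff\xd8":        # not a JPEG (no SOI marker)
            return False
        i = 2
        while i + 4 <= len(data):
            if data[i] != 0xFF:            # lost marker sync; stop scanning
                break
            marker = data[i + 1]
            if marker == 0xDA:             # SOS: entropy-coded data follows
                break
            (length,) = struct.unpack(">H", data[i + 2:i + 4])
            payload = data[i + 4:i + 2 + length]
            if marker == 0xEB and b"c2pa" in payload:   # APP11 JUMBF box
                return True
            i += 2 + length                # length field counts its own 2 bytes

        return False

    if __name__ == "__main__":
        for path in sys.argv[1:]:
            verdict = "contains" if has_c2pa_manifest(path) else "lacks"
            print(f"{path}: {verdict} a C2PA manifest marker")

A caveat that matters for elections: these credentials live in metadata, so re-encoding or stripping an image's metadata removes them. That limitation is part of why OpenAI is also testing the classifier approach described above, which inspects the pixels themselves.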

Between the lines: Because generative AI technology is so new, OpenAI says it's still working to understand how effective its tools might be for political persuasion.

To hedge against abuse, the firm doesn't allow people to build applications for political campaigning or lobbying, and it doesn't allow developers to create chatbots that impersonate real people, such as candidates.