Topline
Billions of people are voting in elections around the world this year, contests that experts fear could be undermined by misinformation enabled by increasingly popular artificial intelligence tools like ChatGPT, prompting some of the world’s biggest tech companies to publicly disclose how they plan to moderate content.
Companies are introducing policies to limit the impact of AI on elections.
Key Facts
OpenAI, the company behind popular generative AI products including the chatbot ChatGPT and the image generator Dall-E, said it will ban the use of its tools for political campaigning and lobbying and will block applications that might deter people from voting.
Users will also be prohibited from using the tools to impersonate candidates, officials or governments, and the company said it will introduce image authentication tools to help voters assess whether an image can be trusted.
Meta, whose family of apps includes Facebook, Instagram, Threads and WhatsApp, said it will continue longstanding practices in 2024, such as labeling state-controlled media, blocking ads from such outlets that target people in the U.S. and barring new political ads in the final week of the American campaign, and will require advertisers to disclose whether AI or other digital tools were used to create or alter content in political, social and election-related ads.
Alphabet’s Google, which claims it was the first tech company to require election advertisers to prominently disclose whether content was digitally altered or generated with AI or other tools, said it will limit the types of election-related queries its AI chatbot Bard and other generative AI products can answer.
YouTube, the video platform also owned by Alphabet, will require content creators to disclose in 2024 whether they’ve created realistic synthetic or altered content so it can be labeled for viewers, a strategy the company said should stave off potential harms from “AI’s powerful new forms of storytelling.”
X, formerly known as Twitter, is regularly criticized for failing to tackle the high levels of falsehoods and misinformation spreading on the platform; last year it scrapped tools for reporting election misinformation and fired its election integrity team, which billionaire owner Elon Musk claimed was actually “undermining election integrity.”
The platform promotes its crowdsourced fact-checking scheme, Community Notes, as its primary means of combating disinformation, a method that has been widely criticized as flawed, error-prone and inadequate.
Microsoft said it will offer several services to help protect election integrity: a tool to help candidates authenticate content, protect their likeness and safeguard both from digital manipulation; support and advice for campaigns working with AI; and a hub to help governments deliver secure elections.
The company also said it will ensure its search engine Bing, whose recently rebranded AI chatbot Copilot returned false and misleading information about recent elections in Europe, gives voters “authoritative” results.
TikTok, owned by Chinese giant ByteDance and largely shunned by politicians despite being a growing source of news, entertainment and debate for younger generations, said it doesn’t allow paid political ads (though politicians themselves are allowed on the platform) and works with fact-checking organizations to help limit the spread of misinformation.
News Peg
More than half the world’s population is set to go to the polls in 2024 in what has been described as the biggest election year in history. More than 50 countries, among them India, the U.S., Britain, Russia, South Africa, Mexico, Indonesia and Pakistan, are slated to hold national elections, and several democracies, including Bangladesh and Taiwan, have already voted. The outcomes are expected to have significant global ramifications for the future of democracy and for areas including human rights, security and climate action. Not all of these elections are expected to be free and fair, and even robust democracies fear that rising levels of online disinformation and the rapidly advancing capabilities of AI tools could undermine the process.
Key Background
AI is ranked as one of the biggest global risks in 2024, largely due to its potential to disrupt elections. Generative AI, which produces content like sound, images or text from an input prompt, is of particular concern for election security because the tools can be used to convincingly mimic or alter the voice and image of candidates. This year marks the first set of major elections since advanced generative tools, such as the image generators Dall-E and Midjourney and chatbots like ChatGPT and Bard, were widely adopted by the public. Many big platforms like Facebook and Twitter have been accused of failing to do enough to tackle the spread of misinformation during past elections, and fears have grown that the consequences could be far worse now that the technology has developed and spread. Deepfakes, fabricated AI content, have been around for some time, but recent advances have rendered them both easier to make and more convincing. Fakes have already emerged of Hillary Clinton (implausibly endorsing Ron DeSantis), President Joe Biden (announcing a military draft) and former President Donald Trump (getting arrested and running from police). Campaigns are also deploying the technology themselves, such as when Republicans shared a series of AI-generated images depicting what the country would supposedly look like if Biden won.
Further Reading
2024 Is The Biggest Election Year In History—Here Are The Countries Going To The Polls This Year (Forbes)
Republicans Share An Apocalyptic AI-Powered Attack Ad Against Biden: Here’s How To Spot A Deepfake (Forbes)