To Safeguard Democracy, Political Ads Should Disclose Use Of AI

U.S. Representative Yvette Clarke (D-N.Y.) recently proposed a bill that would require disclosure of the use of artificial intelligence (AI) in the creation of political advertisements. It is a timely bill that should garner bipartisan support and help safeguard our democracy. Recent advances in language modeling, exemplified by the popularity of ChatGPT, and in image generation, exemplified by DALL-E 2 and Midjourney, make it much easier to create text or images that are intentionally misleading or false. Indeed, people are already using these technologies to spread fake political news in the U.S. and abroad. In early April, a number of fake, AI-generated mug shots of former President Trump circulated online, and later that month the Republican National Committee (RNC) responded to President Biden's re-election announcement with an AI-generated ad. In May, there were accusations that AI was used in deceptive political ads in the lead-up to Turkey's elections.

Why This Time is Different

Fake news stories and doctored photos are not new phenomena. Fashion magazines, for example, have long digitally altered or "touched up" celebrities' appearances on their covers to drive sales, and perhaps publicity for the celebrity as well. Elections are a different matter, with far more consequential outcomes.

Allegations of fake news were rampant during the 2016 and 2020 U.S. elections. Perhaps as a result, distrust of the media has increased. According to Pew Research, Republican voters have experienced a particularly large drop in trust in news organizations. While some of this decline may stem from politicians dismissing news outlets as "fake news" (whether the coverage is actually fake or not), some is likely also due to exposure to genuinely fabricated stories.


The decline in trust in news is troubling. As former President Obama noted in recent remarks, "over time, we lose our capacity to distinguish between fact, opinion and wholesale fiction. Or maybe we just stop caring." Declining trust in news is coinciding with the advent of more sophisticated AI tools that can create content that looks ever more realistic. So-called "deepfakes" are digitally altered photos or videos that replace one person's likeness with somebody else's, and they have become much harder to spot. The tools to create them do not require much expertise, so the barrier to entry is low for producing large volumes of hard-to-detect AI-manipulated or AI-generated content. Without regulation, this problem seems destined to get worse before it gets better.

How Disclosure Can Help

One solution is to require disclosure of the use of AI in political ads. In fact, the RNC did just this by including the disclaimer "built entirely with AI imagery" in its ad, suggesting there may be bipartisan support for a bill on disclosure of AI use. Disclosing AI use should not be costly to advertisers. The technology to label content created by AI already exists, according to Hany Farid, a computer scientist at the University of California, Berkeley. In March, Google said it would include watermarks inside images created by its AI models.
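For readers curious what "watermarks inside images" can mean in practice, the sketch below shows one classic, simple technique: hiding a short disclosure tag in the least significant bits of an image's pixels. This is purely illustrative; the tag text and file paths are hypothetical, and production systems such as Google's use far more robust, tamper-resistant methods than this.

```python
from PIL import Image

MARK = "AI-GENERATED"  # hypothetical disclosure tag, not any vendor's real format


def embed_watermark(src_path: str, dst_path: str, mark: str = MARK) -> None:
    """Hide an ASCII tag in the least significant bit of each red channel value."""
    img = Image.open(src_path).convert("RGB")
    pixels = list(img.getdata())
    bits = "".join(f"{ord(ch):08b}" for ch in mark)
    if len(bits) > len(pixels):
        raise ValueError("image too small to hold the tag")
    stamped = [
        ((r & ~1) | int(bits[i]), g, b) if i < len(bits) else (r, g, b)
        for i, (r, g, b) in enumerate(pixels)
    ]
    out = Image.new("RGB", img.size)
    out.putdata(stamped)
    out.save(dst_path, "PNG")  # lossless format; JPEG compression would destroy the bits


def read_watermark(path: str, length: int = len(MARK)) -> str:
    """Recover the tag by reading the same low-order bits back out."""
    img = Image.open(path).convert("RGB")
    bits = "".join(str(r & 1) for r, _, _ in list(img.getdata())[: length * 8])
    return "".join(chr(int(bits[i : i + 8], 2)) for i in range(0, len(bits), 8))
```

Even this toy version highlights the key design point: such a mark is invisible to viewers but trivially machine-readable, so platforms could check for it automatically. The hard problem, which real watermarking schemes must solve, is making the mark survive cropping, re-compression, and deliberate removal.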

One issue that will need to be addressed is what counts as "AI." The current text of Representative Clarke's proposed bill skirts this issue, describing it simply as "the use of artificial intelligence (generative AI)." Generative AI typically entails image generation or large language modeling, but there is not yet a widely agreed-upon definition. Ideally, implementation of the bill can proceed in parallel with crafting a more precise definition of generative AI. In the words of New York University's Julia Stoyanovich, "Until you try to regulate [Artificial Intelligence], you won't learn how." In other words, we have to start somewhere. Given the speed at which AI technology is advancing and the ongoing decline in trust in news, it is important to move quickly.


Will disclosure of the use of AI in political ads matter to voters? It's too early to tell. But at least voters will have information about how an ad was created and can use that information to assess it. The U.S. Federal Election Commission (FEC) already requires certain disclaimers on political ads, though not yet about the use of AI, and handles enforcement through audits and investigations of complaints.

The fact that the U.S. already requires various disclosures on political ads suggests that, as a country, we believe providing extra information to voters is an important safeguard for our democracy. Disclosing the use of AI would bring those requirements in line with what is now technologically feasible.
