AI is No Reason to Limit Political Speech
A look at the three AI election bills before the Senate Rules Committee
Almost immediately after the release of ChatGPT in November 2022, pundits, academics, and politicians began to worry about how such generative artificial intelligence tools might affect elections. As our recent AI and Elections Update noted, more than 7,500 U.S. news articles since January 2024 discuss the risk of AI to elections, although actual incidents of generative AI in elections number in the single digits.
Still, the Senate Rules Committee will mark up three AI-related bills on Wednesday. The Protect Elections from Deceptive AI Act (PEDA Act) and the AI Transparency in Elections Act of 2024 (ATEA Act) claim to regulate AI, but in reality restrict political speech. These bills do little to protect democracy. More troubling, both present major First Amendment problems. There are better ways to address concerns about AI in elections. The third bill, the Preparing Election Administrators for AI Act (PEAA Act), is a small step in the right direction.
Let’s look at each bill.
The PEDA Act would ban certain political speech depending on what tool the speaker used to create it. Specifically, it prohibits distribution of certain political speech that includes AI-generated audio or visuals.
The bill claims to ban “deceptive” speech created using AI. (Set aside that the First Amendment doesn’t permit the government to pre-emptively ban deceptive political speech.) But it goes far beyond false statements.
I testified before Senate Rules on “AI and the Future of our Elections” back in September 2023 and offered feedback on an earlier version of PEDA. Among other things, I pointed out the problems that imposing a “per se defamation” standard raised. The current version drops that defamation language.
Yet the bill still contains many problems. Perhaps the worst is a dramatic expansion of political speech regulation. Federal campaign finance law has long focused on political advertising to avoid First Amendment concerns. The PEDA Act discards those concerns, expanding to all political speech. It would enable candidates for federal office to sue anyone who depicts or references them in any AI-generated content, whether paid advertising or not. Those sued could face injunctions to remove the content as well as damages and attorneys' fees. Allowing candidates to sue citizens over unwanted political speech is anti-democratic and harmful to free speech rights.
There are several other fundamental problems with PEDA. First, the bill’s definition of “deceptive AI-generated audio or visual media” does not require the speech involved to be false. Truthful statements can be deemed deceptive under this law. Second, the definition of AI is so broad that it sweeps in commonly used recording and editing tools such as Adobe Premiere, or even so-called “smart erasers” that can change the background of an image or video. Third, the bill does not exempt authorized uses, such as a speaker applying AI tools to their own recordings. Other problems include a “reasonable person” standard that is unconstitutionally vague and overbroad.
The bill also prohibits knowingly distributing such AI content to influence an election or raise funds. Certain “bona fide” or traditional news media outlets are exempted, but independent journalists, Substackers, and X.com users are not.
The end result is a set of sweeping and unnecessary constraints on using powerful new tools for political speech. A person running for city treasurer who used Adobe Premiere to automatically rework a recording of their own speech to eliminate “ahs” and “ums,” or to translate the video into another language, could face expensive, harassing lawsuits and legal judgments, even if the content of the speech itself was perfectly truthful.
These burdens will fall most heavily on upstart candidates, small publishers, and marginalized voices who otherwise would benefit most from powerful tools to generate professional-quality content quickly.
The second bill, the AI Transparency in Elections Act of 2024, would require an additional disclaimer on political ads that include AI-generated content. As in PEDA, the definition of “generative artificial intelligence” would sweep in many existing tools and thus would likely apply to a wide range of political ads. The bill attempts to limit the disclosure requirement to only certain types of “material” alterations, but generative AI tools are now ubiquitous, embedded even in the cameras with which we take pictures. If this bill takes effect, the safest advice to every political advertiser will be to always include the required disclaimer whenever the speech was created using digital tools.
A disclaimer that appears in every ad creates no value for the reader or viewer. It also crowds out the speech of the advertiser. Political ads are short, and many already carry mandatory disclosures. Forcing the inclusion of yet another one reduces the ability of candidates to get their message out.
The third bill, the Preparing Election Administrators for AI Act, takes a prudent “look before you leap” approach to AI and elections. Some of its provisions are redundant with existing authorities, but overall assessing the effect of AI on elections is a critical first step to determining whether regulation is even necessary. The Rules Committee should shelve the other bills until such an assessment is performed.
In summary, PEDA and ATEA are paternalistic bills that embody an elite distrust of Americans. They aim to “protect” people from technology that threatens the status quo. But democracy is about expanding speech, not restricting it from the political arena out of fear. I urge the committee to reject these misguided bills. It should instead focus its attention on bills like the PEAA Act, which could help inform congressional action to deal with AI impacts on elections.