Part II: The Abundance Institute AI and Elections Update
Tracking the Impact of AI on the 2024 Elections
Welcome to the second in our four-part series on AI and the 2024 elections. Part I was published on May 9, 2024, 180 days out from the November election. We will publish the remaining updates 60 and 30 days out from the November 5, 2024, general election.
The U.S. votes in the 2024 presidential election in 90 days. This will be the first U.S. presidential election since the popularization of generative AI, which has given the public unprecedented access to tools capable of creating realistic, high-quality audio, visuals, and text. As noted in the introduction to Part I, the launch of ChatGPT in November 2022 sparked a wave of concerned statements from technologists, policymakers, and cultural commentators.
This update covers instances of AI-generated content between May 9 and August 7, 2024.
Obviously, there have been significant changes in the campaign landscape during that period, perhaps the most significant in any presidential race in recent memory. Former President Trump survived an assassination attempt. President Biden withdrew from the campaign after a disastrous debate performance. Trump picked Ohio Senator J.D. Vance as his running mate. The Democratic Party coalesced around Vice President Kamala Harris as its candidate. The underlying facts of this election have been dynamic and fast-changing. What role has AI-generated content played, if any?
In our dataset, the most notable example of AI-generated content in the 2024 U.S. election remains the January 2024 Biden robocall in New Hampshire; the perpetrator has since been fined $6 million and faces more than two dozen criminal charges. No incident since we began this tracker has had a discernible effect on voter behavior or belief. Still, concerns among media and policymakers about the role of generative AI in political campaigns continue. One new development in the current period: the past two months have seen several high-profile instances in which authentic, real-world content was characterized as AI-generated or otherwise “faked.”
Methodology
Our methodology has changed only slightly from that described in Part I. No substantive changes have been made to how we gather and track instances of AI election material. We have, however, dropped the automated sentiment analysis, which in Part I characterized each article as “neutral,” “negative,” or “positive.” Upon further study and consultation with other experts, we found the analysis inaccurate and not particularly informative.
We have also upgraded public access to the dataset to a more user-friendly and attractive format.
Identified Incidents in this Period
Biden Cheap Fakes
The largest category of politically motivated imagery during the covered period was “cheap fakes,” many of them targeting President Biden’s age. In early June, prior to the Trump-Biden debate that led to President Biden dropping out of the race, the Washington Post published an article titled “How Republicans used misleading videos to attack Biden in a 24 hour period.” White House Press Secretary Karine Jean-Pierre referenced the article in a June 17 press briefing when describing the media’s awareness of Biden cheap fakes. The article reported that three videos of President Biden had been taken out of context to make the President appear infirm. The first video shows Biden squatting and reaching behind him while attempting to find his chair. The second video shows Biden being pulled away by the First Lady during a D-Day ceremony. The final video reported by The Washington Post shows Biden with his eyes closed at the D-Day ceremony, making it appear that he is sleeping.
As the Post article noted, “The videos taken from Biden’s D-Day event relied on taking the videos out of context as opposed to editing them in a misleading way.” Creating these clips did not require generative AI, sophisticated technology, or technical knowledge. Similar videos could be created on any smartphone with standard video editing tools.
Fears of Tampering with Biden-Hur Audio
The Department of Justice has claimed that releasing an audio recording of Special Counsel Robert Hur interviewing President Biden could increase the risk of deepfake content. In a recent court filing, DOJ argued that “[i]f the audio recording is released here, it is easy to foresee that it could be improperly altered, and that the altered file could be passed off as an authentic recording and widely distributed.” This risk, they claimed, “is exacerbated by the fact that there is now widely available technology that can be used to create entirely different audio ‘deepfakes’ based on a recording” even though “other raw material to create a deepfake of President Biden’s voice is already available.”
DOJ Disruption of Russian Government-Operated Social Media Bot Farm
On July 9, 2024, the Justice Department seized two domain names and 968 social media accounts used by Russian actors to spread disinformation in the U.S. and other countries. The press release states, “[t]he social media bot farm used elements of AI to create fictitious social media profiles — often purporting to belong to individuals in the United States — which the operators then used to promote messages in support of Russian government objectives.” The FBI and its international partners released a joint cybersecurity advisory with details about the operation, including its AI components, and expect that “[t]he advisory will allow social media platforms and researchers to identify and prevent the Russian government’s further use of the technology.”
The bot farm used a software stack named “Meliorator” that applied AI, including generative AI, to create false personas on social media platforms and to disseminate posts from those personas. The joint cybersecurity advisory describes the technical details of the bot farm. The examples of Russian-government narratives that the FBI provided all focused on the conflict in Ukraine; none involved U.S. election misinformation.
Content Related to the Attempted Assassination of Donald Trump
Presidential candidate Donald Trump was the target of an assassination attempt on July 13, 2024. This significant political event understandably received enormous press coverage and cultural attention. It also provides some evidence about the potential impact of generative AI on the upcoming election. A seismic political event like this creates many opportunities for misinformation and disinformation. Countless conspiracy theories have already populated the information ecosystem. How was generative AI used in the wake of the attempt?
Since the assassination attempt, two notable pieces of content have been alleged to involve generative AI. The first is a video of a young man posing as the shooter, in which he states, “I hate Republicans, I hate Trump, and guess what? You got the wrong guy.” On July 14, one day after the shooting, Forbes published an article stating that the video was not of the real shooter. Other traditional media outlets have done little reporting on the video, and no one has investigated its source. In two X posts, BBC journalist Shayan Sardarizadeh stated, “This is not a video of the shooter at Trump's Pennsylvania rally. This is the exact same X user who pretended to be the shooter in an odd trolling attempt. He posted this video an hour ago and quickly deleted it. It's now being shared as if it's a message by the real shooter.” The video appears to show a person who looks similar to the shooter; there is no evidence that any AI technology was used.
The second instance involves manipulations of the now-famous photograph of Donald Trump standing up and pumping his fist shortly after the shooting. The altered image depicts a smiling Secret Service agent; in the original, that agent has a neutral or concerned expression. The two images can be seen side by side in an X post by journalist Mike Rothschild. The image creator is unknown; it is also unclear whether generative AI tools were used, although some have characterized the image as AI disinformation.
Parody Kamala Harris Campaign Ad
On July 26, 2024, a Ronald Reagan parody account posted a fake Kamala Harris campaign advertisement that mixes real video and audio with an AI-generated clone of Harris’s voice. The video narration opens with, “I Kamala Harris am your Democrat candidate for President because Joe Biden finally exposed his senility at the debate, thanks Joe.” The original post explicitly stated that it was a parody. It was retweeted by Elon Musk and has since received 24 million views on X. The parody received widespread media attention and has provoked calls for regulation, including by California Governor Gavin Newsom, who publicly stated, “Manipulating a voice in an ‘ad’ like this one should be illegal. I’ll be signing a bill in a matter of weeks to make sure it is.”
Conclusion
Ninety days from the presidential election, generative AI has not yet transformed the U.S. electoral process. None of the events described in this update has had an identifiable, material impact on voter beliefs or election outcomes. Two of these incidents (the Russian bot farm and the Harris ad parody) appear to have used generative AI systems to produce the content; in the others, there is no evidence that AI was used or necessary.
No one can precisely predict how people will use generative AI technologies to create election-related content, and it is difficult to determine how such content will affect the 2024 election. Early media coverage of an event can leave a lasting impression even if later investigations undermine that impression. The Abundance Institute will continue to track the use of generative AI in the context of the 2024 election, running down the reported facts around each incident. This factual grounding can help avoid sensationalist narratives and clarify the actual impact that generative AI is having on the U.S. electoral process.
Invitation to Collaborate
If there are any stories we missed, or if you would like to help our efforts, please fill out this Google Form. If there are search terms we should consider including, please let us know. As the November election approaches, we would especially appreciate examples of AI use in election material and of media stories that cover AI in elections. Our updates and tracking focus primarily on U.S. elections, although international stories are helpful for context. We welcome your help!