Part IV: The Abundance Institute AI and Elections Update
Tracking the Impact of AI on the 2024 U.S. Election
Welcome to the final pre-election installment of our four-part series on AI and the 2024 election. Part I was published on May 9, 2024 (180 days out from the November election), Part II on August 7, 2024 (90 days out), and Part III on September 6, 2024 (60 days out). We plan to conduct a post-election analysis shortly after November 5.
The U.S. votes in the 2024 presidential election in about 30 days. This will be the first U.S. presidential election since the popularization of generative AI, which has given the public unprecedented access to tools capable of creating realistic, high-quality audio, visuals, and text. As noted in the introduction to Part I, the launch of ChatGPT in November 2022 sparked a wave of concerned statements from technologists, policymakers, and cultural commentators.
This update covers instances of AI-generated content between September 6 and October 7, 2024. Our database added approximately 15,000 new articles and other media hits during this window. We identified one new paid political advertisement that used digitally altered (and possibly artificially generated) content.
Senator Mike Braun's (R-Ind.) Digitally Altered Campaign Advertisement
On October 1, 2024, Politico reported that Senator Mike Braun of Indiana employed digitally altered imagery in a campaign advertisement for his gubernatorial run. The ad depicted his Democratic opponent, Jennifer McCormick, speaking at a podium while supporters held signs reading "NO GAS STOVES!"—a scene that never actually took place. The original photograph showed McCormick at a rally with her standard campaign signs.
It remains unclear whether generative AI was used to create the altered image. The ad initially aired approximately 100 times without any disclosure of the digital manipulation.
After media inquiries brought attention to the lack of disclosure, Braun's campaign released a new version of the ad that included a disclaimer acknowledging that the content had been digitally altered. Josh Kelley, a senior advisor for the campaign, attributed the omission to an error, stating, "An earlier version was mistakenly delivered to TV stations and is being replaced with the correct version."
Kelley also criticized the scope of a new Indiana law that requires any campaign materials containing digitally altered images or deepfakes to include a clear disclaimer. He argued that the law could be interpreted too broadly. "The law seems to imply that any image or video not exactly as it was originally printed or aired could be a violation," he said. He further accused McCormick's campaign of altering Braun's appearance in its advertisements, though he offered no specific examples; our tracker did not record any instances of McCormick ads altering Braun's appearance.
CNN Broadcasts an Altered Photo of Donald Trump
According to the fact-checking site Snopes, CNN aired a doctored photo of Donald Trump with conservative activist Laura Loomer in which Trump appears significantly overweight. CNN credited the image to an X user who had posted it on September 13, where it received 1 million interactions. Loomer's original post did not include the lower half of Trump's body; the altered image appeared to add that portion while exaggerating his physique.
CNN aired the altered image at least three times during programs including "The Lead with Jake Tapper," "Anderson Cooper 360," and "Smerconish." All three anchors later apologized for featuring the manipulated image. While Snopes suggested that AI technology may have been used to alter the image, there is no concrete evidence to confirm this; traditional photo editing software could have achieved the same result.
The Meme-ification of Political Deepfakes
Our media tracker recorded a significant spike in activity following the September 10 debate between presidential candidates Donald Trump and Kamala Harris. While most of the data collected during this period did not involve politically driven deepfakes, some content did involve generative AI, primarily in the form of memes. A prevalent headline during this time, "AI is helping shape the 2024 presidential race. But not in the way experts feared," appeared in numerous media outlets.
One of Trump's most viral lines during the debate was:
"In Springfield, they are eating the dogs. The people that came in, they are eating the cats. They're eating—they are eating the pets of the people that live there."
The internet quickly latched onto this statement. AI-generated images circulated widely on X (formerly Twitter), including:
An image of a cat holding a "Kamala Hates Me" sign, which received 104 million interactions.
Trump embracing a cat and a duck protectively, garnering 87 million interactions.
Trump holding a cat while running from a crowd, which received 1 million interactions.
While these images were AI-generated, the trend also spread through other mediums. Videos of real dogs and cats with Trump's statement playing in the background became popular on TikTok.
Generative AI Music Videos and Commercials with Politicians
Politicians continue to be prominently featured in generative AI content. An X and YouTube channel named The Dor Brothers has been posting generative AI music videos and commercials, gaining significant traction recently. Their first music video was posted on July 24, 2022, but their popularity has surged in recent months. Their most popular video, released on August 21, 2024, has received 1 million interactions on X. It features a montage of cultural and political figures—including Donald Trump, Kamala Harris, Joe Biden, Elon Musk, Mark Zuckerberg, Vladimir Putin, Barack Obama, and the Pope—depicted robbing stores and subsequently being arrested, all set to rap music.
Other notable videos they've created include:
"One Hand Killing," which shows political figures performing a metal song.
"Vote Smart, Save the Cats," a parody Trump campaign advertisement featuring cat owners advocating for Trump.
The Dor Brothers openly state that they use generative AI tools to create their videos, often citing the specific AI tools used when posting on their X account.
Fake Image of Harris and Sean “Diddy” Combs
On September 20, 2024, NBC News reported that Donald Trump reposted a fabricated image of Kamala Harris with Sean "Diddy" Combs. The original post on Truth Social has since been deleted. The image was an altered version of a real 2001 photo of Harris and Montel Williams, with Williams's face replaced by Combs's. This occurred amid significant media attention on Combs, who had been arrested and charged with serious offenses.
There is no evidence to suggest that generative AI was used to create this image; simple photo editing tools could have been used.
Office of the Director of National Intelligence Report
A mid-September 2024 report by the Office of the Director of National Intelligence (ODNI) highlighted increased activity by foreign actors using AI-generated content in influence operations:
Russia has been particularly active, creating AI-generated text, images, audio, and video to boost former President Trump's candidacy and disparage Vice President Harris and the Democratic Party through conspiratorial narratives and divisive issues like immigration.
Iran has used AI to craft social media posts and inauthentic news articles targeting U.S. voters on polarizing topics such as the Israel-Gaza conflict and critiques of presidential candidates.
China, while not specifically targeting the election outcome, has employed AI in broader influence operations to shape global perceptions and amplify divisive U.S. political issues.
For example, the ODNI noted that Russian actors staged a video in which a woman falsely claimed she was the victim of a hit-and-run accident involving Vice President Harris. They also altered videos of Harris's speeches to misrepresent her statements. The ODNI concluded that these technologies have not fundamentally transformed foreign influence operations. The most impactful efforts still involve strategically disseminating divisive content to exacerbate social and political tensions. The risk posed by AI-generated content depends on these actors' abilities to bypass AI safeguards, develop advanced models, and spread content effectively without detection.
Conclusion
As we approach the 2024 U.S. presidential election, our four-part series has tracked the evolving role of generative AI in the electoral process. Throughout Parts I to IV, we've observed that while generative AI tools have become more accessible and sophisticated, their impact on election-related information dissemination remains limited.
Part I (180 days out) identified a few instances where generative AI was used to create misleading content, such as the Biden robocall incident in New Hampshire. However, these incidents had no discernible effect on voter behavior or perceptions. Concerns about AI disrupting the election had not materialized, and the perpetrator of the robocall faced a $6 million fine from the FCC, a sign that existing legal tools can hold bad actors accountable.
Part II (90 days out) showed a largely unchanged landscape. The most notable developments involved "cheap fakes" and manipulated media that didn't require sophisticated AI tools. Even significant events, like the attempted assassination of former President Trump, did not lead to widespread AI-generated misinformation. Instances where AI was used, such as the parody Kamala Harris campaign ad, were quickly identified and addressed.
Part III (60 days out) saw the release of Grok-2 and its integration into the X platform, a significant technological development. While Grok-2 allowed for less restrictive content generation, including controversial images of political figures, the anticipated surge of AI-generated disinformation did not occur—likely due to the unrealistic nature of Grok images. Major actors, including foreign entities, continued to rely on traditional influence methods, as highlighted by Microsoft's Threat Intelligence Report.
Part IV (30 days out) identified one political advertisement that employed digitally altered or artificially generated content. We also observed an uptick in AI-generated memes and content, especially following the debate between Trump and Harris. However, much of this material was satirical or parodic, contributing more to internet culture than to significant shifts in voter opinion. The ODNI reported that foreign actors like Russia and Iran are employing generative AI to enhance their influence operations but noted that AI has not fundamentally transformed these efforts. The most impactful strategies still involve traditional methods of sowing division.
Concluding Thoughts (For Now)
While generative AI has introduced new dimensions to content creation, its feared disruptive impact on the 2024 U.S. presidential election has not materialized in any significant way. Traditional methods of misinformation and influence remain the primary tools for those seeking to sway voter opinions. We will continue to track instances of AI use in the U.S. election as Election Day approaches and plan to report on the data in early November.
Invitation to Collaborate
If there are any stories we missed, or if you would like to help our efforts, please fill out this Google Form. If there are search terms we should consider including, please let us know. As the November election approaches in the U.S., we would especially appreciate your help in sending us examples of AI use in election material and examples of media stories that cover AI in elections. Our updates and tracking are primarily focused on U.S. elections, although international stories are helpful for context. We welcome your help!