As the United States prepares for its first presidential election in the age of generative AI, fears are growing about the potential impact of deepfakes and AI-generated content on voter perceptions. Recent incidents involving fabricated images of candidates and foreign disinformation efforts have underscored the challenges to electoral integrity in this new technological landscape.
The devastation wrought by Hurricane Helene in the southeastern United States two weeks ago left a trail of haunting images, but two pictures are likely to linger in the public consciousness more than any others. One depicted former President and Republican candidate Donald Trump in the disaster zone, standing knee-deep in floodwaters alongside rescue workers.
If you look carefully, you can see that this image is AI generated from the distorted hand and the fact that Donald Trump is helping someone other than himself. H/T https://t.co/QNKB4Y3Uyc pic.twitter.com/KWTK2wa3wR
— Simon Chesterman 陈西文 (@ProfChesterman) October 10, 2024
The other showed a small, weeping girl alone in a fragile wooden boat, clutching a tiny puppy. For many in the affected areas, the stark contrast between these images reinforced a sense that the current administration had forsaken them. Trump's picture was widely shared with the caption "hero," while the girl's image was accompanied by comments like "The administration has let us down again." There was just one snag with these powerful images: both were complete fabrications churned out by a rudimentary AI generator.
One surefire way to spot an AI image from the hurricane is if it shows Donald Trump helping people. @jordanklepper pic.twitter.com/THb6BaXtg1
— The Daily Show (@TheDailyShow) October 9, 2024
This marks the first US election unfolding in the era of generative AI (GenAI). Text and image generators like ChatGPT and Midjourney produce content on demand, setting them apart from any previous forgery technology: they can create images convincing enough to fool the human eye, and they are accessible to anyone with an internet connection.
The list of AI-related electoral incidents is already growing. In August, Trump shared a series of images showing Taylor Swift fans wearing "Swifties for Trump" shirts, apparently unaware they were AI-generated. The episode may have prompted the pop star to publicly back his rival, Harris. (At least one genuine image of a "Swiftie" supporting the Republican candidate did surface after the incident.) In a separate episode, Trump claimed that a photo showing large crowds at one of Harris's campaign rallies had been "created using AI." An independent fact-check found the photo was, in fact, authentic.
(AI) Taylor Swift Endorses Kamala Harris, Challenges Trump's AI Deepfake
Following the first presidential debate, Taylor Swift took to Instagram to endorse Kamala Harris, explicitly calling out Donald Trump for sharing AI-generated deepfake images suggesting her support for him.… pic.twitter.com/ulxl3KfjeX
— tech guru (@technologiaguru) September 18, 2024
Conversely, allegations of AI manipulation have become a convenient excuse for some politicians. North Carolina's lieutenant governor, Mark Robinson, attempted to dismiss an exposé of his past controversial statements by claiming it was an "AI forgery." Ironically, this led to the broadcast of a campaign ad against Robinson that was itself entirely generated by AI – a first in political advertising.
Is there a technological fix for these forgeries? Israeli firm Revealense has developed AI-powered technology to detect hidden emotions in videos, which can also identify deepfakes. However, Amit Cohen, a VP at the company, tells Israel Hayom that the battle may already be lost when it comes to AI-generated still images. "Given their quality, there's no technological capability to identify a fake image in real-time based on pixel analysis," he explains. "The real challenge lies in videos and deepfakes, which can cause significant damage during sensitive periods like elections. Currently, this capability is primarily in the hands of state actors."
Indeed, US intelligence agencies have sounded the alarm that Russia, Iran, and China will leverage GenAI to undermine electoral integrity. The Cybersecurity and Infrastructure Security Agency (CISA) has likewise warned voters to be on guard against AI-driven scams in the run-up to and on Election Day.
A month ago, Microsoft unveiled evidence that Russian trolls linked to the Kremlin had disseminated two deepfake videos aimed at undermining Harris's campaign, garnering millions of views. This came even as Russian President Vladimir Putin publicly expressed a preference for the Democratic candidate. One video featured a young woman in a wheelchair recounting a hit-and-run accident allegedly involving Harris in 2011. Fact-checkers discovered that the accident report came from a non-existent TV station whose website had been hastily created just before the fake video's distribution, and that the supposed victim was an actress paid for the performance. "Russian actors will ramp up their efforts to spread divisive political content, staged videos, and AI propaganda," Microsoft cautioned.

Chinese operatives are also distributing fabricated video content, aiming to sow division and erode trust in the democratic process. Microsoft's cybersecurity team identified a Beijing-linked hacker group that disseminated videos attacking both the Biden administration and the Harris campaign before vanishing from the web. Groups associated with China are spreading content designed to damage both political camps, masquerading as Trump supporters and progressive organizations alike.
Ultimately, it's unclear whether AI-generated content will significantly sway voter decisions. Mainstream media outlets across the political spectrum have largely refrained from amplifying these fakes. On social media platforms, there are typically enough savvy users to flag suspicious images and neutralize their impact. Nevertheless, in an era of ubiquitous networks and sophisticated fakes, vigilance is paramount. "My advice is to always approach images on social networks with skepticism and verify the source," Cohen concludes.