Computer engineers and tech-inclined political scientists have warned for years that cheap, powerful artificial intelligence tools would soon allow anyone to create fake images, video, and audio that were realistic enough to fool voters and perhaps sway an election.
The synthetic images that emerged were often crude, unconvincing, and costly to produce, especially when other kinds of misinformation were so inexpensive and easy to spread on social media. The threat posed by AI and so-called deepfakes always seemed a year or two away.
No more.
Sophisticated generative AI tools can now create cloned human voices and hyper-realistic images, videos, and audio in seconds, at minimal cost. When strapped to powerful social media algorithms, this fake and digitally created content can spread far and fast and target highly specific audiences, potentially taking campaign dirty tricks to a new low. The implications for the 2024 campaigns and elections are as large as they are troubling: Generative AI can not only rapidly produce targeted campaign emails, texts, or videos, but it also could be used to mislead voters, impersonate candidates, and undermine elections on a scale and at a speed not yet seen.
"We're not prepared for this," warned A.J. Nash, vice president of intelligence at the cybersecurity firm ZeroFox. "To me, the big leap forward is the audio and video capabilities that have emerged. When you can do that on a large scale, and distribute it on social platforms, well, it's going to have a major impact."
Some possible scenarios include automated robocall messages, in a candidate's voice, instructing voters to cast ballots on the wrong date; audio recordings of a candidate supposedly confessing to a crime or expressing racist views; video footage showing someone giving a speech or interview they never gave; and fake images designed to look like local news reports, falsely claiming a candidate dropped out of the race.
"What happens if an international entity – a cybercriminal or a nation state – impersonates someone? What is the impact? Do we have any recourse?" said Petko Stoyanov, global chief technology officer at Forcepoint. "We're going to see a lot more misinformation from international sources."
Legislation that would require candidates to label campaign advertisements created with AI has been introduced in the House by Rep. Yvette Clarke, D-N.Y., who has also sponsored legislation that would require anyone creating synthetic images to add a watermark indicating that fact. Clarke said her greatest fear is that generative AI could be used before the 2024 election to create video or audio content that incites violence and turns Americans against each other.