Could AI and Deepfakes Sway the US Election?


A few months ago, everyone was worried about how AI would impact the 2024 election. It seems like some of the angst has dissipated, but political deepfakes—including pornographic images and video—are still everywhere. Today on the show, WIRED reporters Vittoria Elliott and Will Knight talk about what has changed with AI and what we should worry about.

Leah Feiger is @LeahFeiger. Vittoria Elliott is @telliotter. Will Knight is @willknight. Or you can write to us at politicslab@WIRED.com. Be sure to subscribe to the WIRED Politics Lab newsletter here.

Mentioned this week:
OpenAI Is Testing Its Powers of Persuasion, by Will Knight
AI-Fakes Detection Is Failing Voters in the Global South, by Vittoria Elliott
2024 Is the Year of the Generative AI Election, by Vittoria Elliott

How to Listen

You can always listen to this week’s podcast through the audio player on this page, but if you want to subscribe for free to get every episode, here’s how:

If you’re on an iPhone or iPad, open the app called Podcasts, or just tap this link. You can also download an app like Overcast or Pocket Casts, and search for WIRED Politics Lab. We’re on Spotify too.

Transcript

Note: This is an automated transcript, which may contain errors.

Leah Feiger: This is WIRED Politics Lab, a show about how tech is changing politics. I’m Leah Feiger, the senior politics editor at WIRED. A few months ago, a lot of people were concerned about how artificial intelligence might affect the 2024 US election. AI-generated images, audio, and video had just gotten so good and were so easy to make and spread. The WIRED politics team, in our project tracking the use of AI in elections around the world, actually called 2024 the year of the generative AI election. Lately, it seems like some of the panic around AI has subsided, but deepfakes of Kamala Harris, Joe Biden, Donald Trump, and other politicians and their supporters are everywhere. And as we’ll talk about today, legislation on political deepfakes, including AI-generated pornography, is really tricky. So with the election looming, what has changed, if anything, and how much should we really be worrying about AI? Joining me to talk about all of this are two of WIRED’s AI experts. We have politics reporter Vittoria Elliott—

Vittoria Elliott: Hi, Leah.

Leah Feiger: Hey, Tori. And from Cambridge, Massachusetts, senior writer Will Knight. Will, thank you so much for coming on. It’s your first time here.

Will Knight: Yep. Hello. Thank you for having me.

Leah Feiger: So let’s start with porn, if that’s OK. Tori, you have a big article out today all about US states tackling the issue of AI-generated porn. Tell us about it. How are people handling this?

Vittoria Elliott: It’s actually really piecemeal, and that’s because on a fundamental level, we don’t have national regulation on this. Congresswoman Alexandria Ocasio-Cortez, who was herself the target of nonconsensual deepfake porn this year, has introduced the Defiance Act, which would allow victims to sue the people who create and share nonconsensual deepfake porn, as long as they can show that the images or videos were made nonconsensually. And then Senator Ted Cruz also has a bill called the Take It Down Act that would let people force platforms to remove those images and videos. But there hasn’t really been major movement on these in several months, even though this issue has gotten a lot of attention, particularly because we’ve seen a spate of young people, middle and high schoolers, using generative AI technology to bully their peers, to make explicit images and videos of their peers. And we obviously have data that shows that while generative AI is still being used in politics, and we definitely have a ton of examples where it is, mostly it’s used to target and harass and intimidate women.