Seeing is no longer believing.
Deepfakes – doctored videos that look alarmingly real thanks to artificial intelligence – are the latest trend in disinformation making a troubling appearance on social media platforms.
With AI developments making the videos ever more convincing, experts say deepfakes pose the latest threat to election security in the U.S. and around the world.
With that threat in mind, the U.S. House Intelligence Committee took a deep dive into “deepfakes” Thursday.
Experts did little to set committee members’ minds at ease.
“There’s no easy solution and it’s likely to get much worse before it gets better,” said Dr. David Doermann of the University at Buffalo’s Artificial Intelligence Institute.
“Right now I would be very worried about someone making a fake video about electoral systems being out, or broken down on Election Day 2020,” said Clint Watts of the Foreign Policy Research Institute. “We should already be building a battle drill, a response plan, on how we’d handle that, in the government, in the state governments and in the DHS, as well as the social media companies.”
As the technology grows more sophisticated, detecting fakes becomes more difficult, and those with the skill to do it say they’re outgunned.
“There’s a lot more manipulators than there are detectors,” said Doermann.
And even after a video is deemed fake, it can still influence public opinion.
The challenge for lawmakers and tech companies, experts say, is to straddle the line between censoring protected speech and preventing misinformation.
“I don’t think we’ll see any legislation around this,” said Alex Stamos, a cybersecurity analyst. “I think we do have to have a self regulatory model, and we should expect more from the companies.”