Social media companies like Facebook, YouTube and Twitter weren’t anywhere near a congressional hearing on Thursday, but they were still the stars of the show. The hearing, which focused on how manipulated media like deepfakes could threaten democracy, returned again and again to the social networks’ role in the threat, with experts suggesting policies sure to give those companies pause.
One crucial suggestion? Experts said social-media companies must work together to develop shared policies about what manipulated media should stay up and what must come down. Another — maybe even scarier — prospect for the likes of Facebook and YouTube? Experts recommended walking back the companies’ legal immunity from the content their users post.
“The internet is not in its infancy. It shouldn’t be a free pass,” Danielle Citron, a law professor at the University of Maryland School of Law, told the committee Thursday.
Like Photoshop on steroids, deepfakes are video forgeries powered by artificial intelligence that can make people appear to be doing or saying things they never did. Digital manipulation of video is nothing new, but deepfake tools mean manipulated clips are both easier to make and increasingly hard to detect as fraud.
The US House Intelligence Committee questioned experts Thursday morning about such manipulated media and how it can threaten national security, society and democracy. The committee chair, Democratic Rep. Adam Schiff, called deepfakes a “nightmarish” scenario for the 2020 presidential elections, with voters potentially “struggling to discern what is real and what is fake.”
Devin Nunes, the committee’s ranking Republican member, brought out a standard refrain from conservatives about Silicon Valley: the allegation that tech giants are suppressing voices on the right.
Experts on the House panel argued that a recent deepfake of Facebook CEO Mark Zuckerberg was a positive development for the public’s understanding of these kinds of sophisticated forgeries. In the video, a digital puppet of Zuckerberg says he can control the future thanks to the power of Facebook’s data.
“Nobody really believes Mark Zuckerberg can control the future, because he surely wouldn’t want to show up to testify here or anywhere else, or be in the quagmire he’s in,” said Clint Watts, a fellow at the think tank Foreign Policy Research Institute and the bipartisan Alliance for Securing Democracy. The deepfake, which was posted to Facebook’s platforms specifically to test the company’s policies on removing manipulated media, illustrated how a clip’s context shapes how much of a threat it poses.
In the case of the Zuckerberg deepfake, the video’s patently ridiculous claims worked in favor of allowing it to remain up, Watts said.
In response to a request for comment on the House hearing, Facebook said that “combating misinformation is one of the most important things” the company can do leading up to the 2020 election. “We continue to look at how we can improve our approach and the systems we’ve built. Part of that includes getting outside feedback from academics, experts and policymakers,” the company said in a statement.
Twitter said it’s looking at how it may take action, through both policy and product, on these types of issues in the future. For now, its policies are under close review, the company added in a statement.
YouTube didn’t respond to a message seeking comment.