Ask an expert about escaping misinformation and you’ll get a bleak response: You can’t.
“My No. 1 answer if you want to avoid misinformation on social media is, Don’t use social media. Because it’s just chock full of it,” said Cameron Hickey, a technology manager at the Information Disorder Lab at Harvard’s Shorenstein Center. “It’s a real point to make; it’s not just a joke.”
Since the US presidential election in 2016, tech giants like Facebook, Twitter and Google’s YouTube have faced a backlash over their failure to stanch political misinformation. They’ve changed their algorithms, purged bots and manipulative accounts, and introduced fact-checking systems. Many of those efforts have aimed to improve the quality of online discourse before Tuesday’s midterms, the first national US election since 2016 and a key test of how far these platforms have come.
So far, the answer seems to be: not far enough.
And it may get worse before it gets better. Deep fakes, aka videos manipulated with machine learning to turn almost anybody into an audio-visual puppet, haven’t surfaced meaningfully leading up to the US midterms, according to Hany Farid, a computer science professor at Dartmouth College. But accusations of deep fake videos have cropped up in elections in other parts of the world, most recently in Brazil, he said.
“It’s just a matter of time before we see it here,” Farid said. “It’s going to happen.”
Think it through
So what’s a mere mortal to do?
I asked Hickey how an average internet user can be savvier about political information online. At the Information Disorder Lab, Hickey develops technologies to identify, analyze and report online misinformation, but many of the tools and techniques he uses don’t require cutting-edge software or a Ph.D.
Chief among them is simply stepping back and thinking critically about what you’re seeing.
“Start by recognizing if this is something that’s trying to play on my emotions,” he said. “Is this something that’s trying to exploit my existing biases? If I recognize that, I can then think more critically about it.”
During this midterm election, Hickey and his colleagues have noticed a trend of false allegations about a candidate’s relationship with Islam, involving inaccurately associating the candidate with jihad or Sharia law. In other campaigns, they’ve picked up on a theme of spinning a benign action by a candidate as something sinister.
In these instances, he says, think about intent. The more a message seems designed to convince you of something, the more skeptical you should be.
Vet those images
Beyond healthy skepticism, Hickey recommended search tricks for figuring out what news or images have been put in a deceptive context.
One example of false context — one that’s been rife leading up to this midterm election — is viral images purporting to show migrant caravans in Central America or the violence they’ve wrought. A simple way to verify whether these images are legitimate is a reverse-image search. The simplest may be to right-click on an image while using Google’s Chrome browser; Chrome will give you the option to “Search Google for Image.” But Google lets you reverse-image search in a number of ways.
If that image is manipulative and already viral, your results will be packed with fact-checking organizations debunking it.
And even if the image hasn’t been verified or debunked, you can apply a custom time range to your search to filter out results from the latest election cycle. That can help you determine whether a picture was actually taken years earlier, capturing something totally unrelated to what it’s said to represent.
Customizing your search’s range of time can help you debunk other instances of false context. Hickey gave an example of a misleading tweet that made the rounds during congressional investigations into claims that Brett Kavanaugh, then a Supreme Court nominee, committed sexual assault. As part of a backlash to those allegations, a prominent Twitter user shared details about a man who committed suicide after a false claim.
“That message was conveyed as if it just happened, but it really happened in 2016,” Hickey said, calling it recycled content with false context. “That’s a really simple thing, like looking up dates.”
An underlying problem with information on social media is that platforms tend to “flatten out” content and put it in “all the same-size boxes.” That makes it harder to gauge the quality of the source behind the headline you scroll past. People implicitly judge the value of information sources in their personal networks all the time, Hickey said, but they aren’t as adept at applying that judgment to socially shared news.
“You take something differently if your grandma shared it than if your old college classmate or professor shared it,” he said. “We don’t necessarily do that with the source of content being shared quite a lot.”
Which brings the expert advice back to what Farid, the Dartmouth professor, called “good old-fashioned due diligence.” Be more careful about what you’re reading, check the sources and slow down your clicks and shares, he said.
“And stop getting your news from Facebook,” he said. “It’s a ridiculous place to get news.”