There’s a shadow of a doubt.
On Thursday morning, President Donald Trump called out Twitter, accusing the social network of shadow banning prominent Republicans. The reaction came after Vice News reported that Twitter wasn’t autopopulating Republicans in its drop-down search box.
But that’s not shadow banning — it’s a bug, according to Twitter.
“We are aware that some accounts are not automatically populating in our search box, and [we’re] shipping a change to address this,” a Twitter spokesperson said. “The profiles, tweets and discussions about these accounts do appear when you search for them. To be clear, our behavioral ranking doesn’t make judgments based on political views or the substance of tweets.”
Thursday’s presidential backlash against Twitter is the latest in a series of accusations lawmakers have made regarding social networks and censorship. The House Judiciary Committee has had two hearings on the subject, in July and April, with Republican lawmakers asking representatives from Twitter, Google and Facebook if the platforms were purposely silencing conservative voices.
The subject has come up before. In January, during a Senate hearing, Sen. Ted Cruz, a Republican from Texas, asked Twitter's policy director, Carlos Monje, if the social network practices shadow banning. Monje said no, and Twitter has said at multiple hearings on Capitol Hill that it doesn't shadow ban.
Most recently, during a hearing on July 18, Twitter’s global lead for public policy strategy, Nick Pickles, told lawmakers, “Some critics have described the sum of all of this work as a banning of conservative voices. Let me make clear to the committee today that these claims are unfounded and false.”
What is shadow banning?
Shadow banning isn’t a new concept; it’s frequently used in forums and on other social networks as an alternative to banning someone outright.
Instead of kicking someone off, shadow bans make a person’s post visible only to the user who created it. The idea is to protect others from harmful content while eventually prompting the shadow-banned user to voluntarily leave the forum due to a lack of engagement.
If you outright ban a user, the thinking goes, the person is aware of it and will likely just set up another account and continue the offending behavior.
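In code, that visibility rule is simple to express. The sketch below is purely illustrative, with made-up names and data structures, not Twitter's or any forum's actual implementation; it just shows the core idea that a shadow-banned author still sees their own posts while everyone else doesn't.

```python
# Hypothetical sketch of shadow-ban visibility logic.
# All names and data shapes here are illustrative assumptions.

def visible_posts(viewer, posts, shadow_banned):
    """Return the posts a given viewer can see.

    A shadow-banned author's posts remain visible to the author
    (so the ban isn't obvious) but are hidden from everyone else.
    """
    return [
        p for p in posts
        if p["author"] not in shadow_banned or p["author"] == viewer
    ]

posts = [
    {"author": "alice", "text": "hello"},
    {"author": "bob", "text": "spam spam spam"},
]
banned = {"bob"}

# An ordinary viewer doesn't see bob's post.
print(visible_posts("carol", posts, banned))
# Bob, however, still sees his own post and suspects nothing.
print(visible_posts("bob", posts, banned))
```

The key design point is the second half of the condition: the ban is invisible to the banned user, which is exactly what distinguishes a shadow ban from an outright one.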
Shadow banning was Reddit’s only form of banning for years and was used by the site until November 2015.
The practice is similar to what Facebook does with misinformation. The social network told reporters on July 11 that instead of completely banning pages behind hoaxes and misinformation, it would rather demote their posts so fewer people see them.
Shadow banning is typically used to stop bots and trolls, said Zack Allen, director of threat operations at ZeroFox, a company that focuses on social media security.
“This can be effective in combating bots where ‘bot herders’ who maintain these accounts don’t necessarily know whether or not their bots are actually being seen by other people,” he said.
Is what’s happening on Twitter shadow banning?
You can still see posts from the Republicans named in the Vice News article, including Republican Party Chairwoman Ronna McDaniel and Rep. Matt Gaetz, a Republican from Florida.
The White House, McDaniel and Gaetz didn’t respond to a request for comment.
Your Twitter account may not autopopulate in searches, but that doesn’t mean you’ve been shadow banned.
Twitter's moderators aren't actively blocking accounts so that only the account holders can see their own tweets, the company says.
The search results bug involves an error with Twitter’s algorithm, the social network’s head of product, Kayvon Beykpour, said in a series of tweets Wednesday.
Twitter’s behavior signals caused the mistakes with autosuggestions, Beykpour explained.
“Our usage of the behavior signals within search was causing this to happen & making search results seem inaccurate,” he said in a tweet Wednesday. “We’re making a change today that will improve this.”
Twitter’s product manager for health, David Gasca, talked to Techhnews about these signals earlier in July. They could include how often an account is muted, blocked, reported, retweeted, liked and replied to. Twitter’s algorithm takes interactions into consideration, and its artificial intelligence classifies them as either positive or negative experiences.
As part of Twitter's push to create healthy conversations, its AI will favor accounts that have had more positive experiences.
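A signal-based ranking like the one Gasca describes can be sketched in a few lines. Everything below is an assumption for illustration, including the weights, the signal names and the scoring function; Twitter's real model and features aren't public.

```python
# Illustrative sketch of behavior-signal ranking.
# The signal names and weights are invented for this example;
# they are not Twitter's actual model.

NEGATIVE = {"muted": -1.0, "blocked": -2.0, "reported": -2.0}
POSITIVE = {"retweeted": 1.0, "liked": 0.5, "replied": 0.5}

def health_score(signals):
    """Combine per-account interaction counts into one score."""
    weights = {**NEGATIVE, **POSITIVE}
    return sum(weights[k] * signals.get(k, 0) for k in weights)

def rank_accounts(accounts):
    """Order accounts so those with more positive signals come first."""
    return sorted(accounts, key=lambda a: health_score(a["signals"]),
                  reverse=True)

accounts = [
    {"name": "troll", "signals": {"blocked": 40, "reported": 25, "liked": 3}},
    {"name": "friendly", "signals": {"liked": 120, "retweeted": 30, "muted": 2}},
]
print([a["name"] for a in rank_accounts(accounts)])  # → ['friendly', 'troll']
```

An account that gets blocked and reported often ends up with a low score and sinks in results, which is the behavior that, per Beykpour, spilled over into search autosuggestions.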
First published July 26, 9:07 a.m. PT
Update, 9:42 a.m.: Adds remarks from a security specialist.