Twitter's ‘secret blacklists’ exposed

9 Dec, 2022 02:23
A new investigation details the practice of “shadow bans,” known internally as “visibility filtering”

Twitter created a set of tools and barriers that let moderators prevent specific tweets and topics from trending, or limit the visibility of entire accounts, all without users’ knowledge, according to internal correspondence and interviews with multiple high-level sources within the company.

Despite repeated public assurances by top Twitter officials that the company does not “shadow ban” users, especially not “based on political viewpoints or ideology,” the practice actually existed under the euphemism of “visibility filtering,” according to journalist Bari Weiss, who published the second installment of the so-called Twitter Files in a lengthy thread on Thursday night.

“Think about visibility filtering as being a way for us to suppress what people see to different levels. It’s a very powerful tool,” one senior Twitter employee said, while another admitted that “normal people do not know how much we do.”

Twitter moderators have the power to add a user to categories such as “Trends Blacklist,” “Search Blacklist” and “Do Not Amplify,” limiting the discoverability of a particular tweet or an entire account – all without the user’s knowledge or any warning.
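To make the mechanism concrete, here is a minimal, hypothetical sketch of how label-based “visibility filtering” of this kind could work in principle. It is not Twitter’s actual code; the label names, the Account class and the allowed_surfaces function are illustrative assumptions based only on the category names reported above.

```python
# Hypothetical illustration (not Twitter's actual implementation):
# account-level labels decide which surfaces an account's tweets may appear on.

from dataclasses import dataclass, field


@dataclass
class Account:
    handle: str
    labels: set = field(default_factory=set)  # e.g. {"trends_blacklist"}


def allowed_surfaces(account: Account) -> set:
    """Return the surfaces where this account's tweets may be shown."""
    surfaces = {"timeline", "search", "trends", "recommendations"}
    if "trends_blacklist" in account.labels:
        surfaces.discard("trends")          # never shown in trending topics
    if "search_blacklist" in account.labels:
        surfaces.discard("search")          # hidden from search results
    if "do_not_amplify" in account.labels:
        surfaces.discard("recommendations")  # excluded from algorithmic amplification
    return surfaces


# Example: an account tagged "trends_blacklist" still appears in followers'
# timelines and in search, but never in the trending section.
print(allowed_surfaces(Account("example_user", {"trends_blacklist"})))
```

In such a scheme the affected user sees no change from their own account, which is why the practice is described as invisible to the people it targets.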

Weiss noted that the tools were even used to limit the reach of academics, including Stanford University’s Dr. Jay Bhattacharya, who courted controversy after challenging the effectiveness of Covid-19 lockdowns and other pandemic mandates. He ended up on Twitter's “Trends Blacklist,” keeping his posts out of the site’s trending section, the documents show.

However, above the regular moderators sat another “secret group” that handled issues concerning “high follower,” “controversial” and other notable users. Known as “Site Integrity Policy, Policy Escalation Support,” the team included high-level executives such as former Head of Legal, Policy, and Trust Vijaya Gadde; Global Head of Trust and Safety Yoel Roth; and CEOs Jack Dorsey and Parag Agrawal.

In one notable example, the top-level moderation team was involved in decisions to repeatedly suspend the Libs of TikTok account, which routinely posted material mocking liberals and progressives, and has racked up more than 1.4 million followers. While the account was told it had violated Twitter’s policy against “hateful conduct,” an internal company memo circulated in October acknowledged that it had not “directly engaged in behavior violative of the Hateful Conduct policy” after all.

The ‘Site Integrity Policy’ group then took up a new argument, saying instead that Chaya Raichik, the woman running the Libs of TikTok account, had “encouraged online harassment.” However, when Raichik’s personal information was exposed online, Twitter refused to take action, concluding that posts containing her home address and photos of her residence did not break any of the platform’s rules.

The release of the documents was endorsed by the company’s new CEO Elon Musk, who, after taking over Twitter in October, fired several top executives, including Gadde and Roth, and reversed some of Twitter’s previous decisions, such as the permanent ban on former President Donald Trump’s account.

“We're just getting started on our reporting,” Weiss concluded, promising that the next installment of the Twitter Files will soon be published by journalist Matt Taibbi, who opened the series last week with revelations about a company-wide effort to suppress a damaging report about Joe Biden’s family.