In algorithms we trust? Twitter draws criticism over potential new ‘misinformation warning’ system
Twitter is experimenting with a new system that would flag messages it deems problematic, even if not factually incorrect, sparking concern that the platform is taking content-screening too far.
Tech blogger Jane Manchun Wong, who is known for reverse engineering apps to find hidden features, revealed on Monday that she had come across a tiered warning label system that Twitter is toying with, apparently in an effort to expand its crackdown on ‘misinformation’.
According to Wong, Twitter could potentially place problematic material into three categories: “Get the latest,” “Stay informed,” and “Misleading.” The new system appears to take a more nuanced approach to fact-checking, employing labels on content that may not be wrong but, in Twitter’s opinion, requires more context.
As an example of how the labels might be employed, Wong created three separate tweets. Her first message, “Snorted 60 grams of dihydrogen monoxide and I’m not feeling so well now,” was countered with a “Get the latest” label that offered more information about water.
Twitter is working on three levels of misinformation warning labels: “Get the latest”, “Stay Informed” and “Misleading” pic.twitter.com/0RdmMsRAEk
— Jane Manchun Wong (@wongmjane) May 31, 2021
In a second post she wrote: “In 12 hours, darkness will ascend in parts of the world. Stay tuned,” triggering a “Stay informed” label that provided a link to information about time zones. A third tweet, “We eat. Turtles eat. Therefore we are turtles,” resulted in a “Misleading” label, tagging the content as a “logical fallacy.”
Wong explained that while the labels themselves were real, she had added her own text beneath them to demonstrate how the system might respond to alleged misinformation.
A Twitter employee confirmed that the labels were genuine, describing them as “early experiments” as the company continues to target misinformation.
👀 some early experiments with new design treatments for our labels on misinformation. Let us know what you think, and how we can improve. (cc @tapatinah) https://t.co/BLXVDAhox7
— Yoel Roth (@yoyoel) May 31, 2021
It’s unclear whether the tiered system will actually go live, or to what extent it would be used if implemented. Wong has broken several stories related to Twitter, including the roll-out of its ‘tip jar’ feature, as well as its plan to introduce a new paid subscription service, Twitter Blue.
While some applauded the experimental system as a step in the right direction, there was considerable concern that Twitter was becoming overzealous in its efforts to police content.
Many wanted to know how the labels would be assigned, arguing that Twitter needed to be transparent, especially if it plans to use an automated, algorithm-based screening process. There were also questions about where the links providing more context or information would actually lead, and who would be responsible for curating what Twitter considers the ‘truth’.
Whatever the number of flags and their meaning, I think the most important thinkg to ask is that : how the tweets will be flagged ? No transparent algorithm (at least to some external & independant people as researchers) will be a red warning for me.@Twitter, please be clear.
— Florent 💉💉 🏴☠️#VotePirate (@f_to_k) May 31, 2021
Other commenters wondered how such a system would work in instances where the author is clearly joking or being sarcastic.
The potential to abuse labels to crack down on undesirable speech should also not be overlooked, noted one reply, arguing that the initiative was “just another step into deep censorship.”
Who will make the decision? twitter or approve third party? Cmon just another step into deep censorship.
— Cesare Camboni 🇮🇹 (@CesareCamboni) May 31, 2021
A similarly critical comment said that Twitter was trying to “play God” by deciding what is true or not.
Like other social media platforms, Twitter has taken aggressive steps to flag or weed out content that it considers harmful or misleading. Most of the initiatives stemmed from allegations that social media was being manipulated to influence the 2020 US presidential contest. However, actions have also been taken to identify and remove “hate speech” and alleged misinformation about Covid-19.
But Twitter’s algorithms are far from infallible. The company recently faced criticism after it deleted posts mentioning the planned eviction of Palestinian families from East Jerusalem, an error that Twitter blamed on its “automated systems.” A similar issue occurred on Instagram.