Twitter testing feature to make it harder for users to ‘like’ things it doesn’t like
Twitter has secretly begun testing a new feature designed to slow users down before they can ‘like’ a post, adding to a slew of new prompts and prods before people can interact on the platform.
The feature was first reported on Monday by Jane Manchun Wong, a Hong Kong-based software expert who reverse-engineers big tech apps to discover upcoming new features still in the development or testing phase.
Twitter is working on misinformation warning on Likes, just like the ones for Quote Tweets / Retweets pic.twitter.com/BLlmaw5RZK
— Jane Manchun Wong (@wongmjane) November 9, 2020
When attempting to ‘like’ a controversial or potentially misleading tweet, users in the test group are prompted to “help keep Twitter a place for reliable info” and to “find out more” before being allowed to ‘like’ the tweet.
The feature doesn’t prevent people from liking a tweet – it merely slows them down.
If introduced on the wider platform, the feature would be yet another in an array of new features unveiled by the tech giant in recent weeks that are aimed at content de-amplification, ostensibly to encourage critical thinking before information is shared.
These include a new prompt encouraging users to actually click through and read an article before retweeting it, in a bid to prevent the spread of clickbait headlines and ‘fake news’ across social media. However, time and time again, this type of intervention by social media companies has been weaponized by partisans on both sides of the aisle, with accusations of fake news and misinformation being rife.
New York Times tech reporter calls four accurate stories “right wing misinformation” 👀👀 https://t.co/n5r2ftnDA8
— Jon Levine (@LevineJonathan) November 10, 2020
On November 2, Election Day eve, Twitter applied the first of its warning labels to a Trump tweet discussing ballot deadline extensions in Pennsylvania potentially leading to “rampant and unchecked cheating” and “violence in the streets.”
We placed a warning on a Tweet from @realDonaldTrump for making a potentially misleading claim about an election. This action is in line with our Civic Integrity Policy. More here: https://t.co/k6OkjNXEAm
— Twitter Safety (@TwitterSafety) November 4, 2020
Weeks prior, on October 20, Twitter began prompting people to share quote tweets instead of simply retweeting, ramped up its efforts to label alleged misinformation, and confronted users with more aggressive warnings about tweets by political figures in the US in the lead-up to election day.
“Our goal is to give people the context and tools necessary to find credible information on our service – no matter the topic or where they are seeing the tweet,” a Twitter spokesperson said of the new features, confirming that the new ‘like’ button speed bump was still in development.
“This is an iterative process, and we’re continuing to explore features and policies to help people on Twitter make their own informed decisions,” they added.
Some argued that the new features, including the slow-mo ‘like,’ would actually dissuade people from conducting their own independent research and merely promote acceptance of the tech company’s guidelines as doctrine.
tbh I'm not a really big fan of websites like Twitter telling users about misleading posts. It makes them even more reluctant to do research on their own and just believe whatever they're fed.
— Kenneth (@mrcreeper1008) November 9, 2020
Some expressed concern that Twitter was positioning itself as the “arbiter of truth.”
Meanwhile, others were apparently just sick of tech companies’ constant attempts to interfere with online discourse.
I'm actually extremely sick of this childish BS that they are doing. 😠
— James Wood 🚀 ☁️ (@jameswood12p) November 9, 2020