‘Trusted Friends’ and ‘hateful’ language filter: Twitter concept features would let users choose who & what they want to hear
Proposed features that would let Twitter users limit their audience to only “trusted friends” and blacklist chosen phrases have prompted accusations that the social media platform is encouraging “echo chambers.”
In a series of tweets on Thursday, designer Andrew Courter noted that Twitter is “exploring a bunch of ways to control who can see your Tweets.” He shared three early design concepts to solicit public discussion and feedback, but pointed out that the company is “not building these yet.”
With the ‘Trusted Friends’ feature, comparable to Instagram’s ‘Close Friends’ functionality for its stories, users can “control who can see” their tweets – potentially toggling privacy settings to tailor their audience according to what they put out.
Reasoning that it “could be simpler to talk to who you want, when you want” instead of “juggling alt accounts,” Courter tweeted that “perhaps (users) could also see trusted friends’ tweets first” in their timelines – as opposed to the current ‘Home’ timeline, which is either algorithmically ranked or chronologically ordered.
We hear y'all, toggling your Tweets from public to protected, juggling alt accounts. It could be simpler to talk to who you want, when you want. With Trusted Friends, you could Tweet to a group of your choosing. Perhaps you could also see trusted friends' Tweets first. pic.twitter.com/YxBPkEESfo
— A Designer (@a_dsgnr) July 1, 2021
According to TechCrunch, the feature would build on existing controls that let original posters pick who is able to ‘reply’ to their tweets – those mentioned in the tweet, people they follow, or the default option, ‘everyone’. However, that feature leaves the actual tweet visible and shareable by anyone.
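In rough terms, ‘Trusted Friends’ amounts to a per-tweet visibility check against an author-curated list. The sketch below illustrates how such gating might work; all names and types here are illustrative assumptions, not Twitter’s actual API or data model.

```typescript
// Hypothetical sketch of per-tweet audience gating (illustrative names only).

type Audience = "everyone" | "trusted_friends";

interface User {
  id: string;
  trustedFriends: Set<string>; // IDs of accounts this user trusts
}

interface Tweet {
  authorId: string;
  audience: Audience;
  text: string;
}

// A tweet marked "trusted_friends" is visible only to the author
// and the accounts on the author's trusted list.
function canView(tweet: Tweet, author: User, viewerId: string): boolean {
  if (tweet.audience === "everyone") return true;
  return viewerId === author.id || author.trustedFriends.has(viewerId);
}

// Example: only "bob" (trusted) sees the restricted tweet.
const alice: User = { id: "alice", trustedFriends: new Set(["bob"]) };
const tweet: Tweet = { authorId: "alice", audience: "trusted_friends", text: "inner circle only" };
console.log(canView(tweet, alice, "bob"));   // true
console.log(canView(tweet, alice, "carol")); // false
```

Unlike the existing reply controls, a check like this would gate the tweet itself, not just the conversation underneath it.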
The second proposed change, under the working name ‘Facets’, would allow people to categorize tweets according to context by “embracing an obvious truth: we’re different people in different contexts” with respect to friends, family, work, and public lives.
According to Courter, this concept lets people tweet “from distinct personas within 1 account,” while enabling other individuals to “follow the whole account” or just the ‘facets’ they find interesting. For instance, a personal persona could relate to hobbies, while a professional persona is work-related.
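Conceptually, ‘Facets’ tags each tweet with a persona label and lets followers subscribe to the whole account or a subset of labels. The following sketch shows one way the idea could be modelled; the facet names, types, and filtering logic are assumptions for illustration, not a real Twitter schema.

```typescript
// Hypothetical sketch of "Facets": one account, several labeled personas,
// and followers who subscribe to everything or to specific facets.

type Facet = "public" | "work" | "hobbies" | "family";

interface FacetedTweet {
  authorId: string;
  facet: Facet;
  text: string;
}

interface Subscription {
  followedId: string;
  facets: Set<Facet> | "all"; // follow the whole account, or a chosen subset
}

// Build a follower's view: only tweets from the followed account
// whose facet matches the subscription.
function timelineFor(sub: Subscription, tweets: FacetedTweet[]): FacetedTweet[] {
  return tweets.filter(
    (t) =>
      t.authorId === sub.followedId &&
      (sub.facets === "all" || sub.facets.has(t.facet))
  );
}

// Example: a follower who only wants the work persona.
const tweets: FacetedTweet[] = [
  { authorId: "designer", facet: "work", text: "New concept mockups" },
  { authorId: "designer", facet: "hobbies", text: "Weekend climbing" },
];
const workOnly: Subscription = { followedId: "designer", facets: new Set<Facet>(["work"]) };
console.log(timelineFor(workOnly, tweets).map((t) => t.text)); // ["New concept mockups"]
```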
Meanwhile, the third feature would allow users to filter out phrases they deem “hateful, hurtful and violent,” consider “profanity,” or would simply “prefer not to see” in replies to their tweets. They could also choose “automatic actions,” such as “moving violating replies to the bottom of the conversation” and “muting accounts that violate twice” in spite of the warning prompts.
Here’s how it’d work:
• Authors choose the phrases they prefer not to see
• These phrases are highlighted as ppl write replies; ppl can learn why, or ignore the guidance
• Authors can enable automatic actions, like moving violating replies to the bottom of the convo pic.twitter.com/VzxU6D7eMf
— A Designer (@a_dsgnr) July 1, 2021
Repliers would then see these phrases “highlighted” as they write, with a prompt nudging them to “learn why,” though they can simply “ignore the guidance,” according to Courter. Likening the feature to a “spellcheck” against “sounding like a jerk,” he noted that it could help “set boundaries” for conversations.
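Taken together, the described behaviour is an author-defined phrase filter with escalating actions: flag matching replies, demote them in the conversation, and mute accounts that violate twice. The sketch below is a minimal illustration of that flow under stated assumptions; the matching rule (case-insensitive substring), the two-strike threshold, and all names are hypothetical, not Twitter’s design.

```typescript
// Hypothetical sketch of the author-defined reply filter with automatic actions.

interface Reply {
  authorId: string;
  text: string;
}

class ReplyFilter {
  private violations = new Map<string, number>(); // replier ID -> violation count
  readonly muted = new Set<string>();

  constructor(private blockedPhrases: string[]) {}

  // Returns the conversation with violating replies moved to the bottom,
  // recording violations and muting accounts that violate twice.
  moderate(replies: Reply[]): Reply[] {
    const ok: Reply[] = [];
    const demoted: Reply[] = [];
    for (const reply of replies) {
      const lower = reply.text.toLowerCase();
      if (this.blockedPhrases.some((p) => lower.includes(p.toLowerCase()))) {
        const count = (this.violations.get(reply.authorId) ?? 0) + 1;
        this.violations.set(reply.authorId, count);
        if (count >= 2) this.muted.add(reply.authorId); // second strike: mute
        demoted.push(reply);
      } else {
        ok.push(reply);
      }
    }
    return [...ok, ...demoted]; // violating replies sink to the bottom
  }
}

// Example: "troll" violates twice and ends up muted.
const filter = new ReplyFilter(["jerk"]);
filter.moderate([{ authorId: "troll", text: "what a jerk" }]);
filter.moderate([{ authorId: "troll", text: "still a jerk" }]);
console.log(filter.muted.has("troll")); // true
```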
The proposals drew a mixed reaction from the platform’s users, with several people raising concerns that letting tweeters pick and choose their audiences and the replies they receive would increase the likelihood of “echo chambers” and “virtue signalling.”
The risk of creating echo chambers is very high with this so what about moving these blocked replies next to the spot of hidden replies, so users who want to see what is getting hidden can easily make their own view of the situation and still get a somewhat neutral reply section
— BurgerLuis (@LuisEatsBurgers) July 1, 2021
We already have keyword muting. This would serve no other purpose than virtue signaling. I would use Twitter less if I saw these warnings popping up all over the place on random terms.
— Chris Blec (@ChrisBlec) July 1, 2021
“Twitter is a public forum. It’s what makes it different. Close the communities enough, and it gets turned into a Facebook clone. One Facebook style social network is definitely enough,” one person tweeted.
When some users pointed out that they could simply “block accounts,” Courter responded that “blocks are underused” and claimed there is a “need to normalize blocking and teach how it works.”
In response to Courter’s contention that the reply filters would “help people be their best selves,” a number of people agreed that it would “set a model for empathetic phrasing,” but others said it sounded like “another attempt to pressure users into ‘acceptable’ speech.”
Other users said the concept of tying different personas to one account had been explored previously by Google with the ‘Circles’ feature of its defunct Google+ network, with one person saying it would raise privacy concerns that are not present with the current workaround of using alternate accounts.
Also, if something like this is implemented, please still retain the ability to quickly switch accounts. While this is good within one identity, it makes connections between accounts public, which isn’t good if for example you like having a professional and a pseudonymous account
— CUNII⅋FORM (@cuniiform) July 2, 2021