Twitter is rolling out a new feature that prompts users to revise a tweet reply if its language could be considered offensive (the algorithm aims to detect “insults, strong language, or hateful remarks”).
Twitter has made several moves over the past year to improve safety on the platform and curb misinformation. What’s exciting, or at least a cause for optimism, is that in experiments last year, Twitter says, 34% of users who saw the prompt revised their initial reply or decided not to send the tweet at all. After being prompted once, people also went on to write 11% fewer offensive replies.
For now, the feature will only be available to users who use Twitter in English on iOS and Android devices. It stops short of preventing someone from sending an offensive or harmful reply altogether.
“We’ll continue to explore how prompts — such as reply prompts and article prompts — and other forms of intervention can encourage healthier conversations on Twitter,” the official announcement said. “Our teams will also collect feedback from people on Twitter who have received reply prompts as we expand this feature to other languages.”
Twitter will give users the chance to "review" a tweet before sending, if they think it's a potentially harmful or offensive reply.
So now we have "Don't you want to read the article before RTing?" and "Have you considered … not being a jerk?" https://t.co/padii1Cgv0
— Sarah Scire (@SarahScire) May 5, 2021
From our friends at Twitter: Twitter is rolling out "reply prompts" across iOS and Android. These prompts will encourage people to pause and think before they tweet an insulting or rude reply.
A much-needed feature 🥰🎉 https://t.co/fJ36ejc9NX pic.twitter.com/oETFWkMNq7
— Koromone Koroye (@Koromone_K) May 6, 2021
I feel so validated. Priming WORKS! Spent more time on designing for Safety and less time on whack-a-mole. https://t.co/f55BRDChJZ pic.twitter.com/49nZZKHmfC
— Evan Hamilton (@evanhamilton) May 5, 2021