Twitter is implementing a new 'crisis misinformation' policy to address 'situations of armed conflict, public health emergencies, and large-scale natural disasters,' according to a blog post by Yoel Roth, Twitter's Head of Safety & Integrity.
The new policy statement comes as Twitter is in the process of being acquired by Tesla CEO Elon Musk, who has made his opinions on 'content filtering' known through various tweets and posts. Musk has also stated that the acquisition deal cannot be finalised until the platform confirms the number of bots or fake accounts; Twitter claims 5%, a figure Musk disputes.
'Crisis' is defined by Twitter as 'situations in which there is a widespread threat to life, physical safety, health, or fundamental necessities of subsistence.' The company goes on to say that it will rely on 'verification from numerous reputable, publicly available sources, including information from conflict monitoring groups, humanitarian organisations, open-source investigators, journalists, and more' to evaluate whether a claim is deceptive.
According to the blog, this will be a global policy intended to 'help ensure viral misinformation isn't amplified or recommended' by the platform during emergencies. As soon as Twitter receives evidence that a claim is 'misleading, we will not amplify or recommend' that content across the platform, the post says.
This includes surfacing it in the Home timeline, Search, and Explore sections of the app or website. Twitter will also 'prioritise adding warning notices to highly visible Tweets and Tweets from high-profile accounts, such as state-affiliated media accounts,' that contain such false information, according to the company.
Tweets containing content that violates the crisis misinformation policy will be accompanied by a warning notice that states: 'This Tweet breached the Twitter Rules on spreading false or misleading information that may cause harm to crisis-affected communities. However, Twitter has determined that this Tweet should stay available for accountability purposes.'
To be clear, Twitter will not remove potentially incorrect information; rather, it will limit its reach.
According to the blog post, examples of content that may receive the Twitter warning for false or misleading content include:
So, what happens when Twitter adds a disclaimer to a piece of false information? Users will still be able to view it after clicking through the warning screen. The content, however, will not be 'amplified or recommended across the service.' Furthermore, Twitter will disable the ability to like, retweet, or share that specific piece of content.
‘We’ve seen that not amplifying or recommending certain content, adding context through labels, and, in extreme circumstances, limiting engagement with the Tweets are effective approaches to mitigate harm while still preserving speech and records of significant global events,’ the blog post continues.
The first version of this policy focuses on international armed conflict, beginning with the war in Ukraine, and Twitter intends to 'update and expand the policy to encompass more types of crisis.' 'The policy will augment our existing work in other worldwide crises, such as Afghanistan, Ethiopia, and India,' the company stated.