Twitter has updated its rules again to minimize the risk of offline harm. Twitter says the changes to the Twitter Rules respond to changing behaviors and the challenges of serving the public conversation. In its blog post, the company clarified that the platform is about letting people express themselves freely online, but that abuse, harassment, and hateful conduct continue to have no place on Twitter.
In July 2019, Twitter expanded its rules against hateful conduct to include language that dehumanizes others on the basis of religion or caste. In March 2020, Twitter expanded the rule to include language that dehumanizes on the basis of age, disability, or disease. The latest change further expands the hateful conduct policy to prohibit language that dehumanizes people on the basis of race, ethnicity, or national origin.
https://twitter.com/TwitterSafety/status/1334197337044860929?s=20
Twitter will require Tweets like those below to be removed from the platform when they are reported. The social media giant will also continue to surface potentially violative content through proactive detection and automation. If an account repeatedly breaks the Twitter Rules, Twitter may temporarily lock or suspend it.
Twitter's New Rules for Addressing Hateful Conduct
The Twitter Rules help set expectations for everyone on the service and are updated to keep pace with the evolving online behaviors, speech, and experiences Twitter observes. In addition to applying its own iterative and research-driven approach to expanding the Twitter Rules, the social media giant has also reviewed and incorporated public feedback to ensure that it considers a wide range of perspectives.
Clearer language — Across languages, people believed the proposed change could be improved by providing more details, examples of violations, and explanations for when and how context is considered. Twitter has incorporated this feedback when refining this rule, and also made sure that they provided additional detail and clarity across all its rules.
Narrow down what’s considered — Respondents said that “identifiable groups” was too broad, and they should be allowed to engage with political groups, hate groups, and other non-marginalized groups with this type of language. Many people wanted to “call out hate groups in any way, any time, without fear.” In other instances, people wanted to be able to refer to fans, friends, and followers in endearing terms, such as “kittens” and “monsters.”
Consistent enforcement — Many people raised concerns about Twitter's ability to enforce its rules fairly and consistently, so the platform has developed a longer, more in-depth training process with its teams to make sure they are better prepared when reviewing reports.
That said, even with these improvements, Twitter recognizes that it will still make mistakes. The company says it is committed to continuing to strengthen both its enforcement process and its appeals process, so it can correct mistakes and prevent similar ones going forward.