Updating our rules against hateful conduct

Tuesday, 9 July 2019

Editorial note: This blog was first posted on July 9, 2019, and last updated December 13, 2021, to reflect additional changes made to our rules against hateful conduct.

We create our rules to keep people safe on Twitter, and they continuously evolve to reflect the realities of the world we operate within. Our primary focus is on addressing the risks of offline harm, and research* shows that dehumanizing language increases that risk.

As we develop the Twitter Rules in response to changing behaviors and challenges with serving the public conversation, we understand the importance of considering a global perspective and thinking about how policies may impact different communities and cultures. Since 2019, we’ve prioritized feedback from the public, external experts, and our own teams to inform the continued development of our hateful conduct policy.

Expanding our hateful conduct policy 

While we encourage people to express themselves freely on Twitter, abuse, harassment and hateful conduct continue to have no place on our service. As part of our work to make Twitter safe, our hateful conduct policy covers all protected categories. This means that we prohibit language that dehumanizes others on the basis of religion, caste, age, disability, disease, race, ethnicity, national origin, gender, gender identity, or sexual orientation. 

We will remove Tweets that violate this policy when they are reported to us. We will also continue to surface potentially violative content through proactive detection and automation. If an account repeatedly breaks the Twitter Rules, we may temporarily lock or suspend it; more on our range of enforcement options here.

Our approach to addressing hateful conduct on Twitter 

The Twitter Rules help set expectations for everyone on the service and are updated to keep up with evolving online behaviors, speech, and experiences we observe. In addition to applying our iterative and research-driven approach to the expansion of the Twitter Rules, we’ve reviewed and incorporated public feedback to ensure we consider a wide range of perspectives.    

With each update to this policy, we’ve sought to expand our understanding of cultural nuances and to ensure we can enforce our rules consistently. We’ve benefited from feedback from the many communities and cultures that use Twitter around the globe. Consistent feedback we’ve received includes:

  • Clearer language — Across languages, people believed the proposed change could be improved by providing more detail, examples of violations, and explanations of when and how context is considered. We incorporated this feedback when refining this rule, and also added detail and clarity across all our rules.
  • Narrower scope — Respondents said that “identifiable groups” was too broad, and that they should be allowed to use this type of language toward political groups, hate groups, and other non-marginalized groups. Many people wanted to “call out hate groups in any way, any time, without fear.” In other instances, people wanted to be able to refer to fans, friends, and followers by endearing terms such as “kittens” and “monsters.”
  • Consistent enforcement — Many people raised concerns about our ability to enforce our rules fairly and consistently, so we developed a longer, more in-depth training process with our teams to make sure they were better prepared when reviewing a report.

That said, even with these improvements, we recognize we will still make mistakes. We are committed to further strengthening both our enforcement processes and our appeals processes to correct our mistakes and prevent similar ones moving forward.

Our trusted partners 

We realize that we don’t have all the answers, so in addition to public feedback, we work in partnership with our Trust & Safety Council as well as other organizations around the world with deep subject matter expertise in this area.  

As we’ve expanded this policy, we’ve collaborated with civil society, academics, and third-party experts to determine how to appropriately address dehumanizing speech around these complex categories. For example, we've worked with partners to better understand the challenges we would face and, ultimately, to answer questions like: 

  • How do we protect conversations people have within marginalized groups, including those using reclaimed terminology? 
  • How do we ensure that our range of enforcement actions take context fully into account, reflect the severity of violations, and are necessary and proportionate?
  • How can, or should, we factor in whether a given protected group has been historically marginalized and/or is currently being targeted when we evaluate the severity of harm?
  • How do we account for “power dynamics” that can come into play across different groups? 

"Our work in local communities helps us think critically about ways to ensure social cohesion between the diverse communities we serve, institutions, and policymakers. As a trusted partner, we worked with Twitter to ensure that cultural and regional nuances relevant to migrant groups were accounted for in their hateful conduct policy update." — Roses of Peace, an interfaith organization that aims to build a cohesive and resilient Singapore

We’ll continue to build Twitter for the global community it serves and ensure your voices help shape our rules and how we work. As we continue to look for opportunities to evolve and expand our policies to better handle the challenges we face, we’ll update you on what we learn and how we plan to address them. We’ll also continue to provide regular updates on all of the other work we’re doing to make Twitter a safer place for everyone via @TwitterSafety.

*This policy is informed by research on the link between dehumanizing language and offline harm.
