Editorial note: This blog was first posted on July 9, 2019, and last updated December 13, 2021, to reflect additional changes made to our rules against hateful conduct.
We create our rules to keep people safe on Twitter, and they continuously evolve to reflect the realities of the world we operate within. Our primary focus is on addressing the risks of offline harm, and research* shows that dehumanizing language increases that risk.
As we develop the Twitter Rules in response to changing behaviors and challenges with serving the public conversation, we understand the importance of considering a global perspective and thinking about how policies may impact different communities and cultures. Since 2019, we’ve prioritized feedback from the public, external experts, and our own teams to inform the continued development of our hateful conduct policy.
Expanding our hateful conduct policy
While we encourage people to express themselves freely on Twitter, abuse, harassment and hateful conduct continue to have no place on our service. As part of our work to make Twitter safe, our hateful conduct policy covers all protected categories. This means that we prohibit language that dehumanizes others on the basis of religion, caste, age, disability, disease, race, ethnicity, national origin, gender, gender identity, or sexual orientation.
We will require Tweets that violate this policy to be removed from Twitter when they are reported to us. We will also continue to surface potentially violative content through proactive detection and automation. If an account repeatedly breaks the Twitter Rules, we may temporarily lock or suspend it; more on our range of enforcement options is available here.
The Twitter Rules help set expectations for everyone on the service and are updated to keep up with evolving online behaviors, speech, and experiences we observe. In addition to applying our iterative and research-driven approach to the expansion of the Twitter Rules, we’ve reviewed and incorporated public feedback to ensure we consider a wide range of perspectives.
With each update to this policy, we’ve sought to expand our understanding of cultural nuances and ensure we are able to enforce our rules consistently. We’ve benefited from feedback from the diverse communities and cultures that use Twitter around the globe. Consistent feedback we’ve received includes:
That said, even with these improvements, we recognize we will still make mistakes. We are committed to further strengthening both our enforcement processes and our appeals processes to correct our mistakes and prevent similar ones moving forward.
Our trusted partners
We realize that we don’t have all the answers, so in addition to public feedback, we work in partnership with our Trust & Safety Council as well as other organizations around the world with deep subject matter expertise in this area.
As we’ve expanded this policy, we’ve collaborated with civil society, academics and third-party experts to help us think about how we could appropriately address dehumanizing speech around these complex categories. For example, we've worked with partners to help us better understand the challenges we would face and to ultimately answer questions like:
"Our work in local communities helps us think critically about ways to ensure social cohesion between the diverse communities we serve, institutions, and policymakers. As a trusted partner, we worked with Twitter to ensure that cultural and regional nuances relevant to migrant groups were accounted for in their hateful conduct policy update." — Roses of Peace, an interfaith organization that aims to build a cohesive and resilient Singapore
We’ll continue to build Twitter for the global community it serves and ensure your voices help shape our rules and how we work. As we continue to look for opportunities to evolve and expand our policies to better handle the challenges we’re currently facing, we’ll update you on what we learn and how we plan to address them. We’ll also continue to provide regular updates on all of the other work we’re doing to make Twitter a safer place for everyone via @TwitterSafety.
*Examples of research on the link between dehumanizing language and offline harm: