
Maintaining the safety of X in times of conflict

Tuesday, 14 November 2023

January 7, 2024 Update: 

Platform integrity and authenticity: 

  • Our escalations team has actioned over 30,000 pieces of content under our synthetic and manipulated media policy.
  • We’ve also taken action, including suspension, on over 781,000 accounts as a result of our proactive investigations to protect authentic conversation regarding the conflict, covering coordinated/inauthentic engagement, inauthentic accounts, duplicate content, and trending topic/hashtag spam. Additionally, we continue to investigate and disrupt coordinated campaigns to manipulate conversations related to the conflict.

-------------------------------------------------------------------------------------------

November 14, 2023

Over half a billion of the world’s most informed and influential people come to X each month to freely express themselves. In times of uncertainty such as the Israel-Hamas conflict, our responsibility to protect the public conversation is magnified.

From the onset of the conflict, we activated our crisis protocol and stood up the company to address the rapidly evolving situation with the highest level of urgency. That includes the formation of a cross-functional leadership team that has been working around the clock to ensure our global community has access to real-time information and to safeguard the platform for our users and partners.

As we enter the second month of the conflict, here’s an update on the key proactive measures we have taken:

Policy Enforcement

X has a comprehensive set of policies designed to promote and protect the public conversation. Below is a summary of the key enforcement actions we have taken under those policies in response to the Israel-Hamas conflict. Enforcement actions can include restricting the reach of a post, removing the post, or suspending the account. Additionally, violating posts are not eligible for monetization.

Safety and cybercrime:

  • To date, our escalations team has actioned over 325,000 pieces of content that violate our Terms of Service, including violent speech and hateful conduct.
  • We’ve taken action under our Violent and Hateful Entities policy to remove over 3,000 accounts operated by violent entities in the region, including Hamas, since the start of the conflict.
  • In parallel, as we outlined in our September update on this topic, we have expanded our proactive measures to automatically remediate antisemitic content and provided our agents worldwide with a refresher course on antisemitism.

Platform integrity and authenticity: 

  • Our escalations team has actioned over 25,000 pieces of content under our synthetic and manipulated media policy.
  • We’ve also taken action, including suspension, on over 375,000 accounts as a result of our proactive investigations to protect authentic conversation regarding the conflict, covering coordinated/inauthentic engagement, inauthentic accounts, duplicate content, and trending topic/hashtag spam. Additionally, we continue to investigate and disrupt coordinated campaigns to manipulate conversations related to the conflict.

Exceptions to our Sensitive Media policy:

  • We know that it's sometimes incredibly difficult to see certain content, especially in moments like this one. While we’ve always allowed media that meets our definition of Graphic Content (so long as it is placed behind a sensitive media warning interstitial), we generally remove content that meets our definition of Gratuitous Gore. However, when newsworthy events like these occur, X believes it is in the public's interest to understand what is happening in real time, and we’ve allowed a range of media to remain on the platform (so long as it, too, is placed behind an interstitial). People on X can control what media they see, and we recommend reviewing your media settings so that your exposure to sensitive media matches your preferences.

 

Community Notes

A critical challenge during times of conflict is misleading information, something every publisher and platform is tackling. We believe it’s important to add context to potentially misleading content, and that a community-led approach is an effective solution. That belief led us to create Community Notes, which addresses a far wider range of sophisticated media types than our historical approaches.

In the first month of the conflict, notes have been viewed well over a hundred million times, addressing topics from out-of-context videos to AI-generated media to claims about specific events. And when notes are added, they’re effective: they’re helpful to people across the political spectrum, they measurably inform people’s understanding, and people choose to share potentially misleading content less (and often delete it). On average, people are 30% less likely to agree with the substance of the original post after reading a note about it.
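The mechanism behind that cross-spectrum claim is public: X’s open-source Community Notes ranking (github.com/twitter/communitynotes) scores a note by the intercept of a matrix factorization, so one-sided agreement is absorbed by viewpoint factors and only support that bridges viewpoints counts toward showing the note. Below is a deliberately tiny sketch of that idea, not the production scorer; the ratings, the one-dimensional factors, and the hyperparameters are all invented for illustration, and terms such as rater-leniency intercepts are omitted for brevity.

```python
import numpy as np

# Each tuple: (rater, note, rating); 1.0 = "helpful", 0.0 = "not helpful".
# Note 0 is rated helpful by all four raters; note 1 only by one "side".
ratings = [
    (0, 0, 1.0), (1, 0, 1.0), (2, 0, 1.0), (3, 0, 1.0),
    (0, 1, 1.0), (1, 1, 1.0), (2, 1, 0.0), (3, 1, 0.0),
]
n_raters, n_notes = 4, 2

rng = np.random.default_rng(0)
rater_f = rng.normal(0.0, 0.1, n_raters)  # per-rater viewpoint factor
note_f = rng.normal(0.0, 0.1, n_notes)    # per-note viewpoint factor
note_b = np.zeros(n_notes)                # note intercept: "bridged" helpfulness
lr, reg = 0.05, 0.03                      # invented hyperparameters

# Fit rating ~ note_b[n] + rater_f[u] * note_f[n] by SGD. One-sided support
# is explained away by the viewpoint factors, so only agreement that spans
# viewpoints survives in the intercept that decides whether a note is shown.
for _ in range(3000):
    for u, n, y in ratings:
        err = y - (note_b[n] + rater_f[u] * note_f[n])
        note_b[n] += lr * (err - reg * note_b[n])
        rater_f[u], note_f[n] = (
            rater_f[u] + lr * (err * note_f[n] - reg * rater_f[u]),
            note_f[n] + lr * (err * rater_f[u] - reg * note_f[n]),
        )

for n in range(n_notes):
    print(f"note {n}: helpfulness intercept {note_b[n]:+.2f}")
# Expect note 0 to score much higher than note 1, even though both have
# majority raw approval: note 1's support is one-sided.
```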

We hold a high bar for both the quality of notes posted and the speed and scale at which they appear. Here are some improvements we’ve made in just the past month:

Notes are getting faster.

  • They are now visible 1.5 to 3.5 hours sooner than they were a month ago. Note previews shown directly on posts help contributors find and rate proposed notes faster.

Notes are appearing on far more posts.

  • We have north of 200,000 contributors in 44 countries, and that number is growing. It includes over 40,000 new contributors who joined since the beginning of the conflict.
  • We recently launched, and have been enhancing, notes on new media types. When a note is added to a photo or video, it automatically shows on other posts containing matching media (see the sketch after this list for how such matching can work). Because of this new feature, notes related to the Israel-Hamas conflict have been displayed on 10,000+ posts, and that number grows automatically as the relevant images and videos are re-used in new posts.
  • Additionally, we have launched an update making it easier to see the post on which any given note was originally written, enabling people to see the original context.
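X hasn’t said publicly how matching media is detected across posts. A common approach for this kind of matching is perceptual hashing, where near-duplicate images produce nearby fingerprints even after re-encoding or resizing; here is a minimal sketch under that assumption. The file names, note IDs, index, and distance threshold are all hypothetical.

```python
from PIL import Image

def average_hash(path: str, hash_size: int = 8) -> int:
    """Shrink the image to an 8x8 grayscale grid and set one bit per pixel
    brighter than the mean, yielding a 64-bit perceptual fingerprint."""
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for i, p in enumerate(pixels):
        if p > mean:
            bits |= 1 << i
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

# Hypothetical index mapping fingerprints of noted media to note IDs.
noted_media = {average_hash("noted_photo.jpg"): "note-123"}

def notes_for(path: str, max_distance: int = 5) -> list[str]:
    """Return notes whose media is a near-duplicate of this image, so a note
    follows the picture into new posts despite re-encoding or mild edits."""
    h = average_hash(path)
    return [note for fp, note in noted_media.items()
            if hamming(fp, h) <= max_distance]
```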

Notes regularly appear on highly engaged content.

  • This helps inform many people about a topic, and also allows people to become familiar with the ways in which content can be misleading, so they can better evaluate future information.

We’re notifying more users after a note has been added.

We are updating our policies to prevent posts with Community Notes from being monetized.

 

Brand Safety 

X has robust products and policies that we enforce 24/7 to provide brand safety for all advertisers. As part of this commitment, we do not monetize content that violates our policies.

In response to the Israel-Hamas conflict, we have proactively removed more than 100 publisher videos not suitable for monetization. We’ve updated our keyword blocklists with more than 1,000 terms related to the conflict, preventing ad targeting and adjacency on Timeline or Search placements. And we’ve implemented a change so that Amplify Video campaigns from our content partners run only in the Home Timeline.
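As an illustration of how a keyword blocklist can gate adjacency: before an ad is placed next to a post, the post is scanned against the list and the slot is skipped on any hit. The sketch below is only an approximation under that assumption; the terms and function names are invented, and a production system would be far more sophisticated (handling hashtags, languages, and misspellings).

```python
import re

# Hypothetical terms; the real blocklist reportedly holds 1,000+ entries.
BLOCKLIST = {"example conflict term", "another blocked term"}

# Compile one alternation so each candidate post is scanned in a single pass.
_PATTERN = re.compile(
    r"\b(?:" + "|".join(re.escape(t) for t in sorted(BLOCKLIST)) + r")\b",
    re.IGNORECASE,
)

def eligible_for_adjacent_ads(post_text: str) -> bool:
    """Return False when the post mentions a blocklisted term, keeping ads
    out of Timeline or Search slots adjacent to that content."""
    return _PATTERN.search(post_text) is None

print(eligible_for_adjacent_ads("Breaking: example conflict term update"))  # False
print(eligible_for_adjacent_ads("Quarterly earnings thread"))               # True
```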

With so many conversations happening on X right now, we have also shared guidance on managing brand activity during this moment, both through our suite of brand safety and suitability protections and through tighter targeting to brand-suitable content such as sports, music, business and gaming.

 

Our work is ongoing

We’ll continue to engage with communities, governments, nonprofits, customers and others who have constructive feedback and ideas to strengthen our approach. Protecting the public conversation and the safety of the platform is our priority.

 
