Serving the Public Conversation During Breaking Events

Thursday, 5 April 2018

People come to Twitter first to learn about news and events unfolding in real time, and we’re committed to ensuring that the information they receive is credible and authentic. Whether it’s supporting real-time rescue efforts for Hurricane Harvey survivors in Texas, building capacity with Indian NGOs who aid flooded communities, verifying credible voices after major events, or sending prompts to French citizens in the wake of the November 2015 terrorist attacks in Paris, our goal is to support people in times of crisis and show them what matters most.

Over the past few months, we’ve refined our tools, improved the speed of our response, and identified areas where we can improve. In light of the horrific attack at YouTube headquarters this week, we’re sharing more detail on how we’re tackling an especially difficult and volatile challenge: our response to people who are deliberately manipulating the conversation on Twitter in the immediate aftermath of tragedies like this. In the 24 hours since this incident was first reported on Twitter there were more than 1.3 million Tweets about it, reflecting just how quickly these conversations unfold and the sheer volume of information that’s shared.

What Happened

When information from the shooting at YouTube HQ started to appear on Twitter, we saw credible and relevant information from individuals and news organizations. We also saw accounts deliberately sharing deceptive, malicious information, sometimes organizing on other networks to do so.

There are various policies we rely on in these types of situations, but we do not have a policy under which Twitter validates content authenticity or accuracy. As we’ve previously shared, we strongly believe Twitter should not be the arbiter of truth. However, we do see information shared on the service in a way that violates many of our existing policies, as happened in the immediate aftermath of the attack in San Bruno. During these types of situations, some of the ways we evaluate content include:

  • Is the content posted to harass or abuse another person, violating our rules on abusive behavior?
  • Is this meant to incite fear against a protected category as outlined in our hateful conduct policy?
  • Could misrepresenting someone in this way cause real-world harm to the person who is targeted per our rules on violent threats?
  • Is this account attempting to manipulate or disrupt the conversation and violating our rules against spam?
  • Can we detect if this account owner has been previously suspended? As outlined in our range of enforcement options, when someone is suspended from Twitter, the former account owner is not allowed to create new accounts.

Applying these policies and enforcement options to the conversation surrounding the YouTube shooting, we immediately began requiring account owners to remove Tweets — many within minutes of their creation — for violating our policies on abusive behavior. We also suspended hundreds of accounts for harassing others or purposely manipulating conversations about the event. Immediately following a crisis, we rapidly implement proactive, automated systems to prevent people who have previously been suspended from creating additional accounts to spam or harass others, and to help surface potentially violating Tweets and accounts to our team for review.

While we were removing Tweets and accounts that broke our rules, our team was also focused on identifying and surfacing relevant and credible content people could trust. Moments highlighting reliable information were available in 16 countries and in five different languages — many within 10 minutes of the first Tweets — and also surfaced within top trends related to the situation. Throughout the incident and the following day, we continued to surface updates as they became available.


Where We Go From Here

This work is ongoing. We are continuing to explore and invest in what more we can do with our technology, enforcement options, and policies — not just in the U.S., but for everyone we serve around the world. Initial ideas include making better use of our technology to catch people working to evade a suspension and to identify malicious, automated accounts, as well as more quickly activating our team to ensure that a human review element remains present within all of our automated processes.

We're committed to continuing to improve and to holding ourselves accountable as we work to make Twitter better for everyone. We’re looking forward to sharing more soon.
