Building rules in public: Our approach to synthetic & manipulated media

Tuesday, 4 February 2020

It’s our responsibility to create rules on Twitter that are fair and set clear expectations for everyone on our service. That’s why we announced our plan last fall to once again seek input from around the globe on how we will address synthetic and manipulated media. Today, we’re sharing what we learned, how it shaped the update to the Twitter Rules, how we’ll treat this content when we identify it, and something new you’ll see on Twitter as part of this change.

What did we learn?
Through a survey on our initial draft of this rule, as well as Tweets that included the hashtag #TwitterPolicyFeedback, we gathered more than 6,500 responses from people around the world. We also consulted with a diverse, global group of civil society and academic experts on our draft approach. Overall, people recognize the threat that misleading altered media poses and want Twitter to do something about it. Here are some of the top-line findings:

  • Twitter should give me more information: Globally, more than 70 percent of people who use Twitter said “taking no action” on misleading altered media would be unacceptable. Respondents were nearly unanimous in their support for Twitter providing additional information or context on Tweets that have this type of media.
  • This type of content should be labeled: Nearly 9 out of 10 individuals said placing warning labels next to significantly altered content would be acceptable. About as many said it would be acceptable to alert people before they Tweet misleading altered media.

    Compared to placing warning labels, respondents were somewhat less supportive of removing or hiding Tweets that contained misleading altered media. For example, 55 percent of those surveyed in the US said it would be acceptable to remove all such media. When asked for their open-ended thoughts on the proposed rule, people who opposed removing all altered media raised concerns about censorship and the impact on free expression.
  • If it is likely to cause harm, it should be removed: More than 90 percent of people who shared feedback support Twitter removing this content when it’s clear that it is intended to cause certain types of harm.
  • There should be enforcement action when sharing this content: More than 75 percent of people believe accounts that share misleading altered media should face enforcement action. Enforcement actions could include requiring people to delete their Tweet or suspending their account.

What’s the new rule?
You may not deceptively share synthetic or manipulated media that are likely to cause harm. In addition, we may label Tweets containing synthetic and manipulated media to help people understand the media’s authenticity and to provide additional context.

We’ll use the following criteria to consider Tweets and media for labeling or removal under this rule:

  1. Are the media synthetic or manipulated?

    In determining whether media have been significantly and deceptively altered or fabricated, some factors we consider include: 

    —Whether the content has been substantially edited in a manner that fundamentally alters its composition, sequence, timing, or framing;
    —Whether any visual or auditory information (such as new video frames, overdubbed audio, or modified subtitles) has been added or removed; and
    —Whether media depicting a real person have been fabricated or simulated.
  2. Are the media shared in a deceptive manner?

    We’ll also consider whether the context in which media are shared could result in confusion or misunderstanding or suggests a deliberate intent to deceive people about the nature or origin of the content, for example by falsely claiming that it depicts reality.

    We also assess the context provided alongside media, for example:

    —The text of the Tweet accompanying or within the media
    —Metadata associated with the media 
    —Information on the profile of the person sharing the media
    —Websites linked in the profile of the person sharing the media, or in the Tweet sharing the media
  3. Is the content likely to impact public safety or cause serious harm?

    Tweets that share synthetic and manipulated media are subject to removal under this policy if they are likely to cause harm. Some specific harms we consider include:

    —Threats to the physical safety of a person or group
    —Risk of mass violence or widespread civil unrest
    —Threats to the privacy or ability of a person or group to freely express themselves or participate in civic events, such as: stalking or unwanted and obsessive attention; targeted content that includes tropes, epithets, or material that aims to silence someone; and voter suppression or intimidation
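Taken together, the three criteria above act roughly as a decision procedure: the first gates whether the rule applies at all, and the other two determine whether a Tweet is labeled or removed. The sketch below is purely illustrative (the function name and the boolean inputs are our own shorthand, not an actual system); real enforcement involves human review and far more nuance than three flags.

```python
def enforcement_outcome(is_manipulated: bool,
                        shared_deceptively: bool,
                        likely_to_harm: bool) -> str:
    """Illustrative sketch, not an implementation: how the policy's
    three criteria could combine into an enforcement outcome."""
    if not is_manipulated:
        # Criterion 1 fails: the media are not significantly and
        # deceptively altered or fabricated, so this rule does not apply.
        return "no action"
    if shared_deceptively and likely_to_harm:
        # Deceptively shared AND likely to cause serious harm:
        # the Tweet is subject to removal under this policy.
        return "remove"
    # Otherwise, significantly altered media may be labeled
    # to provide additional context.
    return "label"
```

Under this reading, harm alone is not enough to trigger removal; the media must also be shared in a deceptive manner, which matches the rule's wording ("deceptively share synthetic or manipulated media that are likely to cause harm").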

What will this look like on Twitter?
If we believe that media shared in a Tweet have been significantly and deceptively altered or fabricated, we will provide additional context on the Tweet. This means we may:

  • Apply a label to the Tweet;
  • Show a warning to people before they Retweet or like the Tweet;
  • Reduce the visibility of the Tweet on Twitter and/or prevent it from being recommended; and/or
  • Provide additional explanations or clarifications, as available, such as a landing page with more context.

In most cases, we will take all of the above actions on Tweets we label.


Our teams will start labeling Tweets with this type of media on March 5, 2020.

This will be a challenge, and we will make errors along the way; we appreciate your patience. But we’re committed to doing this right. Updating our rules in public and with democratic participation will continue to be core to our approach.

We’re working to serve the public conversation, and to do that work openly, together with the people who use our service.
