Twitter’s 13th biannual Transparency Report represents our ongoing commitment to increasing the availability of critical information and data on how we handle legal requests and other content issues.
While the report originally focused on requests submitted to Twitter by government actors, such as court orders for information or content removal, over the years we have worked to expand it to include more detail about the actions we take when enforcing the Twitter Rules.
In this latest report, we have continued this evolution and are now including details about the enforcement of a number of key content policies. We have also added a new section covering platform manipulation, which you can read more about below and in the report itself.
Internet freedom and online expression
Internet freedom and online expression remain under significant pressure and constraint, a trend we have observed across recent reports. The latest Twitter Transparency Report shows that Twitter received approximately 80% more global legal demands, affecting more than twice as many accounts as in the previous reporting period. As in the last reporting period, roughly 87% of the total global volume originated from just two countries: Russia and Turkey.
Twitter also received 10% more government information requests (combined emergency disclosure requests and non-emergency requests), which is the largest percentage increase since our July-December 2015 report.
The new Twitter Rules enforcement
The new Twitter Rules enforcement section within the Transparency Report is a significant milestone on our transparency journey. It provides data and insights into the following areas of our enforcement approach: abuse, hateful conduct, private information, child sexual exploitation, sensitive media, and violent threats.
These categories represent some of the most common reports that Twitter receives, and our intention is to include more categories in future reports.
This data does not take into account content that has been actioned using technological tools, the goal of which is to limit the reach and spread of potentially abusive content. In addition, not every report we receive is actionable. For example, we receive a large volume of false reports that bad actors submit in a coordinated effort to undermine our enforcement capabilities. Furthermore, many reports that did not fall into these core categories were actioned under other policies, such as impersonation, unlawful use, and evasion of suspension.
With each successive report, we are committed to evolving our approach to make it more comprehensive.
Aggregating actionable data and insights, and presenting them clearly to the widest possible audience, takes time; doing so is our goal with every report. As such, we are continuously working to ensure the material we provide is clear, contextual, and meaningful. We strive to improve how we measure and report accurately, and we look forward to expanding our transparency in this area as we work to improve the health of the public conversation.
Platform manipulation
This also marks the first Twitter Transparency Report in which we are publishing metrics pertaining to our actions to fight spam and other malicious forms of automation. This builds on our recent work to disclose a full database of previously removed content and accounts that had potential links to state-backed information operations.
As we discussed in our June 2018 blog post, we are making strong progress in this area. We challenge millions of potentially spammy accounts every month, requesting additional details, such as an email address or phone number, to authenticate them. From January to June 2018, approximately 75% of the accounts we challenged ultimately failed those challenges and were suspended.
At the same time, the number of reports we received through our reporting flow continued to drop, from an average of approximately 868,349 in January to approximately 504,259 in June. This decline indicates the effectiveness of our proprietary, purpose-built technology in proactively identifying and challenging accounts at the source and at scale. We have also built a new, refined reporting flow so account holders can give us more signals to augment our proactive enforcement strategies.
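To put that decline in perspective, the figures above work out to a drop of roughly 42% over six months. A minimal Python sketch of the arithmetic, using only the numbers quoted in this report:

# Illustrative arithmetic based on the report figures above; not an
# official Twitter calculation.
january_reports = 868_349
june_reports = 504_259

decline = january_reports - june_reports
percent_decline = decline / january_reports * 100

print(f"Reports fell by {decline:,} ({percent_decline:.0f}%) between January and June.")
# Output: Reports fell by 364,090 (42%) between January and June.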
Twitter does not tolerate any material that features or promotes child sexual exploitation, whether in Direct Messages or elsewhere on the service. This includes media, text, illustrations, and computer-generated images. When we remove such content, we immediately report it to the National Center for Missing and Exploited Children (NCMEC). NCMEC makes reports available to the appropriate law enforcement agencies around the world to facilitate investigations and prosecutions.
In the reporting period of January 1, 2018, to June 30, 2018, we suspended a total of 487,363 accounts for violations related to child sexual exploitation. 97% of those accounts were flagged proactively by a combination of PhotoDNA and other purpose-built, internal proprietary tools.
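PhotoDNA itself is proprietary, but the general approach it represents, matching uploads against a database of signatures of known violating material, can be sketched generically. The Python example below uses an ordinary cryptographic hash purely as a stand-in; a real system such as PhotoDNA relies on perceptual signatures that still match after resizing or re-encoding:

import hashlib

# Hypothetical signature database of known violating media; in practice
# this is a large hash list shared across industry and with NCMEC.
KNOWN_SIGNATURES = {
    "9f2b8c...",  # placeholder entry
}

def signature(image_bytes: bytes) -> str:
    # Stand-in for a perceptual hash: a cryptographic hash only matches
    # byte-identical files, whereas PhotoDNA-style signatures also match
    # visually similar copies.
    return hashlib.sha256(image_bytes).hexdigest()

def should_flag(image_bytes: bytes) -> bool:
    # Proactive detection: check the upload against known signatures
    # before any user report is filed.
    return signature(image_bytes) in KNOWN_SIGNATURES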
We will continue to aggressively fight online child sexual abuse and to invest in the technology and tools that are essential to our zero-tolerance enforcement policy. We also welcome partnerships with law enforcement around the world to support our rigorous and significantly scaled operations against this type of egregious content.
Removing terrorist content
We continue our efforts to eradicate content from our platform that violates the Twitter Rules' prohibition on the promotion of terrorism. We suspended a total of 205,156 accounts under this policy in the period of January 1, 2018, through June 30, 2018. Of those suspensions, 91% were accounts proactively flagged by internal, proprietary tools. Continuing a trend we have seen for some time, the number of reports of terrorist content we received from governments decreased by 77% compared to the previous reporting period. As a result of the scale of our technological approach, government reports now constitute less than 0.1% of all suspensions in this reporting period.
The full Twitter Transparency Report with updated details and data covering January through June 2018 can be found here. As part of our longstanding commitment to #transparency, we will continue to evolve and expand this report, and we look forward to sharing these updates in the future.