Transparency is core to the work we do at Twitter.
The open nature of our service has led to unprecedented challenges around protecting freedom of expression and privacy rights as governments around the world increasingly attempt to intervene in this open exchange of information. We believe that transparency is a key principle in our mission to protect the Open Internet, and advancing the Internet as a global force for good.
The fundamental belief in the power of open, public conversation inspired Twitter to launch one of the industry's first transparency reports back in 2012. At that time, our goal was to provide the public with regular insights into government pressure that impacted the public, whether through overt attempts at political censorship or by way of soliciting account data through information requests.
The world has changed significantly since 2012. In 2020, it is more important than ever that we shine a light on our own practices, including how we enforce the Twitter Rules, our ongoing work to disrupt global state-backed information operations, and the increased attempts by governments to request information about account holders.
Our new Twitter Transparency Center
We have reimagined and rebuilt our biannual Twitter Transparency Report site to become a comprehensive Twitter Transparency Center. Our goal with this evolution is to make our transparency reporting more easily understood and accessible to the general public.
What’s new?
Reports will soon also be published in Arabic, Turkish, Spanish, German, French, Japanese, and Portuguese, and we are continuing to iterate on the process to further contextualize the data.
Our work to increase transparency across the company is tireless and constant. We will continue to work on increasing awareness and understanding of how our policies work and of our practices around content moderation, data disclosures, and other critical areas. In addition, we will take every opportunity to highlight the actions of law enforcement, governments, and other organizations that impact Twitter and the people who use our service across the world.
The data
The latest data reflects the period July 1 to December 31, 2019. We endeavor to release this material as soon as possible every six months, but this release was delayed by a number of factors, including the COVID-19 pandemic and the work of getting the new Twitter Transparency Center up and running. The next update to the data will cover the period of January - June 2020.
Our work on information operations:
Our archive of state-backed information operations is updated on a rolling basis after we identify and remove them from Twitter. We have also increased our cadence of disclosures, recently sharing our largest disclosure to date with 32,242 accounts added to the archive.
This archive, used by researchers, journalists and experts around the world, now spans more than 9 terabytes of media, includes over 83,000 accounts, and over 200 million Tweets and is an industry-first resource. We’ve now released datasets of information operations originating in more than 15 countries, offering researchers unique insight into how information operations unfold on the service.
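The archive described above is distributed as large data exports that researchers analyze offline. As a minimal sketch of that kind of analysis, the snippet below tallies Tweets per account in a CSV export; the column names and sample rows here are hypothetical stand-ins, not the archive's actual schema.

```python
import csv
import io
from collections import Counter

# Hypothetical sample rows mimicking a dataset export's CSV layout.
# The real archive files use their own column names and many more fields.
sample_csv = """userid,tweet_text,tweet_time
hashed_user_1,example tweet one,2019-07-01
hashed_user_1,example tweet two,2019-07-02
hashed_user_2,example tweet three,2019-07-03
"""

def tweets_per_account(csv_text):
    """Count Tweets per (hashed) account id in a dataset export."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return Counter(row["userid"] for row in reader)

counts = tweets_per_account(sample_csv)
print(counts.most_common())
```

A researcher working with the real files would stream them from disk rather than from a string, but the aggregation step is the same.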
We’re also expanding how we work with partners in the research community to improve understanding of information operations and disinformation. Earlier this year we strengthened our partnership with two leading research institutions — the Australian Strategic Policy Institute (ASPI) and the Stanford Internet Observatory — to enable their analysis and review of data related to our disclosures.
We also hosted our first ever #InfoOps2020 conference in partnership with Carnegie's Partnership for Countering Influence Operations. The event brought together academic experts, industry, and government to discuss opportunities for collaboration and research on IO and support an open exchange of ideas between Twitter and the research community.
Platform Manipulation:
Our blog from earlier this year gives a thorough explanation of our proactive work to counter platform manipulation across the service and the common misconceptions around so-called 'bots' on Twitter. Our policies in this area focus on behavior, not content, and are written in a way that targets the spammy tactics different people or groups could use to try to manipulate the public conversation on Twitter.
Continuing a year-on-year trend, our proactive detection of this behavior has resulted in an almost 10% reduction in anti-spam challenges, e.g. when we ask people to provide a phone number or email address, or to complete a reCAPTCHA challenge, to verify there is a human behind an account.
Terrorism & violent extremism:
The Twitter Rules prohibit the promotion of terrorism and violent extremism. Action was taken on 86,799 unique accounts under this policy during this reporting period. 74% of the unique accounts were proactively suspended using our internal, proprietary tools. We continue our close partnership with our peers as part of the Christchurch Call to Action and are committed to eradicating the presence of violent extremist content across our respective services.
Child sexual exploitation:
We do not tolerate child sexual exploitation on Twitter. Child sexual exploitation (CSE), including links to images of or content promoting child exploitation, is removed from the site without further notice and reported to The National Center for Missing & Exploited Children (NCMEC). People can report content that appears to violate the Twitter Rules regarding child sexual exploitation via our web form, and we also investigate CSE reports surfaced through various in-app reporting flows. There were 257,768 unique accounts suspended during this reporting period for violating Twitter policies prohibiting child sexual exploitation. 84% of those unique accounts were proactively suspended using a combination of technologies (including PhotoDNA and internal, proprietary tools).
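Technologies like PhotoDNA work by matching the fingerprint of uploaded media against a database of fingerprints of known prohibited content. PhotoDNA itself is a proprietary perceptual hash that is robust to resizing and recompression; the sketch below substitutes an exact SHA-256 digest purely to illustrate the match-against-known-hashes workflow, and the hash set and byte strings are made up for the example.

```python
import hashlib

# Hypothetical set of digests of known prohibited media. A real system
# would use perceptual hashes (e.g. PhotoDNA) with fuzzy matching,
# not exact cryptographic digests.
KNOWN_HASHES = {hashlib.sha256(b"known-prohibited-bytes").hexdigest()}

def matches_known_content(media_bytes):
    """Return True if the media's digest appears in the known-hash set."""
    return hashlib.sha256(media_bytes).hexdigest() in KNOWN_HASHES

print(matches_known_content(b"known-prohibited-bytes"))  # True
print(matches_known_content(b"unrelated media bytes"))   # False
```

The key design point is that matching happens against hashes, so the prohibited material itself never needs to be stored or viewed during routine scanning.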
Twitter Rules enforcement:
For the first time, we are expanding the scope of this section to better align with the Twitter Rules, and sharing more granular data on violated policies. This is in line with best practices under the Santa Clara Principles on Transparency and Accountability in Content Moderation.
Due to our increased focus on proactively surfacing violative content for human review, more granular policies, better reporting tools, and the introduction of more data across twelve distinct policy areas, we have seen a 47% increase in accounts locked or suspended for violating the Twitter Rules. This work is never static, and these numbers will fluctuate as we improve and as the challenge evolves. The increase also reflects a trend we've observed across our recent Twitter Transparency Reports, as we step up the level of proactive enforcement across the service and invest in technological solutions to respond to the changing characteristics of bad-faith activity on our service.
Legal requests:
In addition to enforcing the Twitter Rules, we also may take action in response to legal requests.
Information requests (legal requests for account information):
Removal requests (legal requests for content removal)*:
Copyright & trademark actions:
This report reflects not only the evolution of the public conversation on our service but the work we do every day to protect and support the people who use Twitter.
Follow @Policy and @TwitterSafety for continued updates on the changes we make across the company to drive meaningful and intuitive transparency.
*Unless prohibited from doing so, we continue to publish legal requests when we take action directly to the Lumen Database, a partnership with Harvard’s Berkman Klein Center for Internet & Society.