Introducing our Responsible Machine Learning Initiative

Wednesday, 14 April 2021

The journey to responsible, responsive, and community-driven machine learning (ML) systems is a collaborative one. Today, we want to share more about the work we’ve been doing to improve our ML algorithms at Twitter, and our path forward through a company-wide initiative called Responsible ML.

Responsible ML consists of the following pillars:

  • Taking responsibility for our algorithmic decisions
  • Ensuring equity and fairness of outcomes
  • Being transparent about our decisions and how we arrived at them
  • Enabling agency and algorithmic choice

Using technology responsibly includes studying the effects it can have over time. When Twitter uses ML, it can impact hundreds of millions of Tweets per day, and sometimes a system designed to help can start to behave differently than intended. These subtle shifts can then start to impact the people using Twitter, and we want to make sure we’re studying those changes and using them to build a better product.

Who’s involved and the actions we're taking

Technical solutions alone do not resolve the potential harmful effects of algorithmic decisions. Our Responsible ML working group is interdisciplinary and is made up of people from across the company, including technical, research, trust and safety, and product teams. 

Leading this work is our ML Ethics, Transparency and Accountability (META) team: a dedicated group of engineers, researchers, and data scientists collaborating across the company to assess downstream or current unintentional harms in the algorithms we use and to help Twitter prioritize which issues to tackle first.

Here’s how we’re approaching this initiative:

Researching and understanding the impact of ML decisions. We’re conducting in-depth analysis and studies to assess the existence of potential harms in the algorithms we use. Here are some analyses you will have access to in the upcoming months:

  • A gender and racial bias analysis of our image cropping (saliency) algorithm
  • A fairness assessment of our Home timeline recommendations across racial subgroups
  • An analysis of content recommendations for different political ideologies across seven countries 
Applying our learnings to improve Twitter. The most impactful applications of responsible ML will come from how we apply our learnings to build a better Twitter. The META team studies how our systems work and uses those findings to improve the experience people have on Twitter. This may result in changes to our product, such as removing an algorithm and giving people more control over the images they Tweet, or in new standards for how we design and build policies when they have an outsized impact on one particular community. The results of this work may not always translate into visible product changes, but they will lead to heightened awareness and important discussions about the way we build and apply ML.

We’re also building explainable ML solutions so you can better understand our algorithms, what informs them, and how they impact what you see on Twitter. Similarly, algorithmic choice will allow people to have more input and control in shaping what they want Twitter to be for them. We’re currently in the early stages of exploring this and will share more soon.

Sharing our learnings and asking for feedback. Both inside and outside of Twitter, we will share our learnings and best practices to improve the industry’s collective understanding of this topic, help us improve our approach, and hold us accountable. This may come in the form of peer-reviewed research, data insights, high-level descriptions of our findings or approaches, and even some of our unsuccessful attempts to address these emerging challenges. We’ll continue to work closely with third-party academic researchers to identify ways we can improve our work, and we encourage their feedback.

The public plays a critical role in shaping Twitter, and Responsible ML is no different. Public feedback is particularly important as we assess the fairness and equity of the automated systems we use. Better, more informed decisions are made when the people who use Twitter are part of the process, and we’re looking to create more opportunities for people to share their thoughts on how ML is used on Twitter.

What’s Next?

Responsible ML is a long journey, and we’re in its early days. We want to explore it with a spirit of openness, with the goal of contributing positively to the field of technology ethics. If you have any questions about Responsible ML, or the work META is doing, feel free to ask us using #AskTwitterMETA. If you’d like to help, join us.

