Today, we’re open sourcing Twemproxy, a fast, lightweight proxy for the memcached protocol. Twemproxy was primarily developed to reduce open connections to our cache servers. At Twitter, we have thousands of front-end application servers, and we run multiple Unicorn instances on each one. Each instance establishes at least one connection to every cache server. As we add front-ends and increase the number of Unicorns per front-end, this connection count continues to grow. As a result, the performance of each cache server degrades due to per-connection overhead.
We use Twemproxy to bound the number of connections on the cache servers by deploying it as a local proxy on each of the front-ends. The Unicorns on each front-end connect to the cache servers through the local proxy, which multiplexes and pipelines the front-end requests over a single connection to each cache server. This reduces the connection count on each cache server by roughly a factor of the number of Unicorn instances per front-end.
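To give a concrete sense of this deployment, here is a minimal sketch of what a local Twemproxy (nutcracker) pool definition might look like. The pool name, listen port, tuning values, and cache server addresses below are illustrative, not our production settings.

    web_cache:
      listen: 127.0.0.1:22121      # Unicorns on this host connect here instead of to the cache servers
      hash: fnv1a_64               # key hashing function
      distribution: ketama         # consistent hashing across the server pool
      timeout: 400                 # per-request timeout, in milliseconds
      auto_eject_hosts: true       # temporarily eject servers that keep failing
      server_retry_timeout: 30000  # milliseconds before retrying an ejected server
      server_failure_limit: 3      # consecutive failures before a server is ejected
      servers:                     # cache servers, each as host:port:weight
        - 10.0.0.1:11211:1
        - 10.0.0.2:11211:1

With a pool like this in place, each Unicorn simply speaks the standard memcached protocol to the proxy on 127.0.0.1, and the proxy maintains the single multiplexed, pipelined connection to each cache server on its behalf.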
We have been running Twemproxy in production for six months, during which time we’ve increased the number of front-ends by 30%. Twemproxy has been instrumental in allowing front-end and cache server pools to scale independently as capacity requirements change.
Twemproxy can:

- maintain a single persistent connection to each cache server,
- multiplex and pipeline requests from many local clients over that connection, and
- bound the total number of connections each cache server has to handle.
Feel free to use, fork, and contribute back to Twemproxy. If you have any questions, file an issue on GitHub.