As a regular user of Twitter, I've felt the waves of frustration wash over me these past couple of weeks as the service has been hit by one outage after another. This led me to start pondering the problem space [especially as it relates to what I'm currently working on at work] and to conclude that the service must have some serious architectural flaws, which have nothing to do with the reason usually thrown about by non-technical pundits (i.e. Ruby on Rails is to blame).
Some of my suspicions were confirmed by a recent post on the Twitter developer blog entitled Twittering about architecture, which contains the following excerpt:
Twitter is, fundamentally, a messaging system. Twitter was not architected as a messaging system, however. For expediency's sake, Twitter was built with technologies and practices that are more appropriate to a content management system. Over the last year and a half we've tried to make our system behave like a messaging system as much as possible, but that's introduced a great deal of complexity and unpredictability. When we're in crisis mode, adding more instrumentation to help us navigate the web of interdependencies in our current architecture is often our primary recourse. This is, clearly, not optimal.
Our direction going forward is to replace our existing system, component-by-component, with parts that are designed from the ground up to meet the requirements that have emerged as Twitter has grown.
Given that Twitter has some unique requirements that would put a number of off-the-shelf and even custom messaging applications to the test, it is shocking that it isn't even architected as a messaging system. On the other hand, this makes sense when you consider that the founders' background was in building a blogging tool and that they originally intended Twitter to be a "micro" blogging service.
If Twitter were simply a micro-content publishing tool with push notifications for SMS and IM, then the team wouldn't be faulted for designing it as a Content Management System (CMS). In that case you'd just need three basic data structures.
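To make that concrete, here is a minimal sketch of what those three structures might look like for a CMS-style design. This is purely illustrative; the names and fields are my assumptions, not Twitter's actual schema:

```python
from dataclasses import dataclass

# Hypothetical CMS-style structures for a simple micro-content publishing tool:
# who the users are, what they posted, and where to push notifications.

@dataclass
class User:
    user_id: int
    username: str

@dataclass
class Post:
    post_id: int
    author_id: int          # User.user_id of the author
    text: str               # the 140-character update

@dataclass
class Subscription:
    user_id: int            # who receives the notification
    author_id: int          # whose posts they subscribe to
    channel: str            # "sms", "im", or "rss"
    address: str            # phone number, IM handle, or feed URL
```

Publishing would then just be an insert into the posts store followed by one notification per matching subscription; there is no per-user timeline to maintain.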
Unfortunately, Twitter isn't just a blogging tool that allows people to subscribe to my posts via SMS & IM instead of just RSS. It also has the notion of followers, and that's when things get hairy. Isreal over at AssetBar had a great post about this entitled Twitter-proxy: Any Interest?, in which he wrote:
Consider the messaging problem:

Nothing is as easy as it looks. When Robert Scoble writes a simple “I’m hanging out with…” message, Twitter has about two choices of how they can dispatch that message:

1. PUSH the message to the queues of each of his 6,864 followers, or
2. Wait for the 6,864 followers to log in, then PULL the message.
The trouble with #2 is that people like Robert also follow 6,800 people. And it’s unacceptable for him to login and then have to wait for the system to open records on 6,800 people (across multiple db shards), then sort the records by date and finally render the data. Users would be hating on the HUGE latency.
So, the twitter model is almost certainly #1. Robert’s message is copied (or pre-fetched) to 6,864 users, so when those users open their page/client, Scoble’s message is right there, waiting for them. The users are loving the speed, but Twitter is hating on the writes. All of the writes.
How many writes? A 6000X multiplication factor:
Do you see a scaling problem with this scenario?
Scoble writes something–boom–6,800 writes are kicked off. 1 for each follower.
Michael Arrington replies–boom–another 6,600 writes.
Jason Calacanis jumps in–boom–another 6,500 writes.
Beyond the 19,900 writes, there’s a lot of additional overhead too. You have to hit a DB to figure out who the 19,900 followers are. Read, read, read. Then possibly hit another DB to find out which shard they live on. Read, read, read. Then you make a connection and write to that DB host, and on success, go back and mark the update as successful. Depending on the details of their messaging system, all the overhead of lookup and accounting could be an even bigger task than the 19,900 reads + 19,900 writes. Do you even want to think about the replication issues (multiply by 2 or 3)? Watch out for locking, too.
And here’s the kicker: that giant processing & delivery effort–possibly a combined 100K disk IOs–was caused by 3 users, each just sending one, tiny, 140 char message. How innocent it all seemed.
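The arithmetic in the quote is easy to sanity-check. A rough back-of-the-envelope calculation, using the follower counts as cited above:

```python
# Back-of-the-envelope check of the figures quoted above.
followers = {"scoble": 6_800, "arrington": 6_600, "calacanis": 6_500}

timeline_writes = sum(followers.values())        # one write per follower per message
follower_reads = timeline_writes                 # roughly one lookup/read per write
replication = 2.5                                # "multiply by 2 or 3" for replicas

print(timeline_writes)                                        # 19900
print(int((timeline_writes + follower_reads) * replication))  # ~99500, i.e. ~100K disk IOs
```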
Not only does Isreal's post accurately describe the problem with the logical model for Twitter's "followers" feature, it looks like he may have also nailed the details of their implementation, which would explain the significant issues they've had scaling the site. The problem is that if you naively implement a design that simply reflects the problem statement then you will be in disk I/O hell. It won't matter if you are using Ruby on Rails, Cobol on Cogs, C++ or hand coded assembly; the read & write load will kill you.
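To see why, here is a deliberately naive sketch of the fan-out-on-write approach described above. Everything in it is hypothetical (in-memory dictionaries standing in for database tables); the point is the I/O pattern, not Twitter's actual code:

```python
from collections import defaultdict

# In-memory stand-ins for what would be database tables in a real system.
followers = defaultdict(set)    # author_id -> set of follower_ids
timelines = defaultdict(list)   # user_id   -> messages already copied ("delivered") to them

def follow(follower_id: int, author_id: int) -> None:
    followers[author_id].add(follower_id)

def publish(author_id: int, text: str) -> int:
    """Naive fan-out on write: copy the message into every follower's timeline."""
    writes = 0
    for follower_id in followers[author_id]:              # read: who follows the author?
        timelines[follower_id].append((author_id, text))  # one write per follower
        writes += 1
    return writes

# One popular author is enough to see the amplification.
for uid in range(2, 6_802):
    follow(uid, 1)

print(publish(1, "I'm hanging out with..."))              # 6800 writes for one 140-char message
```

In a real deployment each of those appends is a disk write on some shard, plus the reads needed to find the followers and their shards in the first place, which is exactly the multiplication factor Isreal describes.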
This leads me to my new mantra, which I've stolen from Jim Gray via Pat Helland: DISK IS THE NEW TAPE.
In addition, the fact that the Twitter folks decided not to cap the number of followers or the number of people a user can follow may have saved them from the kind of flames Robert Scoble directs at Facebook for its 5,000 friend limit, but it also means they have to support both users with thousands of followers and users who follow thousands of people [both of which would be optimized in completely different ways]. Clearly a lot of feature decisions have been made on that product without thought to how they impact the scalability of the service.
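For what it's worth, one common way to handle those two extremes (and I'm not claiming this is what Twitter ended up doing) is a hybrid: fan out on write for ordinary authors, but fall back to pulling and merging at read time for authors with huge follower counts. A minimal sketch, with an arbitrary threshold:

```python
from collections import defaultdict

FANOUT_LIMIT = 10_000            # illustrative threshold, not a real Twitter number

followers = defaultdict(set)     # author_id -> follower_ids
following = defaultdict(set)     # user_id   -> author_ids they follow
tweets = defaultdict(list)       # author_id -> that author's own messages
timelines = defaultdict(list)    # user_id   -> messages pushed to them at write time

def follow(follower_id: int, author_id: int) -> None:
    followers[author_id].add(follower_id)
    following[follower_id].add(author_id)

def publish(author_id: int, text: str) -> None:
    tweets[author_id].append(text)
    if len(followers[author_id]) <= FANOUT_LIMIT:            # cheap case: push now
        for follower_id in followers[author_id]:
            timelines[follower_id].append((author_id, text))
    # expensive case: do nothing now; followers pull from this author at read time

def read_timeline(user_id: int) -> list:
    merged = list(timelines[user_id])                        # already-pushed messages
    for author_id in following[user_id]:
        if len(followers[author_id]) > FANOUT_LIMIT:         # pull from high-fan-out authors
            merged.extend((author_id, t) for t in tweets[author_id])
    return merged
```

The write path stays cheap for the celebrity case and the read path stays cheap for everyone who only follows ordinary users, at the cost of a more complicated read-time merge.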
PS: If this problem space sounds interesting to you, we're hiring. I'm specifically looking for good operations folks. Shoot me a resume at dareo@msft.com - replace msft.com with microsoft.com
Now Playing: Rick Ross - Maybach Music (featuring Jay-Z)