The term “the real-time web” has become a popular way to describe burgeoning trends and technologies related to consuming web content as soon as it is created. However, like popular buzz phrases such as “service-oriented architecture” and “Web 2.0” that came before it, it is often difficult to tell where the technical details end and where the hype begins. Given that this trend represents a fundamental shift in how many users interact with the web, it is a good idea for developers to have a clear understanding of the key concepts and implementation options available to them as they bring their applications into the real-time web.
When people talk about real-time web technologies, they are usually talking about one or more of the following features:
Refreshing a web page as new updates become available without reloading the page. A good example of this is seen when performing a search on Twitter: a yellow bar is displayed showing a constantly updated count of the tweets that have been posted since you started searching.
Receiving notifications of content updates as soon as they happen instead of polling for them. This is primarily about moving away from RSS’s model of polling every few minutes or hours and instead having an endpoint that gets messages delivered as soon as they happen. An example of this is the fact that a user’s status updates from Twitter appear on FriendFeed within a second when the user has hooked up both services. Contrast that with how long it takes blog posts to show up in the typical RSS reader after they are published.
Some people consider the universe of status updates on sites like Facebook and Twitter to be the real-time web. For these people, the key interesting technology in this space is the ability to consume a never-ending feed of content from these sites (aka a firehose) and provide search functionality over this data.
We can now take a look at some of the underlying technologies that make some of these user experiences and scenarios possible.
Most web developers should be familiar with the concept of Asynchronous JavaScript and XML (AJAX), which enables the creation of dynamic web pages that can be partially updated without having to reload the site. A traditional AJAX interaction involves the user interacting with part of the page, the browser submitting a request to the server, and that part of the page then being rebuilt with the results of the request. However, it turns out that there are many situations where an application may want to update parts of a page without waiting for user interaction, such as displaying live stock tickers, instant messaging scenarios or showing feedback on an article as comments are posted. The set of approaches to solving this problem is typically described using the name COMET.
COMET typically refers to keeping a permanently open connection between a browser and a server using one of a number of techniques. One approach is the hidden iframe technique. With this technique you create an inline frame (i.e. an iframe) that is hidden from the user and then have the frame slowly filled with content as events occur on the server. This takes advantage of the fact that a browser will keep an open connection to the server as long as a page has not fully loaded. There’s a great example of what generating one of these invisible iframes looks like on the server side in the article How to implement COMET with PHP:
<?php
header("Cache-Control: no-cache, must-revalidate");
header("Expires: Mon, 26 Jul 1997 05:00:00 GMT");
flush();
?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN" "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
  <title>Comet php backend</title>
  <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
</head>
<body>
<script type="text/javascript">
  // KHTML browsers don't share javascripts between iframes
  var is_khtml = navigator.appName.match("Konqueror") || navigator.appVersion.match("KHTML");
  if (is_khtml) {
    var prototypejs = document.createElement('script');
    prototypejs.setAttribute('type','text/javascript');
    prototypejs.setAttribute('src','prototype.js');
    var head = document.getElementsByTagName('head');
    head[0].appendChild(prototypejs);
  }
  // load the comet object
  var comet = window.parent.comet;
</script>
<?php
while(1) {
  echo '<script type="text/javascript">';
  echo 'comet.printServerTime('.time().');';
  echo '</script>';
  flush(); // used to send the echoed data to the client
  sleep(1); // a little break to unload the server CPU
}
?>
</body>
</html>
As you can see from this example, the page never finishes rendering, which means the browser always has an open connection to the server. It should also be noted that the payload of each new event is the JavaScript function call that you want the browser to execute to update the relevant part of the page. This technique works in most common browsers since it uses established technologies like iframes and JavaScript. The main problem is that since it is somewhat of a hack, it is fairly opaque to web applications, which makes it hard to determine the current state of the communication between the browser and the server (e.g. for error handling).
Another common technique is long polling. With this approach the browser application makes an asynchronous request for data from the server, either using XMLHttpRequest or a script tag. Once data is returned, another request of the same type is made, which essentially keeps a permanently open connection between the browser and the server. This approach is often favored by developers because it reuses common AJAX techniques and doesn’t require any special client-side techniques. The main challenge with long polling and other COMET techniques is choosing a server-side framework that can solve the C10K problem. Specifically, traditional web servers are designed to handle short-lived connections between browsers and the server due to the request/response nature of HTTP. With COMET, we can have thousands to tens of thousands of browsers keeping an open connection to a server. To address this problem there are now a number of dedicated COMET application servers, with some of the more notable implementations being FriendFeed’s Tornado and Jetty.
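To make the flow concrete, here is a minimal sketch of what a long polling loop can look like in the browser. The /updates endpoint and the updatePage() function are made up for the example; the key point is that a new request is issued as soon as the previous one completes, so the server always has a pending request it can respond to the moment an event occurs.

// Minimal long polling loop (sketch). "/updates" and updatePage() are hypothetical.
function poll() {
  var xhr = new XMLHttpRequest();
  xhr.open("GET", "/updates", true);
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4) {
      if (xhr.status === 200) {
        updatePage(xhr.responseText); // apply the server's update to the page
      }
      poll(); // immediately issue the next request and wait for the next update
    }
  };
  xhr.send(null);
}
poll();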
The W3C’s HTML 5 working group is working on making COMET part of the next generation of HTML with the creation of the WebSockets specification. This formalizes the notion of an XMLHttpRequest-style object that can create a permanent bi-directional connection to the server without the hacks of long polling (i.e. re-establishing a connection whenever data is sent) and is also resistant to some of the issues long polling faces when going through HTTP intermediaries such as firewalls and proxy servers.
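The sketch below shows roughly what the browser side looks like with the proposed WebSocket API. The ws://example.com/updates endpoint and the message contents are hypothetical; what matters is that a single connection handles traffic in both directions without being re-established for each message.

// Sketch of the proposed HTML5 WebSocket API; endpoint and messages are made up.
var socket = new WebSocket("ws://example.com/updates");

socket.onopen = function () {
  socket.send("subscribe:comments"); // the client can push data to the server too
};

socket.onmessage = function (event) {
  console.log("update from server: " + event.data); // fired for each server push
};

socket.onclose = function () {
  console.log("connection closed");
};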
Content syndication using XML syndication technologies such as RSS and Atom is a key part of how many websites and web users consume content today. However, XML syndication is traditionally not a real-time experience because feed readers work by polling a feed at specific intervals, which could be anything from minutes to hours apart. Since polling is an inefficient way to get content updates, it isn’t feasible to get updates within seconds of when they happen without overloading the service that is being polled. This calls for new communication patterns where clients are notified of changes as they happen instead of having to poll for them with the time lag that this entails.
To solve this problem, a couple of Google employees have proposed the PubSubHubbub protocol (commonly abbreviated as PuSH) as a way to bring real-time notifications to content syndication on the Web. The workflow for a PuSH system is as follows: the publisher declares the hub it uses in its Atom or RSS feed; a subscriber fetches the feed, discovers the hub and sends it a subscription request containing the feed URL and a callback URL; the hub verifies the subscription by calling back the subscriber; when the publisher has new content it pings the hub; and the hub then fetches the updated feed and pushes the new or changed entries to every subscriber’s callback URL.
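To give a rough idea of what the subscription step looks like on the wire, a subscription request is a simple form-encoded HTTP POST to the hub. The hub, topic and callback URLs below are made up for illustration:

POST /subscribe HTTP/1.1
Host: hub.example.com
Content-Type: application/x-www-form-urlencoded

hub.mode=subscribe&hub.callback=http://subscriber.example.com/push-callback&hub.topic=http://publisher.example.com/feed.xml&hub.verify=async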
PubSubHubbub provides a more complete solution than other approaches such as Twitter’s firehose. With explicit subscription, services can use PuSH to get notified about updates that are either private or require authentication. With a firehose approach, all public content generated on the site is shared with all subscribers while private content is not provided, since it isn’t really feasible to cherry-pick which authenticated content should go to which consumer of the firehose.
There are already a number of sites producing and consuming PubSubHubbub feeds, including MySpace, LiveJournal, Google Reader, Tumblr and FriendFeed.
To many social media observers, the real-time web refers to the ecosystem that has sprung up around status updates on sites like Twitter and Facebook. Being able to analyze status updates as they occur, whether to gauge people’s sentiment about a news event or to detect breaking news, is a rapidly growing space with lots of players including Microsoft’s Bing, Tweetmeme and Sysomos among others. These services typically work by consuming a never-ending stream of updates from the target services and then processing the updates as they are received. Twitter calls this never-ending stream of updates a “firehose” and I’ll use this term to describe similar offerings from other services.
A firehose works similarly to the COMET servers described earlier. A client connects to a server via HTTP and then starts to receive a stream of updates in some structured format as they occur. An early version of such a firehose is the SixApart Update Stream, which is used to provide real-time feeds of changes to TypePad, Vox and [formerly] LiveJournal blogs to interested parties. From the SixApart developer documentation:
Connecting to the Stream

To connect to the stream a simple HTTP GET request is issued to the following endpoint: http://updates.sixapart.com/atom-stream.xml
Once a connection is established, the Atom Server will then begin transmitting to the client any content that is injected into the stream. Additionally, the Atom Stream Server transmits timestamps every second both to keep the connection alive (in case it goes idle), and to provide you a marker so you know how far you've gotten so you can reconnect at a certain point in time if you restart your listener.
Example Stream

GET /atom-stream.xml HTTP/1.0
Host: updates.sixapart.com

HTTP/1.0 200 OK
Content-Type: text/xml
Connection: close

<?xml version="1.0"?>
<time>1834721912342</time>
<time>1834721912372</time>
<time>1834721912402</time>
<feed>
...
</feed>
<feed>
...
</feed>
<sorryTooSlow youMissed="5" />
This is very similar to the Twitter Streaming API, the primary differences being the supported data formats (Twitter supports both JSON and XML output), support for filter predicates such as limiting the stream to posts containing references to “superbowl”, and the fact that Twitter also provides notifications of deleted status updates in the stream.
Applications that consume such streams need to be carefully coded so that they can handle falling behind and can resume from where they stopped if disconnected for any reason.
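As a rough illustration, here is a sketch of such a consumer written as a small Node.js script. The updates.example.com host, the since parameter, and the parseItems() and handleUpdate() functions are all assumptions made for the example; the point is that the client remembers a marker for the last update it processed and reconnects from that marker if the connection drops.

// Sketch of a firehose consumer: host, "since" parameter, parseItems() and
// handleUpdate() are hypothetical.
var http = require("http");

var lastSeen = null; // marker for the most recent update we processed

function consume() {
  var path = "/stream.xml" + (lastSeen ? "?since=" + lastSeen : "");
  var req = http.get({ host: "updates.example.com", path: path }, function (res) {
    var buffer = "";
    res.on("data", function (chunk) {
      buffer += chunk;
      var parsed = parseItems(buffer);   // hypothetical: split off complete items
      parsed.items.forEach(function (item) {
        lastSeen = item.timestamp;       // record how far we've gotten
        handleUpdate(item);              // hypothetical application-specific handler
      });
      buffer = parsed.remainder;         // keep any partial item for the next chunk
    });
    res.on("end", consume);              // connection closed by the server: reconnect
  });
  req.on("error", function () {
    setTimeout(consume, 1000);           // back off briefly, then reconnect from the marker
  });
}

consume();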
If you find this topic interesting, I’ll be speaking about the real-time Web at two industry conferences next month with experts from noted Web companies.
MIX 10: Building Platforms and Applications for the Real-Time Web From news feeds to search, the Web has become all about real-time access to news and other information as it happens. This panel will discuss what it takes to build the platforms and user experiences that power some of the most notable services on the real-time web. Come hear a lively discussion about the real-time web with moderator Dare Obasanjo (Microsoft) and panelists Ari Steinberg (Facebook), Brett Slatkin (Google), Chris Saad (JS-Kit Echo), Lili Cheng (Microsoft) and Ryan Sarver (Twitter).
SXSW: Can the Real-Time Web Be Realized? The emergence of the real-time web enables an unprecedented level of user engagement and dynamic content online. However, the rapidly growing audience puts new, complex demands on the architecture of the web as we know it. This panel will discuss what is needed to make the real-time web achievable. Organizer: Brett Slatkin.
Now Playing: Young Money - Bedrock (featuring Lloyd)