Adam Bosworth has posted his ISCOC04 talk on his weblog. The post is interesting, although I disagreed with various bits and pieces of it. Below are some comments in response to various parts of his talk.
On the one hand we have RSS 2.0 or Atom. The documents that are based on these formats are growing like a bay weed. Nobody really cares which one is used because they are largely interoperable. Both are essentially lists of links to content with interesting associated metadata. Both enable a model for capturing reputation, filtering, stand-off annotation, and so on. There was an abortive attempt to impose a rich abstract analytic formality on this community under the aegis of RDF and RSS 1.0. It failed. It failed because it was really too abstract, too formal, and altogether too hard to be useful to the shock troops just trying to get the job done. Instead RSS 2.0 and Atom have prevailed and are used these days to put together talk shows and play lists (podcasting) photo albums (Flickr), schedules for events, lists of interesting content, news, shopping specials, and so on. There is a killer app for it, Blogreaders/RSS Viewers.
Although it is clear that RSS 2.0 seems to be edging out RSS 1.0, I wouldn't say the latter has failed per se. I definitely wouldn't say it failed for being too formal and abstract. In my opinion it failed because it was more complex with no tangible benefit. This is the same reason XHTML has failed when compared to HTML. This doesn't necessarily mean that more rigid systems will fail to take hold when compared to less rigid systems; if that were the case, we'd never have seen the shift from C to C++, then from C++ to C#/Java.
Secondly, it seems Adam is throwing out some Google spin here by trying to lump the nascent and currently in-progress Atom format in the same group as RSS 2.0. In fact, if not for Google jumping on the Atom bandwagon it would be even more of an intellectual curiosity than RSS 1.0.
As I said earlier, I remember listening many years ago to someone saying contemptuously that HTML would never succeed because it was so primitive. It succeeded, of course, precisely because it was so primitive. Today, I listen to the same people at the same companies say that XML over HTTP can never succeed because it is so primitive. Only with SOAP and SCHEMA and so on can it succeed. But the real magic in XML is that it is self-describing. The RDF guys never got this because they were looking for something that has never been delivered, namely universal truth. Saying that XML couldn't succeed because the semantics weren't known is like saying that Relational Databases couldn't succeed because the semantics weren't known or Text Search cannot succeed for the same reason. But there is a germ of truth in this assertion. It was and is hard to tell anything about the XML in a universal way. It is why Infopath has had to jump through so many contorted hoops to enable easy editing. By contrast, the RSS model is easy with an almost arbitrary set of known properties for an item in a list such as the name, the description, the link, and mime type and size if it is an enclosure. As with HTML, there is just enough information to be useful. Like HTML, it can be extended when necessary, but most people do it judiciously. Thus Blogreaders and aggregators can effortlessly show the content and understanding that the value is in the information. Oh yes, there is one other difference between Blogreaders and Infopath. They are free. They understand that the value is in the content, not the device.
Lots of stuff to agree with and disagree with here. Taking it from the top, the assertion that XML is self-describing is a myth. XML is a way to attach labels to islands of data; the labels are only useful if you know what they mean. Where XML shines is that one can start with a limited set of labels that are widely understood (title, link, description) but attach data with labels that are less likely to be understood (wfw:commentRss, annotate:reference, ent:cloud) without harming the system. My recent talk at XML 2004, Designing XML Formats: Versioning vs. Extensibility, was on the importance of this and how to bring this flexibility to the straitjacketed world of XML Schema.
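To make the point concrete, here is a minimal sketch of that "ignore what you don't understand" consumption model. The RSS item and URLs below are made-up examples; the wfw:commentRss extension element is real, and a tolerant consumer simply extracts the labels it knows and skips the rest:

```python
import xml.etree.ElementTree as ET

# A hypothetical RSS item carrying one core-vocabulary set of elements
# plus an extension element (wfw:commentRss) the consumer may not understand.
item_xml = """
<item xmlns:wfw="http://wellformedweb.org/CommentAPI/">
  <title>Designing XML Formats</title>
  <link>http://example.org/post/42</link>
  <description>Versioning vs. Extensibility</description>
  <wfw:commentRss>http://example.org/post/42/comments.rss</wfw:commentRss>
</item>
"""

KNOWN = {"title", "link", "description"}

def read_item(xml_text):
    """Extract the widely understood labels; note but ignore the rest."""
    item = ET.fromstring(xml_text)
    known, ignored = {}, []
    for child in item:
        if child.tag in KNOWN:
            known[child.tag] = child.text
        else:
            # Namespaced extensions show up as '{uri}localname' tags.
            ignored.append(child.tag)
    return known, ignored

known, ignored = read_item(item_xml)
print(known["title"])   # the consumer renders what it understands
print(ignored)          # extensions pass through without harming the system
```

The labels aren't self-describing in any deep sense: the code only does something useful with `title` because a human decided in advance what `title` means. The win is purely that unknown labels don't break anything.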
I also wonder who the people are that claim that XML over HTTP will never succeed. XML over HTTP already has succeeded in a lot of settings. However, I'd question the claim that it is all you need. The richer the set of interactions allowed by the web site, the more an API is needed. Google, Amazon and eBay all have XML-based APIs. Every major blogging tool has an XML-based API, even though those same tools are using vanilla XML over HTTP for serving RSS feeds. XML over HTTP can succeed in a lot of settings, but as the richness of the interaction between client and server grows, so does the need for a more powerful infrastructure.
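The contrast is easy to see in the wire formats. Serving a feed is just an HTTP GET of an XML document, while posting to a blog goes through a richer API; the MetaWeblog API's newPost method, for example, is an XML-RPC call. The sketch below builds (without sending) the request body such a call would put on the wire; the blog id and credentials are hypothetical placeholders:

```python
import xmlrpc.client

# Parameters a MetaWeblog newPost call takes, in order:
# blogid, username, password, content struct, publish flag.
# All values here are made up for illustration.
post = {"title": "Hello", "description": "First post"}
request_body = xmlrpc.client.dumps(
    ("myblog", "user", "secret", post, True),
    methodname="metaWeblog.newPost",
)
print(request_body)
```

The feed-reading side needs none of this structure: a client fetches a URL and parses the XML it gets back. It's only once the interaction becomes a two-way conversation (create, edit, delete) that the method names, typed parameters, and authentication of an API start to earn their keep.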
The issue is knowing how to pick the right tool for the job. You don't need the complexity of the entire WS-* stack to build a working system. I know a number of people at Microsoft realize that this message needs to get out more, which is why you've begun to see things like Don Box's WS-Why Talk and the WS Kernel.
What has been new is information overload. Email long ago became a curse. Blogreaders only exacerbate the problem. I can't even imagine the video or audio equivalent because it will be so much harder to filter through. What will be new is people coming together to rate, to review, to discuss, to analyze, and to provide 100,000 Zagat's, models of trust for information, for goods, and for services. Who gives the best buzz cut in Flushing? We see it already in eBay. We see it in the importance of the number of deals and the ratings for people selling used books on Amazon. As I said in my blog, My mother never complains that she needs a better client for Amazon. Instead, her interest is in better community tools, better book lists, easier ways to see the book lists, more trust in the reviewers, librarian discussions since she is a librarian, and so on. This is what will be new. In fact it already is. You want to see the future. Don't look at Longhorn. Look at Slashdot. 500,000 nerds coming together everyday just to manage information overload. Look at BlogLines. What will be the big enabler? Will it be Attention.XML as Steve Gillmor and Dave Sifry hope? Or something else less formal and more organic? It doesn't matter. The currency of reputation and judgment is the answer to the tragedy of the commons and it will find a way. This is where the action will be. Learning Avalon or Swing isn't going to matter. Machine learning and inference and data mining will. For the first time since computers came along, AI is the mainstream.
I tend to agree with most of this, although I'm unsure why he feels the need to knock Longhorn and Java. What he seems to be overlooking is that part of the information overload problem is the prevalence of poor data visualization and user interface metaphors for dealing with significant amounts of data. I now believe that one of the biggest mistakes I made in the initial design of RSS Bandit was modelling it after mail readers like Outlook, even though I knew lots of people who had difficulty managing the flood of email they get using them. This is why the next version of RSS Bandit will borrow a leaf from FeedDemon, along with some other tricks I have up my sleeve.
A lot of what I do in RSS Bandit is made easy by the fact that it's built on the .NET Framework and not C++/MFC, so I wouldn't be as quick to knock next-generation GUI frameworks as Adam is. Of course, now that he works for a Web company, the browser is king.