October 1, 2008
@ 04:22 PM

Werner Vogels, CTO of Amazon, writes in his blog post Expanding the Cloud: Microsoft Windows Server on Amazon EC2 that

With today's announcement that Microsoft Windows Server is available on Amazon EC2 we can now run the majority of popular software systems in the cloud. Windows Server ranked very high on the list of requests by customers so we are happy that we will be able to provide this.

One particular area where customers have been asking for Amazon EC2 with Windows Server is Windows Media transcoding and streaming. There is a range of excellent codecs available for Windows Media and there is a large amount of legacy content in those formats. In past weeks I met with a number of folks from the entertainment industry and often their first question was: when can we run on Windows?

There are many different reasons why customers have requested Windows Server; for example many customers want to run ASP.NET websites using Internet Information Server and use Microsoft SQL Server as their database. Amazon EC2 running Windows Server enables this scenario for building scalable websites. In addition, several customers would like to maintain a global single Windows-based desktop environment using Microsoft Remote Desktop, and Amazon EC2 is a scalable and dependable platform on which to do so.

This is great news. I'm starting a month-long vacation as a precursor to my paternity leave since the baby is due next week, and I was looking to do some long overdue hacking in between burping the baby and changing diapers. My choices were:

  • Facebook platform app
  • Google App Engine app
  • EC2/S3/EBS app

The problem with Amazon was the need to use Linux, which I haven't seriously messed with since my college days running SuSE. If I could use Windows Server and ASP.NET while still learning the nuances of EC2/S3/EBS, that would be quite sweet.

I wonder who I need to holler at to get into the Windows Server on EC2 beta. Maybe Derek can hook me up. Hmmmmm.

Now Playing: Lil Wayne - Best Rapper Alive [Explicit]


 

Categories: Platforms | Web Development

Every once in a while someone asks me about software companies to work for in the Seattle area that aren't Microsoft, Amazon or Google. This is the third in a series of weekly posts about startups in the Seattle area that I often mention to people when they ask me this question.

AgileDelta builds XML platforms optimized for low-power, low-bandwidth mobile devices. They have two main products: Efficient XML and Mobile Information Client. I'm more familiar with Efficient XML since it has been selected as the basis for the W3C's binary XML format and has been a linchpin of a lot of the debate around binary XML. The Efficient XML product is basically a codec which allows you to create and consume XML in their [soon to be formerly] proprietary binary format that makes it more efficient for use in mobile device scenarios. A quick look at their current customer list indicates that their customer base is mostly military and/or defense contractors. I hadn't realized how popular XML was in military circles.

AgileDelta was founded by John Schneider and Rich Rollman, both formerly of Crossgain, a company founded by Adam Bosworth which was acquired by BEA. Before that, Rich Rollman was at Microsoft, where he was one of the key folks behind MSXML and SQLXML. Another familiar XML geek who works there is Derek Denny-Brown, who spent over half a decade as a key developer on the XML parsers at Microsoft.

Press: AgileDelta in PR Newswire

Location: Bellevue, WA

Jobs: careers@agiledelta.com, current open positions are for a Software Engineer, Sales Professional, Technical Writer and Quality Assurance Engineer.


 

I've been tagged by Nick Bradbury as part of the 5 Things People Don't Know About Me meme. Here's my list:

  1. I've gained back 25 lbs of the 60 lbs I lost earlier this year. With the holidays and an upcoming trip to Las Vegas to attend CES I assume I'll be gaining another 5 lbs due to disruptions to my schedule and poor eating habits before I can get things back under control.

  2. I sold all my stock options when MSFT hit 30 last week.

  3. I used to smile a lot as a child until I was about 11 or 12. I was in a Nigerian military school during my middle school years and some senior students didn't like the fact that I always walked around with a smile on my face. So they decided to beat me until I wiped that silly smile off my face. It worked. My regular scowl was mentioned as a dampener on more first dates than I'd like to admit while I was in college. I'm glad my mom decided to pull me out of the military school after only two years. At the time, I thought that was the equivalent of running away. Mother knows best, I guess.

  4. My dad is in New York this week but I turned down an opportunity to fly up and see him. I found out the details of his trip on Saturday evening, which meant I'd have had to break off prior engagements such as babysitting my girlfriend's kids and taking my intern on his farewell lunch if I wanted to see him. I'm sure I'll regret missing opportunities like this later in life.

  5. I have songs from every G-Unit Radio mixtape on my iPod.

I'm tagging the following bloggers to spread the meme: Mike Torres, Shelley Powers, Sanaz Ahari, Derek Denny-Brown and Doug Purdy.


 

Categories: Personal

Don Demsak has a blog post entitled Open Source Projects As A Form Of Community Service which links to a number of blog posts about the death of the NDoc project. He writes

Open source projects have been the talk of the tech blogs recently with the announcement that NDoc 2 is Officially Dead, along with the mention that the project's sole developer was a victim of an automated mail-bomb attack because the project wasn't getting a .Net 2.0 version out fast enough for their liking. Kevin has decided to withdraw from the community, and fears for himself and his family. The .Net blogging community has had a wide range of reactions:

  • Phil Haack talks about his ideas behind helping/saving the open source community and laid down a challenge. 
  • Eric Wise mentions that he will not work on another FOSS project. 
  • Scott Hanselman laments that Microsoft hasn't put together an INETA-like organization to handle giving grants to open source projects, and also shows how easy it is to submit a patch/fix to a project.
  • Peter Provost worries that bringing money into the equation may spoil the cool part of community developed software, and that leadership is the key to good open source projects.
  • Derek Denny-Brown says that "Microsoft needs to understand that Community is more than just lots of vendors creating commercial components, or MVPs answering questions on newsgroups".

I've been somewhat disappointed by the Microsoft developer division's relationship with Open Source projects based on the .NET Framework and its attitude towards source code availability in general. Derek Denny-Brown's post entitled Less Rambling Venting about Developing for .Net hit the nail on the head for me. There are a number of issues with the developer community around Visual Studio and the .NET Framework that are raised in Derek's post and the others mentioned above. The first is what seems like a classic case of Not Invented Here (NIH) in how Microsoft has not only failed to support Open Source projects that fill useful niches in the Visual Studio ecosystem but eventually competes with them (NAnt vs. MSBuild, NUnit vs. Visual Studio Team System and now Sandcastle vs. NDoc). My opinion is that this is a consequence of Microsoft's strategy of integrated innovation, which encourages Microsoft's product teams to pursue a seamless end-to-end experience where every software application in the business process is a Microsoft product.

Another issue is Microsoft's seeming ambivalence and sometimes antipathy towards Open Source software. This is related to the fact that the ecosystem around Microsoft's software platforms (i.e. customers, ISVs, etc.) is heavily tilted towards commercial software development. Or is that vice versa? Either way, commercial software developers tend to view Open Source as the bane of their existence. This is unfortunate given that practically every major software development platform that the .NET Framework and Visual Studio compete with is either Open Source (e.g. PHP, Perl, Python, Ruby) or at the very least encourages source code availability (e.g. Java). Quite frankly, I personally would love to see the .NET Framework class libraries become Open Source or at the very least have the source code available in the same way Sun has done with the JDK. I know that there is the Shared Source Common Language Infrastructure (SSCLI), which I have used on occasion when having issues during RSS Bandit development, but it isn't the same.

So we have a world where the developer community around Microsoft's products is primarily interested in building and using commercial software while the company pursues an integration strategy that guarantees that it will compete with projects that add value on its platform. The questions then are whether this is a bad thing and if so, how do we fix it?


 

Thanks to numerous reports from RSS Bandit users it has come to my attention that the Atom feeds provided by Google's Blogger are invalid and in many cases aren't even well-formed XML. Please fix this. I'm tired of dealing with threads like Blogspot feeds - XML Failure in our support forums.

If you'd like an example of what is wrong with your feeds, click on http://feedvalidator.org/check?url=http://nothing-more.blogspot.com/atom.xml which shows the results of validating the feed for Derek Denny-Brown's blog. Below is the list of errors returned:

This feed does not validate.

  • line 4, column 0: This feed uses an obsolete namespace [help]

    <feed xmlns="http://purl.org/atom/ns#" version="0.3" xml:lang="en-US">
  • line 4, column 0: Unexpected version attribute on feed element [help]

    <feed xmlns="http://purl.org/atom/ns#" version="0.3" xml:lang="en-US">
  • line 7, column 0: type attribute must be "text", "html", or "xhtml" [help]

    <title mode="escaped" type="text/html">only this, and nothing more</title>
  • line 7, column 0: Unexpected mode attribute on title element (7 occurrences) [help]

    <title mode="escaped" type="text/html">only this, and nothing more</title>
  • line 8, column 0: Undefined feed element: tagline [help]

    <tagline mode="escaped" type="text/html">irregular eccentic eclecticisms, di ...
  • line 11, column 0: Undefined feed element: modified [help]

    <modified>2006-03-27T00:01:47Z</modified>
  • line 12, column 0: Unexpected url attribute on generator element [help]

    <generator url="http://www.blogger.com/" version="5.15">Blogger</generator>
  • line 13, column 0: Undefined feed element: info [help]

    <info mode="xml" type="text/html">
  • line 4, column 0: Missing feed element: updated [help]

    <feed xmlns="http://purl.org/atom/ns#" version="0.3" xml:lang="en-US">
  • line 22, column 0: Undefined entry element: issued (6 occurrences) [help]

    <issued>2006-03-26T15:25:00-08:00</issued>
  • line 23, column 0: Undefined entry element: modified (6 occurrences) [help]

    <modified>2006-03-27T00:01:47Z</modified>
  • line 24, column 0: Undefined entry element: created (6 occurrences) [help]

    <created>2006-03-27T00:01:47Z</created>
  • line 27, column 0: type attribute must be "text", "html", or "xhtml" (6 occurrences) [help]

    <title mode="escaped" type="text/html">You call that Democracy?</title>
  • line 36, column 0: Missing entry element: updated (5 occurrences) [help]

    </entry>
  • line 153, column 156: XML parsing error: <unknown>:153:156: unbound prefix [help]

    ... S-X's niceties. If I knew people on the <st1:place st="on">Vista</st1:pl ...
                                                 ^

In addition, this feed has issues that may cause problems for some users. We recommend fixing these issues.

  • line 5, column 134: service.post is not a registered link relationship (2 occurrences) [help]

    ... hing more" type="application/atom+xml"/>
                                                 ^
  • line 7, column 66: text/html type used for a document fragment [help]

    <title mode="escaped" type="text/html">only this, and nothing more</title>
                                                                      ^
  • line 4, column 0: Missing atom:link with rel="self" [help]

    <feed xmlns="http://purl.org/atom/ns#" version="0.3" xml:lang="en-US">
  • line 18, column 150: service.edit is not a registered link relationship (6 occurrences) [help]

    ... emocracy?" type="application/atom+xml"/>
                                                 ^
  • line 27, column 63: text/html type used for a document fragment (6 occurrences) [help]

    <title mode="escaped" type="text/html">You call that Democracy?</title>
                                                                   ^
  • line 29, column 0: application/xhtml+xml type used for a document fragment (6 occurrences) [help]

    <div xmlns="http://www.w3.org/1999/xhtml">
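For comparison, here's roughly what a valid Atom 1.0 version of the same feed header would look like. This is just a sketch pieced together from the validator's complaints: the values are copied from the errors above, the required id elements are elided, and the mappings (tagline becomes subtitle, modified/issued become updated/published, the mode/type attribute pair becomes a single type attribute) come from the Atom 1.0 specification (RFC 4287).

  <feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en-US">
    <title type="text">only this, and nothing more</title>
    <subtitle type="html">irregular eccentic eclecticisms, di ...</subtitle>
    <updated>2006-03-27T00:01:47Z</updated>
    <link rel="self" type="application/atom+xml"
          href="http://nothing-more.blogspot.com/atom.xml"/>
    <generator uri="http://www.blogger.com/" version="5.15">Blogger</generator>
    ...
    <entry>
      <title type="text">You call that Democracy?</title>
      <published>2006-03-26T15:25:00-08:00</published>
      <updated>2006-03-27T00:01:47Z</updated>
      ...
    </entry>
  </feed>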

Thanks for listening.


 

One of the interesting side effects of blogs is that they tell you more about people than you could ever learn from reading a resume or conducting an interview. This is both good and bad. It's good because in a professional context it informs you about the kind of people you may or should want to end up working with. It's bad for the same reasons.

Blogs Make Me Sad
I find all the "Web 2.0" hype pretty disgusting. Every time I read a discussion about what makes a company "Web 2.0" or not, I feel like I've lost half a dozen IQ points. Every time I see someone lay on the "Web 2.0" hype I mentally adjust my estimation of their intelligence downward. The only folks this doesn't apply to are probably Tim O'Reilly and John Battelle because I can see the business reasons why they started this hype storm in the first place. Everyone else comes off as a bunch of sheep or just plain idiots.

Some folks are worse than others. These are the self-proclaimed "Web 2.0" pundits like Dion Hinchcliffe, who's blessed us with massively vacuous missives like Thinking in Web 2.0: Sixteen Ways and Web 2.0 and the Five Walls of Confusion. Every time I accidentally stumble on one of his posts by following links from others' posts, I feel like I've become dumber by having to read the empty hype-mongering. Russell Beattie's WTF 2.0 shows that I'm not the only person who is disgusted by this crap.

I was recently invited to Microsoft's SPARK workshop and was looking forward to the experience until I found out Dion Hinchcliffe would be attending. Since the event aims to be audience driven, I cringe when I think about being stuck in some conference hall with no way to escape listening to vacuous "Web 2.0" hype till my ears bleed. If I had any sense I'd just not attend the conference, but who turns down a weekend trip to Vegas?

If not for blogs, I wouldn't know about Dion Hinchcliffe and could attend this workshop with great expectations instead of feelings of mild dread.

Blogs make me sad.

Blogs Make Me Happy
One of the things I loved about working on the XML team at Microsoft was all the smart people I got to shoot the breeze with about geeky topics every day. There were people like Michael Brundage, Derek Denny-Brown, Joshua Allen, and Erik Meijer, who I felt made me smarter every time I left their presence.

With blogs I get this feeling from reading the writings of others without having to even be in their presence. For example, there are so many lessons in Shelley Powers's recent post Babble that a summary doesn't do it justice. Just go and read it. It made me smile.

Blogs make me happy.


 

Categories: Ramblings

Scott Isaacs has written a series of posts pointing out one of the biggest limitations of applications that use Asynchronous JavaScript and XML (AJAX). The posts are XMLHttpRequest - Do you trust me?, XMLHttpRequest - Eliminating the Middleman, and XMLHttpRequest - The Ultimate Outliner (Almost). The limitation is discussed in the first post in the series, where Scott wrote

Many web applications that "mash-up" or integrate data from around the web hit the following issue: How do you request data from third party sites in a scalable and cost-effective way? Today, due to the cross-domain restrictions of xmlhttprequest, you must proxy all requests through a server in your domain.

Unfortunately, this implementation is very expensive. If you have 20 unique feeds across 10000 users, you are proxying 200,000 unique requests! Refresh even a percentage of those feeds on a schedule and the overhead and costs really start adding up. While mega-services like MSN, Google, Yahoo, etc., may choose to absorb the costs, this level of scale could ruin many smaller developers. Unfortunately, this is the only solution that works transparently (where users don't have to install or modify settings).

This problem arises because the xmlhttprequest object can only communicate back to the originating domain. This restriction greatly limits the potential for building "mash-up" or rich aggregated experiences. While I understand the wisdom behind this restriction, I have begun questioning its value and am sharing some of my thoughts on this.

I encountered this limitation in the first AJAX application I wrote, the MSN Spaces Photo Album Browser, which is why it requires you to add my domain to your trusted websites list in Internet Explorer to work. I agree with Scott that this is a significant limitation that hinders the potential of various mashups on the Web today. I'd also like to see a solution to this problem proposed. 
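To make the cost Scott describes concrete, here's a minimal sketch of the proxy workaround in C#. The handler name and query-string parameter are hypothetical, and a real deployment would also need to restrict which URLs it will fetch so it doesn't become an open proxy:

  using System.Net;
  using System.Web;

  // Script on the page calls /FeedProxy.ashx?url=... because XMLHttpRequest
  // will only talk back to the originating domain.
  public class FeedProxyHandler : IHttpHandler
  {
      public void ProcessRequest(HttpContext context)
      {
          string url = context.Request.QueryString["url"];

          using (WebClient client = new WebClient())
          {
              // Every feed request from every user funnels through this
              // server, which is exactly the scaling cost described above.
              byte[] data = client.DownloadData(url);
              context.Response.ContentType = "text/xml";
              context.Response.BinaryWrite(data);
          }
      }

      public bool IsReusable
      {
          get { return true; }
      }
  }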

In his post, Scott counters a number of the reasons usually given for why this limitation exists, such as phishing attacks, cross-site scripting and leakage of private data. However, Derek Denny-Brown describes the big reason why this limitation exists in Internet Explorer in his post XMLHttpRequest Security, where he wrote

I used to own Microsoft's XMLHttpRequest implementation, and I have been involved in multiple security reviews of that code. What he is asking for is possible, but would require changes to the way credentials (username/password) are stored in Windows' core Url resolution library: URLMON.DLL. Here is a copy of my comments that I posted on his blog entry:

The reason for blocking cross-site loading is primarily because of cached credentials. Today, username/password information is cached, to avoid forcing you to reenter it for every http reference, but that also means that script on yahoo.com would have full access to _everything_ in your gmail/hotmail/bank account, without a pop-up or any other indication that the yahoo page was doing so. You could fix this by associating saved credentials with a src url (plus some trickery when the src was from the same site) but that would require changes to the guts of windows url support libraries (urlmon.dll)

Comparing XML to CSS or images is unfair. While you can link to an image on another site, script can't really interact with that image; for example, posting that image back to the script's host site. CSS is a bit more complicated, since the DOM does give you an API for interacting with the CSS, but I have never heard of anyone storing anything private to the user in a CSS resource. At worst, you might be able to figure out the user's favorite color.

Ultimately, it gets back to the problem that there needs to be a way for the user to explicitly enable the script to access those resources. If done properly, it would actually be safer for the user than the state today, where the user has to give out their username and password to sites other than the actual host associated with that login.
I'd love to see Microsoft step up and provide a solution that addresses the security issues. I know I've run against this implementation many times.

That makes sense; the real problem is that a script on my page could go to http://www.example.com/yourbankaccount and it would access your account info because of your cookies & cached credentials. That is a big problem and one that the browser vendors should work towards fixing instead of allowing the status quo to remain.

In fact, a proposal already exists for what this solution would look like from an HTTP protocol and API perspective. Chris Holland has a post entitled ContextAgnosticXmlHttpRequest - An informal RFC where he weighs some of the pros and cons of allowing cross-site access with IXmlHttpRequest while having the option to not send cookies and/or cached credentials.


 

Categories: Web Development

Mini-Microsoft has a blog post on middle management at Middle Managers, Bureaucracy, and No Birds at Microsoft where he offers pointers to some counter arguments to his regular bashing of bureaucracy and middle management at Microsoft. The various linked posts and some of the comments do a good job of presenting an alternate perspective to the various complaints people make against bureaucracy and middle management.

Derek Denny-Brown, a friend of mine who just left Microsoft, has a blog post entitled The curse of Middle management where he writes

I had a long discussion with a friend of mine about Longhorn aka Windows Vista. He had just caught up with news and some of the recent interviews with Jim Allchin. He knew I had some involvement with the OS divisions, and was just generally curious for my perspective 2 weeks out of the company.

In my view, a lot of the problems at Microsoft stem from bad middle management. Microsoft has built up a whole ecology of managers who are at least as concerned with their careers as they are with making good decisions. I've interacted with more of them than I like to admit. The effect is that upper management doesn't hear a clear story about what is really going on. I think the phrase I used was that they 'massage the message'. Combine that with long release cycles, and lack of accountability falls out as an inevitability.

One of the reasons I left is because I just don't see any way out of that mess. I am humbled by MiniMicrosoft and his determination to be part of the solution.

I tend to agree with Derek about lack of accountability being a problem here. One thing I've noticed about bad middle managers is that (i) it is almost impossible to get them out of their positions and (ii) all it takes is one or two to seriously derail a project. I've personally been surprised at how many bad middle managers just keep on ticking at Microsoft. It seems it is a lot easier to see individual contributors or even VPs pushed out for incompetence than middle management (Dev Managers, General Managers, Product Unit Managers, Group Program Managers, etc).

It is also surprising how much damage a well-placed yet broken middle-management cog can do to the smooth functioning of the corporate machine. I've lost count of the number of times I've asked some member of a team that created a much-reviled product why the product sucked so much, and the response would be "our test/dev/general manager didn't see fixing those problems as a high priority". As Derek states, it is even worse when you get the ones that present a false impression to upper management about how things are going.

Michael Brundage also covered some of this in his essay on Working at Microsoft which is excerpted below

+ Company Leadership

Bill Gates and Steve Ballmer get most of the press, but it's an open secret that all of the division heads (and their staff, and their staff) are top-notch. I'm (happily) oblivious to how that circle operates, so I can only judge them on their results.

Given that Microsoft's been convicted of monopolistic practices, it may shock you when I say that Microsoft's upper management strikes me as very ethical. They talk about ethical behavior all the time, and as far as I've seen, lead by example. Maybe I'm being naive, but I find Microsoft's upper management to be very trustworthy. They're also thinking very far ahead, and doing a good job getting the information they need to make solid decisions.

Microsoft's leaders are also very generous, and frequently encourage the rest of us to make charitable donations (both money and time) a priority. Giving is a large part of Microsoft's corporate culture.

It's refreshing to work at a company where you can trust that the upper echelon is smart, hardworking, and making right decisions. I don't have to worry that my general manager or vice-president will drive our division (or company) into the ground through incompetence or greed. Microsoft's no Enron or WorldCom.

- Managers

In contrast, most of the middle management should be tossed.

Did I mention I've had six or seven managers in five years? I've only changed jobs twice; the others were "churn" caused by reorganizations or managers otherwise being reassigned. In fact, in the month between when I was hired and when I started, the person who was going to be my manager (we'd already had several phone/email conversations) changed! It's seven if you count that, six if you don't.

None of these managers were as good as my best manager at NASA. Of the six-seven managers I've had, I'd relish working for (or with) only two of them again. Two were so awful that if they were hired into my current organization (even on another team), I'd quit on the spot. The other two-three were "nngh" -- no significant impact on my life one way or another. I'd love to think this is some kind of fluke, that I've just been unlucky, but many other Microsoft employees have shared similar experiences with me.

I think part of the problem is that Microsoft doesn't generally hire software developers for their people or leadership skills, but all dev leads were developers first. Part of the problem is also that (unlike some companies that promote incompetence) good leads are usually promoted into higher positions quickly, so the company's best managers rise to the top. Consequently, the lower ranks are filled with managers who either have no interest in advancing up the management chain (which is fine) or else are below-average in their management skills (which is not).

But it's more complex than this. At Microsoft, many managers still contribute as individuals (e.g., writing code) and are then judged on that performance (which is mostly objective) as much or more than they're judged on their leadership performance (which is mostly subjective). Because individual developers have so much freedom and responsibility, it's easy and typical to give individuals all the credit or blame for their performance, without regard to the manager's impact. Conversely, managers' performance often does not translate into tangible effects for their teams (other than the joy or misery of working for them). For example, I can still get a great review score even if my manager is terrible. I think these factors contribute to management skills being undervalued.

Microsoft also suffers from a phenomenon that I've seen at other companies. I describe this as the "personality cult," wherein one mid-level manager accumulates a handful of loyal "fans" and moves with them from project to project. Typically the manager gets hired into a new group, and (once established) starts bringing in the rest of his/her fanclub. Once one of these "cults" is entrenched, everyone else can either give up from frustration and transfer to another team, or else wait for the cult to eventually leave (and hope the team survives and isn't immediately invaded by another cult). I've seen as many as three cults operating simultaneously side-by-side within a single product group. Rarely, a sizeable revolt happens and the team kicks the cult out. Sometimes, the cult disintegrates (usually taking the team with it). Usually, the cult just moves on to the Next Big Thing, losing or gaining a few members at each transfer.

I think these "cults" are a direct result of Microsoft's review system, in which a mid-level manager has significant control over all the review scores within a 100+ person group (so it's in your best interest to get on his/her good side), and conversely needs only a fraction of that group's total support to succeed as a manager (so it's in his/her best interest to cultivate a loyal fanclub to provide that support). The cult gives the manager the appearance of broad support, and makes the few people who speak out against him/her look like sour grapes unrepresentative of a larger majority. After a string of successes, the manager is nearly invincible.

Fortunately, these managers are unlikely to move further up the ranks, due to the inherent deficiencies in their characters (which are usually visible to upper management and enough to prevent their advancement, but not so severe as to warrant firing them).

These "personality cults" always negatively impact the group eventually (while they're there and/or when they leave), but counterintuitively sometimes these personality cults have a large positive initial effect. Many successful Microsoft products have come into existence only through the actions of such personality cults. Some of these products even survived after the personality cult left for the Next Big Thing.

I totally agree with Michael's analysis. Like Derek, I'm unsure how one would go about reversing this trend. However, I definitely think the way we assess the accountability of folks in [middle and executive] management needs an overhaul.


 

Categories: Life in the B0rg Cube

September 22, 2005
@ 03:36 PM

A few days ago I mentioned in my post, Microsoft's Innovation Pipeline, that I suspect that a lot of folks at Microsoft will be looking to try something new after working on large software projects that in some cases have taken 3 to 5 years to ship. I personally have had conversations with half a dozen people over the last month or so who are either looking for something new or have found it.

In his post, Something New, Derek Denny-Brown writes

What kept me at Microsoft, and what I will miss the most, is the people. I worked with such a diverse collection of wonderful people... mostly. Not that you can't get that elsewhere, but the 'individual contributors' (as they are called at MS) are really one of Microsoft's assets. I felt like I was leaving my family. I have worked with some of these people for my entire time at Microsoft. That is a long, and intense, time to build a friendship.

I've had almost everyone I know ask me "why are you leaving?" Some factors: Whidbey is basically done, as is Yukon. Microsoft definitely is more bureaucratic than it used to be, as well. Mostly though, it was just time to move on. I was presented with an opportunity that fit my interests. (And no, I'm not going to Google... too big. I decided long ago that if I was going to leave, I wanted it to be for a small company, something less than 100 people.)

Derek is a good friend and I hate to see him leave. At least he's staying in Seattle so we'll still get to hang out every couple of weekends. I didn't try really hard to pitch him on coming to MSN, but after a second friend [who hasn't posted to his blog about leaving yet] told me he was leaving the company, I switched tactics. All my friends are getting the pitch now. :)


 

Categories: Life in the B0rg Cube

Today I was reading a blog post by Dave Winer entitled Platforms where he wrote

It was both encouraging and discouraging. It was encouraging because now O'Reilly is including this vital topic in its conferences. I was pitching them on it for years, in the mid-late 90s. It should have been on the agenda of their open source convention, at least. It was discouraging, because with all due respect, they had the wrong people on stage. This is a technical topic, and I seriously doubt if any of the panelists were actually working on this stuff at their companies. We should be hearing from people who are actually coding, because only they know what the real problems are.

I was recently thinking the same thing after seeing the attendance list for the recent O'Reilly AJAX Summit. I was surprised not only by the people I expected to see on the list but didn't, but also by who they did decide to invite. There was only one person from Google even though their use of DHTML and IXMLHttpRequest is what started the AJAX fad. Nobody from Microsoft even though Microsoft invented DHTML & IXMLHttpRequest and has the most popular web browser on the planet. Instead they have Anil Dash talk about the popularity of LiveJournal and someone from Technorati talk about how they plan to jump on the AJAX bandwagon.

This isn't to say that some good folks weren't invited. One of the guys behind the Dojo toolkit was there, and I suspect that toolkit will be the one to watch within the next year or so. I also saw from the comments in Shelley Powers's post, Ajax the Manly Technology, that Chris Jones from Microsoft was invited. Although it's good to see that at least one person from Microsoft was invited, Chris Jones wouldn't be on my top 10 list of people to invite. As Dave Winer stated in the post quoted above, you want to invite implementers to technical conferences, not upper management.

If I was going to have a serious AJAX summit, I'd definitely send invites to at least the following people at Microsoft.

  1. Derek Denny-Brown: Up until a few weeks ago, Derek was the development lead for MSXML, which is where IXMLHttpRequest comes from. Derek has worked on MSXML for several years and recently posted on his blog asking for input from people about how they'd like to see the XML support in Internet Explorer improve in the future.

  2. Scott Isaacs: The most important piece of AJAX is that one can modify HTML on the fly via the document object model (DOM) which is known by many as DHTML. Along with folks like Adam Bosworth, Scott was one of those who invented DHTML while working on Internet Explorer 4.0. Folks like Adam Bosworth and Eric Sink have written about how significant Scott was in the early days of DHTML.  Even though he no longer works on the browser team, he is still involved with DHTML/AJAX as an architect at MSN which is evidenced by sites such as http://www.start.com/2 and http://spaces.msn.com

  3. Dean Hachamovitch: He runs the IE team. 'nuff said.

I'd also tell them that the invitations were transferable so in case they think there are folks that would be more appropriate to invite, they should send them along instead.

It's kind of sad to realize that the various invite-only O'Reilly conferences are just a way to get the names of the same familiar set of folks attached to the hot new technologies as opposed to being an avenue to get relevant people from the software industry to figure out how they can work together to advance the state of the art.


 

Categories: Technology

My friend Derek, who's the dev lead for MSXML (the XML toolkit used by practically every Microsoft application from Office to Internet Explorer), has a blog post entitled XML use in the browser where he writes

C|Net has an article on what people have started calling AJAX. 'A'synchronous JavaScript and Xml. I have seen people using MSXML to build these kinds of web-apps for years, but only recently have people really pulled it all together enough, such as GMail or Outlook Web-Access (OWA). In fact, MSXML's much copied XMLHTTP (a.k.a. IXMLHttpRequest) (Copied by Apple and Mozilla/Firefox) was actually created basically to support the first implementation of OWA.

I've been thinking about what our customers want in future versions of MSXML. What kind of new functionality would enable easier/faster development of new AJAX style web applications? XForms has some interesting ideas... I've been thinking about what we might add to MSXML to make it easier to develop rich DHtml applications. XForms is an interesting source of ideas, but I worry that it removes too much control. I don't think you could build GMail on XForms, for example.

The most obvious idea, would be to add some rich data-binding. Msxml already has some _very_ limited XML data-binding support. I have not looked much into how OWA or GMail work, but I bet that a significant part of the client-side jscript is code to regenerate the UI from the XML data behind the current page. Anyone who has used ASP/PHP/etc is used to the idea of some sort of loop to generate HTML from some data. What if the browser knew how to do that for you? And knew how to push back changes from editable controls? You can do that today with ADO.

Any other ideas? For those of you playing with 'AJAX' style design. What are the pain points? (Beside browser compatibility... )

If you are building applications that use XML in the browser and would like to influence the XML framework that will be used by future versions of Microsoft applications from Microsoft Office to Internet Explorer, then you should go over to Derek's blog and let him know what you think.


 

Categories: XML

When I used to work on the XML team at Microsoft, there were a number of people I interacted with who were so smart I used to feel like I learned something new every time I walked into their office. These folks include:

  • Michael Brundage - social software geek before it was a hip buzzword, XML query engine expert and now working on the next generation of Xbox

  • Joshua Allen - semantic web and RDF enthusiast, if not for him I'd dismiss the semantic web as a pipe dream evangelized by a bunch of theoreticians who wouldn't know real code if it jumped them in the parking lot and defecated on their shoes, now works at MSN but not on anything related to what I work on

  • Erik Meijer - programming language god and the leading mind behind X#/Xen/Cω; he is a co-inventor on all my patent applications, most of which started off with me coming into his office to geek out about something I was going to blog about

  • Derek Denny-Brown - XML expert from back when Tim Bray and co. were still trying to figure out what to name it, one heckuva cool cat

Anyway, that was a bit of a digression before posting the link mentioned in the title of this post. Michael Brundage has an essay entitled Working at Microsoft where he provides some of his opinions on the good, the bad, and the in-between of working at Microsoft. One key insight is that Microsoft tends to have good upper management and poor middle management. This insight strikes very close to home but I know better than to give examples of the latter in a public blog post. Rest assured it is very true and the effects on the company have cost it millions, if not billions of dollars.


 

Categories: Life in the B0rg Cube

Derek has a post entitled Search is not Search where he alludes to conversations we had about my post Apples and Oranges: WinFS and Google Desktop Search. His blog post reminds me why I'm so disappointed that the benefits of adding structured metadata capabilities to file systems are being equated with desktop search tools that are a slightly better incarnation of the Unix grep command. Derek wrote

I was reminded of that conversation today, when catching up on a recent-ish publication from MIT's Haystack team: The Perfect Search Engine is Not Enough: A Study of Orienteering Behavior in Directed Search. One of the main points of the paper is that people tend not to use 'search' (think Google), even when they have enough information for search to likely be useful. Often they will instead go to a known location from which they believe they can find the information they are looking for.

For me the classic example is searching for music. While I tend to store my mp3s in a consistent directory structure such that the song's filename is the actual name of the song, I almost never use a generic directory search to find a song. I tend to think of songs as "song name: Conga Fury, band: Juno Reactor", or something like that, so when I'm looking for Conga Fury, I am more likely to walk the album subdirectories under my Juno Reactor directory, than I am to search for file "Conga Fury.mp3". The above paper talks a bit about why, and I think another key aspect that they don't mention is that search via navigation leverages our brain's innate fuzzy computation abilities. I may not remember how to spell "Conga Fury" or may think that it was "Conga Furvor", but by navigating to my solution, such inaccuracies are easily dealt with.

As Derek's example shows, comparing the scenarios enabled by a metadata based file system against those enabled by desktop search is like comparing navigating one's music library using iTunes versus using Google Desktop Search or the MSN Desktop Search to locate audio files.

Joe Wilcox (of Jupiter Research) seems to have reached a similar conclusion based on my reading of his post Yes, We're on the Road to Cairo where he wrote

WinFS could have anchored Microsoft's plans to unify search across the desktop, network and the Internet. Further delay creates opportunity for competitors like Google to deliver workable products. It's now obvious that rather than provide placeholder desktop search capabilities until Longhorn shipped, MSN will be Microsoft's major provider on the Windows desktop. That's assuming people really need the capability. Colleague Eric Peterson and I chatted about desktop search on Friday. Neither of us is convinced any of the current approaches hit the real consumer need. I see that as making more meaningful disparate bits of information and complex content types, like digital photos, music or videos.

WinFS promised to hit that need, particularly in Microsoft public demonstrations of Longhorn's capabilities. Now the onus and opportunity will fall on Apple, which plans to release metadata search capabilities with Mac OS 10.4 (a.k.a. "Tiger") in 2005. Right now, metadata holds the best promise of delivering more meaningful search and making sense of all the digital content piling up on consumers' and Websites' hard drives. But there are no standards around metadata. Now is the time for vendors to rally around a standard. No standard is a big problem. Take for example online music stores like iTunes, MSN Music or Napster, which all tag metadata slightly differently. Digital cameras capture some metadata about pictures, but not necessarily the same way. Then there are consumers using photo software to create their own custom metadata tags when they import photos.

I agree with his statements about where the real consumer need lies but disagree when he states that no standards around metadata exist. Music files have ID3 and digital images have EXIF. The problem isn't a lack of standards but instead a lack of support for these standards which is a totally different beast.

I was gung ho about WinFS because it looked like Microsoft was going to deliver a platform that made it easy for developers to build applications that took advantage of the rich metadata inherent in user documents and digital media. Of course, this would require applications that created content (e.g. digital cameras) to actually generate such metadata which they don't today. I find it sad to read posts like Robert Scoble's Desktop Search Reviewers' Guide where he wrote

2) Know what it can and can't do. For instance, desktop search today isn't good at finding photos. Why? Because when you take a photo the only thing that the computer knows about that file is the name and some information that the camera puts into the file (like the date it was taken, the shutter speed, etc). And the file name is usually something like DSC0050.jpg so that really isn't going to help you search for it. Hint: put your photos into a folder with a name like "wedding photos" and then your desktop search can find your wedding photos.

What is so depressing about this post is that it costs very little for the digital camera or its associated software to tag JPEG files with comments like 'wedding photos' as part of the EXIF data which would then make them accessible to various applications including desktop search tools. 
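To underscore how small that cost is, here's a rough C# sketch that stamps a description into a JPEG's EXIF data using System.Drawing. The file names are made up, and it assumes the photo already carries at least one property item to clone, which is true of virtually any camera-produced JPEG:

  using System.Drawing;
  using System.Drawing.Imaging;
  using System.Text;

  class PhotoTagger
  {
      static void Main()
      {
          using (Image photo = Image.FromFile("DSC0050.jpg"))
          {
              // PropertyItem has no public constructor, so borrow an
              // existing one from the image and overwrite its fields.
              PropertyItem tag = photo.PropertyItems[0];
              tag.Id = 0x010E;  // EXIF ImageDescription
              tag.Type = 2;     // ASCII
              tag.Value = Encoding.ASCII.GetBytes("wedding photos\0");
              tag.Len = tag.Value.Length;
              photo.SetPropertyItem(tag);
              photo.Save("DSC0050-tagged.jpg", ImageFormat.Jpeg);
          }
      }
  }

Any desktop search tool that reads EXIF could then find the photo by its description instead of its meaningless file name.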

Perhaps the solution isn't expending resources to build a metadata platform that will be ignored by applications that create content today but instead giving these applications incentive to generate this metadata. For example, once I bought an iPod I became very careful to ensure that the ID3 information on the MP3s I'd load on it were accurate since I had a poor user experience otherwise.

I wonder what the iPod for digital photography is going to be. Maybe Microsoft should be investing in building such applications instead of boiling the oceans with efforts like WinFS which aim to ship everything including the kitchen sink in version 1.0.  


 

Categories: Technology

November 4, 2004
@ 06:10 PM

Many times when implementing XML specifications I've come up against features that just seem infeasible or impractical to implement. However, none of them have given me nightmares the way they have my friend Mike Vernal, a program manager on the Indigo team at Microsoft. In his post could you stop the noise, i'm trying to get some rest ... he talks about spending nights tossing and turning, having nightmares about how the SOAP mustUnderstand header attribute should be processed. In Mike's post More SOAP Sleepness he mentions having sleepless nights worrying about the behavior of SOAP intermediaries as described in Section 2.7: Relaying SOAP Messages.
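For context, mustUnderstand is the SOAP attribute that marks a header block as mandatory: a receiver that doesn't recognize the block can't just skip it, it has to fault. Roughly, in SOAP 1.2 syntax (the header block itself is illustrative):

  <env:Envelope xmlns:env="http://www.w3.org/2003/05/soap-envelope">
    <env:Header>
      <!-- A node that does not understand this block must generate
           an env:MustUnderstand fault instead of processing the message. -->
      <x:Auth env:mustUnderstand="true" xmlns:x="http://example.org/auth">
        ...
      </x:Auth>
    </env:Header>
    <env:Body>...</env:Body>
  </env:Envelope>

The nightmare material is in the edge cases: deciding which blocks are targeted at which node, and what an intermediary must strip or relay.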

This isn't to say I didn't have sleepless nights over implementing XML specifications when I worked on the XML team at Microsoft. One of the issues that consumed a lot more of my time than is reasonable is explained in Derek Denny-Brown's post Loving and Hating XML Namespaces 

Namespaces and your XML store
For example, load this document into your favorite XML store API (DOM/XmlBeans/etc)
 <book title='Loving and Hating XML Namespaces'>
   <author>Derek Denny-Brown</author>
 </book>
Then add the attribute named "xmlns" with value "http://book" to the <book> element. What should happen? Should that change the namespaces of the <book> and <author> elements? Then what happens if someone adds the element <ISBN> (with no namespace) under <book>? Should the new element automatically acquire the namespace "http://book", or should the fact that you added it with no namespace mean that it preserves its association with the empty namespace?

In MSXML, we tried to completely disallow editing of namespace declarations, and mostly succeeded. There was one case, which I missed, and we have never been able to fix it because so many people found it and exploited it. The W3C's XML DOM spec basically says that element/attribute namespaces are assigned when the nodes are created, and never change, but is not clear about what happens when a namespace declaration is edited.

Then there is the problem of edits that introduce elements in a namespace that does not have an existing namespace declaration:
<a xmlns:p="http://p/">
  <b>
    ...
      <c p:x="foo"/>
    ...
  </b>
</a>
If you add attribute "p:z" in namespace "bar" to element <b>, what should happen to the p:x attribute on <c>? Should the implementations scan the entire content of <b> just in case there is a reference to prefix "p"?

Or what about conflicts? Add attribute "p:z" in namespace "bar" to the below sample... what should happen?
<a xmlns:p="http://p/" p:a="foo"/>

This problem really annoyed me while I was the PM for the System.Xml.XmlDocument class and the short-lived System.Xml.XPath.XPathDocument2. In the former, I found out that once you started adding, modifying and deleting namespace declarations the results would most likely be counter-intuitive and just plain wrong. Of course, the original W3C DOM spec existed before XML namespaces and trying to merge them in after the fact was probably a bad idea. With the latter class, it seemed the best we could do was try and prevent editing namespace nodes as much as possible. This is the track we decided to follow with the newly editable System.Xml.XPath.XPathNavigator class in the .NET Framework.
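The pitfall is easy to demonstrate in the .NET DOM. In this sketch, Derek's <book> document gets a default namespace declaration added after the fact; the serialized text then claims one namespace while the in-memory nodes still report another:

  using System;
  using System.Xml;

  class NamespaceEditDemo
  {
      static void Main()
      {
          XmlDocument doc = new XmlDocument();
          doc.LoadXml("<book title='Loving and Hating XML Namespaces'>" +
                      "<author>Derek Denny-Brown</author></book>");

          // Add xmlns="http://book" to the book element after creation.
          doc.DocumentElement.SetAttribute("xmlns", "http://book");

          // The serialized form now carries a default namespace declaration...
          Console.WriteLine(doc.OuterXml);

          // ...but the nodes were created in no namespace and stay there.
          Console.WriteLine("book namespace: '{0}'",
                            doc.DocumentElement.NamespaceURI);
      }
  }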

This isn't the most sleep-depriving issue I had to deal with when trying to reflect the decisions in various XML specifications in .NET Framework APIs. Unsurprisingly, the spec that caused the most debate amongst our team when trying to figure out how to implement its features over an XML store was the W3C XML Schema Recommendation part 1: Structures. The specific area was the section on contributions to the Post Schema Validation Infoset, and the specific infoset contribution which caused so much consternation was the validity property.

After schema validation, an XML element or attribute should have additional metadata added to it related to validation, such as its type, its default value (if one is specified in the schema) and whether or not it is valid according to its type. Although the validity property is trivial to implement on a read-only API such as the System.Xml.XmlReader class, it was unclear what would be the right way to expose this in other representations of XML data such as the System.Xml.XmlDocument class. The basic problem is "What happens to the validity property of the element or attribute, and those of all its ancestors, once the node is updated?" Once I change the value of an age element which is typed as an integer from 17 to seventeen, what should happen? Should the DOM test every edit to make sure it is valid for that type and reject it otherwise? Should the edit be allowed but the validity property of the element and all its ancestors be changed? What if there is a name element with required first and last elements and the user wants to delete the first element and replace it with a different one? How would that be reflected with regards to the validity property of the name element?

None of the answers we came up with to these questions was satisfactory. In the end, we were stuck between a rock and a hard place so we made the compromise choice. I believe we debated this issue every other month for about a year.
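Here's roughly how the dilemma shows up in the .NET 2.0 APIs that eventually shipped. This is just a sketch: the file names are illustrative, and assume person.xsd types the age element as xs:int.

  using System;
  using System.Xml;

  class ValidityDemo
  {
      static void Main()
      {
          XmlDocument doc = new XmlDocument();
          doc.Schemas.Add(null, "person.xsd");
          doc.Load("person.xml");
          doc.Validate(null);  // populates the post-schema-validation infoset

          XmlElement age = doc.DocumentElement["age"];
          Console.WriteLine(age.SchemaInfo.Validity);  // Valid

          // The edit at the heart of the debate: 17 -> seventeen.
          age.InnerText = "seventeen";

          // Should that assignment have thrown? Should Validity on age and
          // every ancestor now report Invalid or NotKnown? In the shipped
          // design the PSVI simply goes stale until you call Validate again.
          Console.WriteLine(age.SchemaInfo.Validity);
      }
  }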


 

Categories: XML

Derek Denny-Brown, the dev lead for both MSXML & System.Xml, who's been involved with XML since before it even had a name, has finally started a blog. Derek's first XML-related post is Where XML goes astray... which points out three features of XML that turn out to have caused significant problems for users and implementers of XML technologies. He writes

First, some background: XML was originally designed as an evolution of SGML, a simplification that mostly matched a lot of then existing common usage patterns. Most of its creators saw XML as evolving and expanding the role of SGML, namely text markup. XML was primarily intended to support taking a stream of text intended to be interpreted as a human readable document, and delineating portions according to some role. This sequence of characters is a paragraph. That sequence should be displayed with a link to some other information. Et cetera, et cetera. Much of the process of defining XML was based on the assumption that the text in an XML document would eventually be exposed for human consumption. You can see this in the rules for what characters are allowed in XML content, what are valid characters in Names, and even in "</tagname>" being required rather than just "</>".
...
Allowed Characters
The logic went something like this: XML is all about marking up text documents, so the characters in an XML document should conform to what Unicode says are reasonable for a text document. That rules out most control characters, and means that surrogate pairs should be checked. All sounds good until you see some of the consequences. For example, most databases allow any character in a text column. What happens when you publish your database as XML? What do you do about values that include characters which are control characters that the XML specification disallowed? XML did not provide any escaping mechanism, and if you ask many XML experts they will tell you to base64 encode your data if it may include invalid characters. It gets worse.

The characters allowed in an XML name are far more limited. Basically, when designing XML, they allowed everything that Unicode (as defined then) considered a ‘letter’ or a ‘number’. Only 2 problems with that: (1) It turns out many characters common in Asian texts were left out of that category by the then-current Unicode specification. (2) The list of characters is sparse and random, making implementation slow and error prone.
...
Whitespace
When we were first coding up MSXML, whitespace was one of our perpetual nightmares. In hand-authored XML documents (the most common form of documents back then), there tended to be a great deal of whitespace. Humans have a hard time reading XML if everything is jammed on one line. We like a tag per line and indenting. All those extra characters, just there so that our feeble minds could make sense of this awkward jumble of characters, ended up contributing significantly to our memory footprint, and caused many problems to our users. Consider this example:
 <customer>  
           <name>Joe Schmoe</name>  
           <addr>123 Seattle Ave</addr> 
  </customer>
A customer coming to XML from a database background would normally expect that the first child of the <customer> element would be the <name> element. I can't count how many times I had to explain that it was actually a text node with the value newline+tab.
...
XML Namespaces
Namespaces is still, years after its release, a source of problems and disagreement. The XML Namespaces specification is simple and gets the job done with minimum fuss. The problem? It pushes an immense burden of complexity onto the APIs and XML reader/writer implementations. Supporting XML Namespaces introduces significant complexity in the parsers, because it forces parsers to parse the entire start-tag before returning any text information. It complicates XML stores, such as DOM implementations, because the XML Namespace specification only discusses parsing XML, and introduces a number of serious complications to edit scenarios. It complicates XML writers, because it introduces new constraints and ambiguities.

Then there is the issue of the 'default namespace’. I still see regular emails from people confused about why their XPath doesn’t work because of namespace issues. Namespaces is possibly the single largest obstacle for people new to XML.

My experiences as the program manager for the majority of the XML programming model in the .NET Framework agree with this list. The above list hits the three most common areas where people seem to have problems when working with XML in the .NET Framework. His blog post makes a nice companion piece to my The XML Litmus Test: Understanding When and Why to Use XML article on MSDN.
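Derek's whitespace example is still easy to reproduce in System.Xml; a quick sketch:

  using System;
  using System.Xml;

  class WhitespaceDemo
  {
      static void Main()
      {
          XmlDocument doc = new XmlDocument();
          // Keep whitespace the way a plain XML parser sees it.
          doc.PreserveWhitespace = true;
          doc.LoadXml("<customer>\n\t<name>Joe Schmoe</name>\n</customer>");

          // Database folks expect <name> here; what they actually get is
          // a whitespace text node holding newline+tab.
          Console.WriteLine(doc.DocumentElement.FirstChild.NodeType);
      }
  }

With PreserveWhitespace left at its default of false the surprise disappears, which is essentially the trade-off Derek's team had to explain over and over.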


 

Categories: XML

Today Arpan (the PM for XML query technologies in the .NET Framework) and I were talking about features we'd like to see on our 'nice to have' list for the Orcas release of the .NET Framework. One of the things we thought would be really nice to see in the System.Xml namespace was XPath 2.0. Then Derek, being the universal pessimist, pointed out that we already have APIs that support XPath 1.0 which only take a string as an argument (e.g. XmlNode.SelectNodes), so we'd have difficulty adding support for another version of XPath without contorting the API.
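Derek's point in miniature (a sketch; books.xml is hypothetical):

  using System;
  using System.Xml;

  class XPathVersionProblem
  {
      static void Main()
      {
          XmlDocument doc = new XmlDocument();
          doc.Load("books.xml");

          // Nothing in this call says which XPath version the string is
          // written in, so silently swapping in a 2.0 engine could change
          // what existing queries return.
          XmlNodeList cheap = doc.SelectNodes("//book[price < 20]");
          Console.WriteLine(cheap.Count);
      }
  }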

Not to be dissuaded, I pointed out that XPath 2.0 has a backwards compatibility mode which makes it compatible with XPath 1.0. Thus we wouldn't have to change our Select methods or introduce new methods for XPath 2.0 support, since all queries that worked in the past against our Select methods would still work if we upgraded our XPath implementation to version 2.0. This is where Arpan hit me with the one-two punch. He introduced me to a section of the XPath 2.0 spec called Incompatibilities when Compatibility Mode is true which reads

The list below contains all known areas, within the scope of this specification, where an XPath 2.0 processor running with compatibility mode set to true will produce different results from an XPath 1.0 processor evaluating the same expression, assuming that the expression was valid in XPath 1.0, and that the nodes in the source document have no type annotations other than xdt:untypedAny and xdt:untypedAtomic.

I was stunned by what I read and I am still stunned now. The W3C created XPath 2.0, which is backwards incompatible with XPath 1.0, and added a compatibility mode option to make it backwards compatible with XPath 1.0, yet it actually still isn't backwards compatible even in this mode? This seems completely illogical to me. What is the point of having a backwards compatibility mode if it isn't backwards compatible? Well, I guess now I know that if we do decide to ship XPath 2.0 in the future, we can't just add support for it transparently to our existing classes without causing some API churn. Unfortunate.

Caveat: The fact that a technology is mentioned as being on our 'nice to have' list or is suggested in a comment to this post is not an indication that it will be implemented in future versions of the .NET Framework.


 

Categories: XML

I just read Jon Udell's post on What RSS users want: consistent one-click subscription where he wrote

Saturday's Scripting News asked an important question: What do users want from RSS? The context of the question is the upcoming RSS Winterfest... Over the weekend I received a draft of the RSS Winterfest agenda along with a request for feedback. Here's mine: focus on users. In an October posting from BloggerCon I present video testimony from several of them who make it painfully clear that the most basic publishing and subscribing tasks aren't yet nearly simple enough.

Here's more testimony from the comments attached to Dave's posting:

One message: MAKE IT SIMPLE. I've given up on trying to get RSS. My latest attempt was with Friendster: I pasted in the "coffee cup" and ended up with string of text in my sidebar. I was lost and gave up. I'm fed up with trying to get RSS. I don't want to understand RSS. I'm not interested in learning it. I just want ONE button to press that gives me RSS.... [Ingrid Jones]

Like others, I'd say one-click subscription is a must-have. Not only does this make it easier for users, it makes it easier to sell RSS to web site owners as a replacement/enhancement for email newsletters... [Derek Scruggs]

For average users RSS is just too cumbersome. What is needed to make it simpler to subscribe is something analogous to the mailto tag. The user would just click on the XML or RSS icon, the RSS reader would pop up and would ask the user if he wants to add this feed to his subscription list. A simple click on OK would add the feed and the reader would confirm it and quit. The user would be back on the web site right where he was before. [Christoph Jaggi]

Considering that the most popular news aggregators for both the Mac and Windows platforms support the "feed" URI scheme, including SharpReader, RSS Bandit, NewsGator, FeedDemon (in next release), NetNewsWire, Shrook, WinRSS and Vox Lite, I wonder how long it'll take the various vendors of blogging tools to wake up and smell the coffee. Hopefully by the end of the year, complaints like those listed above will be a thing of the past.
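For anyone who hasn't seen it, the "feed" URI scheme is exactly the mailto-style mechanism Christoph asks for above; a subscribe link like this hypothetical one hands the feed to whatever aggregator is registered for the scheme instead of dumping raw XML in the browser:

  <a href="feed://example.com/blog/rss.xml">Subscribe to this weblog</a>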


 

Categories: RSS Bandit | Technology

It must be fun being a rapper. What other job do you have where your kids listen to the radio and ask you "Why is some punk like Ja Rule dissing you?" and not only do you get to respond with a diss of your own but get paid in the process? That's like getting paid for being in an online flame war. Money for nothing, I want my MTV.

Speaking of MTV, I saw another episode of Punk'd last night and I think the show is pure genius. Candid Camera for the MTV generation.

Thoughts on organizing a roadtrip to Applied XML DevCon West, followup to the Nigerian presidential elections posts, and how MSDN folks are handling feedback on their recent redesign.

Poll: Which Talk Would You Prefer To See At Applied XML DevCon?

 


 

Categories: