More On Corporate Secrets
Since Scoble posted his original entry on
corporate secrets, at least two people have made a
comment I had in my initial response but thought
better of and edited out. Both
Eric Gunnerson (a PM on the C# team) and
John Dowdell (a Macromedia guy) point out that
a good reason to keep product information secret is
that until you've actually shipped it to the
customer things can change.
Even though I've only been on the inside of the
B0rg cube for about 1.5 years, I've already seen quite a
few projects or initiatives go from "We're
definitely going to ship this" to "It's no longer a
high priority" to "It's cut". From bug fixes and
minor API tweaks to significant technologies that
have the technology press buzzing, previously
green-lighted projects can end up being axed at what
sometimes looks like the drop of a hat but
typically for practical-sounding reasons (like
manpower, time constraints, or negative initial
customer feedback).
It's now really easy to see how and why the B0rg
often gets tarred with the vaporware brush by
detractors.
#
Better Together
Don Box has
a post where he states that it takes 3 to 4 years
to ship something like .NET Framework + VS.NET.
Although at first glance his statement seems to be
in conflict with the recently posted
Microsoft Developer Tools Roadmap 2003-2005,
from my perspective Don paints an accurate picture.
At the current pace, it'll probably take three to
four years for every feature I've specced or
proposed in the few months I've been a PM
to eventually make it to customers. Some will show
up in the next release, some may show up the
release after while others may never show up at
all.
There are a number of reasons for this situation
including resource constraints and scheduling
issues. One reason that keeps turning over
and over in my mind, and which Don's post brought
bubbling to the surface, is the doctrine of
Better Together which I mentioned
in a post in March in response to
David Stutz's farewell email. I wrote:

There is another perspective to the
above quotes I've been thinking about for the
past weeks ever since David initially forwarded
me the email. Spurred by development at "Internet
time", epitomized by companies like Netscape
during the Dot Bomb boom and by the Open Source
community, the software
industry for the most part is embracing the
practice of releasing early and releasing often.
However, a business model that is based on your
various components working "better together" and
being a "unified platform" essentially states
that this software will not be released
often when compared to the rest of the software
industry.
I'm of two minds about the approach taken by
Don's team to tackle this issue. On the one hand,
it's rational to think that in the rapidly evolving
world of XML technologies, shipping at the rate
of thrice a decade isn't feasible and will
either leave you behind the industry or have you
shipping stuff too early to meet your product
cycles (e.g. considering shipping a SOAP 1.2
implementation before the spec was a W3C
recommendation). On the other, it is a lot harder
for other components or products to take
dependencies on your bits if they are not a part of
the overall framework but instead are a separate
component. This reminds me of the Java world, where
it isn't unusual to see applications that, although
Write Once, Run Anywhere with regard to the
JVM and core Java classes, face problems if third
party components (like XML parsers) are not
the right version on the target machine. So
applications not only need to be tested against
particular JVMs but also against the interactions
between different versions of third party
components.
#
On Strongly Typed Infosets
I saw
Ted Neward's response to
my comments about his article on
Strongly Typed Infosets in the .NET Framework.
Further comments below.
"his approach means modifying your classes to
subclass DOM nodes to get the behavior he
wants." Which I do note is a consequence of
using this approach, do I not? I'm not suggesting
that this is an approach that everybody using the
System.Xml namespace must adopt immediately upon
pain of death; I'm suggesting that this is one
approach to "having your cake and eating it too"
assuming you're willing to put up with the
consequences. That's a classic pattern approach.
I don't consider "my classes must subclass
XmlElement and XmlDocument" having my cake and
eating it too. C# classes generated based on the
schema for an XML document are strongly typed
whereas the DOM is weakly typed. Besides the fact
that you now have to alter your class hierarchy to
implement the solution described in the article,
you have not created a strongly
typed infoset but a weakly typed one. If
there is some confusion about why this is weakly
typed and not strongly typed, read this brief
description of strong vs. weak typing.
Now consider that in the article the element
declaration for the
age
element is
<xs:element type="xs:int" name="age"
minOccurs="0" maxOccurs="1" />
and note that since the Person class derives
from XmlElement, one can set the value of the
age
element using regular DOM methods
and there is no enforcement that the value is
actually an int or that one doesn't add child nodes
to the
age
element. This means that
the Person class is not really strongly
typed.
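A short hypothetical sketch of that loophole (the Person and PersonDocument class names and the person.xml file follow the article's setup; the exact member names are assumptions):

```csharp
// Hypothetical sketch: Person derives from XmlElement per the article's
// approach, so plain DOM calls can violate the schema without complaint.
XmlDocument doc = new PersonDocument();
doc.Load("person.xml");
Person person = (Person) doc.DocumentElement;

// The schema declares age as xs:int with no child elements, and yet:
XmlElement age = (XmlElement) person.SelectSingleNode("age");
age.InnerText = "twenty-three";              // not an int -- no error
age.AppendChild(doc.CreateElement("extra")); // child element -- no error
```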
There are ways to improve the implementation and
usage pattern for the Person class such as
providing strongly typed setter properties for the
class instead of just getters and ensuring that
when loading or saving the class this is done
through an XmlValidatingReader and not just an XML
reader so the contents of the document are checked
against the schema.
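A sketch of those two mitigations, with assumed member names:

```csharp
// Sketch (member names are assumptions): a strongly typed setter that
// only admits ints, replacing raw DOM access to the age element.
public int Age {
    get { return XmlConvert.ToInt32(this["age"].InnerText); }
    set { this["age"].InnerText = XmlConvert.ToString(value); }
}

// Loading through an XmlValidatingReader so the document's contents are
// checked against the schema instead of being accepted blindly:
XmlValidatingReader reader = new XmlValidatingReader(
    new XmlTextReader("person.xml"));
reader.ValidationType = ValidationType.Schema;
reader.Schemas.Add(null, "person.xsd");
personDocument.Load(reader); // throws XmlSchemaException on invalid input
```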
Of course, it should be noted that although the
built-in Object<->XML mapping technology in the .NET
Framework (XSD.exe aka the XmlSerializer class)
provides access to XML documents as strongly typed
objects, the generated objects are not as strongly
typed as the XSD schema mandates due to impedance
mismatches between the CLR type system and that of
W3C XML Schema. I'm actually on the hook to write
an article on exactly what XSD constraints are
unenforced by the XmlSerializer class since I made
a verbal promise to
Doug that I'd do
this a while ago.
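As one hypothetical illustration of such a mismatch (the promised article would catalog the full set):

```csharp
// The schema fragment below restricts name to at most 10 characters:
//
//   <xs:element name="name">
//     <xs:simpleType>
//       <xs:restriction base="xs:string">
//         <xs:maxLength value="10"/>
//       </xs:restriction>
//     </xs:simpleType>
//   </xs:element>
//
// but xsd.exe generates a plain string field, so the maxLength facet
// is silently dropped -- nothing stops a 200-character value here.
public class Person {
    public string name;
}
```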
"There is also the fact that he talks about
using XML namespaces as a versioning mechanism
when in fact it is anything but." Actually, I
thought I was emphasizing the idea that
namespaces can be used as a way of offering
evolutionary freedom to an XML document, but
frankly that's more tangential to the point of
the paper itself, so probably isn't worth
debating here at the moment
I didn't see anything about namespaces providing
evolutionary freedom but did notice the fact that
his class code was brittle enough that when he
"versioned" the XML using namespaces the Person
class's code had to be altered to work with the new
version. Using XML namespaces as a versioning
mechanism breaks both forwards compatibility (old
applications being able to read the new format) and
backwards compatibility (new applications being
able to read the old format).
"I'd have built an ObjectXPathNavigator which
enables you to treat an arbitrary object graph as
an instance of the XPath data model."
Absolutely! Again, nothing stops you from doing
this, although the XPath support within the
XmlDocument-and-friends representation is
somewhat optimized in ways that an
ObjectXPathNavigator might not be, and this gives
you just XPath navigation--not the silent
inclusion of evolutionary data that the
strongly-typed Infoset approach gives you. XPath
is nice, but it's only one of the listed
advantages.
Actually the ObjectXPathNavigator approach allows
you to create truly strongly typed infosets as
opposed to the weakly typed infosets described in
the article. With the ObjectXPathNavigator approach
I can
- Write a schema with well defined points of
extensibility using the xs:any and
xs:anyAttribute wildcards
- Map the schema to strongly typed C# classes
using the XmlSerializer class, which maps the
wildcards to one or more instances of the
XmlNode class. This means I have well defined
points where my object model is accessible as
weakly typed or untyped XML and where it is
accessible as strongly typed C# objects.
- Use ObjectXPathNavigator when I want to treat
the C# objects as an XML infoset, such as querying
it with XPath or transforming it with XSLT.
This sounds a lot more like having my cake and
eating it too. Along with the added benefit that my
classes don't have to be derived from some XML node
classes.
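The three steps can be sketched roughly as follows (the ObjectXPathNavigator constructor signature is assumed from the MSDN sample; the Person class, member names and file names are made up for illustration):

```csharp
// 1. The schema contains an xs:any wildcard; the generated class maps
//    it to XML nodes via [XmlAnyElement] -- a well defined point where
//    the object model is accessible as untyped XML.
public class Person {
    public string name;
    public int age;

    [XmlAnyElement]
    public XmlElement[] extensions; // XmlElement derives from XmlNode
}

// 2. Strongly typed access via the XmlSerializer...
XmlSerializer serializer = new XmlSerializer(typeof(Person));
Person p = (Person) serializer.Deserialize(
    new XmlTextReader("person.xml"));

// 3. ...and infoset access over the same object graph:
XPathNavigator nav = new ObjectXPathNavigator(p);
XPathNodeIterator ages = nav.Select("/Person/age");
```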
"There are some issues with the implementation
in the article such as the fact that it doesn't
handle nested XML in the way people would expect
(e.g. if your class has a property of type XML
node)" I'm not sure what you mean by this,
and would love to include any edge cases in a
future rev of the paper. (Translation: if you
send me an example and a brief explanation of the
behavior that will be counterintuitive, I'll put
it into the paper and re-release it ASAP.)
"and the fact that one can't customize the XML
view shown by the ObjectXPathNavigator (for
example by annotating the class with attributes
from the System.Xml.Serialization
namespace)." Again, I'm not exactly sure what
you mean here--can you elaborate?
Both my comments were addressing limitations of the
implementation of ObjectXPathNavigator in the
Extreme XML column rather than Ted's
code.
The first comment references the fact that the
implementation of ObjectXPathNavigator provided on
MSDN does not handle nested instances of
IXPathNavigable. This means that a strongly typed
C# class whose fields or properties are instances
of XmlNode, XPathDocument or any other instance of
IXPathNavigable would not have them recognized as
nested XML infosets during navigation with the
ObjectXPathNavigator.
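A hypothetical illustration of that first limitation (class and member names are made up):

```csharp
// The body field below holds XML that is itself navigable, but the
// MSDN ObjectXPathNavigator treats it as an opaque object instead of
// exposing it as a nested infoset during XPath navigation.
public class Report {
    public string title;
    public XmlNode body; // IXPathNavigable content, not surfaced as XML
}
```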
The second comment was about the fact that the
ObjectXPathNavigator has a default mapping of
fields/properties to XML which one may want to
customize by annotating the class with the
attributes from the
System.Xml.Serialization namespace used to
describe Object<->XML mappings to the
XmlSerializer.
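A sketch of the kind of customization meant here, using the standard serialization attributes (which the MSDN implementation currently ignores in favor of its own fixed mapping; the class is made up):

```csharp
// With the XmlSerializer these attributes reshape the XML view;
// one may want ObjectXPathNavigator to honor them too.
public class Person {
    [XmlAttribute("id")]       // surface as an attribute, not an element
    public int Id;

    [XmlElement("full-name")]  // rename the element in the XML view
    public string Name;
}
```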
I'll attempt to address both issues in a future
Extreme XML column.
#
DEDICATIONS: This K5 diary entry is dedicated to
this post in my previous diary.
--
Get yourself a
News Aggregator and subscribe to my
RSS feed.

Disclaimer: The above comments do not
represent the thoughts, intentions, plans or
strategies of my employer. They are solely my
opinion.