Friday, July 22, 2005
I was later asked how the development team adjusted to a world without strong types. Was there a riot? Actually, there were worries about type safety, about conversion between XML simple types and C# intrinsic types (yes, dates had to be closely managed), and about the lack of a nice object hierarchy.
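To give a flavor of the date-handling care required (a minimal sketch in Python rather than the C# we actually used; the value is made up), xs:dateTime permits a trailing "Z" that many language-native parsers choke on:

```python
from datetime import datetime

def parse_xsd_datetime(value: str) -> datetime:
    # xs:dateTime allows a trailing 'Z' for UTC; some ISO-8601 parsers
    # reject it, so normalize it to an explicit offset before parsing.
    return datetime.fromisoformat(value.replace("Z", "+00:00"))

dt = parse_xsd_datetime("2005-07-22T13:30:00Z")
```

It's exactly this kind of small impedance mismatch, multiplied across every simple type, that had to be managed by hand.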
There was one complaint, though: no Intellisense.
Is that all it would take to push XML, web services, etc. to some next level of adoption? So, for all of the Visual Studio and Eclipse dudes out there, how hard could it be to attribute some DOM type with a schema QName and then get some IDE help when creating an XPath statement? If it’s doable, let me know!
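Without that tooling, XPath expressions are just untyped strings handed to an API. A quick sketch of the status quo, using Python's ElementTree (which supports only a subset of XPath 1.0; the document is hypothetical) — note that no IDE can tell you whether "qty" is a valid step:

```python
import xml.etree.ElementTree as ET

doc = ET.fromstring(
    "<order><item sku='A1'><qty>2</qty></item>"
    "<item sku='B2'><qty>5</qty></item></order>"
)
# The path below is an opaque string to the IDE; a schema-annotated
# DOM type could validate each step as you type it.
qtys = [int(e.text) for e in doc.findall("./item/qty")]
```

A typo like "./item/qyt" would fail silently at runtime with an empty result — exactly what schema-aware Intellisense could catch at edit time.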
One other issue was that Intranet single-sign-on is still hard and the solutions are very platform-specific. WS-Security is probably a great next step, but unless it gets implemented in the browsers (part of XMLHTTPRequest?) it will have to come by way of a plug-in.
Some people look at the DOM API and see a big stick of ugly. But the complaining dies down once people realize how composable and interoperable XML messages can be. The skillsets are also reusable (meaning lucrative) in web programming.
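For what it's worth, composing a message through a DOM-style API is not that painful. A hypothetical message, sketched with Python's ElementTree rather than a full W3C DOM:

```python
import xml.etree.ElementTree as ET

# Build the message tree node by node (element names are illustrative).
envelope = ET.Element("message")
header = ET.SubElement(envelope, "header")
ET.SubElement(header, "to").text = "urn:inventory"
body = ET.SubElement(envelope, "body")
ET.SubElement(body, "sku").text = "A1"

xml_text = ET.tostring(envelope, encoding="unicode")
```

The same few calls work for any vocabulary, which is where the reusable-skillset argument comes from.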
Thursday, July 14, 2005
I attended the W3C Workshop on XML Schema 1.0 User Experiences (the program and minutes are here). My job was to represent the position of the WS-I, where I chair the XML Schema Work Plan Working Group. It was my first W3C event. It went well and I appreciate the W3C having a public event like this. What I didn’t expect was so much new support for profiling the XML Schema spec.
Just to restate the problem: XML Schema has constructs not easily expressible in most programming languages or databases. But I (and others) don’t see this as an interoperability problem – it’s a toolkit problem. Despite some shortcomings, XML Schema can unambiguously describe the format of an XML document. So, the idea of creating an XML Schema “profile” to improve web service interoperability doesn’t add up to me.
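To illustrate why this is a toolkit problem rather than an interoperability problem: a construct like xs:union validates precisely and unambiguously on the wire, yet has no direct analogue in most statically-typed languages (the type name and members below are hypothetical):

```xml
<!-- Validates cleanly, but what C# or Java type should a toolkit
     generate for it? The spec is unambiguous; the binding is hard. -->
<xs:simpleType name="SkuOrCount">
  <xs:union memberTypes="xs:string xs:positiveInteger"/>
</xs:simpleType>
```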
Microsoft proposed that toolkit vendors should fully support the XML Schema spec (whew!), but that a number of “domain-specific” profiles would be “useful”. SAP said that XML Schema profiles are “inevitable and useful for language bindings, business vocabularies, and for specific user communities”.
So, the W3C might start looking into if/when/how to develop a set of domain-specific profiles. Is this a good idea? Are web services a domain unto themselves? Will it be ironic when schema profiles are authored using xs:redefine?* On the surface, it sounds useful, but let me overreact in my (usual) conflicted manner anyway.
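In case the irony needs spelling out, here is roughly what that would look like — a hypothetical profile schema narrowing a base type via xs:redefine, one of the very constructs a profile would likely prune (file and type names are made up):

```xml
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <!-- Profile the base vocabulary by restricting one of its types. -->
  <xs:redefine schemaLocation="base-types.xsd">
    <xs:simpleType name="Quantity">
      <xs:restriction base="Quantity">
        <xs:maxInclusive value="100"/>
      </xs:restriction>
    </xs:simpleType>
  </xs:redefine>
</xs:schema>
```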
On one hand, it would be nice if the Microsoft, IBM, Sun, and BEA toolkits could accurately reconstitute types from each other’s generated schema documents. And if you ask them, this is what they say their customers want.
But what if the WS-I gets pressure to add XML Schema profile conformance to the Basic Profile? It’s easy to see how this would happen. There are WS-I members who insist that the difficulty of mapping schema constructs into programming languages is a huge problem in developing web services. Also, the WS-I Board is dominated by toolkit vendors who have permanent seats and would like some sort of collective win to come of all this.
This could cause already tenuously adopted standards to diverge altogether. Many will not like a standards organization (whichever one) dictating a programming model, which in effect such a profile would do. And we all know there are already major, public web service implementers who more or less avoid the WS-* world altogether.
I am hopeful that RPC semantics will fade significantly from web services development by the time a language binding profile for XML Schema makes it to toolkits. Loose coupling means much more than “you don’t need my binaries to call my code”. It’s about building services that can evolve robustly and safely. One day, toolkit vendors will see that binding messages (or their parts) to static types – while seemingly the Holy Grail today – actually gets in the way of loose coupling and dynamic systems.
Ironically, XML is probably more natural than an object graph in representing a lot of real-world data. In fact, schema-relational mapping is actually easier to accomplish than object-relational mapping. Isn't it strange that people complain about the overhead of XML Schema validation, but have no problem with having their messages shredded value-by-value into a statically-typed object graph which is then re-shredded into a bunch of SQL? Why so much focus on the lowest-value part of the process?
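A sketch of the schema-relational point (hypothetical message and table, using Python's ElementTree and sqlite3): XML values can go straight into SQL, with no intermediate statically-typed object graph to shred and re-shred through:

```python
import sqlite3
import xml.etree.ElementTree as ET

doc = ET.fromstring(
    "<order id='7'><item sku='A1' qty='2'/><item sku='B2' qty='5'/></order>"
)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE item (order_id INTEGER, sku TEXT, qty INTEGER)")

# Map message values directly to rows -- no object graph in between.
conn.executemany(
    "INSERT INTO item VALUES (?, ?, ?)",
    [(doc.get("id"), i.get("sku"), int(i.get("qty")))
     for i in doc.findall("item")],
)
total = conn.execute("SELECT SUM(qty) FROM item").fetchone()[0]
```

Hierarchical document, flat rows, done — which is why the double shredding through an object layer looks like wasted motion to me.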
Improving toolkits in order to make XML processing more amenable to today’s object-oriented approaches is NOT** the way to achieve better service-oriented solutions. Concocting a subset of the XML Schema specification just to provide more convenient programming experiences is a capitulation in the name of the wrong goal.
* This bit of attempted humor is homage to my friend David Ing, who also has seemingly proven that footnotes are indispensable in blogs.
** An earlier version of this post was missing this rather important word!