Sr. Director, Product Research, Epicor Software

Software Architecture

This blog is not associated with my employer.

Friday, September 28, 2007

Software Architecture as Principles

So, I had The Talk with my 10-year-old son today. He was confused and even became a bit emotional as the gravity of the facts emerged. The discussion was, of course, about copyright law.

He had burned a CD containing tracks from his iTunes library to give to a friend as a birthday gift. So, we talked about how this was in fact stealing and that we should just go buy a new copy of the music outright, etc. Here’s a sampling of his questions from the discussion:

Q: What if my friend has some songs on his iTunes, but his CD burner is broken. I have the same songs, so can I burn them from my computer and give him the CD? A: Um, maybe that’s OK. I don’t know.

Q: How come it’s OK to lend out PlayStation disks? A: Easy (whew!), because while they borrow it, you are not using it (note to self: is that really legal?).

Q: So, can I burn a CD, give it to someone, and just not listen to the songs myself until I get the CD back? A: Um, you’re late for school – off you go.

Like software architecture (and legal systems), copyright is a principles-based rather than a rules-based concept because it’s impossible to precisely spell out, up-front, all actions that constitute non-compliance. The principle says you can’t disseminate the work of others without permission. Laws assign penalties to broad forms of violations like producing counterfeit software. Court precedents over time develop the more specific lists of what’s OK (backing up your iTunes) and not OK (giving out copies).

Software architecture works the same way. Software architectures are collections of principles that define guidance and boundaries for producing software systems. Lower-level guidance comes in the form of standards and practices for developing a software system that conforms to the principles in the architecture.

Principles-based regulation means that laws are enforced by examining the intent of people rather than reconciling deeds against a roster of specifically banned actions. The Enron scandal, it’s said, grew unnoticed for years because Enron parsed the regulations in bad faith and created scenarios that somehow convinced Arthur Andersen to sign off on audits. Enron and Arthur Andersen both knew that accounting principles (and the law) were being violated but felt relatively safe because any by-rote analysis of their accounts against the rules (as written) would come up clean.

UK regulators like to say they have no Enron situations because UK accounting standards are principles-based from the outset. I don’t know how true that is, but the infamous U.S. Sarbanes-Oxley Act of 2002 directs the U.S. Securities & Exchange Commission to examine the feasibility of moving the U.S. to a principles-based accounting standards system.

Getting (at last) back to software architecture, I work for an independent software vendor (ISV) in the enterprise resource planning (ERP) market. One of the characteristics of an ISV is that we are highly principles-based and generally don’t rely on thick volumes of specific details telling our engineers exactly what to do. Sure, we have standards around code and UI styles. But developers are taught the principles within the product architectures and the intent behind the functionality. That in turn helps prevent issues like “the spec didn’t specifically say the shipment quantity had to be a positive number”.

As we expanded product development overseas, we didn’t rely on outsourcing to contractors. We tried it, but it was too hard to convey the architectural principles and the business intent to a transient staff 13.5 time zones away. Without the principles as context, we had to type reams more specs, and the results consistently had to be reworked. We wondered whether our processes were just plain sloppy. But that wasn’t the case. Our development model just didn’t fit an environment where people came only with bespoke capabilities and never developed any lasting interest in our objectives.

Instead, we opened development offices staffed by our own full-time employees. That meant we could convey our architectural principles and development standards, and train the teams up on business functions like cost accounting and manufacturing. Permanent employees doing the work, no matter where in the world, turned out to be cheaper than just outsourcing to a contractor. More importantly, we realized much better agility.

The problem with developing long lists of rules is that they are expensive to maintain and easy to evade when the pressure is on. I would rather state a principle like “creating new records for purpose X MUST minimize the DB batch request count and MUST NOT, under typical loads, hold locks on Y long enough to significantly affect other processes” and then test for compliance, rather than using, say, code reviews and a checklist to attempt to spot things that potentially violate the principles.
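
To make “test for compliance” concrete, here’s a rough sketch of what such a test could look like. Everything in it is hypothetical: open_test_connection(), create_records(), and the budget numbers are stand-ins for whatever your data layer and workload actually look like.

# A minimal sketch of testing a principle rather than reviewing against a checklist.
# All names and thresholds here are hypothetical placeholders.

import time
import unittest


class InstrumentedConnection:
    """Wraps a real DB connection and counts batch requests sent to the server."""

    def __init__(self, real_connection):
        self.real = real_connection
        self.batch_requests = 0

    def execute(self, sql, params=None):
        self.batch_requests += 1  # one round trip per execute() in this sketch
        return self.real.execute(sql, params or [])


class NewRecordPrincipleTest(unittest.TestCase):
    MAX_BATCH_REQUESTS = 3       # budget implied by the principle, not a magic number
    MAX_ELAPSED_SECONDS = 0.5    # crude proxy for "does not hold locks long enough to hurt others"

    def test_creating_records_stays_within_budget(self):
        conn = InstrumentedConnection(open_test_connection())    # hypothetical helper
        started = time.monotonic()
        create_records(conn, purpose="X", count=100)              # hypothetical code under test
        elapsed = time.monotonic() - started

        self.assertLessEqual(conn.batch_requests, self.MAX_BATCH_REQUESTS)
        self.assertLessEqual(elapsed, self.MAX_ELAPSED_SECONDS)
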
For us – a 400-person development organization – agility means efficiently releasing products year after year that keep pace with rapidly changing markets. Believe it or not, technology shifts aren’t nearly as acute for us as market shifts from compliance and globalization. I never need to find 150 Ruby programmers on a moment’s notice. I need 150 people to understand how a chart of accounts works and how to make one that can be legally used worldwide.

So, while we don’t do many scrums in our product development cycle, we also don’t do waterfall management. The middle ground works because our jobs revolve around incremental evolution of a few systems. It’s an easy place for principles-based management to work. Software architecture perfection to me is nothing but a short list of general capabilities and a long list of non-functional requirements. Those are the principles. Standards and practices do the rest.

My son says he likes knowing how music is copyrighted. The thought that he might be letting down the people who really own the recordings actually horrified him (much more than it did me, anyway). By understanding the principles, he can (hopefully) figure out for himself what scenarios violate the law. Now, I guess we’ll have to do the “fair use” talk pretty soon, though. What other talks am I forgetting about?

Wednesday, September 19, 2007

Corporationlets

In the Financial Times on 13 July 2007 (European Edition, special insert), Lynda Gratton asserts that collaboration is poised to upstage competition as a primary business strategy – that partnerships can create value more efficiently than pure competition. The X and Y generations were born wired to the Web, have been raised in an “everyone wins” mentality, and are averse to organizational hierarchies. So, once that new generation rises to corporate power, partnerships will form the dominant agent of business value creation. That last bit strikes me as unsubstantiated age discrimination – but creating value via joint ventures obviously works.

Since the focus of the piece was collaboration, describing Second Life as a vehicle for collaboration couldn’t be helped. Gratton mentions the “experiments” at IBM (“very successful”) to use Second Life for team interaction. Given that the GenY mastery of social networks is so entrenched, something akin to Second Life could become the wheel grease of tomorrow’s joint ventures. After all, any mass proclivity toward partnerships requires a place where participants can sniff each other out – an online speed-dating environment for stakeholders.

A gold rush has started to converge social networking and business strategy. The idea has moved beyond making money by connecting people together – that’s so last month. People need to make money for themselves by easily forming ad-hoc commercial partnerships. That kind of value creation can reap a huge economic benefit, especially if small businesses become enfranchised. So, the European Commission is trying to act on this vision.

The EU25 is home to 23 million individual businesses, of which more than 90% are “micro-enterprises”. In the EU15, only 10% of businesses have any computer integration with either side (supply and demand) of their supply chain*. Connecting these 23 million companies, so they can inexpensively form partnerships and new supply chains, is part of the EC’s economic thinking. It also ties back to Gratton’s point about next-generation entrepreneurs choosing joint venture strategies over brute competition.

Micro-enterprises – like your corner laundry – are not generally capable enough to build inter-enterprise orchestrations. But there is one IT skill that the entire population has in-hand: Web browsing. So, the EU is smitten with the notion that Web 2.0, the Social Web, Second Life-ish solutions, SOA, and SaaS can all come together to form a solution – and so the European Commission is funding efforts [cordis.europa.eu] toward that goal. It’s a Grand Unification where businesses find each other and collaborate like Second Life players while the business transactions are easily, properly, and legally pumped across business entity boundaries.

It’s certainly my nature to drool over the software architecture challenges in building this dream. But my biggest worry, if I were picked to do the software estimate**, is determining when you’ve done enough to displace existing business practices.

Supply chain commerce, commodity purchasing, service industries, and consumer businesses all have buyers and sellers. But the operational characteristics of how buyers find sellers, how trust chains are established, how contracts are executed, and how payments are managed are highly evolved and optimized within each sector. Can a solution that undermines the efficiencies in business growth developed during the last century be successful?

Here’s an example: A few decades ago, businesses (especially manufacturers) kept many active suppliers on file. It was believed that having many suppliers for a widget meant more competition for your business, which kept purchase prices low. But it was later realized that reducing the number of widgets kept in on-hand inventory improved profits much more than buying your widgets at rock-bottom prices.

Just-in-time (JIT) manufacturing was invented for this reason. Material arrived at the precise moment and location where it would be consumed – which meant daily, weekly, or ad hoc replenishments. Perfection meant having zero widgets sitting in your warehouse. Purchasing strategies were simplified – you only have one or two suppliers on long-term contracts rather than having to manage bidding cycles with multiple suppliers. But there were also the matters of trust and reliability. If a supplier failed to deliver, production would likely shut down and employees would be furloughed.

The supply chain partnerships in JIT operations created more value than traditional brute competition. Manufacturers gave up continuously fighting to get the best deal from a transient supplier in favor of strategic arrangements with a single supplier. Boeing and Airbus have gone even farther by having their suppliers share risk in major programs. Suppliers absorb some up-front costs in exchange for revenue to come later if the airplane sells well in the marketplace.

But doesn’t that fly in the face of the model the EU is envisioning? The goal is to foster easy, ad-hoc partnerships with extremely low cost of entry. However, manufacturers revolutionized their businesses by getting away from a scattered roster of transient suppliers. Is the EU looking for technology to bring enterprises together despite these entrenched, non-technical realities?

The answer involves, as usual, a tipping point. Some sort of foundation technology – the goo in the Petri dish that feeds life – needs to be developed (no small task). Then, an entire population has to learn that partnering is feasible, online or not, and they have to know how to do it legally. Finally, participants have to know how to measure the quality of a trust chain.

From a business-to-business perspective, this is the final frontier of the Internet. It’s achievable and inevitable if the IT industry can get non-technical stakeholders into the game.

* These figures are from Eurostat via a keynote presentation by Christina Martinez of the European Commission at the 2007 WS-I Spring Plenary in Brussels.

** My estimating practices were pretty much codified here in 2005.

Thursday, September 13, 2007

REST by way of SOAP

It’s OK to love REST (as I do), but I try not to let it blind me completely to WS-*. As I’ve often said, choosing REST – or any resource-oriented architecture – means you’ve decided to adopt a uniform interface. That does not necessarily mean dropping WS-*. Unfortunately, SOAP 1.0 emphasized an RPC programming model even though a pure message-based model was included as well. SOAP v1.2 sort-of/kind-of attempted to make amends:

Some early uses of [SOAP 1.1] emphasized the use of this pattern as means for conveying remote procedure calls (RPC), but it is important to note that not all SOAP request-response exchanges can or need to be modelled [sic] as RPCs.

So, SOAP from the beginning identified two payload models: the Document model indicated that the payload is simply a message, while the RPC model meant that elements within the payload should be mapped to functions and parameters.

SOAP also included a basic type system – SOAP Encoding – to help implementers know how to serialize objects defined – as Don Box might put it – in languages that use dots. It had a rather short life because the W3C was about to deliver XML Schema, and having two type systems in SOAP wasn’t going to be helpful. For some reason, using XML Schema to define a message format was called Literal.

So, SOAP messages could be considered RPC-Encoded, RPC-Literal, Document-Encoded, or Document-Literal. The WS-I Basic Profile working group unanimously banned Document-Encoded and RPC-Encoded because they were obsolete, leaving *-Literal in play.
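
To make the difference concrete, here’s roughly what the two surviving styles look like on the wire. The namespace, operation, and element names below are invented for illustration; only the SOAP 1.1 envelope namespace is real.

# Illustration only: the urn:example namespace, operation, and element names are made up.

RPC_LITERAL_BODY = """
<soap:Body xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"
           xmlns:m="urn:example:orders">
  <!-- RPC/Literal: the child of Body names the *operation* (GetOrder) and its
       children are the call's parameters. -->
  <m:GetOrder>
    <orderId>42</orderId>
  </m:GetOrder>
</soap:Body>
"""

DOCUMENT_LITERAL_BODY = """
<soap:Body xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"
           xmlns:m="urn:example:orders">
  <!-- Document/Literal: the child of Body is just a schema-described message;
       nothing in it is inherently a function name or a parameter list. -->
  <m:OrderQuery>
    <m:OrderId>42</m:OrderId>
  </m:OrderQuery>
</soap:Body>
"""

print(RPC_LITERAL_BODY)
print(DOCUMENT_LITERAL_BODY)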

But what people generally don’t know is that some members, including Microsoft and IBM, argued hard to go farther and eliminate RPC-Literal as well. Tim Ewald had said “the message is the task”, meaning a payload identified with a namespace URI is all you need to dispatch a call. But others (like JAX-RPC users) argued that keeping RPC was critical for the adoption of web services.

In an attempt to mollify the RPC advocates, Tim, Don Box, and Keith Ballinger went so far as to write up a spec for mapping object graphs to an XML tree so implementers could align their type builders and serializers using a common interpretation of a message schema. The idea was to standardize an object-graph-to-XML conversion and include it with the WS-I Basic Profile.

For bureaucratic reasons, the WS-I couldn’t publish it (but you can find it: USPTO Application 20040239674). BTW, think about how useful that (now patent-pending) spec would be today in standardizing JSON/XML conversions. It might have also helped out these guys [W3C XML Schema Patterns for Data Binding].
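
The general idea, mapping an object graph mechanically onto an XML tree, is easy to sketch. The toy rules below are not the spec’s rules; they just illustrate the kind of conversion being standardized (the Order shape is invented):

# A rough sketch of the general idea only; these mapping rules are NOT the spec's.

import xml.etree.ElementTree as ET


def to_element(name, obj):
    """Recursively turn a plain Python object graph into an XML element tree."""
    element = ET.Element(name)
    if isinstance(obj, dict):
        for key, value in obj.items():
            element.append(to_element(key, value))
    elif isinstance(obj, (list, tuple)):
        for item in obj:
            element.append(to_element("item", item))
    else:
        element.text = str(obj)
    return element


order = {"OrderId": 42, "Lines": [{"Part": "widget", "Qty": 3}]}
print(ET.tostring(to_element("Order", order), encoding="unicode"))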

So, the SOAP Document/Literal approach was an early favorite for web services – and it’s closer to REST than many realize. It standardized on GET and POST, which certainly aligns with my thinking. The URL for a GET can easily be cast as a straight resource fetch. The URL for a POST is generally cast as a “processing endpoint”, which reflects real-world thinking, IMO (you never really PUT what you GET). That leaves dispatching – invoking the right code when a message is received.
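
Here’s a minimal sketch of how I think about the two verbs, using made-up URLs and element names: GET is a straight resource fetch, and POST sends a schema-described document to a processing endpoint.

# A minimal sketch with made-up URLs and message names; nothing here is a real endpoint.

import urllib.request

# GET: a straight resource fetch; the URL alone says what representation comes back.
with urllib.request.urlopen("http://example.com/orders/42") as response:
    print(response.read().decode("utf-8"))

# POST: the URL names a processing endpoint and the body is just a schema-described
# document, Document/Literal style, with no RPC wrapper and no generated proxy.
payload = b'<OrderQuery xmlns="urn:example:orders"><OrderId>42</OrderId></OrderQuery>'
request = urllib.request.Request(
    "http://example.com/orders/queries",
    data=payload,
    headers={"Content-Type": "text/xml; charset=utf-8"},
    method="POST",
)
with urllib.request.urlopen(request) as response:
    print(response.status, response.read().decode("utf-8"))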

Both REST and WS-* users can tell you it’s difficult to rely only on the value of a URL path to determine how to process a message. Sooner or later, you have to look at the payload. The original SOAP mechanism involved an HTTP header called SOAPAction, which was a hint instructing the application about how to process the message. Many REST implementations use the Content-Type HTTP header in exactly the same way (which technically violates the HTTP 1.1 spec, BTW).
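
Here’s a sketch of what header-driven dispatch looks like in either camp; the action URI, media type, and handlers are all invented. The point is that the dispatch hint lives outside the payload.

# A sketch of header-driven dispatch; the action URI, media type, and handlers are
# invented. SOAPAction (in SOAP) and Content-Type (in many REST services) play the
# same role here: a hint outside the payload about which code should run.

def handle_get_order(body: bytes) -> str:        # stand-in handler
    return "get-order result"

def handle_order_query(body: bytes) -> str:      # stand-in handler
    return "order-query result"

def dispatch_by_header(headers: dict, body: bytes) -> str:
    soap_action = headers.get("SOAPAction", "").strip('"')
    content_type = headers.get("Content-Type", "")

    if soap_action == "urn:example:orders:GetOrder":
        return handle_get_order(body)
    if content_type.startswith("application/vnd.example.order-query+xml"):
        return handle_order_query(body)
    raise ValueError("no handler registered for this message")

print(dispatch_by_header({"SOAPAction": '"urn:example:orders:GetOrder"'}, b"<GetOrder/>"))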


Using an HTTP header value to indicate what your payload is makes some sense if your payload can be anything in any format. But if your payload is XML (or a well-formed JSON array), then why not dispatch the call based solely on the (qualified) name of the root element? The only real difference between SOAP and REST here is that SOAP has an envelope construct and your data goes inside an element called “body”. If it’s a non-SOAP call, don’t use an envelope – it’s OK.
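
Here’s a sketch of dispatching on the qualified name of the root element instead; the namespace, element names, and handlers are invented. With SOAP, the same trick applies to the first child of the Body; without an envelope, the payload’s root is all you need.

# A sketch of dispatching on the qualified name of the payload's root element.
# The urn:example namespace, element names, and handlers are invented.

import xml.etree.ElementTree as ET

HANDLERS = {
    "{urn:example:orders}OrderQuery": lambda root: "order-query result",
    "{urn:example:orders}ShipmentNotice": lambda root: "shipment-notice result",
}

def dispatch_by_root_element(body: bytes) -> str:
    root = ET.fromstring(body)
    # ElementTree exposes the qualified name as "{namespace-uri}local-name".
    handler = HANDLERS.get(root.tag)
    if handler is None:
        raise ValueError(f"no handler for {root.tag}")
    return handler(root)

print(dispatch_by_root_element(
    b'<OrderQuery xmlns="urn:example:orders"><OrderId>42</OrderId></OrderQuery>'
))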

What about the WSDL? Don’t use one unless you need to build RPC-oriented proxies for your service. All you really need is a schema for the message format and away you go. If your endpoint is HTTP (if you’re looking at REST, when is it not?), you don’t even need a proxy built for you at all – another reason REST is getting popular.


In my REST work, I’m keeping an eye out to avoid straying too far from the bits of WS-* that might come in handy sooner rather than later. Security is my biggest worry. I don’t want to architect around the notion that transport-level security is all I can have. I might need to sign that XML and encrypt bits of it anyway. I might, someday, actually come across an intermediary processor. Maybe that header element can be useful.

Most of WS-* is now beyond my comprehension, and I don’t see myself ever using major parts of the stack because the toolkits won’t support resource-orientation to the degree I need. But HTTP, SOAP, the WS-I Basic Profile, and XML Schema can be useful even in completely RESTful projects. If you want.
