Sr. Director, Product Research, Epicor Software

Software Architecture

This blog is not associated with my employer.

Thursday, August 25, 2005

Patterns for SOA 2.0

[23 June 2007: A hopefully clearer rewrite is here.] My dad is a research biochemist at an institute in Southern California. One of the (literally) cool things about his laboratory is the cold room. Through the huge vault-like door are some tall glass tubes, a couple of inches in diameter and several feet high. In the glass columns are various kinds of cloudy goo. Over time, colored bands of (I think) concentrated proteins and other molecular constituents appear. The appearance of those bands tended to prompt some cordial celebration, which I never really appreciated at the time. But looking back, these scientists would sit back, wait, and then witness specialized things emerge naturally from a process of self-refinement.

Think about the move over the last four decades from monolithic applications to service-orientation. Instead of having gravity and osmosis do the work, we have trial, error, and debate. But software architecture is itself a self-refining process nonetheless. By definition, an architecture pattern is a reflection of an existing practice. I haven’t become familiar enough with software architecture patterns to recite them at cocktail parties. But it seems like the classic process of describing an architecture in terms of patterns is not helping us pin down the formal properties of SOA. Some argue that this proves SOA isn’t real because it can’t be described.

But maybe we are just looking for patterns in the wrong way. Physical SOA implementations will probably use well-known patterns like observer-subject, content-based routing and pipelines. However, I think that describing a modern SOA requires a deeper analysis centered on transformational patterns rather than classic software architectural or integration patterns.

SOA, well at least a 1.0 version, has been around for quite a while, as any systems integrator will tell you for $225 an hour. Enterprise service buses, object brokers, and other agent-oriented middleware have been successfully fooling monolithic applications into working with each other for years. Web services also put agent-oriented systems and services into people’s faces rather suddenly and on a mass scale. But version 1.0 of SOA was geared primarily toward aggregating otherwise inert systems and providing some new communication channels.

But now, many see a need for a more modern SOA, which I’ll call SOA 2.0 – where frameworks, applications, agents and communication channels understand each other more deeply – ideally using more aspect-oriented approaches. In short, the new SOA is about building a smarter stack and designing applications to take advantage of new constructs that (we hope) promote agility and simplicity.

At its core, SOA 2.0 uses graph transformation mathematics to convey semantics throughout the SOA stack’s layers as executable logic. At each layer of the stack, you define some semantic categories (a.k.a. viewpoints or aspects) and develop transformation patterns that produce hard rules that specific SOA layers can execute dynamically. Semantic category assignments are stored in metadata and transforms are implemented as engines or (less appealingly) as code generators.
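To make that a little more concrete, here is a rough sketch in Python (the names and shapes are entirely mine, invented for this post, not from any real stack): semantic category assignments live as metadata on model elements, and a transform is just an executable rule that matches elements by category and produces artifacts for an adjacent layer.

```python
# Toy sketch of "semantic categories in metadata plus executable transforms".
# Every name here is invented for illustration; nothing comes from a real product.

# The model is a tiny graph of elements, each carrying semantic-category metadata.
model = {
    "Customer":      {"layer": "data", "category": "resource"},
    "Order":         {"layer": "data", "category": "activity"},
    "SalesAnalysis": {"layer": "data", "category": "reference"},
}

# A transform is a pattern (a predicate over an element's metadata) plus a
# production (a function that emits artifacts for an adjacent layer).
def transform(model, pattern, produce):
    derived = {}
    for name, meta in model.items():
        if pattern(meta):
            derived.update(produce(name, meta))
    return derived

# Example rule: every data-layer element gets an application-layer interface.
interfaces = transform(
    model,
    pattern=lambda meta: meta["layer"] == "data",
    produce=lambda name, meta: {
        "I" + name: {"layer": "application", "source": name, "category": meta["category"]}
    },
)

print(sorted(interfaces))  # ['ICustomer', 'IOrder', 'ISalesAnalysis']
```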

This is what makes the architecture stack much smarter – semantics of the underlying application requirements are pulled through the solution mechanically. More importantly, as application semantics evolve over time, the solution itself evolves automatically from top to bottom. The idea is to get deployment overhead to approach zero.

Here is an admittedly simplistic example that walks through some canonical SOA layers: In the data domain you could define a semantic category called “data representation” that describes whether a domain entity is resource data (customers, parts), activity data (orders, timesheets), or reference data (sales analysis), which is read-only. You then define a transform template called CRUD that produces interfaces for invoking functions in the application domain.
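A quick sketch of that in Python (again, all names are invented here, and a real implementation would be metadata-driven rather than hard-coded): the “data representation” category is stored per entity, and the CRUD template reads it to decide which operations to produce.

```python
# Sketch of the "data representation" semantic category and a CRUD transform
# template. Names are illustrative only.

DATA_REPRESENTATION = {
    "Customer":      "resource",    # customers, parts
    "Order":         "activity",    # orders, timesheets
    "SalesAnalysis": "reference",   # read-only
}

CRUD = ("Create", "Retrieve", "Update", "Delete")

def crud_template(entity, representation):
    """Produce application-domain operations for one data-domain entity."""
    # Reference data is read-only, so only the "R" in CRUD survives.
    ops = ("Retrieve",) if representation == "reference" else CRUD
    return {op + entity: {"entity": entity, "operation": op} for op in ops}

application_interfaces = {}
for entity, representation in DATA_REPRESENTATION.items():
    application_interfaces.update(crud_template(entity, representation))

print(sorted(application_interfaces))
# SalesAnalysis contributes only 'RetrieveSalesAnalysis'; the others get all four.
```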

The template can be smart enough, for example, to avoid producing operations for reference data entities other than “Retrieve” (the “R” in CRUD). In turn, the agent layer has a transform template to create sets of actions (conceptually like a SOAPAction) from the application domain’s interface set. Service communication channels implement physical endpoints for WS-*, REST, etc. These channels use transforms to produce message processors and service descriptions from the agent action set. Finally, you might have one or more conversation managers that handle specialized state and data formatting capabilities for special purposes like driving user interfaces, BizTalk orchestrations, or Office integration.
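Continuing the sketch (still invented names, and simplified to the point of caricature), the agent layer’s transform projects the application-domain interfaces into a flat action set, and a channel transform turns that action set into endpoint descriptions. The read-only rule for reference data is already baked into the CRUD template above.

```python
# Continuing the sketch: the agent layer and a service channel each apply their
# own transform to the previous layer's output. Names remain illustrative.

# A small slice of the interfaces produced by the CRUD template above.
application_interfaces = {
    "CreateOrder":           {"entity": "Order",         "operation": "Create"},
    "RetrieveOrder":         {"entity": "Order",         "operation": "Retrieve"},
    "RetrieveSalesAnalysis": {"entity": "SalesAnalysis", "operation": "Retrieve"},
}

def agent_action_template(interfaces):
    """Project application-domain operations into a flat agent action set."""
    return {
        "urn:actions:" + name: {"invokes": name, "entity": spec["entity"]}
        for name, spec in interfaces.items()
    }

def channel_template(actions, binding):
    """Produce per-channel endpoint descriptions from the agent action set."""
    return [
        {"binding": binding,
         "endpoint": "/" + spec["entity"].lower() + "/" + spec["invokes"],
         "action": action}
        for action, spec in actions.items()
    ]

actions = agent_action_template(application_interfaces)
rest_endpoints = channel_template(actions, binding="REST")
soap_endpoints = channel_template(actions, binding="WS-*")

for endpoint in rest_endpoints:
    print(endpoint["endpoint"], "<-", endpoint["action"])
```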

So, in each successive layer of a SOA 2.0 stack, new semantic categories and transformation templates are applied that may use artifacts residing in an adjacent layer to affect behavior. This transform pipelining approach creates a turnkey engine to project application capabilities into a sort of super-API. In the real world, however, these transforms are obviously not as simplistic as the CRUD example above. This is why we now need to begin building a library of reusable semantic categories and their corresponding transformation patterns.
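The pipelining idea itself is easy to show even if real transforms are not: each layer is just a function from the previous layer’s artifacts to the next layer’s, and the stack is their composition. A throwaway sketch with stand-in stages (nothing here is a real engine):

```python
from functools import reduce

# Sketch of transform pipelining: the stack is a composition of layer
# transforms, so a change at the top re-projects all the way down.

def pipeline(*stages):
    """Compose layer transforms into a single projection of the application."""
    return lambda artifacts: reduce(lambda acc, stage: stage(acc), stages, artifacts)

# Trivial stand-in stages; the real ones would be the CRUD, action-set, and
# channel templates sketched earlier.
data_to_interfaces    = lambda entities: ["I" + e for e in entities]
interfaces_to_actions = lambda ifaces: ["urn:actions:" + i for i in ifaces]
actions_to_endpoints  = lambda actions: [{"binding": "REST", "action": a} for a in actions]

project = pipeline(data_to_interfaces, interfaces_to_actions, actions_to_endpoints)

print(project(["Customer", "Order"]))
# Adding a new entity at the data layer re-projects a new endpoint automatically.
print(project(["Customer", "Order", "Invoice"]))
```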

Transformation rules – when described in mathematical terms – can be proven complete, unlike many hand-rolled, imperative programming approaches. In fact, this might be an entry requirement for adding candidate patterns to a future library. I think this approach beats the current entry barrier – prove something’s been done at least three times and voila – it’s a pattern!

Seriously, transformations are deterministic and flexible. They can be manifested as independent engines or as static code generation. They can be domain-specific languages (DSLs). Once implemented in an SOA stack, changes to the application domain can affect the entire solution predictably and automatically. Transformation rules can distinguish breaking changes from compatible evolution. So, the agent can, for example, know when to create a new action-set and when to simply alter descriptions of an existing action-set.
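Here is the kind of rule I have in mind for that last point, sketched in Python (the classification logic is deliberately naive and invented for this post): removing or renaming operations breaks existing callers, so the agent versions a new action-set; merely adding operations is compatible, so it just refreshes the descriptions.

```python
# Sketch: classify a change to an interface's operation set so the agent knows
# whether to mint a new action-set or just update the existing descriptions.
# The rule here is purely illustrative.

def classify_change(old_ops, new_ops):
    if old_ops - new_ops:
        return "breaking"      # operations removed or renamed: version a new action-set
    if new_ops - old_ops:
        return "compatible"    # operations only added: refresh descriptions in place
    return "unchanged"

v1 = {"RetrieveOrder", "CreateOrder"}
v2 = {"RetrieveOrder", "CreateOrder", "CancelOrder"}   # additive change
v3 = {"RetrieveOrder", "CancelOrder"}                  # CreateOrder removed

print(classify_change(v1, v2))  # compatible
print(classify_change(v1, v3))  # breaking
```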

So to summarize, a key part of the next evolution of services architecture is making the architecture stack aware of certain semantics in the application requirements. The stack can adapt itself to changes in the requirements by executing directed transforms against metadata. And now is the time to start identifying patterns of transformations that link the layers of the SOA together intelligently. The mathematical world is well aware that graph transformations and category logic both relate well to computer science. Unfortunately, I majored in music. So I need to get some help from more mathematically astute people to see if all this actually matches up. Maybe a new color band is emerging from that goo in the glass column.
