Erik Johnson

Sr. Director, Product Research, Epicor Software

Software Architecture

This blog is not associated with my employer.

Monday, June 01, 2009

The iPhone Post

I use a notebook computer at work, but all my off-hours computing gets done on an iPhone which, ironically, is paid for by my employer. The iPhone – like David Ing describes – isn’t just a phone that plays my iTunes (which is what attracted me). Safari, the AppStore, and Exchange compatibility make the iPhone the most heavily-used thing I own (surpassing the margarita shaker). Its most annoying and yet most ingenious feature, at least for us ex-Blackberry users, is the lack of a blinking light to indicate new information has arrived.

Finally, I’m weaned from those Pavlovian sneaky peeks to see if that little light is flashing while others – often significant others – are speaking to me. Yet in every dead, in-between-type moment – like waiting for take-out food, waiting at traffic lights, waiting out a design review, and especially waiting for the quarterly employee meeting to end – my hand involuntarily grasps preciousss (you know what I mean) and in the same move my thumb deftly swipes away standby mode. Password protection is such a waste of time. On the Blackberry, I surf email. On the iPhone, I surf the world.

The iPhone is one hell of a head start from a company that I repeatedly declare I’m done giving money to. In my house we have one Mac and one PC. I don’t really compare them because they both annoy me. I like how the PC has much cheaper and more open hardware and how I can do parts of my day job at home. I don’t like how I have to do more tech support when the kids use the PC. I like how the Mac makes my family happy. But I hate the fact that the cost to repair an Apple device is about 104% of the cost of a new one.

But nothing is going to catch the iPhone for quite some time. Like David mentions, it’s amazing to me how Microsoft (and practically everyone else) completely missed the point about mobile computing. Microsoft has spent multi-millions making mobile development work for .NET programmers but has no market penetration beyond bar code readers. They have completely ignored improving the browser, falling into the tabbed-browsing trap, which is pure unnovation. Mozilla totally gets it – tabbed browsing sucks. And in Microsoft’s obsession to go after Flash, they completely missed the importance of JavaScript running, well, fast.

But the biggest miss of them all? The Cloud. Believing every morsel of some apparent manifest destiny driving the Cloudrush, Microsoft is expanding its server capacity much faster than it can resolve any business model that will generate income. But they haven’t yet figured out that the first commercial apps (that matter) will target mobile users. They haven’t realized that practically all apps in the future will target mobile users. They haven’t realized how VERY few people run apps built on a native Windows Mobile stack. Mobile users are now the swing voters in deciding what and who succeeds in technology.

It didn’t help that the Connected Systems Division, the SQL Server guys, and God knows who else started building Cloud bits without even comparing notes. They threw overlapping Azure bits at a wall to see what would stick. Of course, their customers and partners were standing against that wall at the time. Shoot-outs like that – born from CYA – are a troubling sign of indecision. Someone is either too involved in other things to own the strategy or unwilling to choose winners and losers in order to get to market. Eventually, the conflicting bits were factored out and the SQL Team has come through with better features. But missing from the entire effort is any recognition that mobile use matters.

The Microsoft stack is fighting itself – too much time spent making it look easy to do a handful of complex things while making it harder to do simple things (WCF & Geneva come to mind). It’s time for Microsoft to stop wagging RAD toolkits at IT shops. Put the money into giving first-movers, shops rebooting their efforts for a mobile world, and innovators a chance to be successful. The technology needs are well-known: Microsoft needs a competitive mobile browser, development tools that target many devices (browsers and native), an SSO that works across cellular platforms, and a connectivity model that puts openness ahead of code expedience (yes, I mean REST).

But most importantly, Microsoft needs to exploit the collision of enterprise IT and the consumer world. My ERP application is learning a lot from how people use Twitter and World of Warcraft. Maybe the iPhone’s head start is too great for Microsoft to seriously challenge. But even if Microsoft can’t deliver the device platforms that people crave as much as the iPhone, they ought to at least create tools for building the best apps for any device – connected securely to (what should be) the best cloud-based services.

Monday, March 16, 2009

QCon London 2009

In the preface of Software Language Engineering, Anneke Kleppe writes, “Academic computer science is in danger of slowly becoming an empirical field that studies the advancements being made in the industry”. The movements toward web programming as a fundamental platform, open source, and cloud computing have wildly democratized software production. Change is happening faster than most academics and vendors can keep up with – let alone attempt to lead.

I bought this book last week at QCon 2009 in London. QCon is special because of its focus on solutions and practices without the usual vendor gravity. Sure, there is some hype lurking in the rafters. But the presenters are genuine problem solvers, and conversations are intense, productive, and technical. The conference studies emerging topics in software development that people actually exploit. Those with traction, like functional programming, REST, and agile development, gain industry velocity and become a bigger part of subsequent conferences.

QCon disciples epitomize Kleppe’s statement, bypassing the ancient waterfall that travels from computer science through toolkit vendors, application vendors, and then to users. So to me, it was a little ironic that the biggest treat at QCon was actually, um, a computer science academic. Turing Award winner Sir Tony Hoare, who invented both Quicksort (woo-hoo!) and the NULL reference (doh!), gave an outstanding keynote contrasting computer science and engineering. He said that while a scientist pursues the one great story, the engineer pursues many great little stories. Let the scientist seek correctness while the engineer creates dependability. He looks forward to a day when, if something goes wrong, the software is the last suspected cause.

I tend to stay quiet at events like QCon because my software knowledge is self-taught. I worry that someone will throw some formalism at me that I won’t understand. Also, most attendees are from consultancies or large IT shops, whereas I work for an enterprise software vendor. My company builds ERP solutions, which are the kind of products that consultants sometimes complain are stuck in the past (while drawing good pay). Issues that resonate for me may not for others.

That fish-out-of-water feeling faded this year, and I have Sir Tony to thank. He said, “the engineer can’t afford to be certain of anything … good enough is always good enough”. That said, and with the encouragement of others (like @psd), I’m looking to submit an abstract for QCon San Francisco this fall about the RESTful Enterprise. We’ll see how that goes.

Anneke Kleppe’s observation about computer science becoming empirical rather than the agent of progress may someday be true, but Sir Tony raised a point that made me realize, thankfully, that the academic role still has legs. During Q&A at a breakout session, someone asked about the value of proofs to verify programming languages. Sir Tony said that the commercial imperative for proofs was the virus. Viruses reach portions of a program that its normal execution never sees. Testing focuses on cases likely to arise but viruses will eventually exploit a case unlikely to arise. Resolving virus attack vectors will always be an analysis exercise, assisted by formal methods.

I like QCon because the attendees, as a group, are of high caliber, genuinely interested in discussing issues, and know the problems real people face. I also like an event that keeps things simple. Sure, some big vendors sponsor QCon, but they are comfortable staying in the background. QCon speakers are the practitioners Kleppe describes as seemingly driving the innovation balance away from academic computer science. The nice thing is that Sir Tony stepped onto his side of the scale and the practitioners simply yielded. Smart people can always recognize the smartest person in the room.

PS. Many of the session slides are public (some need a password). You can find the links in the conference schedule grid.



Thursday, February 19, 2009

Let That Be Your Last Battlefield

The Gartner Group’s infamous Technology Hype Cycle is a great study in self-fulfilling prophecies. If you haven’t seen one, they are graphs with a single curvy line that measures hype levels over time. The line shoots up during the exciting times after some technology “trigger” is fired, but then soon plunges despairingly downward, indicating the disappointment that whatever was being hyped doesn’t live up to expectations. Next, the line crawls back up (“enlightenment”) and then levels off somewhere midway between the expectation peak and disillusionment canyon. Interestingly, Hype Cycle charts do not gauge time or momentum. They are snapshots.

The 2005 Gartner Hype Cycle for Emerging Technologies (PDF) shows SOA settling into the bottommost part of the cycle – the pit of despair. I think it’s there to stay. SOA was propelled up the hype curve based partly on the additive hype behind web services. And that was because the lynchpin underlying any SOA effort is integration with the constituent applications. Web services may have brought some sorely-needed platform interoperability. But while doing so, web services also perpetuated the problems SOA was meant to mollify.

The real problem is that SOA was never about solution design. It was actually a reaction to the limitations inherent in most enterprise application APIs. It was also partly related to the perennial best-of-breed vs. single-vendor conundrum: if you buy the best apps from different vendors, they won’t integrate, they’ll all look and act different, and maintenance costs are higher. But buying a single solution locks you into a monolithic dinosaur that (some say) over time hinders business agility. Most enterprises do some of both, plus build their own stuff, which adds to the problem.

For users, these choices are just as political as they are strategic. SOA gets pitched as a roadmap for having it all, which placates stakeholders (a key sales tactic). Big apps, little apps, and online apps happily interact while users blissfully see a single system. SOA is a complex way to hide other complexities, which is no way to solve a problem in the long term. If you worked for a big toolkit vendor with a consulting practice, it was like striking gold.

SOA was invented to cover the basic failure of enterprise applications to provide loosely-coupled access to capabilities. It never represented progress in software architecture. Real change can only come by making individual enterprise applications the best possible citizens in an environment bounded by agreed-to assumptions, like the Internet. But it’s not just about declaring REST beats SOA. REST doesn’t help the situation unless it’s coupled with a solid information strategy – a well thought-out resource model addressing both data and behavior. That point has been overlooked by REST proponents, BTW.

Nevertheless, I’m done with interfaces and typed containers. I’ve thrown them away and see no reason to go back. They do not scale well in large projects and are a pain to deal with in integration projects. Their real failure is that specialized interfaces are limited by their authors’ ability to presuppose their use cases. That limits the vectors available for accessing the data and capabilities of the application. It is especially problematic in data-driven solutions, which probably sparked the rise of SOA in the first place.

For data-driven enterprise applications, REST can convey data and behavior with just as much fidelity as any programming model. This isn’t an idle claim – it’s based on several years of work researching how to build an ERP solution suite around REST architecture. Resource policies and URIs can hide data, govern state changes, and provide callers with information schemas. A good programming model tied to these principles can segregate capabilities, processes, and data. In the process, it eliminates the static data container types and procedure-oriented functions that toolkits make easy to create but that wind up creating coupling too tight for the Internet (and agility) age.
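To make that less abstract, here’s a minimal sketch in Python of what I mean by URIs governing state changes under a resource policy. The order resource, its states, and the field-hiding policy are hypothetical – invented for illustration, not our actual product model:

```python
# Sketch: state transitions addressed by URI, with a policy that hides
# fields. The order resource, states, and hidden fields are hypothetical.
ALLOWED = {
    "open": {"release", "cancel"},
    "released": {"ship", "cancel"},
    "shipped": set(),
    "cancelled": set(),
}
NEXT_STATE = {"release": "released", "ship": "shipped", "cancel": "cancelled"}
HIDDEN = {"cost", "margin"}  # policy: these never appear in a representation

def represent(order: dict) -> dict:
    """Build a representation that hides private fields and advertises
    only the transitions legal from the current state, as links."""
    doc = {k: v for k, v in order.items() if k not in HIDDEN}
    doc["links"] = [{"rel": a, "href": f"/orders/{order['id']}/{a}"}
                    for a in sorted(ALLOWED[order["state"]])]
    return doc

def transition(order: dict, action: str) -> dict:
    """Handle POST /orders/{id}/{action}: the URI names the intent, and
    the policy, not a typed interface, decides whether it is legal."""
    if action not in ALLOWED[order["state"]]:
        raise ValueError(f"cannot {action} from state {order['state']}")
    order["state"] = NEXT_STATE[action]
    return represent(order)

order = {"id": 7, "state": "open", "total": 129.95, "cost": 80.00}
print(represent(order))              # links offer only "release" and "cancel"
print(transition(order, "release"))  # now the links offer "ship" and "cancel"
```

Callers learn what they may do next from the links in each representation rather than from a compile-time contract, which is exactly the kind of decoupling I’m arguing for.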

I suppose it reinforces their branding, but Gartner pegs any and all technology to the same-shaped line. It’s depressing to think that every new technology has to travel along the same path – especially that one. What’s the point of sticking with an idea through the first two-thirds of the journey? The smart money sits out the aggravating part of the cycle – as Gartner well knows. At some point during the enlightenment period, the software industry wakes up and, in scorched-earth fashion, bludgeons the world with so much tooling that everyone forgets the concept that originally triggered the hype.

I don’t know whether Gartner has REST mapped to a Hype Cycle chart. If they have, then they’ve got it wrong. REST isn’t anything more than what the Internet has already had for many years. The Internet works – the trough of disillusionment (if it existed) happened long ago. Will that starve the vendors and the consultancies of the oxygen that feeds the tooling frenzy? Absolutely not.

Like so many other issues the world faces, this one comes down to cooperation: applications need to work together, and in unique operational situations, with far less effort. Applications must presume they are part of a greater machine that controls how and when the application runs and with what data. Whether in the Cloud or not, the successful applications will be those that obviate the need for SOA but work well in mash-ups.

Monday, March 17, 2008

REST URI Spaces and Information Reuse

I’m on a plane returning from the QCon 2008 conference in London. It was a top-notch event and, among the great presentations, two things I learned stand out. The first was that I want to learn Erlang. I spent some time with Erlang inventor Joe Armstrong and had such good fun that I’ve already downloaded the bits and bought the book. Second, the REST rationale has really gelled and the proponents no longer see a need to argue their case – it’s time to mature the story.

In the REST track, it was said (repeatedly) that WS-* circumvented the “design” of the Web and thus ignored the Web’s innate capabilities (for example, Jim Webber’s slides, #14). Sure, architectures often encapsulate subsystems for the sake of abstraction, which brings me to my point. Just as WS-* tunnels the Web, component architectures often manage persistence by tunneling a relational database. The query processing power of the database is rarely directly accessible to the solution’s consumers.

My company sells ERP applications, which are used to run enterprises. Our customers demand commercial RDBMS products because, in their minds, our application maintains their data. That data must be in a store they understand and (feel they) control. These enterprises invariably invent ways, bypassing our code, to productively reuse their data in ways their vendor (us) did not foresee. Conceptually, I don’t see much difference between this kind of reuse and the serendipitous reuse described by Roy Fielding (and summarized nicely here [pdf] by Steve Vinoski). If their data were stuck in an opaque blob, accessible only through our APIs, we wouldn’t sell very many systems.

But as I mentioned earlier, component architectures relegate relational databases to soulless “persistence stores” devoid of independent capabilities. The conventional wisdom for component design says data is private to the component and accessed only through a public interface. But I think people have tended to militantly privatize data, denying support to anyone daring to connect to the data on their own. That practice is one reason some solutions are opting away from RDBMS stores altogether and going with more OO-friendly stores.


The obvious problem with taking this tenet too literally is that access to data is limited to a preconceived set of interfaces. Reusing data in unforeseen ways is accomplished by sneaking around the application or rebuilding it to accommodate new requirements. Databases are rather smart engines designed under an assumption that data, once created, is then retrievable in countless, unforeseen ways. Nevertheless, architectures are burying perfectly good query processors under layers of abstractions, objects, and interfaces. Being pushed to the bottom of the stack is one reason that physical data models are actually devolving, IMO, but that’s another story.


The public “API” for a RESTful application is its URI address space. You can invent a list of URIs mapped to resources and state sequences all you like. But the reuse potential is limited to whatever your callers can get out of that URI space. Like REST, SQL databases have a uniform interface. But look at the practically unlimited variety of resources you can access. Obviously, a REST URI shouldn’t be a SQL statement and I’m not trying to shoehorn XQuery into a URI. All I’m saying is that a URI space can incorporate parent-child and relational characteristics from a data model – using relational database behavior as a guide. This has been a key aspect (for 8+ years, BTW) in developing URI strategies for our products.
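To illustrate, here’s the kind of URI space I have in mind, sketched as a Python table of templates. The entity names are hypothetical, but the shape – collection and row, parent-child containment, foreign-key-style traversal – is borrowed directly from relational modeling:

```python
# Hypothetical URI space borrowing parent-child and relational shape
# from a data model; the entity names are invented for illustration.
URI_SPACE = {
    "/customers":                      "collection, like a table",
    "/customers/{custID}":             "row, addressed by primary key",
    "/customers?region={r}":           "filtered set, like a WHERE clause",
    "/customers/{custID}/orders":      "child collection (containment)",
    "/customers/{custID}/orders/{n}":  "child row within its parent",
    "/orders/{orderID}/customer":      "foreign-key traversal, reversed",
}

for template, analogue in URI_SPACE.items():
    print(f"{template:34} -> {analogue}")
```

None of this puts SQL on the wire; it just lets callers navigate the URI space the way the database already lets them navigate the schema.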

The emerging specs and toolkits, like WADL and WCF, feature URI template constructs. But URI templates have no notion of resource linkages (parent, relational, or otherwise), and that limits their effectiveness. At QCon, there was little consensus that WADL was the right way to describe a REST application. But I think REST description languages and resource types are coming, and I’d like their creators to at least consider resource linkage features for URI templates. It’s all been done before.
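For example, a URI template annotated with linkage metadata might look like the sketch below. To be clear, the “parent” and “references” hints are hypothetical – nothing in WADL or WCF defines them today; they’re my wish list:

```python
# Hypothetical linkage annotations on a URI template; WADL and WCF stop
# at the bare template string. The extra hints below are my wish list.
templates = {
    "/customers/{custID}/orders/{orderNum}": {
        "resource": "Order",
        "parent": "/customers/{custID}",                     # containment link
        "references": {"carrier": "/carriers/{carrierID}"},  # relational link
    },
}

for uri, meta in templates.items():
    print(f"{uri} is a {meta['resource']} contained by {meta['parent']}")
```

With that much declared, a description language could generate navigable clients instead of just dispatch tables.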

Saturday, January 19, 2008

Got my name in the Financial Times!

Only a few people know I like to hobby with macroeconomics during break-time around the office*. Most of my colleagues had begun the weekly yoga session in the area outside my office. So, the lights were out and the soothing music seeped under my door. I was reading the commentary in that day’s FT – the top op-ed piece essentially said that the US should welcome sovereign wealth funds without looking too intently under the hood.

Well, clearly someone had to say something. So, I channeled a little John Maynard K. and typed up my first-ever letter to an editor. And in the spirit of what can be done to “some of the people some of the time”, the FT printed my letter (requires free registration to read the whole letter**) in last Friday’s edition (at least in the US). I had no idea until I stumbled on it while on the train home from work. Cool!

* Not really.
** You can try this link until the FT scrolls the entry into the archives.

Monday, October 29, 2007

Poking at ROA

+1 to Tim Ewald’s point that the ROA crowd might be pressing too hard to make PUT do some uncomfortable things. -1 (with all due respect) to Stefan Tilkov’s assertion that “anytime you find yourself adding words like ‘operation’ to your representation, you’ve violated one of the core RESTful HTTP principles, which is that the intent should be communicated using the HTTP verb.” IMO, that’s an unfair litmus test for ROAishness.

Stefan was chiding a specific situation with respect to GData, which I do not know much about. But I do know that situations exist where I want to convey multiple “intentions” in a single physical call. You should be able to update a customer and a supplier in the same message – to support arbitrary composition of intent. What’s the URI for that? That’s a rhetorical question, because a POSTing location and a schema indicating the message format are all you need.

I’m a big fan of ROA, but I’m worried that ROA fundamentalism will create a quagmire (shared by SOA) where all advice seems to be about what *not* to do. ROA makes it easy to bind the HTTP verb to your intent, but it doesn’t require you to do so. If I define a format for a message that can contain multiple “intentions” and then expose a POST endpoint for processing those messages, have I broken some Law of ROA? I don’t think so.
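Here’s roughly what I mean, sketched in Python. The message format, the entity URIs, and the in-memory “store” are all invented for the example; the point is only that one uniform POST can carry composed intentions:

```python
import json

# Stand-in resources; in real life these live behind the application.
STORE = {
    "/customers/42": {"name": "Acme", "creditLimit": 10000},
    "/suppliers/7":  {"name": "Bolts Inc", "paymentTerms": "NET60"},
}

def handle_post(body: str) -> list:
    """Process one POSTed message whose payload composes several
    intentions; the schema, not the HTTP verb, carries the meaning."""
    results = []
    for op in json.loads(body)["operations"]:
        resource = STORE.get(op["resource"])
        if resource is not None and op["intent"] == "update":
            resource.update(op["data"])
            results.append({"resource": op["resource"], "status": "applied"})
        else:
            results.append({"resource": op["resource"], "status": "rejected"})
    return results

message = json.dumps({"operations": [
    {"intent": "update", "resource": "/customers/42",
     "data": {"creditLimit": 50000}},
    {"intent": "update", "resource": "/suppliers/7",
     "data": {"paymentTerms": "NET30"}},
]})
print(handle_post(message))  # both intentions applied in one call
```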

Or, as Tim asks, does ROA mean I have to PUT things in an imaginary basket and then PUT an imaginary thing into an imaginary place to make the basket get processed? No. There is no crime in using the Uniform Interface in a way that partners the payload, verb, and URI with dispatching logic or helps give you a cleaner programming model.

Sure, it’s best to keep business process protocol stuff out of the data in your payload, which is what I think Stefan was really alluding to. That obviously gives you the ability to reuse a message format by isolating intention to the URI. But resources often have internal processes, like special state transitions, which may need to be manipulated via flags in the payload.

The free market rightly determined that WS-* is often too difficult to use and it probably doesn’t solve the problems you think it will. But if ROA forces people to use more effort or do unnatural acts to get a day’s work done, ROA will be out on its ass as well. In both cases, the good goes down with the bad.

Friday, September 28, 2007

Software Architecture as Principles

So, I had The Talk with my 10-year-old son today. He was confused and even became a bit emotional as the gravity of the facts emerged. The discussion was, of course, about copyright law.

He had burned a CD containing tracks from his iTunes library to give to a friend as a birthday gift. So, we talked about how this was in fact stealing and that we should just go buy a new copy of the music outright, etc. Here’s a sampling of his questions during the discussion:

Q: What if my friend has some songs on his iTunes, but his CD burner is broken. I have the same songs, so can I burn them from my computer and give him the CD? A: Um, maybe that’s OK. I don’t know.

Q: How come it’s OK to lend out PlayStation disks? A: Easy (whew!), because while they borrow it, you are not using it (note to self: is that really legal?).

Q: So, can I burn a CD, give it to someone, and just not listen to the songs myself until I get the CD back? A: Um, you’re late for school – off you go.

Like software architecture (and legal systems), copyright is a principles-based rather than a rules-based concept because it’s impossible to precisely spell out, up-front, all actions that constitute non-compliance. The principle says you can’t disseminate the work of others without permission. Laws assign penalties to broad forms of violations like producing counterfeit software. Court precedents over time develop the more specific lists of what’s OK (backing up your iTunes) and not OK (giving out copies).

Software architecture works the same way. Software architectures are collections of principles that define guidance and boundaries for producing software systems. Lower-level guidance comes in the form of standards and practices for developing a software system that conforms to the principles in the architecture.

Principles-based regulation means that laws are enforced by examining the intent of people rather than reconciling deeds against a roster of specifically banned actions. The Enron scandal, it’s said, grew unnoticed for years because Enron parsed the regulations in bad faith and created scenarios that somehow convinced Arthur Andersen to sign off on audits. Enron and Arthur Andersen both knew that accounting principles (and the law) were being violated but felt relatively safe because any by-rote analysis of their accounts against the rules (as written) would come up clean.

UK regulators like to say they have no Enron situations because UK accounting standards are principles-based from the outset. I don’t know how true that is, but the infamous U.S. Sarbanes-Oxley Act of 2002 directs the U.S. Securities & Exchange Commission to examine the feasibility of moving the U.S. to a principles-based accounting standards system.

Getting (at last) back to software architecture, I work for an independent software vendor (ISV) in the enterprise resource planning (ERP) market. One of the characteristics of an ISV is that we are highly principles-based and generally don’t rely on thick volumes of specific details telling our engineers exactly what to do. Sure, we have standards around code and UI styles. But developers are taught the principles within the product architectures and the intent behind the functionality. That in turn helps prevent issues like “the spec didn’t specifically say the shipment quantity had to be a positive number”.

As we expanded product development overseas, we didn’t rely on outsourcing to contractors. We tried it, but it was too hard to convey the architectural principles and the business intent to a transient staff 13.5 time zones away. Without the principles as context, we had to type reams more specs, and the results consistently had to be reworked. We wondered whether our processes were just plain sloppy. But that wasn’t the case. Our development model just didn’t fit an environment where people came only with bespoke capabilities and never developed any lasting interest in our objectives.

Instead, we opened development offices staffed by our own full-time employees. That meant we could convey our architectural principles and development standards, and train the teams up on business functions like cost accounting and manufacturing. Permanent employees doing the work, no matter where in the world, turned out to be cheaper than outsourcing to a contractor. More importantly, we realized much better agility.

The problem with developing long lists of rules is that they are expensive to maintain and easy to evade when the pressure is on. I would rather state a principle like “creating new records for purpose X MUST minimize the DB batch request count and MUST NOT hold locks on Y long enough, under typical loads, to significantly affect other processes” and then test for compliance, rather than use, say, code reviews and a checklist to try to spot things that potentially violate the principles.
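Here’s a toy version of that idea in Python, testing the principle directly. The create_records function and the batch-counting harness are stand-ins I’m inventing for illustration; real instrumentation is obviously more involved:

```python
# Toy compliance test: assert the principle ("minimize DB batch requests")
# instead of hunting for violations in code review. All names hypothetical.
MAX_BATCHES = 3  # illustrative threshold for "minimized"

class CountingDb:
    """Counts round trips ("batch requests") made during a unit of work."""
    def __init__(self):
        self.batches = 0
    def execute(self, sql: str):
        self.batches += 1  # every execute() is one batch request

def create_records(db: CountingDb, purpose: str, rows: int):
    """Code under test: inserts all rows in a single batched statement."""
    values = ", ".join(f"('{purpose}')" for _ in range(rows))
    db.execute(f"INSERT INTO records (purpose) VALUES {values}")

def test_create_records_minimizes_batches():
    db = CountingDb()
    create_records(db, purpose="X", rows=100)
    assert db.batches <= MAX_BATCHES, (
        f"{db.batches} batch requests; the principle allows {MAX_BATCHES}")

test_create_records_minimizes_batches()  # passes: one batch for 100 rows
```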
For us – a 400-person development organization – agility means efficiently releasing products year after year that keep pace with rapidly changing markets. Believe it or not, technology shifts aren’t nearly as acute for us as market shifts from compliance and globalization. I never need to find 150 Ruby programmers on a moment’s notice. I need 150 people who understand how a chart of accounts works and how to make one that can be legally used worldwide.

So, while we don’t do many scrums in our product development cycle, we also don’t do waterfall management. The middle ground works because our jobs revolve around incremental evolution of a few systems. It’s an easy place for principles-based management to work. Software architecture perfection to me is nothing but a short list of general capabilities and a long list of non-functional requirements. Those are the principles. Standards and practices do the rest.

My son says he likes knowing how music is copyrighted. The thought that he might be letting down the people who really own the recordings actually horrified him (much more than me, anyway). By understanding the principles, he can (hopefully) figure out for himself what scenarios violate the law. Now, I guess we’ll have to do the “fair use” talk pretty soon. What other talks am I forgetting about?
