Practical Interoperability

Hello everybody, my name is Dino Chiesa, and I work for Microsoft. I share an interest in what Ted Neward called “software that works”. And yea, verily, I say unto you: interoperability with existing systems is required for enterprise software that “works”.

But because it has got to work, it must be practical. The question Ted proposed - “What tools do we have available to make our systems work together?” - often translates not to “what new tools are out there that I can buy?” but rather to “what is running in the shop today? What do we have skills with today? What are we comfortable running today, operationally? What can we afford to use, given our existing resources, financial and otherwise?” Often the answer that comes back to these questions is a pile of seemingly unrelated stuff, which at first does not look like it fits the bill. Like the crew of Apollo 13, we have to take that pile of duct tape, gauze pads, and 3-in-1 oil (you and I both know that every enterprise that has been around since before June has a mishmash of systems) and connect systems together, practically.

This was ingrained in my DNA early in my career, while working for a startup software company called Transarc - feisty, but tiny. Along with filesystem products called AFS and DFS, we built DCE and Encina, which we called “enterprise middleware”. This was in the pre-Java, pre-Web days, when enterprise systems were built in C and C++, in COBOL, in 4GLs, in VB. This middleware did nothing except connect other stuff together. I worked there as a consultant and a systems architect, and every new customer situation was an exercise in practical interoperability. If we didn’t get a customer’s heterogeneous systems interconnected, we didn’t make the sale. Transarc had been partially funded by IBM from the start, and in 1994 IBM acquired the company and rolled its products into the IBM software portfolio.

The lessons I learned in those days stay with me: most companies of sufficient size have a lot of disparate information technology, and for good reasons. New software systems or subsystems have to fit into the existing environment. Nobody can afford to start over, from zero, from scratch. Good architects follow a sort of feng shui approach to systems design, considering the existing environment, the purpose of the new system, the needs of the users, and the skills of the existing operational staff, weighing each in proportion to come up with a harmonious and practical design - in other words, software that works.

Jack Vaughan mentioned Ted’s view that many practical interop approaches are often overlooked. Of course Ted is completely right. In many cases, people have said to me: look, I have this existing system running in Java, and you’re proposing a new system built in .NET; how will they interconnect? And the answers are often so obvious that they are overlooked. Does the Java-based system rely on an existing operational data store? Maybe it is Oracle on Solaris or Linux, maybe it is DB2 on AIX. That managed resource – already running and reliable and secure in the enterprise – can play the role of interconnect bridge between two disparate systems. Or maybe the existing Java system was designed to expose a custom sockets-based protocol. .NET applications also have a sockets API, and it’s pretty straightforward to implement sockets clients or servers in .NET. Or maybe the Java system uses Java serialization; did you know .NET apps can use Java serialization to save object state, too? Or maybe the enterprise has already confronted the problem of interconnecting disparate systems and has some proven approaches: IBM’s WebSphere MQ (née MQSeries) is nothing if not an interconnect bridge. Java applications of course have a couple of options for connecting to MQSeries: the JMS provider for MQ, or the Java API for MQ. This may be surprising, but .NET applications have similar options, all supported by IBM. So if you are currently using MQSeries to connect a Java application to a mainframe app, you might consider also connecting your Java apps to your .NET apps via that same conduit already buried beneath the streets.
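To make the database-bridge idea concrete, here is a minimal sketch of what the .NET side might look like, polling a staging table that the Java system writes into. The DSN, table, and column names are hypothetical, and ODBC is only one of several data-access options:

    // Sketch: the .NET side of a shared-database bridge. A Java system
    // writes rows into a (hypothetical) pending_orders staging table;
    // this code polls for new rows over ODBC. Connection string and
    // schema names are invented for illustration.
    using System;
    using System.Data.Odbc;

    class SharedDbBridge
    {
        static void Main()
        {
            string connString = "DSN=EnterpriseDb;UID=bridge;PWD=secret";
            using (OdbcConnection conn = new OdbcConnection(connString))
            {
                conn.Open();
                OdbcCommand cmd = new OdbcCommand(
                    "SELECT order_id, payload FROM pending_orders WHERE status = 'NEW'",
                    conn);
                using (OdbcDataReader reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        // Hand each row off to the .NET application, then
                        // (not shown) update its status so it isn't re-read.
                        Console.WriteLine("order {0}: {1}",
                            reader.GetInt32(0), reader.GetString(1));
                    }
                }
            }
        }
    }

The design point is that neither application calls the other; both talk to a resource the operations staff already knows how to run, back up, and secure.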
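The sockets option is similarly approachable from .NET. Below is a small sketch of a client for a line-oriented protocol exposed by an existing Java server; the host, port, and the “GET-STATUS” command are invented for illustration, since the real wire format would be whatever the Java system defined:

    // Sketch: a .NET client speaking a custom, line-oriented sockets
    // protocol to an existing Java server. Host, port, and the command
    // syntax are hypothetical.
    using System;
    using System.IO;
    using System.Net.Sockets;
    using System.Text;

    class SocketsClient
    {
        static void Main()
        {
            using (TcpClient client = new TcpClient("javahost.example.com", 7001))
            using (NetworkStream stream = client.GetStream())
            {
                StreamWriter writer = new StreamWriter(stream, Encoding.ASCII);
                StreamReader reader = new StreamReader(stream, Encoding.ASCII);

                writer.WriteLine("GET-STATUS 12345");  // hypothetical request
                writer.Flush();

                Console.WriteLine("server replied: " + reader.ReadLine());
            }
        }
    }

The only contract here is the byte stream itself, which is exactly why sockets remain the lowest common denominator of interoperability.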
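And for the MQ route, here is a sketch of the .NET side putting a message on a queue, using IBM’s supported classes for .NET (the amqmdnet library, namespace IBM.WMQ). The queue manager and queue names are hypothetical, and the open options shown are common defaults; check IBM’s documentation for what your configuration requires:

    // Sketch: putting a message on a WebSphere MQ queue from .NET using
    // IBM's classes for .NET. A Java application (via JMS or the MQ Java
    // API) can consume from the same queue. All names are hypothetical.
    using System;
    using IBM.WMQ;

    class MqBridge
    {
        static void Main()
        {
            MQQueueManager qMgr = new MQQueueManager("QM_ENTERPRISE");
            MQQueue queue = qMgr.AccessQueue("ORDERS.IN",
                MQC.MQOO_OUTPUT | MQC.MQOO_FAIL_IF_QUIESCING);

            MQMessage msg = new MQMessage();
            msg.WriteString("<order id='12345'/>");  // payload for the Java side

            queue.Put(msg, new MQPutMessageOptions());

            queue.Close();
            qMgr.Disconnect();
        }
    }

One practical caveat: a JMS consumer on the Java side expects the MQRFH2 header on incoming messages, so when mixing the base MQ API with JMS, check the message-format settings on both ends.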

None of this should take away from Web services, which have gone a long way toward democratizing interoperability, in the same way that Flash democratized animation. These things, if not easy, are now very much more approachable than they were before. But using Web services - SOAP, WSDL, WS-Security, and the rest – is not always practical. To use Ted’s word, it doesn’t always “work” in a particular situation, for a variety of reasons. All of these, though, are practical “engineering” approaches. Sharing a database, implementing a sockets protocol, connecting to a message queue, even Web services – these are all just pipes and conduit, and an arrangement of pipes and conduit is not an architecture. We shouldn’t confuse connecting two systems together with the creation of an enterprise service-oriented architecture. The former is practical and tactical; the latter is strategic, and can evolve only after gaining some local, contextual experience with the former.

When interconnecting systems, I think a fundamentalist approach - insisting on SOAP, or XML, or Web services, or MQSeries everywhere - can be very impractical. It can impede progress, cost lots of money, or result in complex and unnecessary re-work. At the same time, a mish-mash of spaghetti interconnects is obviously not cost-effective either – it’s not an architecture you can live with long-term. The goal should be a blend of the practical and the strategic: get things done quickly enough, and with a minimum of disruption to the business, while allowing an architectural philosophy to emerge that delivers better economics and scale over time.

I’m looking forward to the Interoperability Blog here, and all of the discussion to come.