4.1 Software Architecture Paradigm Shift
With the exception of telecommunications systems, video games, mainframe operating systems, or rigorously inspected software (e.g., CMM Level 5), almost every piece of software on the market today is riddled with defects and, at least in theory, doesn't really work. The software only appears to work, until an unexpected combination of inputs sends it crashing down. That is a very hard truth to accept, but experienced architects know it to be the case. In commercial software, nothing is real. To verify this claim, one need only invite a group of noncomputer users to experiment with any system. It won't take long for them to lock up one or more applications and possibly invoke the Blue Screen of Death.
To cope with this uncertain terrain, software architects need to begin thinking about software as inherently unreliable, defect-ridden, and likely to fail unexpectedly. In addition, they must confront numerous issues regarding distributed computing that aren't taught in most schools or training courses.
There are many things to learn and unlearn before going to war. Recognizing a key paradigm shift will lead to a deeper understanding of distributed computing and its pervasive consequences.
Traditional System Assumptions
The essence of the paradigm shift revolves around system assumptions. Traditional system assumptions are geared toward nondistributed systems (e.g., departmental data processing systems). Under these assumptions, the system comprises a centrally managed application where the majority of processing is local, the communications are predictable, and the global states are readily observable. It is further assumed that the hardware/software suite is stable and homogeneous and that it fails infrequently and absolutely: Either the system is up or the system is down. Traditional system assumptions are the basis for the vast majority of software methodology and software engineering.
Traditional system assumptions are adequate for a world of isolated von Neumann machines (i.e., sequential processors) and dedicated terminals. The traditional assumptions are analogous to Newton's laws of physics in that they are reasonable models of objects moving slowly with respect to the speed of light.
Distribution Reverses Assumptions
The von Neumann and Newtonian models are no longer adequate descriptions of today's systems. Systems are becoming much less isolated and increasingly connected through intranets, extranets, and the Internet. In digital communications, signals propagate at close to the speed of light. With global digital communications, the Internet, and distributed objects, today's systems are operating more in accord with Einstein's relativity model. In large distributed systems, there is no single global state or single notion of time; everything is relative. System state is distributed and accessed indirectly through messages (an object-oriented concept). In addition, services and state may be replicated in multiple locations for availability and efficiency. Chaos theory is also relevant to distributed object systems. In any large, distributed system, partial failures are occurring all the time: network packets are corrupted, servers generate exceptions, processes fail, and operating systems crash. The overall application system must be fault-tolerant to accommodate these commonplace partial failures.
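To make the fault-tolerance point concrete, the following is a minimal sketch, in Java, of one common coping strategy: bounded retries with exponential backoff around a call that can fail, plus graceful degradation when the retries are exhausted. The service and its failure mode are simulated here; names such as fetchQuote and RemoteUnavailableException are illustrative, not part of any particular middleware.

import java.util.Random;
import java.util.concurrent.Callable;

public class PartialFailureDemo {

    // Illustrative stand-in for a remote fault (corrupted packet, server
    // exception, process failure, operating system crash).
    static class RemoteUnavailableException extends Exception {
        RemoteUnavailableException(String message) { super(message); }
    }

    private static final Random RNG = new Random();

    // Simulated remote call: fails roughly two times out of three.
    static String fetchQuote() throws RemoteUnavailableException {
        if (RNG.nextInt(3) != 0) {
            throw new RemoteUnavailableException("simulated partial failure");
        }
        return "42.17";
    }

    // Bounded retries with exponential backoff: the caller treats failure
    // as commonplace rather than exceptional.
    static <T> T callWithRetry(Callable<T> call, int maxAttempts) throws Exception {
        long backoffMillis = 100;
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return call.call();
            } catch (Exception e) {
                last = e;
                System.out.println("attempt " + attempt + " failed: " + e.getMessage());
                Thread.sleep(backoffMillis);
                backoffMillis *= 2;  // back off to avoid hammering a sick server
            }
        }
        throw last;  // escalate to a higher-level recovery policy
    }

    public static void main(String[] args) {
        try {
            System.out.println("quote = " + callWithRetry(PartialFailureDemo::fetchQuote, 5));
        } catch (Exception e) {
            System.out.println("service unavailable; degrading gracefully");
        }
    }
}

The essential shift is visible in the caller: failure is handled as a routine outcome governed by an explicit policy, not as an afterthought.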
Multiorganizational Systems
Systems integration projects that span multiple departments and organizations are becoming more frequent. Whether created through business mergers, business process reengineering, or business alliances, multiorganizational systems introduce significant architectural challenges, including hardware/software heterogeneity, autonomy, security, and mobility. For example, a set of individually developed systems has its own autonomous control models; integration must address how these models interoperate and cooperate, possibly without changes to the assumptions in either model.
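As a small illustration of interoperation without changing either model, consider the classic Adapter pattern: a thin layer that reconciles the interface one system expects with the interface another system autonomously provides. All names in this sketch (CustomerDirectory, LegacyCrm, and so on) are hypothetical.

// System A expects to look up customer data through this interface.
interface CustomerDirectory {
    String lookupEmail(String customerId);
}

// System B, built independently, exposes its own autonomous interface.
class LegacyCrm {
    // Returns a flat record such as "id=42;email=42@example.com"
    String fetchRecord(String id) {
        return "id=" + id + ";email=" + id + "@example.com";
    }
}

// The adapter reconciles the two models; neither system is modified.
class CrmDirectoryAdapter implements CustomerDirectory {
    private final LegacyCrm crm = new LegacyCrm();

    public String lookupEmail(String customerId) {
        String record = crm.fetchRecord(customerId);
        for (String field : record.split(";")) {
            if (field.startsWith("email=")) {
                return field.substring("email=".length());
            }
        }
        throw new IllegalStateException("record missing email: " + record);
    }
}

public class IntegrationDemo {
    public static void main(String[] args) {
        CustomerDirectory directory = new CrmDirectoryAdapter();
        System.out.println(directory.lookupEmail("42"));  // prints 42@example.com
    }
}

Neither system's assumptions change; the adapter absorbs the mismatch, which is precisely the integration posture multiorganizational systems require.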
Making the Paradigm Shift
Distributed computing is a complex programming challenge that requires architectural planning in order to be successful. Anyone who attempts to build today's distributed systems with traditional system assumptions is likely to spend too much of the budget battling the complex, distributed aspects of the system.
This difficulty usually leads to brittle solutions that do not adapt well to changing business needs and technology evolution.
The following important ideas can help organizations make the transition through this paradigm shift and avoid the consequences of traditional system assumptions.
Proactive Thinking Leads to Architecture.
The challenges of distributed computing are fundamental, and an active, forward-thinking approach is required to anticipate causes and manage outcomes. The core of a proactive IT approach is architecture: technology planning that provides proactive management of technology problems. The standards basis for distributed object architecture is the Reference Model for Open Distributed Processing (RM-ODP).
Design and Software Reuse.
Another key aspect of the paradigm shift is avoidance of the classic AntiPattern: "Reinventing the Wheel." In software practice there is continual reinvention of basic solutions and fundamental software capabilities. Discovery of new distributed computing solutions is a difficult research problem that is beyond the scope of most real-world software projects. Design patterns are mechanisms for capturing recurring solutions, and many useful distributed computing solutions have already been documented as patterns. While patterns address design reuse, object-oriented frameworks are a key mechanism for software reuse. Effective use of design patterns and frameworks is crucial to developing distributed systems successfully.
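As one example of such a documented, reusable solution, the following sketch shows the Proxy pattern, which underlies the client-side stubs of distributed object middleware. This is a minimal illustration with invented names; a real middleware stub would marshal the call into a network message rather than delegate locally.

interface AccountService {
    double getBalance(String accountId);
}

// The "real" object, which in a distributed system would live on a server.
class AccountServiceImpl implements AccountService {
    public double getBalance(String accountId) {
        return 1250.00;  // stand-in for a database lookup
    }
}

// The proxy stands in for the remote object on the client side. Here it
// adds caching and logging to show where such concerns belong.
class AccountServiceProxy implements AccountService {
    private final AccountService target = new AccountServiceImpl();
    private final java.util.Map<String, Double> cache = new java.util.HashMap<>();

    public double getBalance(String accountId) {
        return cache.computeIfAbsent(accountId, id -> {
            System.out.println("forwarding getBalance(" + id + ") to target");
            return target.getBalance(id);
        });
    }
}

public class ProxyDemo {
    public static void main(String[] args) {
        AccountService service = new AccountServiceProxy();
        System.out.println(service.getBalance("A-100"));  // forwarded to target
        System.out.println(service.getBalance("A-100"));  // served from cache
    }
}

Because the client codes against the AccountService interface, the proxy can be swapped for a genuine remote stub without changing client code, which is exactly the reuse that patterns and frameworks are meant to deliver.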
Tools.
The management of a complex systems architecture requires the support of sophisticated modeling tools. The Unified Modeling Language (UML) makes these tools far more useful because, for the first time, the majority of readers can be expected to understand the object diagrams. Tools are essential to provide both forward and reverse engineering support for complex systems. Future tools will provide increasing support for architectural modeling, design pattern reuse, and software reuse through OO frameworks.
The software architecture paradigm shift is driven by powerful forces, including the physics of relativity and chaos theory, as well as changing business requirements and relentless technology evolution. Making the shift requires proactive architectural planning, pattern/framework reuse, and proper tools for defining and managing architecture. The potential benefits include development project success, multiorganizational interoperability, adaptability to new business needs, and exploitation of new technologies. The consequences of not making the paradigm shift are well documented; for example, five out of six corporate software projects are unsuccessful. Using architecture to leverage the reversed assumptions of distributed processing can lead to a reversal of misfortunes in software development.