5.3 Systems Integration

The discussion of architectural issues in client-server systems integration can be extended to a number of additional areas from which many important questions arise. One of the key skills taught in this drill school is the ability to handle tough questions about the architecture being used. The reader may have detected in some of the previous remarks an attitude of skepticism; that attitude is appropriate for a mature understanding of technology capabilities and how they apply to system development.

Object-oriented architects are responsible for developing the technology plans that manage these underlying technologies in a way that supports the full system life cycle, which may range up to 15 years for public-sector systems. The key concepts of technology management can be used to make predictions. For example, technologies in today's configurations will evolve into new technologies that may render many current interfaces and infrastructure assumptions obsolete. One approach to mitigating this inevitable commercial technology change is to define application software interfaces that the architect controls and maintains, isolating the application software, which is subject to rapid innovation, from the majority of the commercial infrastructure. These concepts and the details of how to implement them are covered in significantly more detail in some of the authors' writings; please refer to the bibliography.
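The architect-controlled isolation interface just described can be sketched in a few lines. The following Python sketch is illustrative only; the names (`DocumentStore`, `VendorXStore`, `archive_report`) are hypothetical stand-ins, not part of any actual product interface:

```python
from abc import ABC, abstractmethod

class DocumentStore(ABC):
    """Architect-controlled interface: application subsystems depend only on this."""
    @abstractmethod
    def save(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def load(self, key: str) -> bytes: ...

class VendorXStore(DocumentStore):
    """Adapter for a hypothetical commercial product. Only this class
    changes if the vendor's interface evolves or the product is replaced."""
    def __init__(self) -> None:
        self._backing: dict[str, bytes] = {}   # stands in for the vendor's API

    def save(self, key: str, data: bytes) -> None:
        self._backing[key] = data

    def load(self, key: str) -> bytes:
        return self._backing[key]

def archive_report(store: DocumentStore) -> bytes:
    # Application logic sees only the stable, architect-owned interface.
    store.save("report", b"quarterly figures")
    return store.load("report")
```

Application subsystems call only `DocumentStore`; a vendor change is absorbed by writing a new adapter rather than by editing application logic.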
Taking a somewhat cynical view of open systems technologies, one can conclude that the developers of standards in both formal and consortium organizations represent the interests of technology suppliers. There are significant differences in quality between the specifications created for the general information technology market, which comprise the vast majority of object technology specifications, and those used in particular mission-critical markets such as telecommunications. For general information technology specifications, there are many cases where the standards do not support testing. In fact, only about 5% or 6% of formal standards have test suites that are readily available, and the majority of testable standards are compilers, such as those for FORTRAN and Pascal. The CORBA technology market has taken a direction to resolve this issue, at least for the base CORBA specifications. Significant additional work needs to occur before general information technology standards truly meet the needs of object-oriented architects.

What about the Internet? The integration of Internet technologies has a high priority in many organizations. The use of intranets and extranets is becoming a mission-critical capability for large and medium-sized enterprises, and there has been substantial research and development in this domain. Figure 5.1 shows some of the kinds of interfaces that have been created to support the integration of object technologies with the Internet. Commercially supplied products that tie CORBA technologies directly to Internet protocols such as HTTP are readily available. The implementation of ORB technologies in an Internet-ready fashion has occurred; for example, Java-language-based ORBs are integrated with browser environments. The use of object-oriented middleware is an important and revolutionary step in the evolution of the Internet.
Object-oriented middleware represents the ability to create new types of services rapidly and to connect to new types of servers dynamically. These capabilities go well beyond what is currently feasible with technologies like HTTP and Java remote method invocation, which is a language-specific distributed computing capability.

Figure 5.1. Integration of Multiple Technology Bases

Figure 5.2 addresses the question of integrating Microsoft technologies with other object-oriented open systems capabilities. Based upon industry-adopted standards, it is now possible to integrate shrink-wrapped applications into distributed object environments supporting both CORBA and COM+. Application architectures can implement this capability in several ways. One approach is to adopt the shrink-wrapped products' defined interfaces into the application software architecture; the application's subsystems then become directly dependent upon proprietary control interfaces, which may become obsolete at the vendor's discretion. The alternative approach is to apply object wrappers that profile the complexity of the shrink-wrapped interfaces and isolate the proprietary interfaces from the majority of the application subsystem interactions. The same level of interoperability can be achieved with either approach, but the architectural benefits of isolation can prove significant.

Figure 5.2. Systems Integration with Object Wrapping

What about security? Computer security is a challenging requirement that is becoming a necessity because of the increasing integration and distribution of systems, including intranets and the Internet. One reason security is so challenging is that it has frequently been supplied to the market as a niche or nonstandard capability. For example, the COM+ technology and its ActiveX counterparts do not have a security capability.
When one downloads an ActiveX component from the Internet, that component has access to virtually any resource in the operating-system environment, including data on the disk and system resources that could be used for destructive purposes. The implication is that it is unwise to use ActiveX and COM+ for Internet-based transactions and information retrieval. The Object Management Group has addressed this issue in response to end-user questions about how such a capability could be supplied. The group adopted the CORBA security service, which defines a standard mechanism by which multiple vendors can provide security capabilities in their various infrastructure implementations. Computer security has been implemented in selected environments; an understanding of the CORBA security service and how to apply it will be important in enabling organizations to satisfy this critical requirement.

What about performance? Object-oriented technology has suffered criticism with respect to performance. Because object technology provides more dynamic capability, certain overheads follow. In the case of the OMG and CORBA specifications, it is fair to say that the CORBA architecture itself has no particular performance consequences, because it is simply a specification of interface boundaries, not of the underlying infrastructure. In practice, CORBA implementations have similar underlying behaviors, with a few exceptions. In general, a CORBA implementation can be thought of as managing a lower-level protocol stack, which in many cases is a socket-level or Transmission Control Protocol/Internet Protocol (TCP/IP) layer. Because the CORBA mechanisms provide a higher level of abstraction, which simplifies programming, the ORB infrastructure must intelligently establish communications between the client program and the server program when an initial invocation occurs.
The initial invocation therefore requires additional overhead and handshaking; without this infrastructure, the handshaking would have to be programmed manually by the application developer. After the ORB establishes the lower-level communication link, it can pass messages efficiently through the lower-level layer. In benchmarks of ORB technologies, some researchers have found CORBA technologies to be faster in some applications than comparable programs written using remote procedure calls. Part of the reason is that all the middleware infrastructures are evolving and becoming more efficient as technology innovation progresses. On second and subsequent invocations in an ORB environment, performance is comparable to, and in some cases faster than, remote procedure calls. The primary performance distinction between ORB invocations and custom programming to the socket layer lies in the marshaling algorithms. Marshaling algorithms are responsible for taking application data, passed as parameters in an ORB invocation, and flattening it into a stream of bytes that can be sent through a network by lower-level protocols. Machine-generated marshaling code cannot be quite as efficient as marshaling hand-tailored by a programmer for a specific application. However, because of the increasing speed of processors, the performance of marshaling algorithms is a fairly minuscule consideration compared to other factors such as the actual network communication overhead. Proper distributed object infrastructures provide additional options for managing performance: because these infrastructures have the access transparency property, it is possible to substitute alternative protocol stacks under the programming interfaces that are generated.
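The marshaling step described above, flattening invocation parameters into a byte stream, can be illustrated with a deliberately simplified Python sketch. Real ORB marshaling code is generated from IDL and handles a full range of data types; the wire format below is invented purely for illustration:

```python
import struct

def marshal(op: str, args: list[int]) -> bytes:
    """Flatten an operation name and integer parameters into a byte stream,
    as generated marshaling code would (greatly simplified, invented format)."""
    name = op.encode("utf-8")
    buf = struct.pack("!H", len(name)) + name   # length-prefixed operation name
    buf += struct.pack("!H", len(args))         # argument count
    for a in args:
        buf += struct.pack("!i", a)             # each int in network byte order
    return buf

def unmarshal(buf: bytes) -> tuple[str, list[int]]:
    """Reverse the flattening on the receiving side."""
    (nlen,) = struct.unpack_from("!H", buf, 0)
    name = buf[2:2 + nlen].decode("utf-8")
    off = 2 + nlen
    (argc,) = struct.unpack_from("!H", buf, off)
    off += 2
    args = [struct.unpack_from("!i", buf, off + 4 * i)[0] for i in range(argc)]
    return name, args
```

A hand-tailored marshaler for one specific application could shave bytes and copies from this, which is the efficiency gap noted above, but processor speeds make the difference minor relative to network overhead.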
After the application developer understands and stabilizes the required interfaces, it becomes possible to program alternative protocol stacks that provide various qualities of service. This approach conforms to best practices for benchmarking and performance optimization: first determine a clean architecture for the application interaction, next identify the performance hot spots, and then compromise the architecture as appropriate in order to optimize. Within a single object-oriented program, compromising the architecture is one of the few options available. In a distributed-object architecture, because the actual communication mechanisms are hidden by access transparency, it is possible to optimize the underlying communications without directly compromising the application software structure. In this sense, distributed-object computing has some distinct advantages for performance optimization that are not available under ordinary programming circumstances.

What about reliability? Reliability is a very important requirement when multiple organizations are involved in various kinds of transactions. It is not acceptable to lose money during electronic funds transfers or to lose critical orders during mission-critical interactions. The good news is that distributed-object infrastructures, because of their increasing level of abstraction from the network, do provide some inherent benefits in the area of reliability. Both COM+ and CORBA support automatic activation of remote services. CORBA provides this in a completely transparent manner called persistence transparency, whereas COM+ requires the allocation of an interface pointer, an explicitly programmed operation that also manages the activation of the services once it completes.
If a program providing CORBA services fails, CORBA implementations are obligated to attempt to restart the application automatically. In a COM+ environment, one would have to allocate a new interface reference and reinitiate communications. An important capability for ensuring reliability is the use of transaction monitors. The Object Management Group has standardized the interfaces for transaction monitors through the Object Transaction Service, and this interface is available commercially from multiple suppliers today. Transaction monitors support the so-called ACID properties: atomicity, consistency, isolation, and durability. They provide these properties independent of the distribution of the application software. Using middleware technologies with transaction monitors provides a reasonably reliable level of communications for many mission-critical applications. Other niche-market capabilities that go beyond this level can be discovered through cursory searches of the Internet.

In conclusion, what is needed from commercial technology to satisfy application requirements is quality support for user capabilities. This includes quality specifications that meet the user's needs and products that meet the specifications. New kinds of testing and inspection processes are needed to ensure that these capabilities are supported, and these processes must keep pace with the rapid technology innovation occurring among consortia and proprietary vendors today. End users need to play a larger role in driving the open systems processes in order to realize these benefits. In terms of application software development, each development team needs one or more object-oriented architects who understand these issues and can structure the application to take advantage of commercial capabilities while mitigating the risks of commercial innovations that may result in maintenance costs.
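The all-or-nothing guarantee a transaction monitor provides, the atomicity in ACID, can be sketched in a few lines. This is a single-process illustration with hypothetical names (`Account`, `transfer`), not a representation of how the Object Transaction Service is actually implemented:

```python
class Account:
    """Minimal account record for the funds-transfer illustration."""
    def __init__(self, balance: int) -> None:
        self.balance = balance

def transfer(src: Account, dst: Account, amount: int) -> bool:
    """All-or-nothing funds transfer in the spirit of a transaction monitor:
    either both balance updates take effect, or neither does."""
    snapshot = (src.balance, dst.balance)   # saved state for rollback
    try:
        src.balance -= amount
        if src.balance < 0:
            raise ValueError("insufficient funds")
        dst.balance += amount
        return True                          # commit: both updates kept
    except ValueError:
        src.balance, dst.balance = snapshot  # rollback: no partial update
        return False
```

A real transaction monitor provides the same guarantee across distributed resources and adds durability, isolation, and consistency enforcement; the point here is only that no failure leaves a half-completed transfer behind.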
When application systems are constructed, the use of application profiles should be considered at the system profile level for families of systems and at the functional profile level for domains. It is also important for software managers to be cognizant of these issues and to support their staffs in the judicious design, architecture, and implementation of new information systems.