The continuing transformation of the IT industry around the externalization of service components constitutes an exercise in abstraction. The transformation assumes that any IT application can be recursively decomposed into constituent services. An application that has been re-architected or engineered this way is known as a composite application.
As with any abstraction exercise, the devil is in the details. When a particular capability is to be replaced with a service, "just as good" is not sufficient to trigger the change; the replacement needs to be much better, anywhere from 3X to 10X, to overcome change hysteresis and make the switch justifiable from a business perspective. Otherwise the incumbent alternative stays.
One challenge is that in many cases the organization has no tradition of measuring service components. For instance, if a replacement service is claimed to be more energy efficient and comes with metrics to prove it, those numbers are not useful if the contracting organization does not keep energy consumption statistics at the granularity of the service offering. If the available historical data is collected at the power distribution unit (PDU) level, measuring consumption at the rack or even row level, it is of little use when the unit of delivery for the service is a virtual machine (VM) and the provider furnishes energy data on a per-VM basis.
The semantic gap between VM power and PDU power could in principle be bridged by aggregating VM power into rack power. However, doing so would require significant research and would be difficult without industry consensus, because the actual numbers depend on the measurement method.
One of the most obvious metrics for a service is pricing. For instance, one could use the total cost of ownership (TCO) of a storage appliance to derive a cost per byte over the lifetime of the appliance, a number that can be compared with the cost of a cloud storage service offering. The cost of the service accrues on a monthly basis, whereas the cost of the appliance comes as one large up-front outlay covering petabytes of capacity over the appliance's 3 to 5-year lifetime. This suggests there are more dimensions to the decision between doing nothing and breaking the application into service components, whether factoring out those components SOA style or externalizing them through cloud service providers. What are these dimensions? Let's explore a few.
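The cost comparison above can be sketched by normalizing both models to the same unit, cost per gigabyte-month. All figures here are hypothetical placeholders, not vendor quotes:

```python
# Illustrative comparison of appliance TCO vs. cloud storage pricing.
# Every number below is a hypothetical assumption for the sketch.

APPLIANCE_TCO_USD = 500_000        # purchase + power + support over lifetime
APPLIANCE_CAPACITY_PB = 2          # petabytes of usable capacity
APPLIANCE_LIFETIME_MONTHS = 48     # 4-year depreciation window

CLOUD_USD_PER_GB_MONTH = 0.02      # illustrative service list price

PB_IN_GB = 1_000_000

# Express the appliance's lump-sum cost per GB-month so the two are comparable.
appliance_gb_months = APPLIANCE_CAPACITY_PB * PB_IN_GB * APPLIANCE_LIFETIME_MONTHS
appliance_usd_per_gb_month = APPLIANCE_TCO_USD / appliance_gb_months

print(f"Appliance: ${appliance_usd_per_gb_month:.4f} per GB-month")
print(f"Cloud:     ${CLOUD_USD_PER_GB_MONTH:.4f} per GB-month")
```

Note that the per-unit number alone hides the timing difference: the appliance figure assumes the capacity is fully utilized for the whole lifetime, while the cloud charge tracks actual monthly consumption.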
Performance and Quality of Service
Assuming a prospective service alternative passes the cost test, the IT organization will look at performance. It will be of little consolation that a service offering saves money if the quality of service (QoS) deteriorates to the point that complaints pile up. Service offerings tend to be remote, and hence suffer higher latency due to distance and lower bandwidth due to network limitations. A cloud bursting solution may be implemented via a VPN link to remote resources offered by a service provider. This link is a potential weakness, inducing a "tromboning" or barbell effect: two large resource pools connected through a relatively thin tether, with large latencies between entities communicating across it.
Performance deserves careful consideration because performance behaviors tend to be highly discontinuous. For instance, if a service offering doubles storage latency, this may trigger transaction timeouts. Because of the transaction retries, the actual latencies experienced by the end users served by the IT organization may not just double, but increase by an order of magnitude. On the other hand, a global company replacing a centralized database location with storage from a provider may actually end up with improved QoS if the provider caches and mirrors the data in the appropriate locations.
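The retry amplification effect can be illustrated with a toy model. The timeout, I/O count, latency distribution and retry policy below are all illustrative assumptions, not measurements from any real system:

```python
import random

random.seed(7)

TIMEOUT_MS = 500     # hypothetical whole-transaction timeout
IOS_PER_TXN = 10     # storage I/Os issued per transaction
MAX_RETRIES = 5      # retries before giving up

def txn_latency(mean_io_ms):
    """End-to-end transaction latency: each attempt that exceeds the
    timeout costs the full timeout and the transaction restarts."""
    elapsed = 0.0
    for _ in range(MAX_RETRIES):
        attempt = sum(random.gauss(mean_io_ms, mean_io_ms / 4)
                      for _ in range(IOS_PER_TXN))
        if attempt <= TIMEOUT_MS:
            return elapsed + attempt
        elapsed += TIMEOUT_MS  # timed out; retry from scratch
    return elapsed             # gave up after MAX_RETRIES

def mean_latency(mean_io_ms, n=2000):
    return sum(txn_latency(mean_io_ms) for _ in range(n)) / n

base = mean_latency(30)    # in-house storage: ~30 ms per I/O
remote = mean_latency(60)  # service offering doubles per-I/O latency
print(f"per-I/O latency 2x -> end-to-end latency {remote / base:.1f}x")
```

Because a doubled per-I/O latency pushes the typical transaction past the timeout, most attempts burn the full timeout and retry, so the end-to-end slowdown is far larger than 2x — the discontinuity the paragraph above describes.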
Another dimension of performance is scalability. Cloud computing is economically feasible because of specialization: the assumption is that one entity can fulfill a specific function on behalf of a community of service customers more efficiently than each customer can separately, through resource pooling and specialized expertise. The service provider can therefore deliver the function at a cost enough below the in-sourced alternative to still make a profit and stay in business. The pooled resources need to be larger than the largest request expected from any customer; otherwise there will be cases where the provider cannot honor a request.
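The pooling argument can be made concrete with a small statistical-multiplexing sketch. Customer counts and demand figures are invented for illustration:

```python
import random

random.seed(1)

N_CUSTOMERS = 50
HOURS = 24 * 30  # one month of hourly demand samples

# Hypothetical per-customer demand (arbitrary capacity units);
# customers peak at different, uncorrelated hours.
demand = [[random.randint(10, 100) for _ in range(HOURS)]
          for _ in range(N_CUSTOMERS)]

# In-sourced: every customer must provision for its own peak.
dedicated = sum(max(d) for d in demand)

# Pooled: the provider provisions for the peak of the aggregate load,
# which must still be at least the largest single request.
aggregate = [sum(d[h] for d in demand) for h in range(HOURS)]
pooled = max(aggregate)
largest_single = max(max(d) for d in demand)

print(f"dedicated: {dedicated}, pooled: {pooled} "
      f"({pooled / dedicated:.0%} of dedicated)")
```

Because customer peaks rarely coincide, the pooled peak is well below the sum of individual peaks, which is the efficiency margin the provider's profit comes out of.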
Security is a first-order concern, on par with performance and cost. It covers preserving the privacy and integrity of data as well as governance, risk and compliance (GRC) practices. Security is often cited as a roadblock to cloud adoption in the industry. An approach to this conundrum is to treat the problem as a continuum rather than a black-and-white issue, and to look at the capabilities available today. Different application deployment models carry different levels of security. A classification of infrastructure deployments from most to least secure could be as follows:
1) Corporate assets deployed in corporate infrastructure
2) Private cloud on corporate premises
3) Provider hosted private clouds
4) Public clouds
One solution for improving cloud security outcomes while minimizing cost is to define different classes of data and institute a policy for their deployment. For example, company secrets would fall under #1, corporate e-mail stores under #2, CRM data under #3 and product brochures under #4.
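Such a placement policy can be expressed as a simple mapping. The tier labels and data classes below just restate the illustrative example above; they are not a standard taxonomy:

```python
# Minimal sketch of a data-placement policy keyed to the four tiers above.

DEPLOYMENT_TIERS = {
    1: "corporate assets on corporate infrastructure",
    2: "private cloud on corporate premises",
    3: "provider-hosted private cloud",
    4: "public cloud",
}

# Policy: each data class maps to the LEAST secure tier it may occupy.
PLACEMENT_POLICY = {
    "company_secrets": 1,
    "email_stores": 2,
    "crm_data": 3,
    "product_brochures": 4,
}

def allowed_tiers(data_class):
    """Return every tier at least as secure as the policy requires
    (a lower tier number means more secure)."""
    ceiling = PLACEMENT_POLICY[data_class]
    return [t for t in sorted(DEPLOYMENT_TIERS) if t <= ceiling]

print(allowed_tiers("crm_data"))  # -> [1, 2, 3]
```

The point of encoding the policy is that placement decisions become checkable: a deployment tool can refuse to push CRM data to a public cloud automatically, rather than relying on per-project judgment calls.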
IT and Business Process Standardization
This is a big one. One purpose of an ESB is to ensure there are common enterprise-wide procedures ("patterns" in SOA speak) for functions such as publish-subscribe and event notification. For a single enterprise, a single company-wide proprietary ESB implementation, whether cobbled together in-house or from a single vendor, is a workable solution. Extending this notion across multiple companies and providers is much harder. Arguably cloud computing is still an emerging discipline, with the standards to enable these capabilities not yet in place.
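The publish-subscribe pattern an ESB standardizes can be reduced to a toy sketch. This is a deliberately minimal in-process illustration, not a model of any real ESB product:

```python
from collections import defaultdict

class Bus:
    """Toy message bus: topic-based publish-subscribe."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        """Register a callable to receive events on a topic."""
        self._subscribers[topic].append(handler)

    def publish(self, topic, event):
        """Deliver the event to every handler registered on the topic."""
        for handler in self._subscribers[topic]:
            handler(event)

bus = Bus()
received = []
bus.subscribe("orders.created", received.append)  # hypothetical topic name
bus.publish("orders.created", {"order_id": 42})
print(received)  # -> [{'order_id': 42}]
```

Within one company, everyone can agree on this one `Bus` and its topic names; the standardization problem the paragraph describes is getting many companies and providers to agree on the equivalent contract across organizational boundaries.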
Even the simpler problem of moving a workload from one hypervisor environment to another is currently a nontrivial undertaking. The author had the privilege of serving as an architect for a proof of concept exercise sponsored by T-Systems. The goal of the exercise was to demonstrate the ability to move virtual machines across hypervisor environments using publicly available conversion tools, a straight implementation of the VM Interoperability Usage Model defined by the Open Data Center Alliance. We used four of the best-known hypervisor environments and found roadblocks across most conversion paths. Moving VMs to public cloud providers posed additional challenges because of the degree of para-virtualization or customization in public vendors' hypervisor environments.
In addition to the intrinsic functional capabilities implemented by servicelets, composite applications need a number of ancillary capabilities. Service customers need to find the servicelets in the first place: a service registry would allow service providers to publish their offerings and users to discover, assess and bind those offerings to their applications.
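The publish/discover/bind cycle can be sketched as follows. The registry interface, offering names and endpoints are all hypothetical, invented for the illustration:

```python
class ServiceRegistry:
    """Toy registry: providers publish offerings, customers discover them."""

    def __init__(self):
        self._offerings = []

    def publish(self, name, endpoint, **attributes):
        """Provider side: advertise an offering with arbitrary attributes."""
        self._offerings.append({"name": name, "endpoint": endpoint, **attributes})

    def discover(self, **criteria):
        """Customer side: return offerings matching every criterion."""
        return [o for o in self._offerings
                if all(o.get(k) == v for k, v in criteria.items())]

registry = ServiceRegistry()
registry.publish("object-store", "https://storage.example.com", region="eu")
registry.publish("object-store", "https://storage.example.net", region="us")

# A customer assesses the matches, then binds to the chosen endpoint.
matches = registry.discover(name="object-store", region="eu")
print(matches[0]["endpoint"])
```

Real registries (UDDI in classic SOA, or the API catalogs of today's management platforms) add lifecycle, versioning and access control on top of this basic publish/discover contract.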
Another metaservice is data encryption and transformation: data needs to be striped, replicated, compressed and encrypted to meet target quality criteria. On the business side, reliable, non-repudiable mechanisms for billing and cost settlement are needed that work across composite applications. For some customers the ability to produce audit trails, or even to perform cloud forensics, is a must-have feature. Unfortunately the state of the art leaves much to be desired, as panelists declared at a recent RSA Conference.
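One detail of such a transformation pipeline is ordering: data must be compressed before it is encrypted, since ciphertext does not compress. The sketch below shows that ordering; the XOR keystream stands in for a real cipher (such as AES-GCM) and is NOT secure:

```python
import hashlib
import zlib

def transform(data: bytes, key: bytes) -> bytes:
    """Compress first, then 'encrypt'. The XOR keystream is a placeholder
    for a real cipher and provides no actual confidentiality."""
    compressed = zlib.compress(data)
    keystream = hashlib.sha256(key).digest()
    return bytes(b ^ keystream[i % len(keystream)]
                 for i, b in enumerate(compressed))

def restore(blob: bytes, key: bytes) -> bytes:
    """Reverse the pipeline: undo the XOR, then decompress."""
    keystream = hashlib.sha256(key).digest()
    compressed = bytes(b ^ keystream[i % len(keystream)]
                       for i, b in enumerate(blob))
    return zlib.decompress(compressed)

payload = b"customer record " * 100   # repetitive, hence compressible
blob = transform(payload, b"tenant-key")
assert restore(blob, b"tenant-key") == payload
print(f"{len(payload)} bytes -> {len(blob)} bytes stored")
```

Reversing the order would store near-incompressible ciphertext at full size, which is exactly the kind of quality criterion a transformation metaservice has to encode.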
In the next installment we'll take a look at Intel® Mashery and how API management addresses some of the challenges for the implementation and deployment of composite applications mentioned above, not just for a single enterprise but for whole ecosystems comprising both developer and end-user communities.