OSGI Service Containers

Tags: osgi, esb, soa, cloud

The first post in this series provided a look at the definition of a Container.  The second post in the series explored how Platforms leverage Containers to deliver SOA design patterns at internet scale in the Cloud.  This post presents a simplified example of applying Container architecture for extensible business infrastructure.  It then addresses when and where to use the power of micro-service containers like OSGI.

Use Case

Consider a B2B company seeking to add a new trading partner.  The B2B partner may wish to subscribe to a data feed, so the partner will need to adapt its internal APIs to the trading network's published API.

Rather than an elaborate IT project, this should be a simple Cloud self-service on-demand scenario.  This not only increases agility, it maximizes the market for the B2B company.  And it ensures business scalability since B2B IT staff will not be on the information supply chain critical path.

The Partner will probably want extensions to the base API, so the B2B platform needs to observe the open-closed principle: closed to modification while open to extension.  Extensions could include additional validation and business rules, or additional schema extensions.  Schema extensions in particular will impact multiple workflow stages.  In addition, transformations for message schemas might be required, along with data-level filtering for fine-grained access control.  In order to realize the Self-Service On-Demand level expected from Cloud providers, the Platform must allow this type of mediation to be dynamically provisioned without rebuilding the application.


Figure 3: Dynamic Subscription Mediation


In the diagram above, the partner submits a message to subscribe to a data feed (1) via, say, a REST web service.  The subscription message is received by the RouteBuilder, which dynamically instantiates a new route (2) that consumes from a JMS topic.  The route filters (3) messages based on the partner's access privileges, provides custom subscription mediation logic (4), and then sends the message using WS-RM (5).
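
As a minimal sketch of what such a dynamically provisioned route might look like in Apache Camel's Java DSL: the class names, filter and mediation beans, and endpoint URIs below are hypothetical placeholders, not part of the reference architecture, and the WS-RM delivery is shown simply as a CXF endpoint reference.

```java
import org.apache.camel.builder.RouteBuilder;

// Hypothetical sketch: one route per partner subscription, built at runtime.
public class SubscriptionRouteBuilder extends RouteBuilder {

    // Placeholder filter/mediation beans; real logic would be provisioned per partner.
    public static class PartnerAccessFilter {
        public boolean isVisibleTo(String body) {
            return body != null;   // e.g. check partner entitlements against the payload
        }
    }

    public static class PartnerMediation {
        public String mediate(String body) {
            return body;           // e.g. apply partner-specific transformation rules
        }
    }

    private final String partnerId;

    public SubscriptionRouteBuilder(String partnerId) {
        this.partnerId = partnerId;
    }

    @Override
    public void configure() {
        // (2) consume from the shared JMS topic carrying the data feed
        from("jms:topic:dataFeed")
            .routeId("subscription-" + partnerId)
            // (3) filter messages according to the partner's access privileges
            .filter().method(PartnerAccessFilter.class, "isVisibleTo")
            // (4) partner-specific mediation logic (validation, transformation, ...)
            .bean(PartnerMediation.class, "mediate")
            // (5) deliver to the partner's endpoint (WS-RM in the scenario above;
            //     shown here as a CXF endpoint reference for brevity)
            .to("cxf:bean:partnerEndpoint-" + partnerId);
    }
}
```

The component handling the subscription request (1) would then add the new route to the running CamelContext, for example with camelContext.addRoutes(new SubscriptionRouteBuilder("acme")).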

Where should this mediation logic be hosted?  Creating a service for each partner is not too difficult.  But as the number of services in a composite service increases, the overhead of inter-process communication (IPC) becomes a problem.  As an extreme case, consider the performance impact if the subscribe, filter, and custom mediation logic each required a separate service invocation.

In many cases modularity and extensibility are even more important than performance.  When partners extend the API, the impact may not be easily isolated to a single stage of processing.  In such cases the extension points need to be decoupled from the core business flow.

Likewise, when the core service evolves we need to ensure consistent implementation across different B2B partners.  Regardless of variation, some requirements remain common.  We want to be sure that these requirements are implemented consistently.  A copy-paste approach will not be manageable.

Finally, using external processes to implement variation may undermine efficient resource pooling.  Each partner ends up with its own unique set of endpoints and supporting applications.  In the diagram above, mediation logic belongs to a pool of routes running in the same process to improve efficiency.

So we want granular composability for managed variation, as well as modularity for extensibility of business logic.  These goals are in tension with the need to minimize IPC overhead and to preserve efficient resource pooling.

Sample Architecture

This post focuses on the role of the Service Container in resolving these design forces.  It is used in the context of the Application Container and ESB Containers shown in the sample architecture below.


Figure 1: SOA Whiteboard Logical Architecture


The Application Container hosts the actual business services. The business service is a plain old Java object (POJO).  It does not know or care about integration logic or inter-process communication. That is addressed by the Service Container. The Service Container runs in the Application Container process. The exposed service is called a Basic Service.
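
To make the separation concrete, the Basic Service might be nothing more than a plain Java class; the OrderService and Order names below are hypothetical illustrations, not part of the reference architecture.

```java
// Hypothetical Basic Service: pure business logic, no integration concerns.
public class OrderService {

    public static class Order {
        public String partnerId;
        public String productCode;
        public int quantity;
    }

    // Business method only; endpoints, transport, and marshalling are
    // supplied by the Service Container that wraps this POJO.
    public Order validate(Order order) {
        if (order.quantity <= 0) {
            throw new IllegalArgumentException("quantity must be positive");
        }
        return order;
    }
}
```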

The Service Container also runs in the ESB Container.  The ESB Container provides additional integration including security, exactly-once messaging, monitoring and control, transformation, etc.  It provides a proxy of the Basic Service with the same business API but different non-functional characteristics.

Service Container

The Service Container is a logical concept, and it is language dependent.  Since the Service Container runs inside both the ESB and the Application Container, it has to be compatible with the languages supported by each.  It may well have multiple implementations, since the enterprise may use multiple Application Containers.

For purposes of discussion we will focus on Java.  We can think of Tomcat as a typical Application Container and Apache Karaf as the ESB container.  The Service Container depends on a Dependency Injection framework.  We might use Spring for dependency injection in Tomcat; in Karaf we might choose Blueprint.  The Service Container itself might be implemented in Apache Camel.  Camel works with both Spring and Blueprint.  The actual service implementation is a plain old Java object (POJO).


Figure 2: Containerized Services


The Service Container is non-invasive in the sense that it has a purely declarative API based on XML and annotations.  The service developer does not need to make any procedural calls.  Adoption of the Service Container is supported by an SDK that provides a cookbook for using it, but the container should be simple enough that no special tooling is required.  The SDK should address Continuous Integration (CI) and DevOps use cases for every stage of development.  As such, the Service Container can encapsulate any lower-level complexity introduced by other containers.
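
As one illustration of that declarative style (assuming Apache Camel as in the stack above), a POJO can be bound to an endpoint with a single annotation; the class name and endpoint URI below are hypothetical, and the same binding could instead be expressed in Spring or Blueprint XML.

```java
import org.apache.camel.Consume;

// Hypothetical example: the POJO is bound to an endpoint purely declaratively
// (Camel 2.x style); no procedural integration code is written by the developer.
public class SubscriptionHandler {

    // When this bean is registered with the Camel context, messages arriving
    // on the endpoint are routed to this method automatically.
    @Consume(uri = "jms:queue:subscriptions")
    public void onSubscribe(String subscriptionXml) {
        // business handling of the subscription request goes here
    }
}
```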

The Service Container adds functionality beyond the basic dependency injection framework to address endpoint encapsulation, mediation and routing, and payload marshalling.  Using the Service Container provides a flexible contract between the integration team and the service provider, which allows performance optimization while maintaining logical separation of concerns.
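
A small, hypothetical sketch of what that added functionality can look like, reusing the OrderService POJO from the earlier sketch: the route encapsulates the concrete endpoint and handles payload marshalling, so the POJO only ever sees its own domain types.

```java
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.model.dataformat.JsonLibrary;

// Hypothetical sketch: the Service Container owns endpoints and marshalling.
public class OrderServiceRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("cxfrs:bean:orderRestEndpoint")                              // endpoint encapsulation
            .unmarshal().json(JsonLibrary.Jackson, OrderService.Order.class) // payload marshalling
            .bean(OrderService.class, "validate")                         // plain POJO business call
            .marshal().json(JsonLibrary.Jackson);                         // marshal the response
    }
}
```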

But in some cases this requires the platform to be able to deploy the new jars dynamically at runtime to an existing, running container.  Indeed, there may be many Containers that will need to host the new extension points or adaptor services.  All such concerns should be transparent to the service provider.

This could be implemented by the service provider team, but the same mediation will be used by many service providers.  So it is preferable to delegate this functionality to the Platform.  This has the added benefit that service providers can focus on business logic rather than creating and managing efficient resource pools that deliver reliable, secure throughput.  Business logic and IT logic are often orthogonal skill sets.  So separation of concerns also leads to improved efficiency.

Having this handled by the Platform is good, but it raises the question: how are custom mediation jars resolved, and how are conflicts with custom logic from other partners managed?

Micro Service Containers

There is a key difference in the choice of Dependency Injection framework in the example architecture.  The Application Container uses Apache Tomcat and the ESB Container uses Apache Karaf.  Apache Karaf supports Blueprint for dependency injection, but it also supports OSGI micro-services, whereas traditional Spring running in Tomcat does not.

Flexible deployment of business logic can be achieved with Dependency Injection frameworks like Spring, but two problems arise.  The first is dependency management and classpath conflicts.  The second is managing the dynamic modules.

OSGI is a mature specification that manages dependencies at the package level.  What this means for the enterprise is that we can manage multiple versions of a module within the same runtime.  In turn, this means we can dynamically deploy new service modules to runtime containers without having to worry about conflicting libraries.  The concept is to achieve the same pluggable ease-of-use for enterprise services that you get with your smart phone’s App Store.
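
As a hedged sketch of that pluggability, the standard OSGI API can install a new module into a running container; the deployer class and bundle location below are hypothetical, and a real platform would more likely drive this through Karaf features or a hot-deploy directory.

```java
import org.osgi.framework.Bundle;
import org.osgi.framework.BundleContext;
import org.osgi.framework.BundleException;

// Hypothetical sketch: install a partner's mediation module into a running container.
public class PartnerModuleDeployer {

    private final BundleContext context;

    public PartnerModuleDeployer(BundleContext context) {
        this.context = context;
    }

    public void deploy(String bundleLocation) throws BundleException {
        // OSGI resolves the bundle's Import-Package headers against the package
        // versions already in the container, so conflicting libraries stay isolated.
        Bundle bundle = context.installBundle(bundleLocation);
        bundle.start();
    }
}
```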

In addition to dependency management, the OSGI specification provides a micro-service architecture.  Micro-service refers to the fact that we are only talking about services and consumers within the same JVM.  Micro-services go beyond dependency injection to provide a framework for dynamic services that can come and go during the course of execution.  This supports elasticity of services in the cloud.
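
A minimal sketch of an in-JVM micro-service using the standard OSGI service registry; the MediationService contract and placeholder logic are hypothetical, and in practice Blueprint or Declarative Services would generate most of this plumbing.

```java
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceRegistration;
import org.osgi.util.tracker.ServiceTracker;

// Hypothetical in-JVM micro-service contract.
interface MediationService {
    String mediate(String payload);
}

public class MediationMicroService {

    // Publisher side: register the service so it becomes visible to other bundles.
    public static ServiceRegistration<MediationService> publish(BundleContext ctx) {
        MediationService impl = payload -> payload.toUpperCase(); // placeholder logic
        return ctx.registerService(MediationService.class, impl, null);
    }

    // Consumer side: track the service, which may come and go dynamically.
    public static String mediateIfAvailable(BundleContext ctx, String payload) {
        ServiceTracker<MediationService, MediationService> tracker =
                new ServiceTracker<>(ctx, MediationService.class, null);
        tracker.open();
        try {
            MediationService service = tracker.getService();
            return service != null ? service.mediate(payload) : payload; // fall back if absent
        } finally {
            tracker.close();
        }
    }
}
```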

OSGI is the same technology used by Eclipse plugins, so it is very mature and stable.  Moreover, as an open standard it is appropriate for use in the enterprise.  But there is some additional complexity with OSGI.  That complexity is merited when OSGI is used to host dynamic modules which need to be composed in-process to encapsulate variation or re-use.  This is the case with the ESB Containers, but it is not always the case for Basic Services running in the Application Containers.

For example, consider a simple transformation service.  It is visually designed and published as a web service.  B2B partners can use the transformation service, and if the additional latency does not impact their SLA, then the additional complexity of OSGI is not merited. 

As a general rule, mediation logic in ESB Containers should use OSGI to provide for flexible deployment of mediation modules.  Mediation is more likely to vary, and it varies over a broader set of stakeholders.  These stakeholders may require diverse and potentially overlapping libraries, each on its own lifecycle, which calls for the dependency management capability of OSGI.  Moreover, mediation services are more likely to be highly dynamic.

In contrast, Basic Service logic can run in the Application Container and is usually delivered by a single organization along with other related services as part of a single Application deployment lifecycle.  Unlike the mediation use case, the service provider team has control and can resolve any library dependency issues during development.  As such it can be run in a lightweight container but it does not necessarily need OSGI.

In summary, Apache Camel provides a Service Container on top of the dependency injection framework, so it can run with Spring or with OSGI, where Blueprint is the dependency injection framework.  OSGI should be considered for composite services and for mediation and routing, to provide the flexibility and extensibility needed for self-service, on-demand, elastic SaaS.

The next post in the series will explore the reference architecture in greater detail regarding dynamic provisioning and data driven mediation and routing.
