Containers are a hot topic these days, and why not? Application containers have all the buzzwords covered: they are cloud-based, serverless, lightweight, highly scalable, agile, and the list goes on. According to Gartner research, more than 50% of global organizations will be running containerized applications in production by 2020, up from less than 20% today. Containers offer many benefits for application management, and those benefits are maximized only when the containers take full advantage of the cloud. Find out why containers are increasingly preferred over virtual machines, and how you can forge a path to the cloud for containerization.
What is Containerization?
But first, what are application containers? Containers encapsulate a lightweight runtime environment for an application and all of its components and dependencies: libraries, binaries, code, and configuration files. Containers provide a standardized way to package up these components and run them consistently across the software development lifecycle (SDLC) on Unix, Linux, or Windows operating systems.
What makes containers unique is that they virtualize the operating system (OS) rather than the hardware: multiple workloads can run on a single OS instance, and a single server can host many containers because each one is small, often just tens of megabytes. The resulting savings in hardware and maintenance costs are just one of the key benefits.
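To make "packaging an application and its dependencies" concrete, a container image is usually described by a short build file. Below is a minimal, hypothetical Dockerfile for a small Java service; the base image, JAR name, and port are illustrative assumptions, not details from the original article.

```dockerfile
# Start from a minimal base image that supplies the OS libraries and the Java runtime
# (eclipse-temurin:17-jre-alpine is an assumed choice for illustration)
FROM eclipse-temurin:17-jre-alpine

# Copy the application binary and its configuration files into the image
WORKDIR /app
COPY target/orders-service.jar app.jar
COPY config/application.properties config/

# Document the port the service listens on and define how the container starts
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "app.jar"]
```

Everything the application needs, from libraries to configuration, travels inside the image, which is what lets the same container run unchanged across development, test, and production.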
Before Containerization: Virtual Machines
Virtual machines (VMs) were born of the need to exploit increasing server processing power and capacity in ways that bare-metal applications could not. The VM was the first way to deploy multiple applications on a single platform: software running on top of a physical server emulates a particular hardware system. A hypervisor is required to run VMs, and each VM carries its own operating system and hardware support requirements. The diagram below shows the additional layers required in a VM setup, which tend to slow down development and incur more costs than containers.
The Concept of Containerization — An Analogy
When we talk about containerization, a helpful analogy compares application containers to shipping containers. According to “The Box” by Marc Levinson, the advent of shipping containers in 1956 had a significant impact on the global economy. Before shipping containers, it cost just under $6 to ship one ton of loose cargo. Shipping containers quickly became popular because they could be stacked, took up less real estate, and were more portable and easier to manage. After shipping containers emerged, shipping costs dropped dramatically, falling to 16 cents per ton. That 97% savings was achieved because of the standardization and efficiencies the new shipping containers offered.
Thinking about application containers, what does this tell us? If you are using Talend to develop your applications, you are the manufacturer. You deliver the application to your operations team, who act as system administrators and in turn deliver value to your end users, working in the context of your cloud providers. With developers, operators, and cloud providers all involved, containers provide a contract for collaboration. And because they are based on modular standards, they increase the efficiency with which you can deliver the cargo, which in this case is your data.
Containers and Virtual Machines
A virtual machine (VM) virtualizes hardware. A container virtualizes the operating system (OS). Both perform essentially the same function, but containers require much less space and effort. In the world of virtualization:
- A hypervisor is required to virtualize the hardware being used
- Each VM duplicates a full guest operating system, consuming its own share of disk, CPU, and memory
Compare that to container requirements:
- No hardware is virtualized; there is only a thin container engine layer running on top of the OS
- Containers are stacked directly on the same OS. This eliminates the costs associated with the hypervisor and the guest operating systems, so you can pack many more containers than virtual machines onto the same host, saving on hardware costs.
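The "stacking" described above can be sketched with a Docker Compose file, which declares several containers that all share the host's OS kernel with no hypervisor and no guest operating systems. This is a minimal, hypothetical example; the service names and images are illustrative assumptions.

```yaml
# Three independent workloads sharing one host OS kernel.
# Each service becomes its own container on the same machine.
services:
  web:
    image: nginx:1.25              # assumed image/tag for illustration
    ports:
      - "8080:80"
  api:
    image: my-org/orders-api:1.0   # hypothetical application image
    environment:
      - DB_URL=postgres://db:5432/orders
  db:
    image: postgres:16
    environment:
      - POSTGRES_PASSWORD=example
```

Running the equivalent workloads as VMs would require three guest operating systems plus a hypervisor; here, the only overhead per workload is the container itself.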
Containerization’s Secret to Scaling Up…and Down
As part of supporting our own IT infrastructure, we have performed various benchmark tests internally at Talend and discovered that containers start up five times faster than VMs. When we launched our own VMs, it took anywhere from five to ten minutes to start them; containers start in less than 60 seconds. That is a significant difference in the world of scale, where you want capacity exactly when you need it.
Containers have also helped facilitate the DevOps processes at Talend. When we are ready to deploy to production, it is much easier to move through the environments with containers. It is literally plug and play, and it can be done on demand. This is a huge cloud-enabled benefit: the ability to release quickly and often also comes with the freedom to make mistakes, because you can roll a release back just as quickly. This level of agility can only be achieved with containers.
If we think about containers in the cloud specifically, another great benefit is elasticity. And elasticity is a spectrum: you can provision at the infrastructure level or at the level of individual containers, and thanks to the more rapid response time of containers, you can dynamically scale up or down very quickly and efficiently.
Cloud Containerization – Going Serverless
In many cases today, containers are deployed on VMs. This makes the VM the unit of provisioning and the container the unit of consumption. If you delegate responsibility for managing the VM to the cloud provider so you don’t have to do it yourself, it becomes a serverless environment. On AWS, for example, you can use Fargate to run a container image from your repository as a microservice in the cloud, instead of first provisioning a pool of Amazon EC2 server instances. In doing so, the cloud provider takes on your technology concerns, freeing you up to focus on your core business asset: your data.
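To illustrate what "the container becomes the unit of consumption" looks like in practice, here is a trimmed-down sketch of an Amazon ECS task definition targeting Fargate. The family name, image URI, region, and CPU/memory sizes are assumptions for illustration; the overall shape follows the ECS task definition format.

```json
{
  "family": "orders-service",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "containerDefinitions": [
    {
      "name": "orders",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/orders:1.0",
      "portMappings": [{ "containerPort": 8080 }],
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "/ecs/orders-service",
          "awslogs-region": "us-east-1",
          "awslogs-stream-prefix": "orders"
        }
      }
    }
  ]
}
```

Note that no EC2 instance appears anywhere in the definition: you declare the container, its size, and where its logs should go, and the provider supplies the underlying compute.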
To complete this value chain, some adjustments to how you architect your apps are needed; you must capture the operational logs to preserve visibility and measurability in the cloud. You could take that on yourself, but wouldn't it be much better to simply export the logs to a cloud monitoring and management service like Amazon CloudWatch? Other service capabilities in the cloud are equally important: if you are running many microservices in containers, how do you orchestrate them? You can use cloud services like Amazon Simple Notification Service (SNS) or Amazon Simple Queue Service (SQS) to auto-scale your resource pool on demand much more efficiently. As a result, the unit of deployment becomes the container rather than the EC2 instance.
Containerization for a Resilient Integration Workflow
Now you know about the many advantages of using containers, and how the cloud multiplies those benefits. In the next article about containerization, we will continue this discussion and talk about some decisions you will need to make as part of the container development process. You will learn about instances, and which instances are best for certain scenarios. You will also learn how to build a resilient integration workflow that includes standardizing your end-to-end DevOps. Finally, we will demonstrate how Talend supports application containers by preserving your value chain, so you can realize all the benefits when you deploy your containers in the cloud.