Kubernetes makes it easy to deploy and run containerized applications in cloud, on-premise, or hybrid environments. It has gained a lot of attention recently and has become a platform for innovation in containerized applications. One technology that has arguably helped Kubernetes grow is Helm, which provides the means to package, install, and manage Kubernetes applications.
At Talend, we are using Kubernetes and Helm for our cloud applications. In this post, we will show how we have used Kubernetes resources and a Helm chart to address a specific deployment challenge.
The specific use case is a Kubernetes application that needs to connect to a PostgreSQL database. First, we create a Helm chart to manage the deployment of the application. We don’t yet know how to provision PostgreSQL and how to manage the connection from the application to PostgreSQL.
Although the answers might seem trivial at first, things become more complicated when we consider the different deployment strategies for PostgreSQL. So let's start by looking at several possible PostgreSQL deployment scenarios, which we have tried over the course of our journey with Kubernetes and Helm.
Embedded Deployment Scenario
In this scenario, PostgreSQL is deployed alongside the application inside the Kubernetes cluster, as shown in the following diagram.
While it might not be the perfect scenario for a production system, it provides an easy and flexible way to get a PostgreSQL instance up and running in a very short time.
The official Kubernetes Helm charts repository provides a PostgreSQL chart which installs and configures a PostgreSQL database inside the cluster. The database name, database user name, and database password can be provided in the values.yaml file or as input parameters at install time. The chart stores the database password in a Kubernetes secret which is then used by the pod that hosts the PostgreSQL container and by the applications which need to connect to the database.
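As a sketch, the relevant chart values might look like the following. The exact key names depend on the chart version, so treat these as illustrative rather than authoritative:

```yaml
# values.yaml fragment for the PostgreSQL chart
# (key names vary between chart versions -- check the chart's own values.yaml)
postgresDatabase: myapp
postgresUser: myapp_user
# If the password is omitted, the chart typically generates a random one
# and stores it in the Kubernetes secret it creates.
postgresPassword: change-me
```

The same values can alternatively be passed at install time, e.g. with `helm install --set postgresPassword=...`.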
OSBA Deployment Scenario
OSBA (Open Service Broker API) enables service providers to deliver services to applications running in a cloud-native platform, such as Kubernetes. The idea is to provision resources that are managed by a cloud provider using Kubernetes manifests.
In this scenario, we are using the Kubernetes service catalog to connect to a Microsoft Azure service broker and provision a PostgreSQL database.
The Azure service catalog offers three different PostgreSQL configurations:
- Provision only the cluster
- Provision the cluster and the DB
- Provision a new DB on an existing cluster
We have chosen to provision both the cluster and the DB to have a setup similar to the embedded model.
Once the cluster/database has been provisioned, the service broker creates a secret inside the Kubernetes cluster which contains all parameters required to access the PostgreSQL database, such as host, port, database name, user, and password. As in the embedded scenario, this secret can be used by the applications that need to connect to the database.
It is important to note that the format of the created secret depends entirely on the service broker provider: the keys of the secret may vary from one cloud provider to another.
Learn more about the Open Service Broker for Azure and PostgreSQL.
External Deployment Scenario
This scenario implies that a PostgreSQL database is provisioned and managed outside the Kubernetes cluster. This could be a managed service in a cloud environment (e.g., Amazon RDS) or a self-managed PostgreSQL cluster in a cloud or on-premise environment.
Kubernetes provides a special kind of service called a "service without selector" to enable communication from resources inside the cluster to resources outside the cluster. In this scenario, we are using a service without selector to connect from our application deployed inside the cluster to an external PostgreSQL database.
The database host and access credentials are stored in a Kubernetes secret which can be accessed by our application inside the cluster. In this case the secret attributes are freely chosen to match our service’s required parameters.
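The service-without-selector setup can be sketched as a Service with no selector, paired with a manually created Endpoints object that points at the external database. The names and IP address below are placeholders:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: postgresql          # applications inside the cluster connect to postgresql:5432
spec:
  ports:
    - port: 5432
---
# Because the Service has no selector, Kubernetes does not create
# Endpoints automatically; we supply them ourselves.
apiVersion: v1
kind: Endpoints
metadata:
  name: postgresql          # must match the Service name
subsets:
  - addresses:
      - ip: 10.0.0.42       # placeholder: the external database address
    ports:
      - port: 5432
```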
When we look at these three deployment scenarios, we see that in each case the application that uses PostgreSQL needs to be aware of the different secrets with their own names and content. The embedded scenario uses values and a secret provided by the official PostgreSQL Helm chart, the OSBA scenario uses a secret with vendor-specific attribute names, and in the external scenario we are free to define the secret name and its content.
This is fine if we only need to work with one deployment scenario. However, this might not always be the case. For instance, in the development phase we might want to start with an embedded deployment, but later in production, we may need to use a managed service from a cloud provider – therefore needing an OSBA deployment. We can see that there is a need for abstraction here, so that the application can access the database in the same way, independent of how and where the database is deployed.
The solution we have chosen consists of a generic secret that provides the abstraction layer required by an application to connect to a database without having any knowledge of its whereabouts. The advantage of a secret, apart from being the designated resource for handling sensitive data, is that it also provides a way to synchronize the launch of pods: if a pod mounts the secret as a file or maps environment variables to it, the pod will not even start before the secret is available.
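For instance, a container whose environment variables map to keys of the generic secret will not start until that secret exists. The secret and key names below are placeholders:

```yaml
# Fragment of a Deployment pod spec (names are placeholders)
containers:
  - name: myapp
    image: myapp:latest
    env:
      # The pod stays pending until the referenced secret is available.
      - name: DATABASE_HOST
        valueFrom:
          secretKeyRef:
            name: generic-postgresql
            key: host
      - name: DATABASE_PASSWORD
        valueFrom:
          secretKeyRef:
            name: generic-postgresql
            key: password
```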
The generic secret we create for accessing the PostgreSQL cluster and database contains the following values:
- the name of the database
- the host name of the cluster, which can be an IP address or a Kubernetes service name
- the cluster port, usually 5432
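A minimal sketch of such a secret follows; the secret name and key names are our own convention and shown purely as an illustration:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: generic-postgresql   # illustrative name
type: Opaque
stringData:                  # stringData avoids manual base64 encoding
  database: myapp            # the name of the DB
  host: postgresql           # IP address or Kubernetes service name
  port: "5432"               # cluster port
  user: myapp_user
  password: change-me
```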
To create this generic secret for each of the deployment scenarios above, we use different mechanisms as described below.
In the embedded scenario, the PostgreSQL secret is created by the official PostgreSQL chart, and the generic secret is created by a Kubernetes job from the values.yaml file. The job's environment variables reference the embedded PostgreSQL secret, so the job does not start until that secret has been created. You can find an example here.
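A sketch of such a job follows. The chart-generated secret name and key are assumptions here, since they depend on the release name and chart version:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: create-generic-secret
spec:
  template:
    spec:
      restartPolicy: OnFailure
      containers:
        - name: create-secret
          image: bitnami/kubectl:latest       # any image with kubectl works
          env:
            # Referencing the chart's secret here blocks the pod from
            # starting until that secret exists.
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: my-release-postgresql  # assumed chart secret name
                  key: postgres-password       # assumed key; depends on chart version
          command:
            - sh
            - -c
            - |
              kubectl create secret generic generic-postgresql \
                --from-literal=host=my-release-postgresql \
                --from-literal=port=5432 \
                --from-literal=database=myapp \
                --from-literal=password="$POSTGRES_PASSWORD"
```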
In the OSBA scenario, the provisioning process involves two Kubernetes resources: a service instance and a service binding. The service binding specifies the name of the secret to be created after successful provisioning. So, just like in the embedded scenario, we use a Kubernetes job that waits for the OSBA secret to be created and then creates our generic secret from it. You can find an example here.
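A sketch of the two service catalog resources follows. The service class and plan names are assumptions for the Azure PostgreSQL broker and may differ in your catalog:

```yaml
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceInstance
metadata:
  name: my-postgresql-instance
spec:
  clusterServiceClassExternalName: azure-postgresql-9-6   # assumed class name
  clusterServicePlanExternalName: general-purpose         # assumed plan name
  parameters:
    location: westeurope          # assumed Azure region
    resourceGroup: my-resource-group
---
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceBinding
metadata:
  name: my-postgresql-binding
spec:
  instanceRef:
    name: my-postgresql-instance
  secretName: osba-postgresql     # the broker writes connection details here
```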
The external scenario is the easiest, because the credentials come from outside and can therefore be provided during the helm install process. A simple secret template is enough to create the generic secret. You can find an example here.
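A sketch of such a template follows; the values keys are our own convention and would be supplied via values.yaml or `--set` at install time:

```yaml
# templates/secret.yaml (values keys below are illustrative)
apiVersion: v1
kind: Secret
metadata:
  name: generic-postgresql
type: Opaque
stringData:
  host: {{ .Values.database.host | quote }}
  port: {{ .Values.database.port | quote }}
  database: {{ .Values.database.name | quote }}
  user: {{ .Values.database.user | quote }}
  password: {{ .Values.database.password | quote }}
```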
A New Level of Managing Connections
Kubernetes has taken the orchestration of containerized applications to a different level and is helping software vendors reduce the gap between development, QA, and production environments. At the same time, the options for third-party resources and services have increased significantly, and software vendors face the challenge of building their applications in a way that provides enough flexibility to switch between a locally managed service and a cloud-based service.
In our example, we have shown how you can leverage Kubernetes and Helm to manage the connection from a Kubernetes application to a PostgreSQL database, which can be deployed alongside the application inside the cluster, provisioned on demand in a cloud environment, or pre-deployed outside the cluster. We have used a Helm chart to create a single generic secret which provides an abstraction layer between the application and the database. In this way, the application never needs to know where the database is deployed, nor to change the way it connects to it.
You may find the related Helm charts that were created using this approach here: https://github.com/sgandon/helm-postgresql-multi/tree/master/tpostgresql
This post was inspired by a workshop that Talend did with Microsoft. We’d like to thank Gil Isaacs for organizing this workshop and Julien Corioland for his great knowledge and skills around Kubernetes and Azure.