Deploying stateful applications on Kubernetes comes with its own set of complexities. In this demo, we will deploy a PostgreSQL database to Amazon Elastic Kubernetes Service (EKS) and configure its persistence on Amazon Elastic Block Store (EBS). We will use Helm, a Kubernetes package manager, to make this process more efficient.

Prerequisites

First, ensure that the following utilities are installed and properly configured on your machine:
AWS CLI
eksctl
Helm

1. Create an EKS cluster
You can use either the AWS Management Console or the eksctl utility to create your Kubernetes cluster; for convenience, we use eksctl.
Create a file “demo-cluster.yaml” and paste the following into it.

# demo-cluster.yaml
# A cluster with one managed nodegroup
---
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: demo-cluster
  region: your-region
  version: "1.21"

managedNodeGroups:
  - name: dev-ng-1
    instanceType: t3.large
    minSize: 1
    maxSize: 1
    desiredCapacity: 1
    volumeSize: 20
    volumeEncrypted: true
    volumeType: gp3
    ssh:
      allow: true
      publicKeyName: your-keypair-name
    tags:
      Env: Dev
    iam:
      attachPolicyARNs:
        - arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
        - arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
        - arn:aws:iam::aws:policy/ElasticLoadBalancingFullAccess
        - arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly
        - arn:aws:iam::aws:policy/CloudWatchAgentServerPolicy
      withAddonPolicies:
        autoScaler: true

The file defines a Kubernetes cluster named demo-cluster with one managed nodegroup. To apply it, run:

eksctl create cluster -f demo-cluster.yaml

After the cluster has finished provisioning, view the nodes with the following command:

kubectl get nodes

2. Create an IAM OIDC identity provider

Retrieve your cluster’s OIDC issuer ID, check whether an IAM OIDC provider already exists for it, and associate one with the cluster if it does not:

# Retrieve the cluster's OIDC issuer ID and store it in a variable
oidc_id=$(aws eks describe-cluster --name demo-cluster --query "cluster.identity.oidc.issuer" --output text | cut -d '/' -f 5)
# Check whether an IAM OIDC provider with that ID already exists in your account
aws iam list-open-id-connect-providers | grep $oidc_id
# If the previous command returned no output, associate a provider with the cluster
eksctl utils associate-iam-oidc-provider --cluster demo-cluster --approve

3. Configure a Kubernetes service account to assume an IAM role

a. Create the service account that the EBS CSI controller will use.

cat >my-service-account.yaml <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ebs-csi-controller-sa
  namespace: kube-system
EOF
kubectl apply -f my-service-account.yaml
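
You can confirm the service account exists before moving on:

kubectl get serviceaccount ebs-csi-controller-sa -n kube-system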

b. Store your AWS account ID in an environment variable with the following command.

account_id=$(aws sts get-caller-identity --query "Account" --output text)

c. Store the cluster’s OIDC identity provider in an environment variable with the following command; make sure $AWS_REGION is set to your cluster’s region first.

oidc_provider=$(aws eks describe-cluster --name demo-cluster --region $AWS_REGION --query "cluster.identity.oidc.issuer" --output text | sed -e "s/^https:\/\///")

d. Set variables for the namespace and name of the service account.

export namespace=kube-system
export service_account=ebs-csi-controller-sa

e. Run the following command in your terminal to create a trust policy file for the IAM role.

cat >aws-ebs-csi-driver-trust-policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::$account_id:oidc-provider/$oidc_provider"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "$oidc_provider:aud": "sts.amazonaws.com",
          "$oidc_provider:sub": "system:serviceaccount:$namespace:$service_account"
        }
      }
    }
  ]
}
EOF
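
Because the heredoc expands shell variables, it is worth confirming that the generated file contains your actual account ID and OIDC provider rather than empty strings:

cat aws-ebs-csi-driver-trust-policy.json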

f. Create the role AmazonEKS_EBS_CSI_DriverRole, replacing my-role-description with a description for your role.

aws iam create-role --role-name AmazonEKS_EBS_CSI_DriverRole --assume-role-policy-document file://aws-ebs-csi-driver-trust-policy.json --description "my-role-description"

g. Attach the required AWS-managed policy to the role with the following command.

aws iam attach-role-policy \
  --policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy \
  --role-name AmazonEKS_EBS_CSI_DriverRole
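
To verify the policy is attached:

aws iam list-attached-role-policies --role-name AmazonEKS_EBS_CSI_DriverRole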

h. Annotate your service account with the Amazon Resource Name (ARN) of the IAM role that you want the service account to assume.

kubectl annotate serviceaccount -n $namespace $service_account eks.amazonaws.com/role-arn=arn:aws:iam::$account_id:role/AmazonEKS_EBS_CSI_DriverRole
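
You can check that the annotation landed on the service account:

kubectl describe serviceaccount $service_account -n $namespace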

4. Adding the Amazon EBS CSI add-on

To improve security and reduce the amount of work, you can manage the Amazon EBS CSI driver as an Amazon EKS add-on. You can use eksctl, the AWS Management Console, or the AWS CLI to add the Amazon EBS CSI add-on to your cluster. To add it using eksctl, run the following command; the $account_id variable you set earlier supplies your account ID.

eksctl create addon --name aws-ebs-csi-driver --cluster demo-cluster --service-account-role-arn arn:aws:iam::$account_id:role/AmazonEKS_EBS_CSI_DriverRole --force
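
To confirm the add-on is active on the cluster:

eksctl get addon --name aws-ebs-csi-driver --cluster demo-cluster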

5. Update the worker nodes’ role

Attach the AmazonEBSCSIDriverPolicy policy to the worker nodes’ IAM role for the cluster, as well as the cluster’s service role. One way to do the node-role step from the CLI is sketched below.
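
A minimal sketch of that node-role step, assuming the single managed nodegroup dev-ng-1 from step 1 (the AWS Management Console works just as well):

# Look up the IAM role used by the dev-ng-1 nodegroup
node_role_arn=$(aws eks describe-nodegroup --cluster-name demo-cluster --nodegroup-name dev-ng-1 --query "nodegroup.nodeRole" --output text)
# Extract the role name from the ARN and attach the managed policy to it
node_role_name=${node_role_arn##*/}
aws iam attach-role-policy --policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy --role-name "$node_role_name"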

6. Deploying a Postgres database with Helm

Helm is a package manager for Kubernetes that automates the creation, packaging, configuration, and deployment of applications and services to Kubernetes clusters.

– Define storage class

Your cluster needs a storage class for its persistent volume claims, and it helps to mark one as the default. To create a storage class for your Amazon EKS cluster, write a manifest file for it. The following storage-class.yaml example defines a StorageClass named “aws-pg-sc” that uses the Amazon EBS gp2 volume type and is annotated as the default class.

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: aws-pg-sc
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  fsType: ext4 

Use kubectl to create the storage class from the manifest file.

kubectl create -f storage-class.yaml
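
As an aside, since the EBS CSI driver from step 4 is installed, a variant of this manifest could provision volumes through it directly instead of the in-tree plugin; a hedged sketch, not part of the original demo:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: aws-pg-sc
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: ebs.csi.aws.com   # provision through the EBS CSI driver
parameters:
  type: gp3
  csi.storage.k8s.io/fstype: ext4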

Run the following to view the available storage classes in your cluster:

kubectl get storageclass

– Helm chart for PostgreSQL

In this demo, we will leverage the PostgreSQL Helm chart maintained by Bitnami. We will override some values in values.yaml so that the chart uses the storage class we provisioned earlier. Create a file “values-postgresdb.yaml” and paste the following into it.

primary:
  persistence:
    storageClass: "aws-pg-sc"
auth:
  username: postgres
  password: demo-password
  database: demo_database

– Installing the Chart

To install the chart with the release name pgdb:

helm repo add my-repo https://charts.bitnami.com/bitnami
helm install pgdb --values values-postgresdb.yaml my-repo/postgresql
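
After the install completes, Helm can report the state of the release:

helm status pgdb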

After the database deploys successfully, check the PV, the PVC, and the pod that were created with the following commands; each should produce output similar to what is shown below it:

$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                            STORAGECLASS   REASON   AGE
pvc-0e4020a4-8d43-4292-b30f-f57bbc4414bb   8Gi        RWO            Delete           Bound    default/data-pgdb-postgresql-0   aws-pg-sc               87s

$ kubectl get pvc
NAME                     STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
data-pgdb-postgresql-0   Bound    pvc-0e4020a4-8d43-4292-b30f-f57bbc4414bb   8Gi        RWO            aws-pg-sc      6h44m

$ kubectl get pods
NAME                READY   STATUS    RESTARTS   AGE
pgdb-postgresql-0   1/1     Running   0          16m
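
To confirm the database accepts connections, you can port-forward the PostgreSQL service and connect with a local psql client; a minimal sketch, assuming the chart’s default service name pgdb-postgresql (release name plus chart name) and the credentials from values-postgresdb.yaml:

# Forward the PostgreSQL service to a local port (service name assumed: pgdb-postgresql)
kubectl port-forward svc/pgdb-postgresql 5432:5432 &
# Connect with the credentials defined in values-postgresdb.yaml
PGPASSWORD=demo-password psql --host 127.0.0.1 --port 5432 --username postgres --dbname demo_database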

You can also verify that the persistent storage was provisioned by navigating to the AWS Management Console >> EC2 >> Elastic Block Store >> Volumes. The screenshot below shows the volume provisioned in my case.

[Screenshot: the provisioned EBS volume in the AWS Management Console]

7. Cleaning up

To clean up and delete the Kubernetes cluster we created earlier, run the following command:

 eksctl delete cluster -f demo-cluster.yaml

If the above command fails to delete the cluster because resources such as the running pod are still present, navigate to the CloudFormation console and manually delete each CloudFormation stack.

[Screenshot: the CloudFormation stack that failed to delete]
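
It is also worth checking for orphaned EBS volumes after teardown; a hedged example, assuming the kubernetes.io/created-for/pvc/name tag that Kubernetes typically applies to dynamically provisioned volumes:

# List any leftover volumes created for the demo PVC
aws ec2 describe-volumes --filters Name=tag:kubernetes.io/created-for/pvc/name,Values=data-pgdb-postgresql-0 --query "Volumes[].VolumeId"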

And that concludes the demo on provisioning persistent EBS-backed storage on Amazon EKS using Helm. Feel free to comment below with your feedback.
You can also watch the video demonstration on YouTube.