Scaling WordPress In Kubernetes

Cloud native applications are designed to run in a microservices architecture, where individual components can be scaled separately, data is persisted and synced across replicas, and node failures can be survived easily. This is trickier with traditional web applications that were not designed this way, such as WordPress.

Bitnami produces production-ready Docker images and Kubernetes Helm Charts that you can deploy into your Kubernetes cluster. This blog post refers to concepts previously covered in our Kubernetes Get Started Guide.

WordPress Is A Stateful App

Although WordPress uses a database backend to persist and maintain user-created content, administration changes are persisted to the local drive. These include the plugins installed, the site configuration and the CSS/media used by the web front-end. Multiple replicas of the same WordPress site would therefore expect access to a shared drive. Basically, WordPress likes to keep its state close by.

Our Kubernetes Helm Chart for WordPress uses a Deployment to manage and scale the WordPress pods. In order to share this admin-created content across pods, the deployment mounts a volume provisioned by a Persistent Volume Claim (PVC).
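
As a rough sketch, the relevant part of such a Deployment looks like this (the resource names, image and mount path are illustrative, not the chart's exact values):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wordpress
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      containers:
        - name: wordpress
          image: bitnami/wordpress
          volumeMounts:
            - name: wordpress-data           # admin-created content lives here
              mountPath: /bitnami
      volumes:
        - name: wordpress-data
          persistentVolumeClaim:
            claimName: wordpress             # the claim created for the release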

When scaling a web app it is important to understand the different types of Access Modes available for persistent storage in Kubernetes: ReadWriteOnce (RWO), ReadOnlyMany (ROX) and ReadWriteMany (RWX).
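
The access mode is requested in the PVC spec. For example, a minimal claim asking for a shared read-write volume (the name, class and size here are illustrative) would look like:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wordpress-shared
spec:
  accessModes:
    - ReadWriteMany            # RWX: mountable read-write by pods on many nodes
  storageClassName: nfs        # illustrative; use a class whose provisioner supports RWX
  resources:
    requests:
      storage: 10Gi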

If you want to administer your WordPress site separately, you could expose a read-only configuration volume to your pods. In this case, ROX would be your best choice, since it can be mounted as read-only by many pods across multiple nodes.

If you want to continue using the WordPress admin interface in all its glory, all your pods will need read-write access to a common volume. It is also likely that you would like your pods to run on different nodes (for better site availability and scalability). Since a RWO volume can only be mounted on one node at a time, you really need RWX.

Great, Where Do I Get A RWX Volume?!

Unfortunately, RWX is not a very commonly supported access mode among the current list of volume plugins (see the official documentation). So what can you do if you don’t have access to one in your cluster?

Since WordPress write access does not require a highly performant solution, we are going to share a RWO volume across multiple pods by remapping it as a RWX NFS volume.

Hold on, this is going to be complicated, right? Nope, it is going to be a single command.

There Is A New Chart In Town

A few days back, a new Helm Chart landed in the official Kubernetes charts repository: the NFS Server Provisioner.

What this chart does is deploy all the bits you need to dynamically serve NFS persistent volumes (PVs) from any other volume type. Most Kubernetes deployments will have a Storage Class that provides RWO volumes. We are going to take a single RWO persistent volume from this existing Storage Class and share it as a RWX Storage Class called ‘nfs’.
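
The end result is a new Storage Class named ‘nfs’ that points at the chart’s own provisioner. A sketch of that object (the provisioner string is set by the chart and includes the release name, so treat the value below as a placeholder):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs
provisioner: cluster.local/<release-name>-nfs-server-provisioner   # placeholder; check the real value with 'kubectl get storageclass nfs -o yaml'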

The following diagram shows a summary of the Kubernetes objects involved in this process:

(diagram: the NFS Server Provisioner and the Kubernetes objects involved)

Please note that this solution has low fault tolerance: an outage of the node where the RWO volume is mounted will affect the availability of the whole deployment (although data should not be lost).

As promised, deploying the NFS-Server provisioner is a simple helm command:

$ helm install stable/nfs-server-provisioner --set persistence.enabled=true,persistence.size=10Gi
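
If you prefer a values file over --set flags, the same settings look like this (the value paths are taken directly from the flags above):

# values.yaml for the nfs-server-provisioner release
persistence:
  enabled: true   # back the NFS server with a persistent volume (a regular RWO claim)
  size: 10Gi      # size of that backing volume

$ helm install stable/nfs-server-provisioner -f values.yaml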

Deploying WordPress To Work With NFS

Once you have deployed the Helm release, you can check that a new Storage Class called ‘nfs’ is now available in your cluster:

$ kubectl get storageclass

(output listing the storage classes, including the new ‘nfs’ class)

Also, the RWO volume claim has been created. In this case, it is using the standard storage class in minikube (hostpath) and it is bound to the NFS server StatefulSet.

(output showing the RWO PVC backing the NFS server)

Next, we can deploy a WordPress release using the NFS Storage Class:

$ helm install --set persistence.accessMode=ReadWriteMany,persistence.storageClass=nfs stable/wordpress -n scalable
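
Here too the settings can live in a values file instead (paths taken from the --set flags above), which is handy once we add ingress annotations later on:

# values.yaml for the WordPress release
persistence:
  accessMode: ReadWriteMany   # every WordPress pod mounts the volume read-write
  storageClass: nfs           # the RWX class served by the NFS Server Provisioner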

We can see that we now have three PVCs in our cluster:

  • a RWO for the MariaDB backend database
  • a RWO for the NFS server
  • a RWX for the WordPress pods
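
You can confirm this with the following command; the WordPress claim should show RWX under ACCESS MODES:

$ kubectl get pvc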

Inspecting the new PV, we can see that it is served over NFS:

(output showing the NFS-backed persistent volume)
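
To check this from the command line, list the PVs and describe the one bound to the WordPress claim; its source should be of type NFS:

$ kubectl get pv
$ kubectl describe pv <pv-name>   # look for 'Type: NFS' under Source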

Are You Ready To Scale Horizontally?

Almost. The WordPress admin console requires a user session to always be served by the same pod. If you are using an Ingress resource (also configurable by the chart) with an NGINX ingress controller, you can define session affinity via annotations. Just add these annotations to your values.yaml file, under ingress.hosts[0].annotations.
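
For example, with the NGINX ingress controller the cookie-based affinity annotations look roughly like this (the host name is a placeholder, and the exact annotation names depend on your controller version):

ingress:
  enabled: true
  hosts:
    - name: wordpress.example.com                                # placeholder host
      annotations:
        nginx.ingress.kubernetes.io/affinity: "cookie"
        nginx.ingress.kubernetes.io/session-cookie-name: "route"
        nginx.ingress.kubernetes.io/session-cookie-hash: "sha1"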

Now you just need to scale your WordPress deployment:

$ kubectl scale deploy scalable-wordpress --replicas=3
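
You can then watch the extra replicas come up and, on a multi-node cluster, check whether they landed on different nodes:

$ kubectl get pods -o wide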

Happy Highly Available Blogging!
