Do you need to resize a Kubernetes StatefulSet's volumes? This article walks through the workaround with all the necessary details.
Stateful applications are deployed in your cluster using Kubernetes StatefulSets. Each of the StatefulSet's Pods has access to its own persistent volume that follows it even when it's rescheduled. This lets Pods preserve individual state that is distinct from that of their neighbors in the set.
Sadly, these volumes come with a significant constraint: Kubernetes does not offer a way to resize them from the StatefulSet object. Because the volumeClaimTemplates field is immutable, you can't edit the spec.resources.requests.storage property of the StatefulSet's volumes to apply a capacity increase. This post shows how to work around the issue.
Creating a StatefulSet
Copy this YAML and save it to ss.yaml:
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  selector:
    app: nginx
  ports:
    - name: nginx
      port: 80
  clusterIP: None
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3
  serviceName: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - name: web
              containerPort: 80
          volumeMounts:
            - name: data
              mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
Apply the YAML to your cluster with kubectl:
$ kubectl apply -f ss.yaml
service/nginx created
statefulset.apps/nginx created
To run this example, you'll need a storage class and provisioner in your cluster. Applying the manifest creates a StatefulSet and starts three replicas of the NGINX web server.
While this isn't a typical use case for StatefulSets, it's a good illustration of the volume issues you can encounter. NGINX's data directory is mounted on a volume claim with 1 Gi of capacity. As your service grows, your site content could exceed this rather small allowance. However, if you attempt to change the volumeClaimTemplates.spec.resources.requests.storage field to 10Gi, kubectl apply will display the following error:
$ kubectl apply -f ss.yaml
service/nginx unchanged
The StatefulSet "nginx" is invalid: spec: Forbidden: updates to statefulset spec for fields other than 'replicas', 'template', 'updateStrategy', 'persistentVolumeClaimRetentionPolicy' and 'minReadySeconds' are forbidden
Manually Resizing StatefulSet Volumes
You can get around the restriction by manually adjusting the size of each persistent volume claim (PVC). You must then recreate the StatefulSet to release and rebind the volumes from your Pods. This triggers the actual volume resize.
First, find the PVCs associated with your StatefulSet using kubectl:
$ kubectl get pvc
NAME           STATUS   VOLUME                                     CAPACITY   ACCESS MODES
data-nginx-0   Bound    pvc-ccb2c835-e2d3-4632-b8ba-4c8c142795e4   1Gi        RWO
data-nginx-1   Bound    pvc-1b0b27fe-3874-4ed5-91be-d8e552e515f2   1Gi        RWO
data-nginx-2   Bound    pvc-4b7790c2-3ae6-4e04-afee-a2e1bae4323b   1Gi        RWO
The StatefulSet has three replicas, so there are three PVCs. Each Pod gets its own volume.
Now, modify each volume’s capacity using kubectl edit:
$ kubectl edit pvc data-nginx-0
The YAML manifest for the PVC will open in your editor. Locate the spec.resources.requests.storage field and change it to the new capacity you want:
# ...
spec:
  resources:
    requests:
      storage: 10Gi
# ...
Save and close the file. Kubectl will confirm that the change has been applied to your cluster:
persistentvolumeclaim/data-nginx-0 edited
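If you'd rather not open an editor for each claim, the same change can be applied non-interactively with kubectl patch. This is a sketch, assuming the PVC names shown above (data-nginx-0 through data-nginx-2) and a target size of 10Gi; the make_patch helper is hypothetical, not part of kubectl:

```shell
# Hypothetical helper: build the merge-patch payload for a given storage size
make_patch() {
  printf '{"spec":{"resources":{"requests":{"storage":"%s"}}}}' "$1"
}

# Patch each PVC in the set (assumes the replica count of 3 used above);
# the kubectl calls only run if the CLI is available on this machine
if command -v kubectl >/dev/null 2>&1; then
  for i in 0 1 2; do
    kubectl patch pvc "data-nginx-$i" --type merge -p "$(make_patch 10Gi)"
  done
fi
```

This is equivalent to the kubectl edit workflow, just easier to script across many claims.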
Repeat the edit for the remaining PVCs in the StatefulSet. The persistent volumes in your cluster should then all list the updated size:
$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM
pvc-0a0d0b15-241f-4332-8c34-a24b61944fb7   10Gi       RWO            Delete           Bound    default/data-nginx-2
pvc-33af452d-feff-429d-80cd-a45232e700c1   10Gi       RWO            Delete           Bound    default/data-nginx-0
pvc-49f3a1c5-b780-4580-9eae-17a1f002e9f5   10Gi       RWO            Delete           Bound    default/data-nginx-1
The claims themselves will still report the old size for now:
$ kubectl get pvc
NAME           STATUS   VOLUME                                     CAPACITY   ACCESS MODES
data-nginx-0   Bound    pvc-33af452d-feff-429d-80cd-a45232e700c1   1Gi        RWO
data-nginx-1   Bound    pvc-49f3a1c5-b780-4580-9eae-17a1f002e9f5   1Gi        RWO
data-nginx-2   Bound    pvc-0a0d0b15-241f-4332-8c34-a24b61944fb7   1Gi        RWO
This is because a volume can't be resized while Pods are still using it.
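You can see this pending state on the claim itself: many drivers set a FileSystemResizePending condition, meaning the resize will complete once the Pod restarts. Here is a sketch that checks for it; the has_condition helper is hypothetical, not a kubectl feature:

```shell
# Hypothetical helper: test whether a condition type appears in a
# space-separated list of condition types
has_condition() {  # has_condition "<types>" <type>
  case " $1 " in *" $2 "*) return 0 ;; *) return 1 ;; esac
}

# Query one claim's condition types (no-op on machines without kubectl)
if command -v kubectl >/dev/null 2>&1; then
  types="$(kubectl get pvc data-nginx-0 -o jsonpath='{.status.conditions[*].type}')"
  if has_condition "$types" FileSystemResizePending; then
    echo "resize pending until the Pod restarts"
  fi
fi
```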
Recreating the StatefulSet
To finish the resize, release the volume claims by deleting the StatefulSet that holds them. Use the orphan cascading mechanism so the StatefulSet's Pods stay in your cluster while the object is deleted. This minimizes downtime.
$ kubectl delete statefulset --cascade=orphan nginx
statefulset.apps "nginx" deleted
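Before proceeding, it's worth confirming the orphaned Pods are still running. StatefulSet Pods follow the predictable name-ordinal pattern, so a quick check might look like this sketch; the sts_pod_names helper is hypothetical:

```shell
# Hypothetical helper: print the Pod names a StatefulSet's replicas use
# (the name-ordinal pattern nginx-0, nginx-1, ...)
sts_pod_names() {  # sts_pod_names <name> <replicas>
  i=0
  while [ "$i" -lt "$2" ]; do
    printf '%s-%d\n' "$1" "$i"
    i=$((i + 1))
  done
}

# Confirm each orphaned Pod still exists (no-op on machines without kubectl)
if command -v kubectl >/dev/null 2>&1; then
  for pod in $(sts_pod_names nginx 3); do
    kubectl get pod "$pod"
  done
fi
```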
Next, edit your original YAML file to set the new volume size in the spec.resources.requests.storage field. Then use kubectl apply to recreate the StatefulSet in your cluster:
$ kubectl apply -f ss.yaml
service/nginx unchanged
statefulset.apps/nginx created
The new StatefulSet will assume ownership of the previously orphaned Pods because they already meet its requirements. The volumes may be resized at this point, but in most cases you'll have to manually initiate a rollout that restarts your Pods:
$ kubectl rollout restart statefulset nginx
The rollout proceeds sequentially, targeting one Pod at a time. This ensures your service remains accessible throughout.
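If you're scripting this procedure, you may want to block until the restart has finished; kubectl rollout status waits for every replica to become ready again. A sketch, with a hypothetical sts_resource helper and an arbitrary timeout:

```shell
# Hypothetical helper: build the resource reference kubectl expects
sts_resource() { printf 'statefulset/%s' "$1"; }

# Restart the Pods and wait for the rollout to complete
# (guarded so the sketch is a no-op on machines without kubectl)
if command -v kubectl >/dev/null 2>&1; then
  kubectl rollout restart "$(sts_resource nginx)"
  kubectl rollout status "$(sts_resource nginx)" --timeout=5m
fi
```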
Now your PVCs should show the new size:
$ kubectl get pvc
NAME           STATUS   VOLUME                                     CAPACITY   ACCESS MODES
data-nginx-0   Bound    pvc-33af452d-feff-429d-80cd-a45232e700c1   10Gi       RWO
data-nginx-1   Bound    pvc-49f3a1c5-b780-4580-9eae-17a1f002e9f5   10Gi       RWO
data-nginx-2   Bound    pvc-0a0d0b15-241f-4332-8c34-a24b61944fb7   10Gi       RWO
Try connecting to one of your Pods to check that the increased capacity is visible from within:
$ kubectl exec -it nginx-0 -- bash
root@nginx-0:/# df -h /usr/share/nginx/html
Filesystem                                                                Size  Used Avail Use% Mounted on
/dev/disk/by-id/scsi-0DO_Volume_pvc-33af452d-feff-429d-80cd-a45232e700c1  9.9G  4.5M  9.4G   1% /usr/share/nginx/html
The Pod reports the expected 10 Gi of storage.
Summary
Kubernetes StatefulSets let you run stateful applications with persistent storage volumes that belong to individual Pods. However, that flexibility disappears when you need to resize one of those volumes: the capability is currently missing, so the resize must be performed manually, in a specific order.
The Kubernetes maintainers are aware of the problem. There's an open feature request to allow volume resizes to be initiated by changing a StatefulSet's manifest, which would be far faster and safer than the current workaround.
One final caution: volume resizes require a storage driver that supports dynamic expansion. This capability only became generally available in Kubernetes v1.24, and not all drivers, Kubernetes distributions, and cloud platforms support it. Run kubectl get sc and check for true in the ALLOWVOLUMEXPANSION column of the storage class you're using with your StatefulSets to see whether yours does.
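The same check can be scripted by reading the storage class's allowVolumeExpansion field directly. A sketch, where the class name do-block-storage is just a placeholder for whatever kubectl get sc lists in your cluster, and expansion_ok is a hypothetical helper:

```shell
# Hypothetical helper: interpret the allowVolumeExpansion value
expansion_ok() {  # expansion_ok <value>
  [ "$1" = "true" ]
}

# Query the field for one storage class (no-op on machines without kubectl);
# "do-block-storage" is a placeholder class name
if command -v kubectl >/dev/null 2>&1; then
  allow="$(kubectl get sc do-block-storage -o jsonpath='{.allowVolumeExpansion}')"
  if expansion_ok "$allow"; then
    echo "expansion supported"
  else
    echo "expansion not supported"
  fi
fi
```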