ActiveMQ in HA Shared Database (Master/Slave) on Kubernetes with StatefulSet
I am in the process of deploying ActiveMQ 5.15 in HA on Kubernetes. Previously I was using a Deployment and a ClusterIP Service, and it was working fine: the master would boot up and the slave would wait for the lock to be acquired. If I deleted the pod that was the master, the slave picked up and became the master.
Now I want to try with a StatefulSet, basing myself on this thread.
The deployment went through successfully and two pods were created, with IDs 0 and 1. But I noticed that both pods became master: they were both started. I also noticed that two PVCs were created (id0 and id1) with the StatefulSet, compared to the Deployment, which had only one PVC. Could that be the issue, since the storage is no longer shared? Can we still achieve a master/slave setup with a StatefulSet?
Solution 1:[1]
I also noticed that two PVCs were created (id0 and id1) with the StatefulSet, compared to the Deployment, which had only one PVC. Could that be the issue, since the storage is no longer shared?
You are right. When using Kubernetes StatefulSets, each Pod gets its own persistent storage (a dedicated PVC and PV), and this storage is not shared between Pods.
When a Pod is terminated and rescheduled on a different Node, the Kubernetes controller ensures it is re-attached to the same PVC, which guarantees that its state stays intact.
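For illustration, here is a minimal sketch of why two PVCs appeared: a StatefulSet's volumeClaimTemplates create one dedicated claim per replica (e.g. data-activemq-0 and data-activemq-1). The names, image, and storage size below are assumptions for the example, not values from the question.

```yaml
# Hypothetical StatefulSet fragment: volumeClaimTemplates create a dedicated
# PVC per replica (data-activemq-0, data-activemq-1), so storage is NOT shared.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: activemq
spec:
  serviceName: activemq
  replicas: 2
  selector:
    matchLabels:
      app: activemq
  template:
    metadata:
      labels:
        app: activemq
    spec:
      containers:
        - name: activemq
          image: rmohr/activemq:5.15.9   # example image, adjust to your build
          volumeMounts:
            - name: data
              mountPath: /opt/activemq/data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
```

Because each broker gets its own empty data directory, each one acquires its own lock, which is why both pods started as master.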
In your case, to achieve a master/slave setup, consider using a shared network location / filesystem for persistent storage (see the sketch after this list), such as:
- NFS storage for on-premise k8s cluster.
- AWS EFS for EKS.
- or Azure Files for AKS.
Check the complete list of PersistentVolume types currently supported by Kubernetes (implemented as plugins).
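As a rough sketch (the NFS server address, export path, claim name, and sizes are placeholders, not values from the question), both replicas could mount a single ReadWriteMany claim instead of using volumeClaimTemplates, so ActiveMQ's file lock on the shared data directory decides which broker becomes master:

```yaml
# Hypothetical shared storage: one NFS-backed PV/PVC with ReadWriteMany,
# referenced by name from the Pod template instead of volumeClaimTemplates.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: activemq-shared-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: nfs.example.internal   # placeholder NFS server
    path: /exports/activemq        # placeholder export path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: activemq-shared-data
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
---
# In the StatefulSet, drop volumeClaimTemplates and mount the same claim
# in every replica:
#   volumes:
#     - name: data
#       persistentVolumeClaim:
#         claimName: activemq-shared-data
```

With both brokers pointing their data directory at the same mount, the first one to acquire the lock becomes master and the other waits as slave, matching the behaviour you saw with the Deployment and its single PVC.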
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | Stack Overflow |
