kubectl scale removing pods with latest image and keeping old ones
So I have a deployment job that runs some kubectl commands. A configuration file is modified to use the latest docker image SHA, and I run these commands to deploy it out:
kubectl apply -f myconfig-file.yaml
#we have to scale up temporarily due to reasons beyond the purview of this question
kubectl scale -f myconfig-file.yaml --replicas=4
#* wait a minute or so *
sleep 60
kubectl scale -f myconfig-file.yaml --replicas=2
kubectl apply correctly updates the ReplicationController definition on Google Cloud to point at the latest image, but the original pods still remain. Scaling up DOES create pods with the correct image, but once I scale down, it removes the newest pods, leaving behind the old pods running the old image.
I have confirmed that:
- The new containers with their new image are working as expected.
- When I did the deployment manually and deleted the old containers myself, k8s correctly created two new containers with the latest image, and when I scaled down, the new containers with the new image stuck around. My application worked as expected. (A quick way to verify which image each pod is actually running is shown right after this list.)
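For reference, one way to check which image each pod is actually running. This check is not from the original post; it assumes the run=my-app-cluster label from the manifest below:
#list each pod together with the image its first container is running
kubectl get pods -l run=my-app-cluster \
  -o custom-columns='NAME:.metadata.name,IMAGE:.spec.containers[0].image'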
My yaml file in question:
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-app-cluster
spec:
  replicas: 2
  template:
    metadata:
      labels:
        run: my-app-cluster
    spec:
      containers:
      - image: mySuperCoolImage@sha256:TheLatestShaHere
        imagePullPolicy: Always
        name: my-app-cluster
        ports:
        - containerPort: 8008
          protocol: TCP
      terminationGracePeriodSeconds: 30
I'm using the Google Cloud K8s FWIW. Do I need to do something in the YAML file to instruct k8s to destroy the old instances?
Solution 1:[1]
So it looks like the majority of my problem stems from the fact that I'm using a ReplicationController instead of the better-supported Deployment or ReplicaSet. Unfortunately, at this time, I'll need to consult with my team on the best way to migrate to that format, since there are some considerations we have.
In the meantime, I fixed the issue with this little hack.
#capture the names of the currently running (old) pods before deploying
oldPods=($(kubectl get pods | grep myPodType | perl -wnE'say /.*-myPodType-\w*/g'))
#do the apply and scale thing like listed above
#then delete the old pods so the ReplicationController replaces them with ones built from the updated template
for item in "${oldPods[@]}"; do kubectl delete pod "$item"; done
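For anyone making the migration mentioned above, here is a rough sketch of the same workload written as a Deployment. The name, labels, image, and port are copied from the manifest in the question; the explicit selector is required by apps/v1 and is my assumption, not part of the original config:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-cluster
spec:
  replicas: 2
  selector:
    matchLabels:
      run: my-app-cluster
  template:
    metadata:
      labels:
        run: my-app-cluster
    spec:
      containers:
      - image: mySuperCoolImage@sha256:TheLatestShaHere
        imagePullPolicy: Always
        name: my-app-cluster
        ports:
        - containerPort: 8008
          protocol: TCP
      terminationGracePeriodSeconds: 30
With a Deployment, re-running kubectl apply after updating the image SHA triggers a rolling update, so the scale up/down workaround and the pod-deletion hack above should no longer be needed.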
Solution 2:[2]
When you update the deployment definition with a new image tag, you do not need to scale up and down. A Deployment starts new pods and deletes the pods running the old image on its own. Using scaling only complicates the process, and as you noticed, it does not take the pod spec change into account when it decides which pods to delete.
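As a rough sketch of what that looks like in practice, assuming the manifest has been converted to a Deployment named my-app-cluster (which is not how the original config is written):
#apply the manifest that references the new image SHA
kubectl apply -f myconfig-file.yaml
#watch the rolling update replace the old pods with new ones
kubectl rollout status deployment/my-app-cluster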
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | Another Stackoverflow User |
| Solution 2 | Radek 'Goblin' Pieczonka |
