Requests not redirected to new Pods when HPA is enabled
I'm trying to implement a HorizontalPodAutoscaler (HPA) on my Django API.
For this I have an nginx-ingress redirecting requests to a Service of type ClusterIP. There's an HPA running, set to scale up when memory usage goes above 50%.
When I simulate a workload, the HPA does kick in and creates new Pods, but requests still go to the first Pod. The Python process in the first Pod dies of OOM, and logs only show up in that Pod too.
I don't understand why my Service isn't redirecting requests to the new, fresh Pods.
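For reference, the memory-based HPA looks roughly like this (a sketch, not the exact manifest from my setup; the `-hpa` name suffix, the replica counts, and the `autoscaling/v2beta2` apiVersion are placeholders):

```yaml
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: "{{ k8s.deploy.name }}-hpa"   # placeholder name
  namespace: "{{ namespace }}"
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: "{{ k8s.deploy.name }}"
  minReplicas: 1                       # placeholder bounds
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 50       # scale up above 50% of the memory request
```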
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: "{{ k8s.deploy.labels.app }}"
    team: "{{ k8s.deploy.labels.team }}"
  name: "{{ k8s.deploy.name }}"
  namespace: "{{ namespace }}"
spec:
  progressDeadlineSeconds: {{ k8s.deploy.progressdeadlineseconds }}
  replicas: {{ k8s.deploy.replicas }}
  selector:
    matchLabels:
      app: "{{ k8s.deploy.labels.app }}"
      team: "{{ k8s.deploy.labels.team }}"
  template:
    metadata:
      labels:
        app: "{{ k8s.deploy.labels.app }}"
        team: "{{ k8s.deploy.labels.team }}"
    spec:
      containers:
        - name: "{{ k8s.deploy.name }}"
          image: my_image
          stdin: {{ k8s.deploy.container.stdin }}
          tty: {{ k8s.deploy.container.tty }}
          imagePullPolicy: "{{ k8s.deploy.imagepullpolicy }}"
          ports:
            - containerPort: {{ k8s.deploy.container.port }}
              name: httpport
          resources:
            limits:
              cpu: "{{ k8s.deploy.containers.resources.limits.cpu }}"
              memory: "{{ k8s.deploy.containers.resources.limits.memory }}"
            requests:
              cpu: "{{ k8s.deploy.containers.resources.requests.cpu }}"
              memory: "{{ k8s.deploy.containers.resources.requests.memory }}"
---
```
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  labels:
    app: "{{ k8s.deploy.labels.app }}"
    team: "{{ k8s.deploy.labels.team }}"
  name: "{{ k8s.deploy.nginx.ingress.name }}"
  namespace: "{{ namespace }}"
spec:
  tls:
    - hosts:
        - {{ k8s.fqdn.host }}
      secretName: "{{ k8s.deploy.tls.secret.name }}"
  rules:
    - host: {{ k8s.fqdn.host }}
      http:
        paths:
          - backend:
              serviceName: {{ k8s.deploy.svc.name }}
              servicePort: {{ k8s.svc.http.port }}
            path: /
            pathType: "{{ k8s.ingress.http.pathtype }}"
---
```
```yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: "{{ k8s.deploy.labels.app }}"
    team: "{{ k8s.deploy.labels.team }}"
  name: "{{ k8s.deploy.svc.name }}"
  namespace: "{{ namespace }}"
spec:
  ports:
    - name: httpport
      port: {{ k8s.svc.http.port }}
      protocol: TCP
      targetPort: {{ k8s.deploy.container.port }}
  selector:
    app: "{{ k8s.deploy.labels.app }}"
    team: "{{ k8s.deploy.labels.team }}"
  sessionAffinity: None
  type: "{{ k8s.svc.type }}"
```
I'm not sure about the selector in the Service declaration; maybe that's the issue? Could the Service be bound to a single Pod by the selector?
At first I thought I was lacking a readinessProbe, but I doubt that's it.
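For completeness, a readiness probe on the container would look something like this (a sketch; the `/health` path is an assumption, your Django app may expose a different endpoint):

```yaml
# Goes under the container entry in the Deployment's pod spec (sketch).
readinessProbe:
  httpGet:
    path: /health          # assumed health endpoint; adjust to your app
    port: httpport         # references the named containerPort
  initialDelaySeconds: 5
  periodSeconds: 10
```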
One more piece of information: when I send a single request to my API, the HPA is triggered (which is normal), but after 5 minutes (the default HPA scale-down window, I think) I get a 502 from Nginx.
Also, I don't get the difference between ClusterIP and LoadBalancer. I tried to use a LoadBalancer, but the Service doesn't deploy at all.
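(For anyone with the same question: a ClusterIP Service is only reachable inside the cluster, which is fine here since the Ingress controller reaches it from inside; a LoadBalancer Service additionally asks the cloud provider to provision an external load balancer, and on a cluster without such an integration the external IP stays `<pending>`. Only the `type` field differs, a sketch:)

```yaml
# Same Service as above, but externally exposed. Requires a cloud
# provider (or MetalLB, etc.) to actually allocate the external IP.
spec:
  type: LoadBalancer
```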
Edit 1:
```
kubectl describe service -n appone appone-api-test-poc-svc

Name:              appone-api-test-poc-svc
Namespace:         appone
Labels:            app=appone-api-test-poc
                   team=datalab
Annotations:       <none>
Selector:          app=appone-api-test-poc,team=datalab
Type:              ClusterIP
IP:                172.2.3.4
Port:              httpport  8443/TCP
TargetPort:        8443/TCP
Endpoints:         172.2.5.6:8443,172.2.5.7:8443
Session Affinity:  None
Events:            <none>
```
Both the IP and the Endpoints shown here were changed.
Edit 2:
```
> kubectl get svc -n appone
NAME                      TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
appone-api-test-poc-svc   ClusterIP   172.2.3.4    <none>        8443/TCP   17h

> kubectl get pods -n appone -l app=appone-api-test-poc -o go-template='{{range .items}}{{.status.podIP}}{{"\n"}}{{end}}'
172.2.5.6
172.2.5.7
```
Solution 1:
The configuration files from my post are correct and working.
The issue was that I was using cookies with the ingress, so Postman kept the session cookie pointing to my first Pod and kept sending requests to that same Pod.
If I deleted the cookies in Postman or launched a query from another computer, it worked perfectly.
This is the correct version of the nginx-ingress I'm using now:
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  labels:
    app: "{{ k8s.deploy.labels.app }}"
    team: "{{ k8s.deploy.labels.team }}"
  name: "{{ k8s.deploy.nginx.ingress.name }}"
  namespace: "{{ namespace }}"
spec:
  tls:
    - hosts:
        - {{ k8s.fqdn.host }}
      secretName: "{{ k8s.deploy.tls.secret.name }}"
  rules:
    - host: {{ k8s.fqdn.host }}
      http:
        paths:
          - backend:
              serviceName: {{ k8s.deploy.svc.name }}
              servicePort: {{ k8s.svc.http.port }}
            path: /
            pathType: "{{ k8s.ingress.http.pathtype }}"
```
This was the original nginx-ingress configuration that kept sending queries to the same Pod because of sticky sessions/cookies:
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  labels:
    app: "{{ k8s.deploy.labels.app }}"
    team: "{{ k8s.deploy.labels.team }}"
  name: "{{ k8s.deploy.nginx.ingress.name }}"
  namespace: "{{ namespace }}"
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    kubernetes.io/ingress.class: "{{ k8s.ingress.class }}"
    nginx.ingress.kubernetes.io/affinity: "{{ k8s.ingress.affinity }}"
    nginx.ingress.kubernetes.io/affinity-mode: "{{ k8s.ingress.affinity.mode }}"
    nginx.ingress.kubernetes.io/session-cookie-name: "{{ k8s.ingress.session.cookie.name }}"
    nginx.ingress.kubernetes.io/session-cookie-hash: "{{ k8s.ingress.session.cookie.hash }}"
    nginx.ingress.kubernetes.io/session-cookie-expires: "{{ k8s.ingress.session.cookie.expires }}"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "{{ k8s.ingress.session.cookie.maxage }}"
spec:
  tls:
    - hosts:
        - {{ k8s.fqdn.host }}
      secretName: "{{ k8s.deploy.tls.secret.name }}"
  rules:
    - host: {{ k8s.fqdn.host }}
      http:
        paths:
          - backend:
              serviceName: {{ k8s.deploy.svc.name }}
              servicePort: {{ k8s.svc.http.port }}
            path: /
            pathType: "{{ k8s.ingress.http.pathtype }}"
```
I removed all the annotations and it works fine now. The ClusterIP Service properly distributes queries across the new Pods.
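Note that only the `affinity` and `session-cookie-*` annotations cause the stickiness; removing all annotations also drops `rewrite-target` and `ingress.class`, which may still be wanted. A middle ground would be to keep those two and drop just the sticky-session ones (a sketch):

```yaml
# Keep routing-related annotations, drop only the sticky-session ones.
annotations:
  nginx.ingress.kubernetes.io/rewrite-target: /
  kubernetes.io/ingress.class: "{{ k8s.ingress.class }}"
  # affinity / session-cookie annotations removed so requests
  # are load-balanced across all ready Pods
```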
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | BeGreen |
