Kubernetes Readiness probe failed error

While running my container on Kubernetes with the helm upgrade command, I am getting this error:

'Readiness probe failed: Get http://172.17.0.6:3003/: dial tcp 172.17.0.6:3003: getsockopt: connection refused'.

My Docker image is for a Node.js application, and I am trying to manage it through minikube.



Solution 1:[1]

For anyone else here: if you use Helm to manage your deployments, you need to set initialDelaySeconds under livenessProbe in the deployment.yaml template in the /templates folder. The livenessProbe will force-restart your pod if the probe cannot connect, which is what was happening with mine. It wasn't giving my application enough time to start up.
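A minimal sketch of what that can look like in templates/deployment.yaml; the path, port, and delay values here are illustrative assumptions, not values from the answer:

```yaml
# templates/deployment.yaml (container spec fragment, illustrative values)
livenessProbe:
  httpGet:
    path: /
    port: 3003
  initialDelaySeconds: 30   # give the app time to start before the first probe
  periodSeconds: 10
```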

Solution 2:[2]

This can be solved by increasing the initial delay in the readiness check. In my case, the connection to the database was taking longer than the initial delay, which is why the readiness probe was failing.

Solution 3:[3]

Helm:

I would recommend setting the initialDelaySeconds value in the values.yaml file and using a template action, {{ .Values.initialDelaySeconds }}, to insert the value into the deployment.yaml template.
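For example (a hedged sketch; the key name, path, and port are assumptions for illustration):

```yaml
# values.yaml
initialDelaySeconds: 30
```

```yaml
# templates/deployment.yaml (probe fragment)
readinessProbe:
  httpGet:
    path: /
    port: 3003
  initialDelaySeconds: {{ .Values.initialDelaySeconds }}
```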

kubectl:

Just add initialDelaySeconds: 5 (for a 5-second delay) to the probe in your manifest (Deployment, Pod, ReplicaSet, etc.) and apply your changes.
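Sketched as a plain manifest fragment, without Helm templating (container name, image, path, and port are assumptions):

```yaml
# deployment.yaml (fragment) — apply with: kubectl apply -f deployment.yaml
containers:
  - name: my-app
    image: my-app:latest
    readinessProbe:
      httpGet:
        path: /
        port: 3003
      initialDelaySeconds: 5   # wait 5 seconds before the first probe
```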

If it still fails, get coffee and start looking at the logs from the container.

Run kubectl logs -h for more help.

Solution 4:[4]

I had this issue too. It was fixed by making my application listen on host 0.0.0.0, set in my Dockerfile with: ENV HOST '0.0.0.0'.

When deploying to Docker (and potentially other) containers, it is advisable to listen on 0.0.0.0, because a port bound only to localhost inside the container is not reachable through the container's mapped ports.

Solution 5:[5]

Updating the coredns image from v1.5.0 to the current version, v1.6.9, fixed the error for me.

Solution 6:[6]

Helm & Node.js:

For my Node.js app, setting initialDelaySeconds to 5s wasn't enough on its own.

I solved the issue by using the @cloudnative/health-connect library.

Just follow the instructions in the README to import the package and set the app up to respond to liveness and readiness check requests.

And make sure you set the correct path and port for both livenessProbe and readinessProbe in /templates/deployment.yaml:

  livenessProbe:
    initialDelaySeconds: {{ .Values.initialDelaySeconds }}
    httpGet:
      path: /live
      port: 3000
  readinessProbe:
    initialDelaySeconds: {{ .Values.initialDelaySeconds }}
    httpGet:
      path: /ready
      port: 3000

reference: https://developer.ibm.com/tutorials/health-checking-kubernetes-nodejs-application/

Solution 7:[7]

I had the same issue with a .NET 6 Web API, and the solution was updating the Docker base image in the Dockerfile as below:

FROM mcr.microsoft.com/dotnet/sdk:6.0 AS base

to

FROM mcr.microsoft.com/dotnet/aspnet:6.0 AS base

Also, if you deploy your application with Helm, make sure the ports exposed in your docker-compose file match the ports the application itself listens on.
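Put together, a typical multi-stage Dockerfile for this setup might look like the sketch below. The base image names come from the answer; the project name, paths, and exposed port are assumptions for illustration:

```dockerfile
# Build with the SDK image, run on the lighter ASP.NET runtime image
FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /src
COPY . .
RUN dotnet publish -c Release -o /app

FROM mcr.microsoft.com/dotnet/aspnet:6.0 AS base
WORKDIR /app
COPY --from=build /app .
# This port must match what the app listens on and what the probes target
EXPOSE 80
ENTRYPOINT ["dotnet", "MyWebApi.dll"]
```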

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution Source
Solution 1 Dane Jordan
Solution 2 Karan Sodhi
Solution 3 Margach Chris
Solution 4 an1waukoni
Solution 5 NOZUONOHIGH
Solution 6
Solution 7 Hasan