Check failed pod logs in a Kubernetes cluster

I have a Kubernetes cluster in which different pods are running in different namespaces. How do I know if any pod has failed?

Is there any single command to check the list of failed or restarted pods?

And the reason for the restart (logs)?



Solution 1:[1]

It depends on whether you want detailed information or just want to check the last few failed pods.

I would recommend reading about Logging Architecture.

If you would like to have this detailed information, you should use third-party software, as described in the Kubernetes documentation (Logging Using Elasticsearch and Kibana), or another tool such as Fluentd.

If you are using a cloud environment, you can use its integrated cloud logging tools (e.g. on Google Cloud Platform you can use Stackdriver).

If you want to check the logs to find the reason why a pod failed, this is well described in the K8s docs under Debug Running Pods.

If you want to get logs from a specific pod:

$ kubectl logs ${POD_NAME} -n ${NAMESPACE}
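
If the pod has multiple containers, you can first list their names to find the right ${CONTAINER_NAME} for the commands below (a quick sketch using kubectl's jsonpath output; ${POD_NAME} and ${NAMESPACE} are the same placeholders as above):

$ kubectl get pod ${POD_NAME} -n ${NAMESPACE} -o jsonpath='{.spec.containers[*].name}'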

First, look at the logs of the affected container:

$ kubectl logs ${POD_NAME} ${CONTAINER_NAME} 

If your container has previously crashed, you can access the previous container's crash log with:

$ kubectl logs --previous ${POD_NAME} ${CONTAINER_NAME}
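
If the logs alone don't explain the restart, kubectl describe shows the state Kubernetes recorded for the last terminated container (the Last State, Reason, and Exit Code fields, e.g. Reason: OOMKilled):

$ kubectl describe pod ${POD_NAME} -n ${NAMESPACE}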

You can obtain additional information using:

$ kubectl get events -o wide --all-namespaces | grep <your condition>
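
For example, to see only warnings across all namespaces, ordered by time (--field-selector and --sort-by are standard kubectl options):

$ kubectl get events --all-namespaces --field-selector type=Warning --sort-by='.lastTimestamp'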

A similar question was posted in this SO thread; you can check it for more details.

Solution 2:[2]

This'll work:

$ kubectl get pods --all-namespaces | grep -Ev '([0-9]+)/\1'

The inverted grep with a backreference filters out every pod whose READY column shows all containers ready (e.g. 1/1), leaving only pods that are not fully ready.
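
Two related one-liners that may be closer to what the question asks (field selectors and --sort-by are built-in kubectl features; note the sort path looks only at the first container's restart count):

$ kubectl get pods --all-namespaces --field-selector=status.phase=Failed

$ kubectl get pods --all-namespaces --sort-by='.status.containerStatuses[0].restartCount'

The first lists pods whose phase is Failed; the second lists all pods with the most-restarted ones at the bottom.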

Also, Lens is pretty good in these situations.

Solution 3:[3]

Most of the time, the reason for the app failure is printed in the last logs of the previous pod. You can see them by simply adding the --previous flag to your kubectl logs ... command.
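
For example (-p is the short form of --previous; the placeholders are the same as in Solution 1):

$ kubectl logs -p ${POD_NAME} -c ${CONTAINER_NAME} -n ${NAMESPACE}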

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution 1: PjoterS
Solution 2: Yarden Shoham
Solution 3: Ted