Kubernetes Dashboard CrashLoopBackOff: timeout error on Raspberry Pi cluster

This should be a simple task: I just want to run the Kubernetes Dashboard on a clean install of Kubernetes on a Raspberry Pi cluster.

What I've done:

  • Set up the initial cluster (hostname, static IP, cgroups, swap space, install and configure Docker, install Kubernetes, set up the Kubernetes network, and join the nodes); see the rough command sketch after this list
  • Installed flannel
  • Applied the dashboard
  • A bunch of random testing trying to figure this out
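
Roughly, the setup looked like the following (the manifest URLs, flags, and CIDRs are from memory and use the common defaults, so treat this as a sketch rather than my exact history; my cluster clearly used different address ranges, as the IPs further down show):

# Initialize the control plane (pod CIDR chosen to match the CNI config)
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# Install the flannel CNI (manifest URL may have moved since)
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml

# Deploy the dashboard (v2.4.0 recommended manifest)
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.4.0/aio/deploy/recommended.yaml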

Obviously, as seen below, the container in the dashboard pod is not working because it cannot access the kubernetes-dashboard-csrf secret. I have no idea why it cannot be accessed; my only thought is that I missed a step when setting up the cluster. I've followed about six different guides without success, prioritizing the official one. I have also seen quite a few people hitting the same or similar issues, but most have not posted a resolution. Thanks!
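
For reference, these standard kubectl checks confirm that the secret exists and that the dashboard's service account is allowed to read it (as the panic further down shows, the real failure here turned out to be a network timeout rather than a permission problem, so they come back clean):

# Does the CSRF secret exist?
kubectl get secret kubernetes-dashboard-csrf -n kubernetes-dashboard

# Can the dashboard service account read it?
kubectl auth can-i get secret/kubernetes-dashboard-csrf -n kubernetes-dashboard \
  --as=system:serviceaccount:kubernetes-dashboard:kubernetes-dashboard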

Nodes: kubectl get nodes

NAME      STATUS   ROLES                  AGE    VERSION
gus3      Ready    <none>                 346d   v1.23.1
juliet3   Ready    <none>                 346d   v1.23.1
shawn4    Ready    <none>                 346d   v1.23.1
vick4     Ready    control-plane,master   346d   v1.23.1

All Pods: kubectl get pods --all-namespaces

NAMESPACE              NAME                                         READY   STATUS             RESTARTS   AGE
kube-system            coredns-74ff55c5b-7j2xg                      1/1     Running            27         346d
kube-system            coredns-74ff55c5b-cb2x8                      1/1     Running            27         346d
kube-system            etcd-vick4                                   1/1     Running            2          169m
kube-system            kube-apiserver-vick4                         1/1     Running            2          169m
kube-system            kube-controller-manager-vick4                1/1     Running            2          169m
kube-system            kube-flannel-ds-gclmp                        1/1     Running            0          11m
kube-system            kube-flannel-ds-hshjv                        1/1     Running            0          12m
kube-system            kube-flannel-ds-kdd4w                        1/1     Running            0          11m
kube-system            kube-flannel-ds-wzhkt                        1/1     Running            0          10m
kube-system            kube-proxy-4t25v                             1/1     Running            26         346d
kube-system            kube-proxy-b6vbx                             1/1     Running            26         346d
kube-system            kube-proxy-jgj4s                             1/1     Running            27         346d
kube-system            kube-proxy-n65sl                             1/1     Running            26         346d
kube-system            kube-scheduler-vick4                         1/1     Running            2          169m
kubernetes-dashboard   dashboard-metrics-scraper-5b8896d7fc-99wfk   1/1     Running            0          77m
kubernetes-dashboard   kubernetes-dashboard-897c7599f-qss5p         0/1     CrashLoopBackOff   18         77m

Resources: kubectl get all -n kubernetes-dashboard

NAME                                             READY   STATUS             RESTARTS   AGE
pod/dashboard-metrics-scraper-5b8896d7fc-99wfk   1/1     Running            0          79m
pod/kubernetes-dashboard-897c7599f-qss5p         0/1     CrashLoopBackOff   19         79m

NAME                                TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
service/dashboard-metrics-scraper   ClusterIP   172.20.0.191   <none>        8000/TCP   79m
service/kubernetes-dashboard        ClusterIP   172.20.0.15    <none>        443/TCP    79m

NAME                                        READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/dashboard-metrics-scraper   1/1     1            1           79m
deployment.apps/kubernetes-dashboard        0/1     1            0           79m

NAME                                                   DESIRED   CURRENT   READY   AGE
replicaset.apps/dashboard-metrics-scraper-5b8896d7fc   1         1         1       79m
replicaset.apps/kubernetes-dashboard-897c7599f         1         1         0       79m

Notice the CrashLoopBackOff status.

Pod Details: kubectl describe pods kubernetes-dashboard-897c7599f-qss5p -n kubernetes-dashboard

Name:         kubernetes-dashboard-897c7599f-qss5p
Namespace:    kubernetes-dashboard
Priority:     0
Node:         shawn4/192.168.10.71
Start Time:   Fri, 17 Dec 2021 18:52:15 +0000
Labels:       k8s-app=kubernetes-dashboard
              pod-template-hash=897c7599f
Annotations:  <none>
Status:       Running
IP:           172.19.1.75
IPs:
  IP:           172.19.1.75
Controlled By:  ReplicaSet/kubernetes-dashboard-897c7599f
Containers:
  kubernetes-dashboard:
    Container ID:  docker://894a354e40ca1a95885e149dcd75415e0f186ead3f2e05ec0787f4b1c7a29622
    Image:         kubernetesui/dashboard:v2.4.0
    Image ID:      docker-pullable://kubernetesui/dashboard@sha256:526850ae4ea9aba360e72b6df69fd3126b129d446efe83ac5250282b85f95b7f
    Port:          8443/TCP
    Host Port:     0/TCP
    Args:
      --auto-generate-certificates
      --namespace=kubernetes-dashboard
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    2
      Started:      Fri, 17 Dec 2021 20:10:19 +0000
      Finished:     Fri, 17 Dec 2021 20:10:49 +0000
    Ready:          False
    Restart Count:  19
    Liveness:       http-get https://:8443/ delay=30s timeout=30s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /certs from kubernetes-dashboard-certs (rw)
      /tmp from tmp-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kubernetes-dashboard-token-wq9m8 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  kubernetes-dashboard-certs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kubernetes-dashboard-certs
    Optional:    false
  tmp-volume:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  kubernetes-dashboard-token-wq9m8:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kubernetes-dashboard-token-wq9m8
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  kubernetes.io/os=linux
Tolerations:     node-role.kubernetes.io/master:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason   Age                  From     Message
  ----     ------   ----                 ----     -------
  Warning  BackOff  21s (x327 over 79m)  kubelet  Back-off restarting failed container
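
The events only show the back-off; the useful detail is in the container's own logs. For a crash-looping container, the logs of the last terminated run can also be pulled with the standard --previous flag:

# Logs from the previous (failed) instance of the crashing container
kubectl logs -n kubernetes-dashboard kubernetes-dashboard-897c7599f-qss5p --previous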

Logs: kubectl logs -f -n kubernetes-dashboard kubernetes-dashboard-897c7599f-qss5p

2021/12/17 20:10:19 Starting overwatch
2021/12/17 20:10:19 Using namespace: kubernetes-dashboard
2021/12/17 20:10:19 Using in-cluster config to connect to apiserver
2021/12/17 20:10:19 Using secret token for csrf signing
2021/12/17 20:10:19 Initializing csrf token from kubernetes-dashboard-csrf secret
panic: Get "https://172.20.0.1:443/api/v1/namespaces/kubernetes-dashboard/secrets/kubernetes-dashboard-csrf": dial tcp 172.20.0.1:443: i/o timeout

goroutine 1 [running]:
github.com/kubernetes/dashboard/src/app/backend/client/csrf.(*csrfTokenManager).init(0x400055fae8)
        /home/runner/work/dashboard/dashboard/src/app/backend/client/csrf/manager.go:41 +0x350
github.com/kubernetes/dashboard/src/app/backend/client/csrf.NewCsrfTokenManager(...)
        /home/runner/work/dashboard/dashboard/src/app/backend/client/csrf/manager.go:66
github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).initCSRFKey(0x40001fc080)
        /home/runner/work/dashboard/dashboard/src/app/backend/client/manager.go:502 +0x8c
github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).init(0x40001fc080)
        /home/runner/work/dashboard/dashboard/src/app/backend/client/manager.go:470 +0x40
github.com/kubernetes/dashboard/src/app/backend/client.NewClientManager(...)
        /home/runner/work/dashboard/dashboard/src/app/backend/client/manager.go:551
main.main()
        /home/runner/work/dashboard/dashboard/src/app/backend/dashboard.go:95 +0x1dc
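
The panic is a dial timeout to the kubernetes service ClusterIP (172.20.0.1:443), which points at pod-to-apiserver connectivity (overlay network / kube-proxy) rather than the dashboard itself. A sketch of how to confirm that from outside the dashboard pod (the curl image is an assumption; any image with curl or wget works):

# The in-cluster API service and its endpoints
kubectl get svc,endpoints kubernetes -n default

# Try the same ClusterIP from a throwaway pod on the pod network
kubectl run api-test --rm -it --restart=Never --image=curlimages/curl --command -- \
  curl -k -m 5 https://172.20.0.1:443/version

# kube-proxy logs on the nodes (kube-proxy programs the service VIP routing)
kubectl logs -n kube-system -l k8s-app=kube-proxy --tail=50

Even a 401/403 response from the curl test would prove the network path works; in my case the dashboard only ever saw a timeout.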

If you need any more information, please ask!

UPDATE 12/29/21: Fixed this issue by reinstalling the cluster with the newest versions of Kubernetes and Ubuntu.



Solution 1:[1]

Turned out there were several issues:

  • I was using a deprecated OS release (Buster).
  • My Kubernetes client and server versions were a few minor versions out of sync.
  • I was following outdated instructions

I reinstalled the whole cluster following the official Kubernetes guide and, with a few snags along the way, it works!
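
For anyone checking the same things before reinstalling, the version skew and OS release are easy to confirm (standard commands as of v1.23; kubectl officially supports only one minor version of skew against the API server):

# Client vs. server version (look for more than one minor version of difference)
kubectl version --short

# OS release on each node
cat /etc/os-release

# On the control-plane node, see what kubeadm could upgrade to
sudo kubeadm upgrade plan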

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution Source
[1] Solution 1: Matthew Vine