Kubernetes Tolerations: node.kubernetes.io/unreachable:NoExecute for 300s

I'm new to Kubernetes and I'm struggling with a few errors. I want to create a Kubernetes cluster on my local system (Mac).

My deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sv-premier
spec:
  selector:
    matchLabels:
      app: sv-premier
  template:
    metadata:
      labels:
        app: sv-premier
    spec:
      volumes:
      - name: google-cloud-key
        secret:
          secretName: gcp-key
      containers:
      - name: sv-premier
        image: gcr.io/proto/premiercore1:latest
        imagePullPolicy: Always
        command: ["/bin/sh"]
        args: ["-c", "while true; do echo Done Deploying sv-premier; sleep 3600;done"]
        volumeMounts:
        - name: google-cloud-key
          mountPath: /var/secrets/google
        env:
        - name: GOOGLE_APPLICATION_CREDENTIALS
          value: /var/secrets/google/key.json
        ports:
        - containerPort: 8080
      imagePullSecrets:
      - name: imagepullsecretkey
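
For this deployment to start, the two secrets it references (gcp-key and imagepullsecretkey) must already exist in the namespace. A minimal sketch of creating them, assuming a local service-account file key.json; the registry email value is a placeholder:

```shell
# Secret backing the google-cloud-key volume (assumes ./key.json exists locally)
kubectl create secret generic gcp-key --from-file=key.json=./key.json

# Image pull secret for gcr.io; _json_key is the conventional username for
# authenticating to Google Container Registry with a service-account key.
# The email value is a placeholder.
kubectl create secret docker-registry imagepullsecretkey \
  --docker-server=gcr.io \
  --docker-username=_json_key \
  --docker-password="$(cat ./key.json)" \
  --docker-email=example@example.com
```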

I created the deployment with kubectl apply -f deployment.yaml

kubectl get pods

NAME                          READY   STATUS    RESTARTS   AGE
sv-premier-5cc8f599f6-9lrtq   1/1     Running   0          11s

kubectl describe pods sv-premier-5cc8f599f6-9lrtq

Name:           sv-premier-5cc8f599f6-9lrtq
Namespace:      default
Priority:       0
Node:           docker-desktop/192.168.65.3
Start Time:     Tue, 11 Feb 2020 19:04:21 +0530
Labels:         app=sv-premier
                pod-template-hash=5cc8f599f6
Annotations:    <none>
Status:         Running
IP:             10.1.0.54
IPs:            <none>
Controlled By:  ReplicaSet/sv-premier-5cc8f599f6
Containers:
  sv-premier:
    Container ID:  docker://b8993b4fc43197947649c7409b37e6d381a8d4cbbe56e550bca83931747ddd3e
    Image:         gcr.io/proto/premiercore1:latest
    Image ID:      docker-pullable://gcr.io/proto/premiercore1@sha256:664778c72c3f79147c4c5b73914292a124009591f479a5e3acf42c444eb62860
    Port:          4343/TCP
    Host Port:     0/TCP
    Command:
      /bin/sh
    Args:
      -c
      while true; do echo Done Deploying sv-premier; sleep 3600;done
    State:          Running
      Started:      Tue, 11 Feb 2020 19:04:24 +0530
    Ready:          True
    Restart Count:  0
    Environment:
      GOOGLE_APPLICATION_CREDENTIALS:  /var/secrets/google/key.json
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-s4jgd (ro)
      /var/secrets/google from google-cloud-key (rw)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  google-cloud-key:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  gcp-key
    Optional:    false
  default-token-s4jgd:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-s4jgd
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age   From                     Message
  ----    ------     ----  ----                     -------
  Normal  Scheduled  67s   default-scheduler        Successfully assigned default/sv-premier-5cc8f599f6-9lrtq to docker-desktop
  Normal  Pulling    66s   kubelet, docker-desktop  Pulling image "gcr.io/proto/premiercore1:latest"
  Normal  Pulled     64s   kubelet, docker-desktop  Successfully pulled image "gcr.io/proto/premiercore1:latest"
  Normal  Created    64s   kubelet, docker-desktop  Created container sv-premier
  Normal  Started    64s   kubelet, docker-desktop  Started container sv-premier

Why am I getting this:

Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s

Could somebody more experienced than me kindly help?



Solution 1:[1]

Note Kubernetes automatically adds a toleration for node.kubernetes.io/not-ready and node.kubernetes.io/unreachable.

Kubernetes automatically adds a toleration for node.kubernetes.io/not-ready with tolerationSeconds=300 unless the pod configuration provided by the user already has a toleration for node.kubernetes.io/not-ready. Likewise it adds a toleration for node.kubernetes.io/unreachable with tolerationSeconds=300 unless the pod configuration provided by the user already has a toleration for node.kubernetes.io/unreachable.

These automatically-added tolerations ensure that the default pod behavior of remaining bound for 5 minutes after one of these problems is detected is maintained.

Read the complete details in the Kubernetes documentation on Taints and Tolerations.
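
Because the defaults are only added when the pod does not already declare these tolerations, you can override the 300-second window by declaring the toleration explicitly in the pod template. A sketch using kubectl patch on the question's deployment; the 30-second value is an arbitrary example:

```shell
# Add an explicit toleration to the pod template so the default 300s
# toleration for node.kubernetes.io/unreachable is not injected.
kubectl patch deployment sv-premier --type=json -p='[
  {"op": "add", "path": "/spec/template/spec/tolerations", "value": [
    {"key": "node.kubernetes.io/unreachable",
     "operator": "Exists",
     "effect": "NoExecute",
     "tolerationSeconds": 30}
  ]}
]'
```

The same stanza can of course be written directly under spec.template.spec.tolerations in deployment.yaml instead of patching.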

The following taints are built in:

node.kubernetes.io/not-ready: Node is not ready. This corresponds to the NodeCondition Ready being “False”.

node.kubernetes.io/unreachable: Node is unreachable from the node controller. This corresponds to the NodeCondition Ready being “Unknown”.

More built-in taints:

node.kubernetes.io/out-of-disk: Node becomes out of disk.

node.kubernetes.io/memory-pressure: Node has memory pressure.

node.kubernetes.io/disk-pressure: Node has disk pressure.

node.kubernetes.io/network-unavailable: Node’s network is unavailable.

node.kubernetes.io/unschedulable: Node is unschedulable.

node.cloudprovider.kubernetes.io/uninitialized: When the kubelet is started with “external” cloud provider, this taint is set on a node to mark it as unusable. After a controller from the cloud-controller-manager initializes this node, the kubelet removes this taint.
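
To see which of these taints are currently set on your nodes, you can query them directly; a sketch (empty output in the taints column means none are set):

```shell
# List each node with its taints, one node per line
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'

# Or inspect a single node (docker-desktop here, matching the question)
kubectl describe node docker-desktop | grep -A3 Taints
```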

Solution 2:[2]

I have a cluster with a master and two workers, and I had the same problem.

Taints:             node.kubernetes.io/unreachable:NoExecute
                    node.kubernetes.io/unreachable:NoSchedule
                    node.kubernetes.io/unschedulable:NoSchedule
Unschedulable:      true

The container runtime of the node needed to be changed from Docker to containerd, and that is when I got this.

Here is the Events section of kubectl describe node worker01:

Events:
  Type    Reason                   Age                 From        Message
  ----    ------                   ----                ----        -------
  Normal  Starting                 47h                 kube-proxy  
  Normal  Starting                 38m                 kube-proxy  
  Normal  Starting                 47h                 kubelet     Starting kubelet.
  Normal  NodeAllocatableEnforced  47h                 kubelet     Updated Node Allocatable limit across pods
  Normal  NodeHasSufficientMemory  47h (x3 over 47h)   kubelet     Node worker01 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    47h (x3 over 47h)   kubelet     Node worker01 status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     47h (x3 over 47h)   kubelet     Node worker01 status is now: NodeHasSufficientPID
  Normal  NodeReady                47h (x2 over 47h)   kubelet     Node worker01 status is now: NodeReady
  Normal  NodeHasSufficientMemory  77m (x10 over 78m)  kubelet     Node worker01 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    77m (x10 over 78m)  kubelet     Node worker01 status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     77m (x10 over 78m)  kubelet     Node worker01 status is now: NodeHasSufficientPID

Output of kubectl get node -A -o wide:

master     Ready                         control-plane,master   4d2h   v1.23.5   192.168.42.10    <none>        Ubuntu 20.04.4 LTS   5.4.0-105-generic   containerd://1.5.5
worker01   NotReady,SchedulingDisabled   <none>                 47h    v1.23.5   192.168.42.100   <none>        Ubuntu 20.04.4 LTS   5.4.0-105-generic   docker://20.10.14
worker02   Ready                         <none>                 47h    v1.23.5   192.168.42.200   <none>        Ubuntu 20.04.4 LTS   5.4.0-107-generic   containerd://1.5.5
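
Here worker01 is both NotReady (its old Docker runtime no longer matches what the kubelet expects) and cordoned (SchedulingDisabled). After installing containerd and pointing the kubelet at its socket, a sketch of the recovery steps; paths and unit names are the common kubeadm/systemd defaults and may differ on your distribution:

```shell
# On worker01: after configuring the kubelet to use the containerd socket
# (for kubeadm clusters this is typically --container-runtime-endpoint=
# unix:///run/containerd/containerd.sock in /var/lib/kubelet/kubeadm-flags.env),
# restart the runtime and the kubelet.
sudo systemctl restart containerd
sudo systemctl restart kubelet

# From the control plane: clear the unschedulable flag once the node is Ready
kubectl uncordon worker01
kubectl get node worker01 -o wide
```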

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution sources:
Solution 1: DT.
Solution 2: