'"kubectl get nodes" shows NotReady always even after giving the appropriate IP

I am trying to set up a Kubernetes cluster for testing purposes, with one master and one minion. When I run kubectl get nodes, the node always shows NotReady. The following is the configuration on the minion in /etc/kubernetes/kubelet:

KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_PORT="--port=10250"
KUBELET_HOSTNAME="--hostname-override=centos-minion"
KUBELET_API_SERVER="--api-servers=http://centos-master:8080"
KUBELET_ARGS=""

When the kubelet service is started, the following errors appear in the logs:

Mar 16 13:29:49 centos-minion kubelet: E0316 13:29:49.126595 53912 event.go:202] Unable to write event: 'Post http://centos-master:8080/api/v1/namespaces/default/events: dial tcp 10.143.219.12:8080: i/o timeout' (may retry after sleeping)

Mar 16 13:16:01 centos-minion kube-proxy: E0316 13:16:01.195731 53595 event.go:202] Unable to write event: 'Post http://localhost:8080/api/v1/namespaces/default/events: dial tcp [::1]:8080: getsockopt: connection refused' (may retry after sleeping)
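The first error shows the kubelet timing out when it tries to reach the master on port 8080, and the second shows kube-proxy falling back to localhost instead of centos-master. A quick connectivity check from both sides may help narrow this down; this is a rough sketch that assumes the non-TLS port 8080 shown above and the default firewalld setup on CentOS:

# On the minion: can the master be resolved and reached on 8080?
ping -c 2 centos-master
curl -m 5 http://centos-master:8080/version

# On the master: check whether the firewall is blocking 8080, and open it if needed
sudo firewall-cmd --list-all
sudo firewall-cmd --permanent --add-port=8080/tcp && sudo firewall-cmd --reload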

The following is the config on the master in /etc/kubernetes/apiserver:

KUBE_API_ADDRESS="--bind-address=0.0.0.0"
KUBE_API_PORT="--port=8080"
KUBELET_PORT="--kubelet-port=10250"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

and in /etc/kubernetes/config:

KUBE_ETCD_SERVERS="--etcd-servers=http://centos-master:2379"
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://centos-master:8080"

On the master, the following processes are running properly:

kube 5657 1 0 Mar15 ? 00:12:05 /usr/bin/kube-apiserver --logtostderr=true --v=0 --etcd-servers=http://centos-master:2379 --address=0.0.0.0 --port=8080 --kubelet-port=10250 --allow-privileged=false --service-cluster-ip-range=10.254.0.0/16

kube 5690 1 1 Mar15 ? 00:16:01 /usr/bin/kube-controller-manager --logtostderr=true --v=0 --master=http://centos-master:8080

kube 5723 1 0 Mar15 ? 00:02:23 /usr/bin/kube-scheduler --logtostderr=true --v=0 --master=http://centos-master:8080
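To confirm the apiserver really is listening on 0.0.0.0:8080 (and not only on the loopback interface), something like the following can be run on the master; ss and curl are assumed to be installed:

# The apiserver should show up bound to *:8080 or 0.0.0.0:8080
sudo ss -tlnp | grep 8080
# The insecure port should answer both locally and under the hostname the minion uses
curl -m 5 http://localhost:8080/healthz
curl -m 5 http://centos-master:8080/healthz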

So I still do not know what is missing.



Solution 1:[1]

I had the same issue when setting up Kubernetes on Fedora following the steps on kubernetes.io. The tutorial leaves KUBELET_ARGS="--cgroup-driver=systemd" commented out in the node's /etc/kubernetes/kubelet; if you uncomment it, the node status becomes Ready. Hope this helps.
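For reference, here is a minimal sketch of the node's /etc/kubernetes/kubelet with that line uncommented, using the hostname and master address from the question (adjust for your own setup), followed by a kubelet restart:

KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_PORT="--port=10250"
KUBELET_HOSTNAME="--hostname-override=centos-minion"
KUBELET_API_SERVER="--api-servers=http://centos-master:8080"
# previously commented out; the kubelet's cgroup driver has to match the one the container runtime uses
KUBELET_ARGS="--cgroup-driver=systemd"

sudo systemctl restart kubelet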

Solution 2:[2]

Rejoin the worker nodes to the master.

My install is on three physical machines: one master and two workers. All of them needed reboots.

You will need your join token, which you probably don't have on hand:

sudo kubeadm token list

Copy the TOKEN field value; the output looks like this (no, that's not my real one):

TOKEN                    TTL   EXPIRES                     USAGES                   DESCRIPTION                                                EXTRA GROUPS
ow3v08ddddgmgzfkdkdkd7   18h   2018-07-30T12:39:53-05:00   authentication,signing   The default bootstrap token generated by 'kubeadm init'.  system:bootstrappers:kubeadm:default-node-token

Then join the cluster. The master node IP is the real IP address of your master machine:

sudo kubeadm join --token <YOUR TOKEN HASH> <MASTER_NODE_IP>:6443 --discovery-token-unsafe-skip-ca-verification
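If kubeadm token list prints nothing (bootstrap tokens usually expire after 24 hours), a fresh token and a ready-made join command can be generated on the master; this assumes a kubeadm-based install like the one described above:

# On the master: create a new token and print the complete join command for the workers
sudo kubeadm token create --print-join-command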

Solution 3:[3]

You have to restart the kubelet service on the node (systemctl enable kubelet and systemctl restart kubelet). Then you will see your node in "Ready" status.
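A short verification sketch, using the node name from the question and standard kubectl commands:

# On the node
sudo systemctl enable kubelet
sudo systemctl restart kubelet

# On the master: check the node status and, if it stays NotReady, inspect its conditions
kubectl get nodes
kubectl describe node centos-minion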

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution 1: QiZ
Solution 2: texasdave
Solution 3: sundeeplp