Kubernetes Cluster - Worker node error while using kubectl

I have created a Kubernetes cluster with two nodes: one master node and one worker node (two different VMs).

The worker node has joined the cluster successfully, so when I run the command kubectl get nodes on my master node, both nodes appear in the cluster.

However, when I run the command kubectl apply -f https://k8s.io/examples/controllers/nginx-deployment.yaml from my worker node's terminal, in order to create a deployment on the worker node, I get the following error:

The connection to the server localhost:8080 was refused - did you specify the right host or port?

Any idea what is going on here?



Solution 1:[1]

The easy way to do it is to copy the kubeconfig from the master node, usually found at /etc/kubernetes/admin.conf, to whichever node you want to run kubectl from (even the master node itself). Copy it to $HOME/.kube/config.
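As a rough sketch of that copy step, assuming a default kubeadm setup and that you can SSH from the worker to the master (the <user> and <master-ip> values are placeholders you would replace):

# On the worker node (or any machine that should run kubectl):
mkdir -p $HOME/.kube

# Copy the kubeconfig over from the master.
# Note: admin.conf is normally readable only by root on the master, so you may
# need to copy it to a path your user can read first, or perform the copy as root.
scp <user>@<master-ip>:/etc/kubernetes/admin.conf $HOME/.kube/config

# Make sure your own user owns the file.
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Verify that kubectl now talks to the API server instead of localhost:8080.
kubectl get nodes

Alternatively, instead of placing the file at $HOME/.kube/config, you can point kubectl at any copy of it via the KUBECONFIG environment variable.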

Also, you can run that command from the master node and target the worker node by specifying a nodeSelector or node label.

Assign Pods to Nodes (Kubernetes documentation)
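As a rough illustration of that approach, run from the master node: label the worker, then apply a Deployment whose pods are pinned to that label. The label role=worker and the <worker-node-name> placeholder are made up for this example; substitute the node name reported by kubectl get nodes.

# Label the worker node so pods can be scheduled onto it selectively.
kubectl label nodes <worker-node-name> role=worker

# Apply a Deployment restricted to nodes carrying that label.
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      nodeSelector:
        role: worker
      containers:
      - name: nginx
        image: nginx:1.14.2
EOF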

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution Source
Solution 1: Anurag Arya