My curl request cannot reach my deployment's pods through the NodePort

I'm a newbie in Kubernetes, trying to learn it on an AWS EC2 instance.

My plan was to create a Deployment using an nginx container, create a Service, and forward external requests to the nginx container.

To do that, I first wrote this YAML:

apiVersion: apps/v1
kind: Deployment
metadata:
  name:  nginx-deploy
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name:  nginx-container
        image:  nginx:latest
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
          limits:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort:  80
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
  labels:
    app: nginx
spec:
  selector:
    app: nginx
  type: NodePort
  ports:
  - protocol: TCP
    port: 80
    nodePort: 30080
    targetPort: 80
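For reference, the three port fields in this Service each play a different role; annotated, the mapping is (a restatement of the spec above, not a change to it):

```yaml
ports:
- protocol: TCP
  port: 80          # the Service's own (ClusterIP) port inside the cluster
  targetPort: 80    # the containerPort the traffic is forwarded to
  nodePort: 30080   # the port opened on every node's IP (range 30000-32767)
```

So a request should flow node-ip:30080 -> Service port 80 -> container port 80.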

Then I sent a request to NodePort 30080, but the Service does not forward it to the Pods; curl just retries forever.

root@master:~# curl -X GET http://localhost:30080 -v
Note: Unnecessary use of -X or --request, GET is already inferred.
*   Trying 127.0.0.1:30080...
* TCP_NODELAY set
*   Trying ::1:30080...
* TCP_NODELAY set
* Immediate connect fail for ::1: Cannot assign requested address
* (the three lines above repeat several more times)
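(Side note for debugging: the trace shows curl failing immediately on the IPv6 loopback ::1 and hanging on 127.0.0.1. Independent of curl, a raw TCP probe tells you whether anything is listening on the NodePort at all. A minimal sketch, where the host list and port are assumptions; on the EC2 node you would also try the node's private IP:)

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port can be established."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Probe the NodePort on both loopback addresses; on the node itself, also
# probe the node's private IP, which is where NodePort traffic arrives.
for host in ("127.0.0.1", "::1"):
    print(host, port_open(host, 30080))
```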

Many references just say "match the labels on both the Service and the Deployment", but all my resources already carry the label app=nginx.

root@master:~# k get po -l app=nginx
NAME                            READY   STATUS    RESTARTS   AGE
nginx-deploy-6bdc4445fd-7t649   1/1     Running   0          4m27s
nginx-deploy-6bdc4445fd-p4cmf   1/1     Running   0          4m27s
nginx-deploy-6bdc4445fd-t4sm9   1/1     Running   0          4m27s
root@master:~# k get deploy -l app=nginx
NAME           READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deploy   3/3     3            3           4m49s
root@master:~# k get svc -l app=nginx
NAME        TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
nginx-svc   NodePort   10.108.43.218   <none>        80:30080/TCP   7m21s

I have no idea what to try next...

Here is my Kubernetes version:

root@master:~# k version
Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.5", GitCommit:"c285e781331a3785a7f436042c65c5641ce8a9e9", GitTreeState:"clean", BuildDate:"2022-03-16T15:58:47Z", GoVersion:"go1.17.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.5", GitCommit:"c285e781331a3785a7f436042c65c5641ce8a9e9", GitTreeState:"clean", BuildDate:"2022-03-16T15:52:18Z", GoVersion:"go1.17.8", Compiler:"gc", Platform:"linux/amd64"}

And my AWS security group (ACG) allows inbound connections on ports 30000-32767.

Why can't my curl request reach the pods' port 80?



Solution 1:[1]

Hello, hope you are enjoying your Kubernetes journey!

I wanted to try this on my kind (Kubernetes in Docker) cluster locally, so this is what I did:

First, I set up a kind cluster locally with this configuration (info here: https://kind.sigs.k8s.io/docs/user/quick-start/):

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: so-cluster-1
nodes:
- role: control-plane
  image: kindest/node:v1.23.5
- role: control-plane
  image: kindest/node:v1.23.5
- role: control-plane
  image: kindest/node:v1.23.5
- role: worker
  image: kindest/node:v1.23.5
- role: worker
  image: kindest/node:v1.23.5
- role: worker
  image: kindest/node:v1.23.5

After that, I created the cluster with this command:

kind create cluster --config=config.yaml

Next, I created a test namespace (manifest obtained with: kubectl create ns so-tests -o yaml --dry-run):

apiVersion: v1
kind: Namespace
metadata:
  name: so-tests

From there my environment was set up, so I copied your manifests and deployed them directly with:

kubectl apply -f manifest.yaml

Then I made sure that your nginx was actually running, with a simple port-forward to my host machine:

$ k port-forward pod/nginx-deploy-6bdc4445fd-bfd6j 8080:80
Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80
Handling connection for 8080

Good: the nginx welcome page came up.

Then I tried the same port-forward, but from the Service this time:

$ k port-forward service/nginx-svc 8080:80
Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80
Handling connection for 8080

Same result -> your Pod and Service are working correctly.

Now I had to check which of my worker nodes your nginx pods were running on:

$ kubectl get po -o wide -l app=nginx
NAME                                READY   STATUS    RESTARTS   AGE     IP            NODE                   NOMINATED NODE   READINESS GATES
nginx-deploy-6bdc4445fd-bfd6j       1/1     Running   0          7m51s   10.244.4.9    so-cluster-1-worker    <none>           <none>
nginx-deploy-6bdc4445fd-nsdvn       1/1     Running   0          7m51s   10.244.3.11   so-cluster-1-worker3   <none>           <none>
nginx-deploy-6bdc4445fd-qvsm6       1/1     Running   0          7m51s   10.244.5.7    so-cluster-1-worker2   <none>           <none>

Here we continue with the first pod, nginx-deploy-6bdc4445fd-bfd6j. We know it is running on the node so-cluster-1-worker.

But since my Docker ports (the ports of the Kubernetes workers) were not mapped to my host machine, I deleted my cluster with:

kind delete cluster --name=so-cluster-1

and rebuilt it with this configuration:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: so-cluster-1

nodes:
- role: control-plane
  image: kindest/node:v1.23.5
- role: control-plane
  image: kindest/node:v1.23.5
- role: control-plane
  image: kindest/node:v1.23.5
- role: worker
  image: kindest/node:v1.23.5
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
  extraPortMappings:
  - containerPort: 30080
    hostPort: 30080
    protocol: TCP
- role: worker
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
  image: kindest/node:v1.23.5
  extraPortMappings:
  - containerPort: 30080
    hostPort: 30081
    protocol: TCP
- role: worker
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
  image: kindest/node:v1.23.5
  extraPortMappings:
  - containerPort: 30080
    hostPort: 30082
    protocol: TCP

by running:

kind create cluster --config=config.yaml

(Since all my nodes are actually containers running on the same host (my computer), I mapped container port 30080 to host port 30080 for worker-1, to 30081 for worker-2, and to 30082 for worker-3. You don't have this problem, since you run Kubernetes on three separate nodes, each with its own ports.)

When my cluster was up and running again, I recreated my namespace and redeployed your manifests.

I confirmed the port mappings by running:

$ sudo docker ps
CONTAINER ID   IMAGE                                COMMAND                  CREATED         STATUS         PORTS                       NAMES
907511eb0f4a   kindest/node:v1.23.5                 "/usr/local/bin/entr…"   8 minutes ago   Up 7 minutes   0.0.0.0:30082->30080/tcp    so-cluster-1-worker3
368688d93b14   kindest/haproxy:v20220207-ca68f7d4   "haproxy -sf 7 -W -d…"   8 minutes ago   Up 8 minutes   127.0.0.1:39151->6443/tcp   so-cluster-1-external-load-balancer
223a8bca5925   kindest/node:v1.23.5                 "/usr/local/bin/entr…"   8 minutes ago   Up 7 minutes   127.0.0.1:41099->6443/tcp   so-cluster-1-control-plane3
4c0c540959c4   kindest/node:v1.23.5                 "/usr/local/bin/entr…"   8 minutes ago   Up 7 minutes   0.0.0.0:30081->30080/tcp    so-cluster-1-worker2
95cc8448a015   kindest/node:v1.23.5                 "/usr/local/bin/entr…"   8 minutes ago   Up 7 minutes   0.0.0.0:30080->30080/tcp    so-cluster-1-worker
d1f83e592677   kindest/node:v1.23.5                 "/usr/local/bin/entr…"   8 minutes ago   Up 7 minutes   127.0.0.1:40311->6443/tcp   so-cluster-1-control-plane2
a8961ee82f33   kindest/node:v1.23.5                 "/usr/local/bin/entr…"   8 minutes ago   Up 7 minutes   127.0.0.1:40879->6443/tcp   so-cluster-1-control-plane

Here is what I have now:

Every 1.0s: kubectl get po,svc,cm,sts,secret -o wide                                                                                             DESKTOP-6PBJAOK: Fri Apr 15 02:03:10 2022

NAME                                READY   STATUS    RESTARTS   AGE     IP           NODE                   NOMINATED NODE   READINESS GATES
pod/nginx-deploy-6bdc4445fd-668dd   1/1     Running   0          2m37s   10.244.5.2   so-cluster-1-worker3   <none>           <none>
pod/nginx-deploy-6bdc4445fd-nmxjk   1/1     Running   0          2m37s   10.244.3.2   so-cluster-1-worker    <none>           <none>
pod/nginx-deploy-6bdc4445fd-z677k   1/1     Running   0          2m37s   10.244.4.2   so-cluster-1-worker2   <none>           <none>

NAME                TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE     SELECTOR
service/nginx-svc   NodePort   10.96.249.248   <none>        80:30080/TCP   2m37s   app=nginx

Now, when I go to my browser on my host and type localhost:30080 (or curl it), here is what I get:

$ curl localhost:30080
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Perfect!

To conclude: I can assure you that your configuration is correct, since I didn't change anything in it. So what now? Make sure your NodePorts are actually open (check iptables and any local firewall on the node). If they are, try to access them from another machine (or via a port-forward) by telnetting the node's IP:PORT socket. Also check your security groups in AWS, and do some general network troubleshooting. ^^
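One way to separate the AWS network path from Kubernetes itself: temporarily bind a trivial TCP server on the node to a free port your security group allows, then connect to it from outside. If that works but the NodePort doesn't, the problem is inside the cluster (kube-proxy/iptables); if it also fails, the problem is the network path. A sketch (the port in the comments is an assumption; pick any allowed, unused one):

```python
import socket

def serve_once(port: int) -> bytes:
    """Accept one TCP connection, reply 'reachable', return what the client sent."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("0.0.0.0", port))  # listen on all interfaces, like a NodePort
        srv.listen(1)
        conn, _addr = srv.accept()
        with conn:
            data = conn.recv(1024)
            conn.sendall(b"reachable\n")
            return data

# On the node:   serve_once(30099)           # hypothetical free, allowed port
# From outside:  telnet <node-ip> 30099      # or any TCP client
```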

You have done well for a beginner! Good job. Bguess

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution Source
Solution 1 bguess