Kubernetes MetalLB External IP not reachable

I can't access the external IP assigned by the MetalLB load balancer.

I created a Kubernetes cluster with k3s. It has 1 master and 1 worker, each with its own private IP.

Master 192.168.0.13

Worker 192.168.0.14

I installed k3s with INSTALL_K3S_EXEC=" --no-deploy servicelb --no-deploy traefik" so that neither the built-in service load balancer nor Traefik is deployed.
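For reference, that corresponds to an install invocation roughly like the following (this uses the standard k3s install script; the exact command may have differed slightly):

curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--no-deploy servicelb --no-deploy traefik" sh -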

Now I am trying to deploy an app using MetalLB and the nginx ingress controller:

helm install metallb stable/metallb --namespace kube-system \
  --set configInline.address-pools[0].name=default \
  --set configInline.address-pools[0].protocol=layer2 \
  --set configInline.address-pools[0].addresses[0]=192.168.0.21-192.168.0.30

helm install nginx-ingress stable/nginx-ingress --namespace kube-system \
    --set controller.image.repository=quay.io/kubernetes-ingress-controller/nginx-ingress-controller \
    --set controller.image.tag=0.30.0 \
    --set controller.image.runAsUser=33 \
    --set defaultBackend.enabled=false

I can see every pod up and running:
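(The listing below comes from a command along these lines; the kube-system namespace is assumed from the helm installs above.)

kubectl get pods --namespace kube-system -o wide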

NAME                                             READY   STATUS    RESTARTS   AGE    IP             NODE             NOMINATED NODE   READINESS GATES
coredns-d798c9dd-lsdnp                           1/1     Running   5          37h    10.42.0.25     c271-k3s-ocrh    <none>           <none>
local-path-provisioner-58fb86bdfd-bcpl7          1/1     Running   5          37h    10.42.0.22     c271-k3s-ocrh    <none>           <none>
metrics-server-6d684c7b5-v9tmh                   1/1     Running   5          37h    10.42.0.24     c271-k3s-ocrh    <none>           <none>
metallb-speaker-4kbmw                            1/1     Running   0          4m7s   192.168.0.14   c271-k3s-agent   <none>           <none>
metallb-controller-75bf779d4f-nb47l              1/1     Running   0          4m7s   10.42.1.45     c271-k3s-agent   <none>           <none>
metallb-speaker-776p9                            1/1     Running   0          4m7s   192.168.0.13   c271-k3s-ocrh    <none>           <none>
nginx-ingress-default-backend-5b967cf596-554bq   1/1     Running   0          98s    10.42.1.46     c271-k3s-agent   <none>           <none>
nginx-ingress-controller-674675d5b6-blndp        1/1     Running   0          98s    10.42.1.47     c271-k3s-agent   <none>           <none>

The nginx-ingress-controller Service gets the external IP 192.168.0.21:

❯ kubectl get services  -n kube-system -l app=nginx-ingress -o wide
NAME                            TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)                      AGE    SELECTOR
nginx-ingress-default-backend   ClusterIP      10.43.170.195   <none>         80/TCP                       112s   app=nginx-ingress,component=default-backend,release=nginx-ingress
nginx-ingress-controller        LoadBalancer   10.43.220.166   192.168.0.21   80:31735/TCP,443:31566/TCP   111s   app=nginx-ingress,component=controller,release=nginx-ingress

I can access the app from the master and the worker by curling the nginx controller pod directly:
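For example, from the master (the pod IP comes from the listing above; the exact curl invocation is illustrative):

curl -I http://10.42.1.47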

HTTP/1.1 200 OK
Server: nginx/1.17.8
Date: Sat, 21 Mar 2020 10:43:34 GMT
Content-Type: text/html
Content-Length: 153
Connection: keep-alive

But the external IP 192.168.0.21 is not reachable from other machines on the local network.

Diagnosis: DHCP is on, and the range 192.168.0.21-192.168.0.30 is completely free. When I assign 192.168.0.21 to the master or the agent via the netplan config, they do get the IP.
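Since MetalLB is configured in layer 2 mode, one diagnostic worth running from another machine on the 192.168.0.0/24 network is to check whether anything answers ARP for the external IP (sketch only; replace eth0 with the actual interface name):

arping -I eth0 192.168.0.21    # does any MAC answer for the external IP?
ip neigh show 192.168.0.21     # inspect the local ARP/neighbour entry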

Please guide me on what I am missing.



Solution 1:[1]

You need to make sure that the client source IP is preserved for traffic arriving on the external IP assigned by MetalLB. To achieve this, set the externalTrafficPolicy field of the ingress-controller Service spec to Local. For example:

apiVersion: v1
kind: Service
metadata:
  name: my-app
  labels:
    helm.sh/chart: webapp-0.1.0
    app.kubernetes.io/name: webapp
    app.kubernetes.io/instance: my-app
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: http
      protocol: TCP
      name: http
  selector:
    app.kubernetes.io/name: webapp
    app.kubernetes.io/instance: my-app
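  # 'Local' preserves the client source IP and only routes external traffic to endpoints on the receiving node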
  externalTrafficPolicy: Local

The default value of the externalTrafficPolicy field is Cluster, so change it to Local.
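For the nginx-ingress-controller Service from the question, one way to apply this is a patch along these lines (service name and namespace taken from the question's setup):

kubectl --namespace kube-system patch svc nginx-ingress-controller \
  -p '{"spec":{"externalTrafficPolicy":"Local"}}'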

Solution 2:[2]

In my setup with Cilium and the HAProxy ingress controller I had to change externalTrafficPolicy from Local to Cluster:

kubectl --namespace ingress-controller patch svc haproxy-ingress \
 -p '{"spec":{"externalTrafficPolicy":"Cluster"}}'

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution 1: Muhammad Arslan Akhtar
Solution 2: Oleg Neumyvakin