EKS Nodes not Registering in Target Group using ALB Ingress Controller
I followed the AWS document below to create an ALB ingress controller:
https://aws.amazon.com/premiumsupport/knowledge-center/eks-alb-ingress-controller-setup/
EKS version: 1.19
All the services were created successfully, with no errors.
But unfortunately, the nodes are not registered in the target groups of the ALB.
I also tried a different version of the ALB ingress controller, but hit the same issue.
I used the example application:
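As a quick sanity check (a minimal sketch, assuming the controller was installed into the kube-system namespace under the name aws-load-balancer-controller, as in the linked AWS guide), the controller deployment and its logs can be inspected with:
kubectl get deployment -n kube-system aws-load-balancer-controller
kubectl logs -n kube-system deployment/aws-load-balancer-controller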
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.1.3/docs/examples/2048/2048_full.yaml
Output below:
Ingress -->
[centos@ip-10-1-68-249 alb-controller]$ kubectl get ing -n game-2048 -o wide
Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
ingress-2048 <none> * k8s-game2048-ingress2-253e697ad8-1355143956.us-east-1.elb.amazonaws.com 80 81s
TargetGroupBinding -->
[centos@ip-10-1-68-249 alb-controller]$ kubectl get TargetGroupBinding -n game-2048 -o wide
NAME SERVICE-NAME SERVICE-PORT TARGET-TYPE ARN AGE
k8s-game2048-service2-3c0ccb9f36 service-2048 80 ip arn:aws:elasticloadbalancing:us-east-1:xxxxxxxxxxxx:targetgroup/k8s-game2048-service2-3c0ccb9f36/faa10866343a792f 3m30s
But the instances are not attached to the target group.
Could anyone help here?
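To confirm that nothing is registered, the target group can also be queried directly. This is a minimal sketch, assuming the AWS CLI is configured for the same account and region; the ARN is the one reported by the TargetGroupBinding above:
aws elbv2 describe-target-health \
  --target-group-arn arn:aws:elasticloadbalancing:us-east-1:xxxxxxxxxxxx:targetgroup/k8s-game2048-service2-3c0ccb9f36/faa10866343a792f
An empty TargetHealthDescriptions list means no targets (pods or instances) were registered.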
Solution 1:
I tried an alternative option.
Describe the existing TargetGroupBinding -->
[centos@ip-10-0-68-81 ~]$ kubectl describe targetgroupbinding -n prod-env --kubeconfig=$prod | grep Target
Kind: TargetGroupBinding
Target Group ARN: arn:aws:elasticloadbalancing:us-east-1:123456789098:targetgroup/k8s-prodenv-prodadmi-873264jwesa/87432kjfhkjds
Target Type: instance
Then I took the ARN of the target group and used the TargetGroupBinding API to create an additional TargetGroupBinding, which worked for me:
apiVersion: elbv2.k8s.aws/v1beta1
kind: TargetGroupBinding
metadata:
  name: k8s-uat-test-1
  namespace: "uat-env"
  labels:
    k8s/environment: staging
spec:
  serviceRef:
    name: uat-test-service
    port: 3002
  targetGroupARN: arn:aws:elasticloadbalancing:us-east-1:123456789098:targetgroup/k8s-prodenv-prodadmi-873264jwesa/87432kjfhkjds
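A minimal sketch of applying and verifying the binding (the file name targetgroupbinding.yaml is just an assumption here):
kubectl apply -f targetgroupbinding.yaml
kubectl get targetgroupbinding -n uat-env
kubectl describe targetgroupbinding k8s-uat-test-1 -n uat-env
Once the binding is reconciled, the controller registers the Service endpoints (or instances, depending on the target type) into the referenced target group.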
Solution 2:
I had exactly the same issue, just a newer cluster version (1.22). The problem was with the security group configuration for the managed nodes.
The first sign of trouble appeared while browsing the logs of the aws-load-balancer-controller pods, where I noticed an error:
{"...","error":"expect exactly one securityGroup tagged with kubernetes.io/cluster/... for eni eni-0d46id..., got: [sg-081baacc1d925f936 sg-0a11d768e92737297]"}
This made me look at the security group configuration. I noticed I was attaching the primary cluster security group directly to the nodes, in addition to their normal node-to-node group, which is redundant.
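One way to confirm the duplication is to list the security groups attached to the node ENI from the error and check which of them carry the kubernetes.io/cluster/<cluster-name> tag; only one should. A minimal sketch with the AWS CLI, using the group IDs from the log line above (replace the ENI ID with the one from your error):
aws ec2 describe-network-interfaces \
  --network-interface-ids <eni-id-from-the-error> \
  --query 'NetworkInterfaces[0].Groups'
aws ec2 describe-security-groups \
  --group-ids sg-081baacc1d925f936 sg-0a11d768e92737297 \
  --query 'SecurityGroups[].{Id:GroupId,Tags:Tags}'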
Since I used Terraform for the deployment with the official aws-eks module [1], I only had to remove this parameter from the node group configuration:
attach_cluster_primary_security_group = true
Once deployed, and after restarting the ingress controller, it automatically picked up the services and created the necessary resources.
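Restarting the controller can be done with a rollout restart (a minimal sketch, assuming the default deployment name and namespace from the official install):
kubectl rollout restart deployment/aws-load-balancer-controller -n kube-system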
[1] https://github.com/terraform-aws-modules/terraform-aws-eks
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
Solution | Source |
---|---|
Solution 1 | Mohamed Jawad |
Solution 2 | trust512 |