My K8s node doesn't have a podCIDR in its configuration?
I deployed a K8s cluster on AWS EKS (node group) with 3 nodes. I'd like to see the pod CIDR for each node, but this command returns nothing: $ kubectl get nodes -o jsonpath='{.items[*].spec.podCIDR}'. Why is there no CIDR in the node configuration?
$ kubectl get nodes
NAME                                            STATUS   ROLES    AGE   VERSION
ip-10-0-1-193.ap-southeast-2.compute.internal   Ready    <none>   94d   v1.21.5-eks-bc4871b
ip-10-0-2-66.ap-southeast-2.compute.internal    Ready    <none>   22m   v1.21.5-eks-bc4871b
ip-10-0-2-96.ap-southeast-2.compute.internal    Ready    <none>   24m   v1.21.5-eks-bc4871b
Below is the info for one of the nodes.
$ kubectl describe node ip-10-0-1-193.ap-southeast-2.compute.internal
Name: ip-10-0-1-193.ap-southeast-2.compute.internal
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/instance-type=t3.large
beta.kubernetes.io/os=linux
eks.amazonaws.com/capacityType=ON_DEMAND
eks.amazonaws.com/nodegroup=elk
eks.amazonaws.com/nodegroup-image=ami-00c56588b2d911d26
failure-domain.beta.kubernetes.io/region=ap-southeast-2
failure-domain.beta.kubernetes.io/zone=ap-southeast-2a
kubernetes.io/arch=amd64
kubernetes.io/hostname=ip-10-0-1-193.ap-southeast-2.compute.internal
kubernetes.io/os=linux
node.kubernetes.io/instance-type=t3.large
topology.ebs.csi.aws.com/zone=ap-southeast-2a
topology.kubernetes.io/region=ap-southeast-2
topology.kubernetes.io/zone=ap-southeast-2a
Annotations: csi.volume.kubernetes.io/nodeid: {"ebs.csi.aws.com":"i-0da5d02f6c203fe6b"}
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Fri, 19 Nov 2021 16:04:37 +1100
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: ip-10-0-1-193.ap-southeast-2.compute.internal
AcquireTime: <unset>
RenewTime: Mon, 21 Feb 2022 20:39:23 +1100
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Mon, 21 Feb 2022 20:37:46 +1100 Fri, 26 Nov 2021 15:42:06 +1100 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Mon, 21 Feb 2022 20:37:46 +1100 Fri, 26 Nov 2021 15:42:06 +1100 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Mon, 21 Feb 2022 20:37:46 +1100 Fri, 26 Nov 2021 15:42:06 +1100 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Mon, 21 Feb 2022 20:37:46 +1100 Fri, 26 Nov 2021 15:42:06 +1100 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 10.0.1.193
Hostname: ip-10-0-1-193.ap-southeast-2.compute.internal
InternalDNS: ip-10-0-1-193.ap-southeast-2.compute.internal
Capacity:
attachable-volumes-aws-ebs: 25
cpu: 2
ephemeral-storage: 20959212Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 8047100Ki
pods: 35
Allocatable:
attachable-volumes-aws-ebs: 25
cpu: 1930m
ephemeral-storage: 18242267924
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 7289340Ki
pods: 35
System Info:
Machine ID: ec29e60ae2a5ed86515b0b6e7fe39341
System UUID: ec29e60a-e2a5-ed86-515b-0b6e7fe39341
Boot ID: f75bc84f-fbd5-4414-87c8-669a8b4e3c62
Kernel Version: 5.4.149-73.259.amzn2.x86_64
OS Image: Amazon Linux 2
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://20.10.7
Kubelet Version: v1.21.5-eks-bc4871b
Kube-Proxy Version: v1.21.5-eks-bc4871b
ProviderID: aws:///ap-southeast-2a/i-0da5d02f6c203fe6b
Non-terminated Pods: (15 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
cert-manager cert-manager-68ff46b886-ndnm8 0 (0%) 0 (0%) 0 (0%) 0 (0%) 89d
cert-manager cert-manager-cainjector-7cdbb9c945-bzfx2 0 (0%) 0 (0%) 0 (0%) 0 (0%) 89d
cert-manager cert-manager-webhook-58d45d56b8-2mr76 0 (0%) 0 (0%) 0 (0%) 0 (0%) 89d
default elk-es-node-1 1 (51%) 100m (5%) 4Gi (57%) 50Mi (0%) 32m
default my-nginx-5b56ccd65f-sndqv 0 (0%) 0 (0%) 0 (0%) 0 (0%) 18m
elastic-system elastic-operator-0 100m (5%) 1 (51%) 150Mi (2%) 512Mi (7%) 89d
kube-system aws-load-balancer-controller-9c59c86d8-86ld2 100m (5%) 200m (10%) 200Mi (2%) 500Mi (7%) 89d
kube-system aws-node-mhqp6 10m (0%) 0 (0%) 0 (0%) 0 (0%) 94d
kube-system cluster-autoscaler-76fd4db4c-j59vm 100m (5%) 100m (5%) 600Mi (8%) 600Mi (8%) 89d
kube-system coredns-68f7974869-2x4qc 100m (5%) 0 (0%) 70Mi (0%) 170Mi (2%) 89d
kube-system coredns-68f7974869-wfhzq 100m (5%) 0 (0%) 70Mi (0%) 170Mi (2%) 89d
kube-system ebs-csi-controller-7584b68c57-ksvkc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 89d
kube-system ebs-csi-controller-7584b68c57-rkbq4 0 (0%) 0 (0%) 0 (0%) 0 (0%) 89d
kube-system ebs-csi-node-nxfkz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 94d
kube-system kube-proxy-zcqg4 100m (5%) 0 (0%) 0 (0%) 0 (0%) 94d
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 1610m (83%) 1400m (72%)
memory 5186Mi (72%) 2002Mi (28%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
attachable-volumes-aws-ebs 0 0
Events: <none>
Solution 1:
From the kubelet documentation (https://kubernetes.io/docs/reference/config-api/kubelet-config.v1beta1/), the kubelet's own podCIDR setting is only used in standalone mode. In a cluster, node.spec.podCIDR is populated by the kube-controller-manager only when it is configured to allocate per-node CIDRs (as kubenet-style networking requires). EKS does not do this: the Amazon VPC CNI (the aws-node DaemonSet visible in your pod list) assigns pod IP addresses directly from the VPC subnets, so the field is never set and the jsonpath query comes back empty. You can make the absence explicit with kubectl get nodes -o custom-columns='NAME:.metadata.name,PODCIDR:.spec.podCIDR', which prints <none> for the missing field instead of silently collapsing the output.
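As a minimal illustration (run against a hypothetical, trimmed Node manifest rather than a live cluster), the lookup that kubectl's jsonpath performs returns an empty string simply because spec.podCIDR is absent from the object:

```shell
# Hypothetical, trimmed Node list mirroring the EKS node above:
# spec carries providerID but no podCIDR field.
cat > node-list.json <<'EOF'
{
  "items": [
    {
      "metadata": {"name": "ip-10-0-1-193.ap-southeast-2.compute.internal"},
      "spec": {"providerID": "aws:///ap-southeast-2a/i-0da5d02f6c203fe6b"}
    }
  ]
}
EOF

# Same lookup kubectl performs; a missing key yields an empty string,
# so each node name is printed with '' where the CIDR would be.
python3 - <<'EOF'
import json

with open("node-list.json") as f:
    nodes = json.load(f)["items"]

for n in nodes:
    print(n["metadata"]["name"], repr(n["spec"].get("podCIDR", "")))
EOF
```

On a cluster where the controller-manager does allocate node CIDRs (e.g. a kubeadm cluster using kubenet or Flannel), the same query would print one CIDR per node.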
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | Vit |

