AKS - Terraform's network_profile block

When creating an AKS cluster using Terraform and the azurerm provider, you can specify this block:

  network_profile {
    network_plugin     = var.network_plugin
    network_policy     = var.network_policy
    load_balancer_sku  = "Standard"
    docker_bridge_cidr = var.docker_bridge_cidr
    service_cidr       = var.service_cidr
    dns_service_ip     = var.dns_service_ip
  }

I've read this page (and many more!) a few times, but I still don't quite understand what it all means.

  • network_plugin: kubenet vs Azure CNI; why use one over the other? I understood that kubenet carries less risk of IP exhaustion than Azure CNI, but Azure CNI is recommended when enabling AAD Pod Identity - am I right?
  • network_policy: I think this is how one manages Kubernetes' internal network policies
  • load_balancer_sku : this one is clear to me; no problem
  • docker_bridge_cidr: I think this isn't really used by Azure and is more of a legacy setting, since Docker needs to be configured on the worker nodes.
  • service_cidr : I have no idea what the doc means by "The Network Range used by the Kubernetes service. Changing this forces a new resource to be created."
  • dns_service_ip: as above, I'm not really sure
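
For concreteness, here is roughly how I understand those values fitting together - the CIDRs below are just the illustrative defaults from the docs, not my actual ones, so please correct me if my comments are wrong:

  network_profile {
    # "kubenet": pods get IPs from an internal, NAT-ed range;
    # "azure" (CNI): every pod gets an IP from the VNet subnet itself
    network_plugin = "azure"

    # e.g. "calico" or "azure", to enforce Kubernetes NetworkPolicy objects
    network_policy = "calico"

    load_balancer_sku = "Standard"

    # address range for the docker0 bridge on each node;
    # must not overlap with the subnet or with service_cidr (?)
    docker_bridge_cidr = "172.17.0.1/16"

    # virtual range that ClusterIP Services are allocated from;
    # I assume it is never routed outside the cluster (?)
    service_cidr = "10.0.0.0/16"

    # IP of the cluster DNS service; must fall inside service_cidr (?)
    dns_service_ip = "10.0.0.10"
  }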

Also, when I give my default_node_pool a vnet_subnet_id to live in, it populates the given subnet with 31 scale-set instances, even though I've only given my cluster a min_count of 1 and a max_count of 2, and the subnet is a /24 (251 free IPs). Where do those 31 instances come from?
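
For context, my node pool is wired up roughly like this (the name, VM size, and subnet reference are illustrative, not my exact values):

  default_node_pool {
    name                = "default"
    vm_size             = "Standard_DS2_v2"
    enable_auto_scaling = true
    min_count           = 1
    max_count           = 2
    # the /24 subnet the scale set lands in
    vnet_subnet_id      = azurerm_subnet.aks.id
  }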




Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow
