Labels from nodes to daemonset/kube-prometheus-exporter-node

prometheus-operator includes a DaemonSet that deploys node-exporter to every node in the cluster. It works, but we are missing some useful label information, for example:

# kubectl get nodes --show-labels
NAME                            STATUS    ROLES     AGE       VERSION    LABELS
ip-1   Ready     master    2d        v1.10.12   ...,kubernetes.io/role=master,...
ip-2   Ready     node      2d        v1.10.12   ...,kubernetes.io/role=node,...

So we have useful info in the labels: ip-1 is a master, ip-2 is a worker, and so on.

But this information is lost on the node-exporter targets, because node labels are not propagated to the DaemonSet's node-exporter pods.

So in Prometheus I can't, for example, group nodes by their role.

Is there a way to achieve this? Thanks!



Solution 1:[1]

It seems that you need to use relabel_configs.

This is an example: https://www.robustperception.io/automatically-monitoring-ec2-instances

P.S. As for roles specifically, you may also find this post useful: How to have labels for machine roles

Update: to get other node details that are not available from the metadata, a sidecar or init container can be used; see, for example: init container for node properties example.
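
For illustration, here is a minimal sketch of that init-container approach (the container name, image, and file path are made up for this example, and it assumes the pod's ServiceAccount has RBAC permission to get nodes): the downward API supplies the node name, and the init container dumps that node's labels to a shared volume for the main container to read.

# Illustrative excerpt of a node-exporter DaemonSet pod template.
spec:
  serviceAccountName: node-exporter       # assumed to be allowed to get nodes
  initContainers:
  - name: fetch-node-labels               # hypothetical name
    image: bitnami/kubectl:latest         # any image that ships kubectl will do
    env:
    - name: NODE_NAME
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName        # downward API: the node this pod runs on
    command:
    - /bin/sh
    - -c
    - |
      kubectl get node "$NODE_NAME" -o jsonpath='{.metadata.labels}' > /node-meta/labels.json
    volumeMounts:
    - name: node-meta
      mountPath: /node-meta
  volumes:
  - name: node-meta
    emptyDir: {}                          # shared with the main container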

Also, there is an open issue about making node labels available to pods: 40610

Solution 2:[2]

The reason that node-exporter metrics in Prometheus lack node information but have pod information is that prometheus-operator provides a ServiceMonitor for node-exporter, which sets up a scrape configuration with a kubernetes_sd_config.role of endpoints; this only gets the __meta_kubernetes_endpoint and __meta_kubernetes_pod meta labels.

We want to use a scrape configuration with a kubernetes_sd_config.role of node instead. For this to work, the node-exporter scrape endpoint must be reachable on the node address; the provided DaemonSet for node-exporter already exposes port 9100 on the node.
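
For reference, this is roughly the part of the DaemonSet that makes that possible (the image tag is illustrative); hostNetwork plus hostPort is what makes port 9100 reachable on the node's own address:

# Illustrative excerpt of the node-exporter DaemonSet pod template.
spec:
  hostNetwork: true                # share the node's network namespace
  hostPID: true
  containers:
  - name: node-exporter
    image: quay.io/prometheus/node-exporter:v1.3.1   # tag is illustrative
    args:
    - --web.listen-address=0.0.0.0:9100
    ports:
    - name: metrics
      containerPort: 9100
      hostPort: 9100               # exposes node-exporter on <node IP>:9100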

The next step is to give prometheus-operator an additional scrape config to discover node-exporter via the nodes rather than the service and copy over any of the node labels we might want:

- job_name: node-exporter
  relabel_configs:
  # Rewrite the discovered node address (Kubelet port) to node-exporter's port 9100.
  - source_labels: [__address__]
    action: replace
    regex: ([^:]+):.*
    replacement: $1:9100
    target_label: __address__
  # Copy the node name and selected node labels onto the scraped target.
  - source_labels: [__meta_kubernetes_node_name]
    target_label: name
  - source_labels: [__meta_kubernetes_node_label_beta_kubernetes_io_arch]
    target_label: arch
  - source_labels: [__meta_kubernetes_node_label_beta_kubernetes_io_instance_type]
    target_label: instance_type
  - source_labels: [__meta_kubernetes_node_label_kubernetes_io_os]
    target_label: os
  - source_labels: [__meta_kubernetes_node_label_topology_kubernetes_io_region]
    target_label: region
  - source_labels: [__meta_kubernetes_node_label_topology_kubernetes_io_zone]
    target_label: zone
  - source_labels: [__meta_kubernetes_node_label_dedicated] # or any other custom label
    target_label: dedicated
  kubernetes_sd_configs:
  - role: node
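
One way to hand this extra scrape config to prometheus-operator is the additionalScrapeConfigs field of the Prometheus custom resource, which references a key in a Secret containing the raw scrape config above (the secret, key, and namespace names below are just examples):

# Store the scrape config above in a Secret, e.g.:
#   kubectl create secret generic additional-scrape-configs \
#     --from-file=node-exporter.yaml -n monitoring
# then reference it from the Prometheus resource:
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: k8s
  namespace: monitoring
spec:
  additionalScrapeConfigs:
    name: additional-scrape-configs   # Secret name (example)
    key: node-exporter.yaml           # key inside the Secret holding the job above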

After this you can delete node-exporter's ServiceMonitor.

The __address__ relabel is important (thanks to How to configure prometheus kubernetes_sd_configs to specify a specific host port?).

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution 1: Stack Overflow
Solution 2: Bracken