Alertmanager randomly logs "unexpected status code 422" error messages

I have deployed Prometheus from the community Helm chart (14.6.0), which also runs Alertmanager. Alertmanager logs these templating-related errors from time to time, and the error message itself doesn't say anything useful. The odd part is that I have re-validated the config with amtool and it reported no errors.

level=error ts=2021-08-17T14:43:08.787Z caller=dispatch.go:309 component=dispatcher msg="Notify for alerts failed" num_alerts=2 err="opsgenie/opsgenie[0]: notify retry canceled due to unrecoverable error after 1 attempts: unexpected status code 422: {\"message\":\"Request body is not processable. Please check the errors.\",\"errors\":{\"message\":\"Message can not be empty.\"},\"took\":0.0,\"requestId\":\"38c37c18-5635-48bc-bb69-bda03e232cce\"}"
level=debug ts=2021-08-17T14:43:08.798Z caller=notify.go:685 component=dispatcher receiver=opsgenie integration=opsgenie[0] msg="Notify success" attempts=1
level=error ts=2021-08-17T14:43:08.804Z caller=dispatch.go:309 component=dispatcher msg="Notify for alerts failed" num_alerts=2 err="opsgenie/opsgenie[0]: notify retry canceled due to unrecoverable error after 1 attempts: unexpected status code 422: {\"message\":\"Request body is not processable. Please check the errors.\",\"errors\":{\"message\":\"Message can not be empty.\"},\"took\":0.001,\"requestId\":\"70d2ac84-3422-4fe6-9d8b-e601fdc37b25\"}"
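The useful part of the 422 response is the JSON body Opsgenie sends back: its `errors` object names exactly which request field was rejected. Pulling it out of the first log line above (a quick sketch, nothing Alertmanager-specific):

```python
import json

# JSON body returned by Opsgenie, copied verbatim from the first log line above
body = ('{"message":"Request body is not processable. Please check the errors.",'
        '"errors":{"message":"Message can not be empty."},'
        '"took":0.0,"requestId":"38c37c18-5635-48bc-bb69-bda03e232cce"}')

resp = json.loads(body)
print(resp["errors"])  # {'message': 'Message can not be empty.'}
```

So the rejection is not about the request as a whole: the `message` field of the alert payload rendered to an empty string.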

Monitoring works and alerts are being delivered; I would just like to understand how to interpret this error and what could be wrong, since enabling debug mode did not produce any additional information.
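One likely source of an intermittently empty message (an assumption, since the failing alert groups aren't shown): `.CommonAnnotations` contains only the annotations that are present with an identical value on every alert in a group. A group of `num_alerts=2` where one alert lacks the `message` annotation therefore renders `{{ .CommonAnnotations.message }}` as an empty string. A toy sketch of that intersection logic (the `common_annotations` helper and the sample data are hypothetical, for illustration only):

```python
from functools import reduce

def common_annotations(alerts):
    """Annotations present with the same value on every alert in the group
    (mirrors how Alertmanager computes .CommonAnnotations)."""
    return reduce(lambda acc, a: {k: v for k, v in acc.items() if a.get(k) == v},
                  alerts[1:], dict(alerts[0]))

group = [
    {"message": "Pod OOMKilled", "runbook": "wiki/oom"},
    {"runbook": "wiki/oom"},  # second alert has no "message" annotation
]
print(common_annotations(group))  # {'runbook': 'wiki/oom'} -> no "message" key
```

This would also explain why the error is intermittent: groups whose alerts all share the same `message` annotation notify fine, while mixed groups render an empty message and get the 422.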

alertmanager config:

global: {}
receivers:
- name: opsgenie
  opsgenie_configs:
  - api_key: XXX
    api_url: https://api.eu.opsgenie.com/
    details:
      Prometheus alert: ' {{ .CommonLabels.alertname }}, {{ .CommonLabels.namespace }}, {{ .CommonLabels.pod }}, {{ .CommonLabels.dimension_CacheClusterId }}, {{ .CommonLabels.dimension_DBInstanceIdentifier }}, {{ .CommonLabels.dimension_DBClusterIdentifier }}'
    http_config: {}
    message: '{{ .CommonAnnotations.message }}'
    priority: '{{ if eq .CommonLabels.severity "critical" }}P2{{ else if eq .CommonLabels.severity "high" }}P3{{ else if eq .CommonLabels.severity "warning" }}P4{{ else }}P5{{ end }}'
    send_resolved: true
    tags: ' Prometheus, {{ .CommonLabels.namespace }}, {{ .CommonLabels.severity }}, {{ .CommonLabels.alertname }}, {{ .CommonLabels.pod }}, {{ .CommonLabels.kubernetes_node }}, {{ .CommonLabels.dimension_CacheClusterId }}, {{ .CommonLabels.dimension_DBInstanceIdentifier }}, {{ .CommonLabels.dimension_Cluster_Name }}, {{ .CommonLabels.dimension_DBClusterIdentifier }} '
- name: deadmansswitch
  webhook_configs:
  - http_config:
      basic_auth:
        password: XXX
    send_resolved: true
    url: https://api.eu.opsgenie.com/v2/heartbeats/prometheus-nonprod/ping
- name: blackhole
route:
  group_by:
  - alertname
  - namespace
  - kubernetes_node
  - dimension_CacheClusterId
  - dimension_DBInstanceIdentifier
  - dimension_Cluster_Name
  - dimension_DBClusterIdentifier
  - server_name
  group_interval: 5m
  group_wait: 10s
  receiver: opsgenie
  repeat_interval: 5m
  routes:
  - group_interval: 1m
    match:
      alertname: DeadMansSwitch
    receiver: deadmansswitch
    repeat_interval: 1m
  - match_re:
      namespace: XXX
  - match_re:
      alertname: HighMemoryUsage|HighCPULoad|CPUThrottlingHigh
  - match_re:
      namespace: .+
    receiver: blackhole
  - group_by:
    - instance
    match:
      alertname: PrometheusBlackboxEndpoints
  - match_re:
      alertname: .*
  - match_re:
      kubernetes_node: .*
  - match_re:
      dimension_CacheClusterId: .*
  - match_re:
      dimension_DBInstanceIdentifier: .*
  - match_re:
      dimension_Cluster_Name: .*

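Given the config above, the suspect line is `message: '{{ .CommonAnnotations.message }}'`: Opsgenie's Alert API rejects any request whose `message` field is empty, and `amtool check-config` only validates YAML and template syntax, not the values templates render at notify time, which is why it passes. A hedged sketch of a fallback template (adjust the annotation and label names to your setup):

```yaml
# Fall back to the alert name when the grouped alerts share no common
# "message" annotation, so the Opsgenie "message" field is never empty.
message: '{{ if .CommonAnnotations.message }}{{ .CommonAnnotations.message }}{{ else }}{{ .CommonLabels.alertname }}{{ end }}'
```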
Source: Stack Overflow, licensed under CC BY-SA 3.0.