Why does EKS say my fluent-bit.conf is not valid?

I am trying to set up Fluent Bit for Kubernetes on EKS + Fargate. I was able to get all logs going to one general log group in CloudWatch, but now when I add fluent-bit.conf: | to the data: field and try to apply the update to my cluster, I get this error:

for: "fluentbit-config.yaml": admission webhook "0500-amazon-eks-fargate-configmaps-admission.amazonaws.com" denied the request: fluent-bit.conf is not valid. Please only provide output.conf, filters.conf or parsers.conf in the logging configmap

What sticks out the most to me is that the error message asks me to provide only output, filter, or parser configurations.

It matches up with other examples I found online, but it seems like the fluent-bit.conf file is missing from the cluster I am updating, or something along those lines. The tutorials I have followed do not mention installing a file, so I am lost as to why I am getting this error.

My fluentbit-config.yaml file looks like this:

kind: Namespace
apiVersion: v1
metadata:
  name: aws-observability
  labels:
    aws-observability: enabled
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: aws-logging
  namespace: aws-observability
  labels:
    k8s-app: fluent-bit
data:
  fluent-bit.conf: |
    @INCLUDE input-kubernetes.conf
    
  input-kubernetes.conf: |
    [INPUT]
        Name tail
        Parser docker
        Tag logger
        Path /var/log/containers/*logger-server*.log
        
  output.conf: |
    [OUTPUT]
        Name cloudwatch_logs
        Match logger
        region us-east-1
        log_group_name fluent-bit-cloudwatch
        log_stream_prefix from-fluent-bit-
        auto_create_group On
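
For what it is worth, a version of the ConfigMap that uses only the keys the webhook lists would look roughly like this. This is just a sketch based on the error message: on Fargate the [INPUT] section cannot be supplied, so the Match * below is a guess, since the Tag logger routing from my input-kubernetes.conf has nowhere to go.

kind: ConfigMap
apiVersion: v1
metadata:
  name: aws-logging
  namespace: aws-observability
data:
  output.conf: |
    [OUTPUT]
        Name cloudwatch_logs
        # Match * because the tag is assigned by the Fargate-managed input, not by us
        Match *
        region us-east-1
        log_group_name fluent-bit-cloudwatch
        log_stream_prefix from-fluent-bit-
        auto_create_group On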


Solution 1:[1]

I wonder if anyone has managed to process the 'log' field with the Fargate-managed (hidden) Fluent Bit sidecar using a parser, as per the Fluent Bit configuration documentation. Here is a snippet of my aws-logging ConfigMap, which pushes logs to both outputs, but sadly the parsing never happens.

I would like to avoid hacky regexes when viewing logs in OpenSearch, which proper parsing of the 'log' field should make unnecessary.

PS. I noticed the Fluent Bit docs refer to a so-called 'docker' parser, but Fargate nodes use containerd as the container runtime, which could potentially be a problem?

data:
  filters.conf: |
    [FILTER]
        Name                kubernetes
        Match               kube.*
        Merge_Log           On
        Merge_Log_Key       log_proccessed
        Buffer_Size         0
        Kube_Meta_Cache_TTL 300s
        Parser              docker
  flb_log_cw: 'true'
  output.conf: |
    [OUTPUT]
        Name cloudwatch_logs
        Match   *
        region eu-west-1
        log_group_name /aws/eks/bs-277-main/container
        log_stream_prefix log-
    [OUTPUT]
        Name  es
        Match *
        Host  vpc-my-amazing-os-endpoint.eu-west-1.es.amazonaws.com
        Port  443
        Index kubernetes
        Type  doc
        AWS_Auth On
        AWS_Region eu-west-1
        tls   On
  parsers.conf: |
    [PARSER]
        Name         docker
        Format       json
        Time_Key     time
        Time_Format  %Y-%m-%dT%H:%M:%S.%L
        Time_Keep    On

I came across this example of a Fluent Bit config with containerd log parsing, but it is based on adding a Parser parameter to the [INPUT] section, which is ignored on Fargate since the input is presumably managed by AWS.
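
For what it is worth, the Fluent Bit documentation does define a standard parser for the containerd/CRI log line format, and the kubernetes filter can be pointed at a parser through its Merge_Parser option instead of the [INPUT] section. Whether the Fargate-managed input leaves the log field in a shape this can actually parse is exactly what I have not been able to confirm, so treat this as a sketch:

  parsers.conf: |
    [PARSER]
        Name        cri
        Format      regex
        # CRI log line format: <time> <stream> <logtag> <message>
        Regex       ^(?<time>[^ ]+) (?<stream>stdout|stderr) (?<logtag>[^ ]*) (?<message>.*)$
        Time_Key    time
        Time_Format %Y-%m-%dT%H:%M:%S.%L%z
  filters.conf: |
    [FILTER]
        Name         kubernetes
        Match        kube.*
        Merge_Log    On
        # Merge_Parser is a documented kubernetes-filter option, standing in for the
        # Parser param I cannot set on [INPUT]
        Merge_Parser cri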

It is very unfortunate that such a crucial observability component as Fluent Bit has so little documentation for AWS Fargate.

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow
