Kafka Connect dead letter queue and Elasticsearch sink connector problem

I will keep this as short as possible:

I have a Kafka Connect cluster using JSON serialization. We POST one connector with the ElasticsearchSinkConnector class to index data from topics, ignoring keys and schemas. We run Confluent 5.5.0 and the ElasticsearchSinkConnector plugin is also version 5.5.0, but I have also tried this locally with the 11.x version of the plugin.
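
A minimal sketch of how such a connector is registered, assuming the Connect REST API on localhost:8083, a source topic named `logs`, and Elasticsearch on localhost:9200 (the connector name, topic, and URLs are placeholders, not my exact values):

```bash
# Register the Elasticsearch sink connector over the Connect REST API.
# Keys and schemas are ignored; values are plain schemaless JSON.
curl -X POST http://localhost:8083/connectors \
  -H "Content-Type: application/json" \
  -d '{
    "name": "es-sink",
    "config": {
      "connector.class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
      "topics": "logs",
      "connection.url": "http://localhost:9200",
      "key.ignore": "true",
      "schema.ignore": "true",
      "key.converter": "org.apache.kafka.connect.storage.StringConverter",
      "value.converter": "org.apache.kafka.connect.json.JsonConverter",
      "value.converter.schemas.enable": "false"
    }
  }'
```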

What the sink connector does with errors, and which messages it sends to the dead letter queue, is quite strange. If the error is a conversion error, for example when I produce a plain string like "this is a message" to the topic while running with errors.tolerance: all and a dead letter queue topic configured, the message does end up in the DLQ topic, because the JSON converter cannot deserialize what the producer sent. That part works as expected.
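
These are the error-handling properties I mean (standard Kafka Connect sink properties; the DLQ topic name is a placeholder), plus a quick way to reproduce the converter failure:

```bash
# Added to the connector "config" block above (sketch):
#   "errors.tolerance": "all",
#   "errors.deadletterqueue.topic.name": "dlq-topic",
#   "errors.deadletterqueue.topic.replication.factor": "1",
#   "errors.deadletterqueue.context.headers.enable": "true"

# A bare string is not valid JSON, so the JsonConverter fails and the record
# is routed to dlq-topic, which is the behaviour I expect.
echo 'this is a message' | kafka-console-producer \
  --broker-list localhost:9092 --topic logs
```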

But let's say my Elasticsearch index has a field "number" whose mapping type is strictly integer. When I produce a record containing {"number": "this is not an integer"}, the message reaches the broker without a problem and the sink connector consumes it, but when it is time to actually index the document into Elasticsearch, the request fails with a JSON parsing error because of the mapping configuration.
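
For illustration, assuming a mapping along these lines (index name and ports are placeholders, Elasticsearch 7.x mapping syntax):

```bash
# Index with a strict integer field "number".
curl -X PUT http://localhost:9200/logs \
  -H "Content-Type: application/json" \
  -d '{"mappings": {"properties": {"number": {"type": "integer"}}}}'

# Valid JSON, so the converter accepts it, but Elasticsearch rejects the
# document at index time because the value cannot be coerced to an integer.
echo '{"number": "this is not an integer"}' | kafka-console-producer \
  --broker-list localhost:9092 --topic logs
```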

The connector is tolerating errors, so it keeps running just fine, but I see no messages in the dead letter queue; the JSON document {"number": "this is not an integer"} simply disappears. Is there a way to get messages that fail on the Elasticsearch client side written to the dead letter queue?
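
For reference, the DLQ can be checked with the console consumer (a sketch, using the placeholder names above); the bare-string record from the converter failure shows up here, but the rejected {"number": ...} document never does:

```bash
# Read the dead letter queue topic from the beginning.
kafka-console-consumer --bootstrap-server localhost:9092 \
  --topic dlq-topic --from-beginning
```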



