How to attain atomicity while publishing Kafka messages

I have a scenario where I fetch a set of records from a database and then iterate over them, publishing each record to a Kafka topic. Suppose I retrieve 10 records, the first 5 are published successfully, and an exception occurs on the 6th record; in that case I want to revert the messages that were already pushed to the topic. This is similar to database transactionality. Can we attain atomicity in Kafka?

Thanks.



Solution 1:[1]

Yes, you can use transactions; the records remain in the log, and Kafka writes a marker into the log to indicate whether the transaction was committed or rolled back.
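
As a rough illustration (not taken from the original answer), the plain kafka-clients Java API could be used as below; the broker address, topic name, transactional.id, and the fetchRecords() helper are placeholders standing in for the database query described in the question. Everything sent between beginTransaction() and commitTransaction() becomes visible to read_committed consumers only if the commit succeeds; an exception on, say, the 6th record leads to abortTransaction(), so the first 5 records are marked as rolled back. (Fatal errors such as ProducerFencedException cannot be aborted and require closing the producer instead.)

    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.KafkaException;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class TransactionalPublisher {

        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker address
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
            // A stable transactional.id switches the producer into transactional (and idempotent) mode
            props.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "record-publisher-tx-1");

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                producer.initTransactions();
                try {
                    producer.beginTransaction();
                    // stands in for the records fetched from the database in the question
                    for (String record : fetchRecords()) {
                        producer.send(new ProducerRecord<>("my-topic", record)); // assumed topic name
                    }
                    // records become visible to read_committed consumers only after this commit
                    producer.commitTransaction();
                } catch (KafkaException e) {
                    // the abort marker tells read_committed consumers to skip everything sent above
                    producer.abortTransaction();
                    throw e;
                }
            }
        }

        // placeholder for the database fetch described in the question
        private static List<String> fetchRecords() {
            return List.of("record-1", "record-2", "record-3");
        }
    }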

Consumers must use isolation.level=read_committed to avoid reading rolled-back records.
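
A minimal consumer sketch with that setting (the group id, topic name, and broker address are assumptions, not from the original post):

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class ReadCommittedConsumer {

        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker address
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "record-consumer");         // assumed group id
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
            // only return records from committed transactions; the default is read_uncommitted
            props.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("my-topic")); // assumed topic name
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.println(record.value());
                }
            }
        }
    }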

https://docs.spring.io/spring-kafka/docs/2.8.4-SNAPSHOT/reference/html/#transactions

https://kafka.apache.org/documentation/#consumerconfigs_isolation.level

Solution 2:[2]

"revert back the messages that are pushed into the topic"

Once data is in the topic, it cannot be modified; you would need to delete the entire topic and restart from the very beginning.

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution 1: Gary Russell
Solution 2: OneCricketeer