I'm using Kafka to send data from multiple MongoDB collections into a single topic using the MongoDB source connector, and to upsert the data into different Oracle tables using the JDBC sink connector.
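A minimal sketch of the kind of JDBC sink configuration this describes, assuming Confluent's JdbcSinkConnector; the connection details, key field, and topic name are placeholders. Fanning one topic out to different tables per collection usually also needs an SMT to rewrite the topic name before table.name.format is applied.

{
  "name": "oracle-upsert-sink",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
    "topics": "events",
    "connection.url": "jdbc:oracle:thin:@//oracle-host:1521/ORCLPDB1",
    "connection.user": "kafka_user",
    "connection.password": "********",
    "insert.mode": "upsert",
    "pk.mode": "record_key",
    "pk.fields": "id",
    "table.name.format": "${topic}"
  }
}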
Background: Our software solution collects data ("events") per customer. Some customers (a small fraction, ~3%) ask to get this data into "their systems"
I'm new to Kafka and would like to know why there are database-specific connectors, like the Redshift Sink Connector, and why we should not just go for the generic JDBC sink connector.
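For reference, the generic option in question looks roughly like this hypothetical JdbcSinkConnector pointed at Redshift over its JDBC driver; host and credentials are placeholders. Warehouse-specific sinks typically exist because they batch loads through staging (e.g. S3 plus COPY) instead of row-by-row JDBC inserts.

{
  "name": "redshift-via-generic-jdbc",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
    "topics": "events",
    "connection.url": "jdbc:redshift://redshift-host:5439/dev",
    "connection.user": "kafka_user",
    "connection.password": "********",
    "insert.mode": "insert",
    "auto.create": "true"
  }
}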
We need to push JSON records from a Kafka topic to a PostgreSQL database. The JSON is compliant with https://json-schema.org/draft-07/json-schema-release-note
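A minimal sketch, assuming the records were produced with Confluent's JSON Schema serializer and a Schema Registry is available (the JDBC sink needs a declared schema); connection details and topic are placeholders.

{
  "name": "postgres-json-sink",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
    "topics": "json-events",
    "connection.url": "jdbc:postgresql://postgres-host:5432/events_db",
    "connection.user": "kafka_user",
    "connection.password": "********",
    "value.converter": "io.confluent.connect.json.JsonSchemaConverter",
    "value.converter.schema.registry.url": "http://schema-registry:8081",
    "insert.mode": "insert",
    "auto.create": "true"
  }
}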
I am setting up Strimzi Kafka MirrorMaker2 in our test environment, which receives on average 100k messages per 5 minutes. We have around 25 topics and 900 partitions
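Under the hood MirrorMaker2 runs Connect connectors; a minimal sketch of the MirrorSourceConnector configuration involved, with placeholder cluster aliases and bootstrap servers (in Strimzi this is normally expressed through a KafkaMirrorMaker2 custom resource instead). tasks.max bounds how many tasks share the ~900 partitions.

{
  "name": "mm2-source",
  "config": {
    "connector.class": "org.apache.kafka.connect.mirror.MirrorSourceConnector",
    "source.cluster.alias": "test",
    "target.cluster.alias": "dr",
    "source.cluster.bootstrap.servers": "source-kafka:9092",
    "target.cluster.bootstrap.servers": "target-kafka:9092",
    "topics": ".*",
    "tasks.max": "8"
  }
}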
Setup: I've installed the latest (7.0.1) version of Confluent Platform in standalone mode on an Ubuntu virtual machine. Python producer for Avro format: using this sample
I want to use the PLC4X Connector (https://www.confluent.io/hub/apache/kafka-connect-plc4x-plc4j) to connect OPC UA (Prosys Simulation Server) with Kafka. However
How does Kafka deal with multiple versions of the same connector plugin provided on the CLASSPATH? For example, let's say I put both mongo-kafka-1.0.0-all.jar and
We have a Debezium source connector working perfectly fine, and one of the properties set is, for example: "transforms.SetSchemaMetadata.schema.name": "myschem
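For context, that property belongs to Connect's built-in SetSchemaMetadata SMT; a minimal sketch of the full transform block, with a placeholder schema name.

{
  "transforms": "SetSchemaMetadata",
  "transforms.SetSchemaMetadata.type": "org.apache.kafka.connect.transforms.SetSchemaMetadata$Value",
  "transforms.SetSchemaMetadata.schema.name": "com.example.MyValueSchema"
}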
I have enabled "store.kafka.keys": "true", "store.kafka.headers": "true", "keys.format.class": "io.confluent.connect.s3.format.json.JsonFormat", "headers.format.class":
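These are S3 sink connector properties; in context they look roughly like the sketch below (bucket and topic are placeholders). With them enabled, the connector writes separate key and header files alongside each value file.

{
  "connector.class": "io.confluent.connect.s3.S3SinkConnector",
  "topics": "events",
  "s3.bucket.name": "my-bucket",
  "format.class": "io.confluent.connect.s3.format.json.JsonFormat",
  "store.kafka.keys": "true",
  "keys.format.class": "io.confluent.connect.s3.format.json.JsonFormat",
  "store.kafka.headers": "true",
  "headers.format.class": "io.confluent.connect.s3.format.json.JsonFormat"
}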
I have multiple questions about the Kafka Connect S3 sink connector. 1. I was wondering if it's possible, using the S3 sink of Kafka Connect, to save records with multiple
I have a Kafka Connect task which fetches data from a topic with 3 partitions and sends the data to a Cassandra sink, so I have Kafka Connect in distributed mode with
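A minimal sketch, assuming the DataStax Apache Kafka Connector as the Cassandra sink; contact point, keyspace, table, and field mapping are placeholders. With tasks.max of 3, each task can own one of the 3 partitions.

{
  "name": "cassandra-sink",
  "config": {
    "connector.class": "com.datastax.oss.kafka.sink.CassandraSinkConnector",
    "topics": "events",
    "tasks.max": "3",
    "contactPoints": "cassandra-host",
    "loadBalancing.localDc": "datacenter1",
    "topic.events.my_keyspace.my_table.mapping": "id=key, payload=value.payload"
  }
}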
I have installed Confluent 6.2.0 on my 3 Kafka nodes, also installed confluentinc-kafka-connect-s3-10.0.1 on the 3 nodes, and modified the quickstart-s3.properties
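For reference, quickstart-s3.properties contains settings along these lines (bucket, region, and topic are placeholders); note that a .properties file like this only drives standalone mode, while a 3-node distributed cluster takes the same settings as JSON via the REST API.

name=s3-sink
connector.class=io.confluent.connect.s3.S3SinkConnector
tasks.max=1
topics=my-topic
s3.region=us-east-1
s3.bucket.name=my-bucket
flush.size=1000
storage.class=io.confluent.connect.s3.storage.S3Storage
format.class=io.confluent.connect.s3.format.avro.AvroFormat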
I have a large Confluent Kafka cluster comprising multiple sub-clusters: one for ZooKeeper, another for the Kafka brokers with Schema Registry and KSQL streaming
Setup: Multiple independent source systems push Avro events into a Kafka topic. A Kafka S3 sink connector reads the Avro events from this topic and writes them into S3 partitions
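The S3 path layout is governed by the connector's partitioner; a minimal sketch of a time-based layout (duration, path format, and timezone are placeholders).

{
  "partitioner.class": "io.confluent.connect.storage.partitioner.TimeBasedPartitioner",
  "partition.duration.ms": "3600000",
  "path.format": "'year'=YYYY/'month'=MM/'day'=dd/'hour'=HH",
  "locale": "en-US",
  "timezone": "UTC",
  "timestamp.extractor": "Record"
}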
Load multiple PostgreSQL tables into multiple Kafka topics in a Google Cloud environment, using Pub/Sub or Kafka Connect.
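For the Kafka Connect route, a minimal sketch using Confluent's JDBC source connector; connection details, table names, and the incrementing column are placeholders. Each whitelisted table lands in its own topic under the prefix (pg-table_a, pg-table_b).

{
  "name": "postgres-source",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
    "connection.url": "jdbc:postgresql://postgres-host:5432/mydb",
    "connection.user": "kafka_user",
    "connection.password": "********",
    "mode": "incrementing",
    "incrementing.column.name": "id",
    "table.whitelist": "table_a,table_b",
    "topic.prefix": "pg-"
  }
}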
I have a topic that will eventually have lots of different schemas on it. For now it just has one. I've created a Connect job via REST like this: { "name"
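One hedged sketch of how multiple schemas per topic are usually accommodated with Schema Registry: switch the subject name strategy so subjects are keyed by record name rather than topic alone. This is a fragment to merge into the connector's config; the converter choice is an assumption.

{
  "value.converter": "io.confluent.connect.avro.AvroConverter",
  "value.converter.schema.registry.url": "http://schema-registry:8081",
  "value.converter.value.subject.name.strategy": "io.confluent.kafka.serializers.subject.TopicRecordNameStrategy"
}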
I am using the Debezium Oracle connector in Kafka Connect. While starting the connector I am getting the error below: java.lang.RuntimeException: Failed to resolve Oracle database version
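That error is raised while Debezium probes the database at startup, so it usually means the initial JDBC connection failed; worth re-checking against a known-good configuration shape. A minimal sketch for Debezium 1.x with the LogMiner adapter; hostnames, credentials, and names are placeholders.

{
  "name": "oracle-cdc",
  "config": {
    "connector.class": "io.debezium.connector.oracle.OracleConnector",
    "database.hostname": "oracle-host",
    "database.port": "1521",
    "database.user": "c##dbzuser",
    "database.password": "********",
    "database.dbname": "ORCLCDB",
    "database.pdb.name": "ORCLPDB1",
    "database.server.name": "oracleserver",
    "database.connection.adapter": "logminer",
    "database.history.kafka.bootstrap.servers": "kafka:9092",
    "database.history.kafka.topic": "schema-changes.oracle"
  }
}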
My pipeline is: Kerberized Kafka --> Logstash (hosted on a different server) --> Splunk. Can I replace the Logstash component with Kafka Connect? Could
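Kafka Connect can take Logstash's place here with the Splunk sink connector (splunk-kafka-connect), which writes to Splunk's HTTP Event Collector; a minimal sketch with placeholder topic, HEC endpoint, and token. The Kerberos side is handled in the worker/consumer settings (SASL with GSSAPI), not in this connector config.

{
  "name": "splunk-sink",
  "config": {
    "connector.class": "com.splunk.kafka.connect.SplunkSinkConnector",
    "topics": "my-topic",
    "splunk.hec.uri": "https://splunk-host:8088",
    "splunk.hec.token": "00000000-0000-0000-0000-000000000000",
    "splunk.indexes": "main"
  }
}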
I am using Debezium as a CDC tool to stream data from MySQL. After installing the Debezium MySQL connector on a Confluent OSS cluster, I am trying to capture the MySQL binlog
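A minimal Debezium 1.x MySQL connector sketch; hostnames, credentials, server id/name, and database list are placeholders. Capturing the binlog also requires the MySQL server itself to run with log-bin enabled and binlog_format=ROW.

{
  "name": "mysql-cdc",
  "config": {
    "connector.class": "io.debezium.connector.mysql.MySqlConnector",
    "database.hostname": "mysql-host",
    "database.port": "3306",
    "database.user": "debezium",
    "database.password": "********",
    "database.server.id": "184054",
    "database.server.name": "mysqlserver",
    "database.include.list": "inventory",
    "database.history.kafka.bootstrap.servers": "kafka:9092",
    "database.history.kafka.topic": "schema-changes.inventory"
  }
}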