Spark Structured Streaming terminates immediately with spark-submit

I am trying to set up an ingestion pipeline using Spark Structured Streaming to read from Kafka and write to a Delta Lake table. I currently have a basic POC that I am trying to get running, with no transformations yet. When working in the spark-shell, everything seems to run fine:

spark-shell --master spark://HOST:7077 --packages org.apache.spark:spark-sql-kafka-0-10_2.12:3.2.1,io.delta:delta-core_2.12:1.1.0

Starting and writing the stream:

val source = spark.readStream.format("kafka").option("kafka.bootstrap.servers", "http://HOST:9092").option("subscribe", "spark-kafka-test").option("startingOffsets", "earliest").load().writeStream.format("delta").option("checkpointLocation", "/tmp/delta/checkpoint").start("/tmp/delta/delta-test")

However, once I package this into a Scala application and spark-submit the class (with the required packages bundled in an sbt assembly jar) to the standalone Spark instance, the stream seems to stop immediately and does not process any messages from the topic. I simply get the following logs:

INFO SparkContext: Invoking stop() from shutdown hook
...
INFO SparkContext: Successfully stopped SparkContext
INFO MicroBatchExecution: Resuming at batch 0 with committed offsets {} and available offsets {KafkaV2[Subscribe[spark-kafka-test]]: {"spark-kafka-test":{"0":6}}}
INFO MicroBatchExecution: Stream started from {}
Process finished with exit code 0

Here is my Scala class:

import org.apache.spark.sql.SparkSession

object Consumer extends App {

  val spark = SparkSession
    .builder()
    .appName("Spark Kafka Consumer")
    .master("spark://HOST:7077")
    //.master("local")
    .config("spark.sql.catalog.spark_catalog", "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.executor.memory", "1g")
    .config("spark.executor.cores", "2")
    .config("spark.cores.max", "2")
    .getOrCreate()

  val source = spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "http://HOST:9092")
    .option("subscribe", "spark-kafka-test")
    .option("startingOffsets", "earliest")
    .load()
    .writeStream
    .format("delta")
    .option("checkpointLocation", "/tmp/delta/checkpoint")
    .start("/tmp/delta/delta-test")
}

Here is my spark-submit command:

spark-submit --master spark://HOST:7077 --deploy-mode client --class Consumer --name Kafka-Delta-Consumer --packages org.apache.spark:spark-sql-kafka-0-10_2.12:3.2.1,io.delta:delta-core_2.12:1.1.0 <PATH-TO-JAR>/assembly.jar
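
For completeness, the relevant part of my build.sbt looks roughly like the sketch below (the coordinates are assumed to mirror the --packages above, the Scala patch version is a guess, and Spark itself is marked provided so it is not duplicated in the assembly jar):

scalaVersion := "2.12.15"

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-sql"            % "3.2.1" % "provided",
  "org.apache.spark" %% "spark-sql-kafka-0-10" % "3.2.1",
  "io.delta"         %% "delta-core"           % "1.1.0"
)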

Does anybody have an idea why the stream is closed and the program terminates? I am assuming memory is not the problem, as the whole Kafka topic only contains a few bytes.
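
One thing I am wondering about: start() does not block, it just returns a StreamingQuery handle, so in the spark-shell the stream keeps running because the shell session stays alive, whereas in my application the main body ends right after start(), which would match the shutdown hook in the logs above. Below is a minimal sketch of what I would try, assuming the driver simply needs to be kept alive explicitly:

val query = spark.readStream.format("kafka")
  .option("kafka.bootstrap.servers", "http://HOST:9092")
  .option("subscribe", "spark-kafka-test")
  .option("startingOffsets", "earliest")
  .load()
  .writeStream
  .format("delta")
  .option("checkpointLocation", "/tmp/delta/checkpoint")
  .start("/tmp/delta/delta-test")

// Block the driver until the streaming query stops or fails, so the JVM
// does not exit (and trigger the SparkContext shutdown hook) right after start().
query.awaitTermination()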


EDIT: From some further investigation, I found the following behavior: on my Confluent hub interface, I can see that starting the stream via the spark-shell registers a consumer, and active consumption is visible in the monitoring. In contrast, the spark-submit job is seemingly not able to register a consumer. In the driver logs, I found the following error:

WARN  org.apache.spark.sql.kafka010.KafkaOffsetReaderConsumer  - Error in attempt 1 getting Kafka offsets: 
java.lang.NullPointerException
    at org.apache.spark.kafka010.KafkaConfigUpdater.setAuthenticationConfigIfNeeded(KafkaConfigUpdater.scala:60)

In my case, I am working with one master and one worker on the same machine. There shouldn't be any networking differences between spark-shell and spark-submit executions, am I right?
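
One more note: the Spark quick start warns that applications should define a main() method instead of extending scala.App, since subclasses of scala.App may not work correctly. I am not sure whether this is related to the NullPointerException above, but restructuring the entry point is a small change; here is a sketch of the variant I have in mind (combined with the awaitTermination() idea from above):

import org.apache.spark.sql.SparkSession

object Consumer {

  // Plain main() entry point instead of extends App, per the Spark quick-start note.
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("Spark Kafka Consumer")
      .master("spark://HOST:7077")
      .config("spark.sql.catalog.spark_catalog", "org.apache.spark.sql.delta.catalog.DeltaCatalog")
      .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
      .getOrCreate()

    val query = spark.readStream.format("kafka")
      .option("kafka.bootstrap.servers", "http://HOST:9092")
      .option("subscribe", "spark-kafka-test")
      .option("startingOffsets", "earliest")
      .load()
      .writeStream
      .format("delta")
      .option("checkpointLocation", "/tmp/delta/checkpoint")
      .start("/tmp/delta/delta-test")

    // Keep the driver alive until the query stops or fails.
    query.awaitTermination()
  }
}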


