Error initializing SparkContext while running job in IntelliJ

I've been trying to solve this problem for more than a week. At first the error was:

    Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/spark/sql/SparkSession$
        at fcr_104_106.JOB_CALCULATOR_104_106$.main(JOB_CALCULATOR_104_106.scala:25)
        at fcr_104_106.JOB_CALCULATOR_104_106.main(JOB_CALCULATOR_104_106.scala)
    Caused by: java.lang.ClassNotFoundException: org.apache.spark.sql.SparkSession$

After editing the run configuration and adding the dependencies as "provided", the next error was this:

    22/04/20 11:48:56 INFO SparkContext: Running Spark version 2.3.2
    22/04/20 11:48:56 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
    22/04/20 11:48:56 ERROR SparkContext: Error initializing SparkContext.
    org.apache.spark.SparkException: A master URL must be set in your configuration
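From what I've read, this error means the session builder never receives a master URL. For context, my session creation looks roughly like this (object and app names here are illustrative, not my real ones); my understanding is that adding the `.master("local[*]")` call would fix the error for local IntelliJ runs:

```scala
import org.apache.spark.sql.SparkSession

object JobCalculator {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("kantor_fcr")
      .master("local[*]") // needed for local runs; on a cluster the master comes from spark-submit
      .getOrCreate()

    spark.range(5).show() // quick sanity check that the session works
    spark.stop()
  }
}
```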

I also tried different versions of Hadoop and JDKs, and changed environment variables, but nothing worked. I don't know what else I can try. The code should work without a doubt, but on my PC it doesn't. Here is my build.sbt (I also tried changing "provided" to "compile", but it did nothing):

    val sparkVersion = "2.3.2"

    lazy val root = (project in file(".")).
      settings(
        inThisBuild(List(
          organization := "TSC",
          scalaVersion := "2.11.12",
          version      := "0.1.0-SNAPSHOT"
        )),
        name := "kantor_fcr",
        assemblyOutputPath in assembly := file("lib/kantor_fcr.jar"),
        libraryDependencies ++= Seq(
          "org.apache.spark" %% "spark-core" % sparkVersion % Provided,
          "org.apache.spark" %% "spark-sql"  % sparkVersion % Provided,
          "org.apache.spark" %% "spark-hive" % sparkVersion % Provided,
          "org.apache.spark" %% "spark-yarn" % sparkVersion % Provided
        )
      )
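As an alternative, I understand the master can also be supplied without touching the code, by adding a VM option to the IntelliJ run configuration (this is my assumption for local runs; the cluster jobs would still get their master from spark-submit):

```
-Dspark.master=local[*]
```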

Thanks everyone!



This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.