Spark - Scala script for wordcount not working: Only one SparkContext should be running

I am trying to run Scala code in Spark. The word-count code was copied from the internet, and I have HDFS running.

I keep getting the same "Only one SparkContext should be running" error, even after I stop the sc.

I have also mounted HDFS and pointed the file path there, but I get the same error.

I don't know what else I could do.

scala> sc.stop()

scala> :load /Users/dvegamar/spark_ej/wordcount.scala
Loading /Users/dvegamar/spark_ej/wordcount.scala...
import org.apache.spark.SparkContext
import org.apache.spark.SparkConf
defined object WordCount

scala> WordCount.main()
org.apache.spark.SparkException: Only one SparkContext should be running in this JVM (see SPARK-2243). The currently running SparkContext was created at:
org.apache.spark.SparkContext.<init>(SparkContext.scala:85)
WordCount$.main(/Users/dvegamar/spark_ej/wordcount.scala:72)
<init>(<console>:59)
<init>(<console>:63)
<init>(<console>:65)
<init>(<console>:67)
<init>(<console>:69)
<init>(<console>:71)
<init>(<console>:73)
<init>(<console>:75)
<init>(<console>:77)
<init>(<console>:79)
<init>(<console>:81)
<init>(<console>:83)
<init>(<console>:85)
<init>(<console>:87)
<init>(<console>:89)
<init>(<console>:91)
<init>(<console>:93)
<init>(<console>:95)
  at org.apache.spark.SparkContext$.$anonfun$assertNoOtherContextIsRunning$2(SparkContext.scala:2647)
  at scala.Option.foreach(Option.scala:407)
  at org.apache.spark.SparkContext$.assertNoOtherContextIsRunning(SparkContext.scala:2644)
  at org.apache.spark.SparkContext$.markPartiallyConstructed(SparkContext.scala:2734)
  at org.apache.spark.SparkContext.<init>(SparkContext.scala:95)
  at WordCount$.main(/Users/dvegamar/spark_ej/wordcount.scala:83)
  ... 63 elided

The Scala code is:

import org.apache.spark.SparkContext
import org.apache.spark.SparkConf


object WordCount {
  def main() {

    // Configuration for a Spark application.
    val conf = new SparkConf()
    conf.setAppName("SparkWordCount").setMaster("local")
    conf.set("spark.driver.allowMultipleContexts", "true")

    // Create the Spark context.
    val sc = new SparkContext(conf)

    // Create an RDD by reading the input text file.
    val rdd = sc.textFile("file:///Users/dvegamar/spark_ej/texto_ejemplo.txt")

    // Word count: split into words, pair with 1, sum counts,
    // sort by count descending, then write the result.
    rdd.flatMap(_.split(" ")).
      map((_, 1)).
      reduceByKey(_ + _).
      map(x => (x._2, x._1)).
      sortByKey(false).
      map(x => (x._2, x._1)).
      saveAsTextFile("SparkWordCountResult")

    // Stop the context.
    sc.stop()
  }
}
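For what it's worth, the transformation chain itself can be sanity-checked without any SparkContext, since the same pipeline can be expressed on plain Scala collections. This is only a sketch; the `WordCountLocal` object and its sample input are made up for illustration, and `groupBy`/`map` stand in for `reduceByKey`:

```scala
// Plain-Scala version of the word-count pipeline, mirroring the RDD chain
// so the logic can be checked without starting Spark at all.
object WordCountLocal {
  def count(lines: Seq[String]): Seq[(String, Int)] =
    lines
      .flatMap(_.split(" "))                  // split each line into words
      .groupBy(identity)                      // group equal words together
      .map { case (w, ws) => (w, ws.size) }   // count each group (like reduceByKey(_ + _))
      .toSeq
      .sortBy { case (_, n) => -n }           // descending by count (like sortByKey(false))

  def main(args: Array[String]): Unit = {
    val lines = Seq("a b a", "b a")
    println(count(lines).mkString(", "))      // (a,3), (b,2)
  }
}
```

Once the pure logic checks out, the remaining question is purely about the context lifecycle in the shell, not about the transformations.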


Source: Stack Overflow, licensed under CC BY-SA 3.0.