What are the mapper and reducer used in the computeSVD() function?

I am new to MapReduce and I want to do some research on computing the SVD using MapReduce.

  • The code side: I have found computeSVD, a PySpark function, and it uses MapReduce, as said in this discussion.
  • The theory side: what are the mapper and reducer that are used in the computeSVD() function? (My rough reading so far is sketched below.)
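From what I have read about RowMatrix so far, my rough mental model (this is my own sketch and an assumption on my part, not the literal Spark internals) is that the map/reduce part is the computation of the Gramian AᵀA: each "mapper" emits the outer product x xᵀ of one row x, and the "reducer" sums these n x n partial matrices; the SVD factors are then recovered from an eigendecomposition of the small Gramian. A minimal illustration in plain PySpark/NumPy:

import numpy as np
from pyspark import SparkContext

sc = SparkContext.getOrCreate()

# toy (m x n) matrix with m > n, standing in for data.txt
A = np.array([[1.0, 0.0], [0.0, 2.0], [3.0, 1.0]])
rows = sc.parallelize(A.tolist())

# "mapper": each row x contributes a rank-1 outer product x x^T
outers = rows.map(lambda x: np.outer(x, x))

# "reducer": sum the n x n partial products to get the Gramian A^T A
gram = outers.reduce(lambda a, b: a + b)

# a local eigendecomposition of the small Gramian gives V and sigma^2;
# U could then be recovered as A V diag(1/sigma)
sigma_squared, V = np.linalg.eigh(gram)

Is this the mapper/reducer pair that computeSVD() actually uses, or does it do something different?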

My code:

import findspark
findspark.init(r'C:\spark\spark-3.0.3-bin-hadoop2.7')  # raw string so the backslashes are kept

import numpy as np
from pyspark import SparkConf, SparkContext
from pyspark.sql import SparkSession
from pyspark.mllib.linalg.distributed import RowMatrix

conf = SparkConf()
conf.setMaster("local[*]")
conf.setAppName('firstapp')

sc = SparkContext(conf=conf)
spark = SparkSession(sc)

rows = np.loadtxt('data.txt', dtype=float)  # data.txt is an (m rows x n cols) matrix, m > n
rows = sc.parallelize(rows.tolist())        # distribute the rows as an RDD
mat = RowMatrix(rows)
svd = mat.computeSVD(5, computeU=True)
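For completeness, this is how I read the factors back out afterwards; svd.U, svd.s and svd.V are the fields of the SVD object that computeSVD() returns:

U = svd.U   # RowMatrix of left singular vectors (computed because computeU=True)
s = svd.s   # DenseVector of singular values, in descending order
V = svd.V   # local DenseMatrix of right singular vectors

print(s)
print(V)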

I would highly appreciate any help.


