How to execute an HQL script with a TRANSFORM Python UDF in Spark?
I am new to Spark and learning through a POC. As part of this POC, I am trying to execute an HQL file directly; the file uses the TRANSFORM keyword to call a Python UDF.
I have tested the HQL script on the CLI with "hive -f filename.hql" and it works fine. When I try the same script in spark-sql, it fails with an "hdfs path not found" error. I tried giving the HDFS path in the following ways, but none of them works:
"/test/scripts/test.hql"
"hdfs://test.net:8020/test/scripts/test.hql"
"hdfs:///test.net:8020/test/scripts/test.hql"
I also tried giving the complete path in the Hive TRANSFORM clause, as below:
USING "scl enable python27 'python hdfs://test.net:8020/user/test/scripts/TestPython.py'"
Hive code:
add file hdfs://test.net:8020/user/test/scripts/TestPython.py;

select * from
  (select transform (*)
   USING "scl enable python27 'python TestPython.py'"
   as (col_1 STRING,
       col_2 STRING,
       ...
       col_125 STRING)
   FROM test.transform_inner_temp1 a) b;
TestPython code:
#!/usr/bin/env python
'''
Created on June 2, 2017
@author: test
'''
import sys
from datetime import datetime
import decimal
import string
D = decimal.Decimal

# Hive TRANSFORM streams each input row to stdin as one tab-separated line.
for line in sys.stdin:
    TempList = line.strip().split('\t')
    col_1 = TempList[0]
    ...
    col_125 = TempList[34] + TempList[32]
    outList = [col_1, ..., col_125]
    # Write one tab-separated line per row back to Hive via stdout.
    outValue = "\t".join(map(str, outList))
    print "%s" % outValue
So I tried another approach: executing the script directly with spark-submit.
spark-submit --master yarn-cluster hdfs://test.net:8020/user/test/scripts/testspark.py
testspark.py
from pyspark.sql.types import StringType
from pyspark import SparkConf, SparkContext
from pyspark.sql import SQLContext

conf = SparkConf().setAppName("gveeran pyspark test")
sc = SparkContext(conf=conf)
sqlContext = SQLContext(sc)

with open("hdfs://test.net:8020/user/test/scripts/test.hql") as fr:
    query = fr.read()
    results = sqlContext.sql(query)
    results.show()
But it fails with the same issue:
Traceback (most recent call last):
  File "PySparkTest2.py", line 7, in <module>
    with open("hdfs://test.net:8020/user/test/scripts/test.hql") as fr:
IOError: [Errno 2] No such file or directory: 'hdfs://test.net:8020/user/test/scripts/test.hql'
Solution 1:[1]
Python's built-in open() only reads from the local filesystem, which is why every hdfs:// form of the path fails. Read the file from a local path into a string, then execute that string as a Spark SQL query.
Example:-
from pyspark import SparkContext
from pyspark.sql import SQLContext

sc = SparkContext.getOrCreate()
sqlCtx = SQLContext(sc)

# open() works here because the .hql file lives on the local filesystem.
with open("/home/hadoop/test/abc.hql") as fr:
    query = fr.read()
    print(query)
    results = sqlCtx.sql(query)
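If the script has to stay on HDFS, as in the original question, it can be read through Spark instead of open(). A minimal sketch, assuming Spark 2.x with Hive support enabled (which TRANSFORM and ADD FILE require); the HDFS URI is the one from the question:

from pyspark.sql import SparkSession

# enableHiveSupport() is needed for Hive features such as
# TRANSFORM ... USING and ADD FILE to work from Spark SQL.
spark = SparkSession.builder \
    .appName("hql from hdfs") \
    .enableHiveSupport() \
    .getOrCreate()

# textFile() understands hdfs:// URIs, unlike Python's open();
# collect() brings the script's lines back to the driver.
lines = spark.sparkContext.textFile(
    "hdfs://test.net:8020/user/test/scripts/test.hql").collect()
query = "\n".join(lines)

spark.sql(query).show()

Note that spark.sql() executes a single statement, so a script that also contains an "add file ...;" line would need that statement issued through its own spark.sql() call before the SELECT.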
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | Hari_pb |
