Spark REST API: Failed to find data source: com.databricks.spark.csv

I have a pyspark file stored on S3, and I am trying to run it using the Spark REST API.

I am running the following command:

curl -X POST http://<ip-address>:6066/v1/submissions/create --header "Content-Type:application/json;charset=UTF-8" --data '{ 
"action" : "CreateSubmissionRequest", 
"appArgs" : [ "testing.py"], 
"appResource" : "s3n://accessKey:secretKey/<bucket-name>/testing.py", 
"clientSparkVersion" : "1.6.1", 
"environmentVariables" : { 
    "SPARK_ENV_LOADED" : "1" 
}, 
"mainClass" : "org.apache.spark.deploy.SparkSubmit", 
"sparkProperties" : { 
"spark.driver.supervise" : "false", 
"spark.app.name" : "Simple App", 
"spark.eventLog.enabled": "true", 
"spark.submit.deployMode" : "cluster", 
"spark.master" : "spark://<ip-address>:6066", 
"spark.jars" : "spark-csv_2.10-1.4.0.jar", 
"spark.jars.packages" : "com.databricks:spark-csv_2.10:1.4.0" 
} 
}' 
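For reference, the dependency declared in "spark.jars.packages" corresponds to what spark-submit expresses with its --packages flag. A rough command-line equivalent is sketched below; the master URL (port 7077 is the standalone master's default for spark-submit) and the local path to testing.py are placeholders, not values from a verified setup:

./bin/spark-submit \
  --master spark://<ip-address>:7077 \
  --packages com.databricks:spark-csv_2.10:1.4.0 \
  testing.py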

The testing.py file contains this code snippet:

from pyspark.sql import SQLContext

# sc is the SparkContext created earlier in the script
myContext = SQLContext(sc)
format = "com.databricks.spark.csv"
dataFrame1 = myContext.read.format(format).option("header", "true").option("inferSchema", "true").option("delimiter",",").load(location1).repartition(1)
dataFrame2 = myContext.read.format(format).option("header", "true").option("inferSchema", "true").option("delimiter",",").load(location2).repartition(1)
outDataFrame = dataFrame1.join(dataFrame2, dataFrame1.values == dataFrame2.valuesId)
outDataFrame.write.format(format).option("header", "true").option("nullValue","").save(outLocation)

But on this line:

dataFrame1 = myContext.read.format(format).option("header", "true").option("inferSchema", "true").option("delimiter",",").load(location1).repartition(1) 

I get this exception:

java.lang.ClassNotFoundException: Failed to find data source: com.databricks.spark.csv. Please find packages at http://spark-packages.org 
Caused by: java.lang.ClassNotFoundException: com.databricks.spark.csv.DefaultSource 

I tried a few different things. One of them was to log in to the machine at that IP address and run this command:

./bin/spark-shell --packages com.databricks:spark-csv_2.10:1.4.0 

so that spark-csv is downloaded into the .ivy2/cache folder. But that did not solve the problem. What am I doing wrong?
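The download from --packages goes into the local Ivy cache on the machine where spark-shell was run; the jar can be located with something like the commands below (the exact cache layout may vary):

ls ~/.ivy2/jars/
ls ~/.ivy2/cache/com.databricks/spark-csv_2.10/jars/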

Hi Shashi, the answer below has a question for you - can you answer it? Thanks. – halfer

Answer

(Posted on behalf of the OP)

I first added spark-csv_2.10-1.4.0.jar on both the driver and the worker machines, and added:

"spark.driver.extraClassPath" : "absolute/path/to/spark-csv_2.10-1.4.0.jar", 
"spark.executor.extraClassPath" : "absolute/path/to/spark-csv_2.10-1.4.0.jar", 

Then I got the following error:

java.lang.NoClassDefFoundError: org/apache/commons/csv/CSVFormat 
Caused by: java.lang.ClassNotFoundException: org.apache.commons.csv.CSVFormat 

Then I added commons-csv-1.4.jar on both machines and added:

"spark.driver.extraClassPath" : "/absolute/path/to/spark-csv_2.10-1.4.0.jar:/absolute/path/to/commons-csv-1.4.jar", 
"spark.executor.extraClassPath" : "/absolute/path/to/spark-csv_2.10-1.4.0.jar:/absolute/path/to/commons-csv-1.4.jar", 

This solved my problem.
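For completeness, a sketch of what the sparkProperties block of the original REST submission might look like with these two properties added (the jar paths are placeholders and the jars must exist at those paths on every machine):

"sparkProperties" : { 
  "spark.driver.supervise" : "false", 
  "spark.app.name" : "Simple App", 
  "spark.eventLog.enabled" : "true", 
  "spark.submit.deployMode" : "cluster", 
  "spark.master" : "spark://<ip-address>:6066", 
  "spark.jars" : "spark-csv_2.10-1.4.0.jar", 
  "spark.jars.packages" : "com.databricks:spark-csv_2.10:1.4.0", 
  "spark.driver.extraClassPath" : "/absolute/path/to/spark-csv_2.10-1.4.0.jar:/absolute/path/to/commons-csv-1.4.jar", 
  "spark.executor.extraClassPath" : "/absolute/path/to/spark-csv_2.10-1.4.0.jar:/absolute/path/to/commons-csv-1.4.jar" 
}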

I am using [spark-jobserver] and I am facing a similar problem. I added 'libraryDependencies += "com.databricks" % "spark-csv_2.10" % "1.4.0"' and 'libraryDependencies += "org.apache.commons" % "commons-csv" % "1.4"' to the .sbt file, but it did not help. Any other suggestions? Thanks – Nagesh

I believe this link should help: https://github.com/spark-jobserver/spark-jobserver#dependency-jars If you choose to use dependent-jar-uris, you have to upload the jars separately on the driver machine (I am not sure whether they also need to be uploaded on the worker machines). I have never tried adding dependencies this way while using the job server myself, but it should work –
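A sketch of what that could look like, based only on the README section linked above (the jar paths are placeholders, and the exact name and placement of the setting should be checked against the job server version in use):

# context configuration passed to spark-jobserver
dependent-jar-uris = ["file:///absolute/path/to/spark-csv_2.10-1.4.0.jar",
                      "file:///absolute/path/to/commons-csv-1.4.jar"]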

@Nagesh, please see Shashi K.'s comment. – halfer