
Problem adding the spark-csv package in the Cloudera VM

I am using the Cloudera quickstart VM to test some pyspark work. For one task, I need to add the spark-csv package. Here is what I did:

PYSPARK_DRIVER_PYTHON=ipython pyspark -- packages com.databricks:spark-csv_2.10:1.3.0 

pyspark started fine, but I got warnings like:

**16/02/09 17:41:22 WARN util.Utils: Your hostname, quickstart.cloudera resolves to a loopback address: 127.0.0.1; using 10.0.2.15 instead (on interface eth0) 
16/02/09 17:41:22 WARN util.Utils: Set SPARK_LOCAL_IP if you need to bind to another address 
16/02/09 17:41:26 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable** 

Then I ran my pyspark code:

yelp_df = sqlCtx.load(
    source="com.databricks.spark.csv",
    header='true',
    inferSchema='true',
    path='file:///directory/file.csv')

But I got the error message:

Py4JJavaError: An error occurred while calling o19.load.: java.lang.RuntimeException: Failed to load class for data source: com.databricks.spark.csv at scala.sys.package$.error(package.scala:27) 

What could be the problem? Thanks in advance for your help.

Answer


Try this:

PYSPARK_DRIVER_PYTHON=ipython pyspark --packages com.databricks:spark-csv_2.10:1.3.0

No space between `--` and `packages` — it was a typo. With the space, the bare `--` is treated as the end of the options, so `--packages` is never set and the spark-csv jar is never downloaded, which is why the data source class fails to load.
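To see why the space matters, here is a small sketch using Python's `argparse` (this is not Spark's actual argument parser, just an illustration of the standard Unix convention that a bare `--` terminates option parsing):

```python
import argparse

# Hypothetical mini-parser with a --packages option, mimicking the
# relevant part of the pyspark command line.
parser = argparse.ArgumentParser()
parser.add_argument("--packages", default=None)
parser.add_argument("rest", nargs="*")

# Typo: "-- packages ..." -- the bare "--" ends option processing,
# so "packages" and the coordinate become positional arguments.
bad = parser.parse_args(["--", "packages", "com.databricks:spark-csv_2.10:1.3.0"])
print(bad.packages)   # None: --packages was never set
print(bad.rest)       # the leftover positional arguments

# Correct: "--packages <coordinate>" is parsed as the option + value.
good = parser.parse_args(["--packages", "com.databricks:spark-csv_2.10:1.3.0"])
print(good.packages)  # the Maven coordinate, as intended
```

With the typo, the package coordinate silently ends up as a positional argument instead of the value of `--packages`, which matches the symptom in the question: pyspark starts normally but `com.databricks.spark.csv` cannot be loaded.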