2017-04-13 78 views

I am trying to apply the ALS matrix factorization available in MLlib, and I am getting an error when writing a Spark DataFrame to CSV in pyspark. Below is my code:

from pyspark.sql.types import StringType 
from pyspark.sql import SQLContext 
sqlContext = SQLContext(sc) 

t1 = sqlContext.read.csv("/user/hadoop/personalization/test1.csv", header=False) 

from pyspark.mllib.recommendation import ALS, MatrixFactorizationModel, Rating 

model = ALS.train(t1, rank=2, iterations=20, seed=0) 

products_for_users = model.recommendProductsForUsers(2).collect() 

l2 = sqlContext.createDataFrame(products_for_users) 
l2.show() 
l2.write.csv('l2.csv') 

At the final step, after executing write.csv(), I get the following error. Can anyone identify the source of the error?

Traceback (most recent call last): 
    File "<stdin>", line 1, in <module> 
    File "/usr/lib/spark/python/pyspark/sql/readwriter.py", line 674, in csv 
    self._jwrite.csv(path) 
    File "/usr/lib/spark/python/lib/py4j-0.10.1-src.zip/py4j/java_gateway.py", 
lin                       
e 933, in __call__ 
    File "/usr/lib/spark/python/pyspark/sql/utils.py", line 63, in deco 
    return f(*a, **kw) 
    File "/usr/lib/spark/python/lib/py4j-0.10.1-src.zip/py4j/protocol.py", 
line 31                       
2, in get_return_value 
py4j.protocol.Py4JJavaError: An error occurred while calling o140.csv. 
: java.lang.UnsupportedOperationException: CSV data source does not support struct<_1:struct<user:bigint,product:bigint,rating:double>,_2:struct<user:bigint,pro                      duct:bigint,rating:double>> data type. 
    at org.apache.spark.sql.execution.datasources.csv.CSVFileFormat$$anonfun                      $verifySchema$1.apply(CSVFileFormat.scala:186) 
    at org.apache.spark.sql.execution.datasources.csv.CSVFileFormat$$anonfun                      $verifySchema$1.apply(CSVFileFormat.scala:183) 
    at scala.collection.Iterator$class.foreach(Iterator.scala:893) 
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1336) 
    at scala.collection.IterableLike$class.foreach(IterableLike.scala:72) 
    at org.apache.spark.sql.types.StructType.foreach(StructType.scala:95) 
    at org.apache.spark.sql.execution.datasources.csv.CSVFileFormat.verifySc                      hema(CSVFileFormat.scala:183) 
    at org.apache.spark.sql.execution.datasources.csv.CSVFileFormat.prepareW                      rite(CSVFileFormat.scala:87) 
    at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation                      Command$$anonfun$run$1$$anonfun$4.apply(InsertIntoHadoopFsRelationCommand.scala:                      121) 
    at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation                      Command$$anonfun$run$1$$anonfun$4.apply(InsertIntoHadoopFsRelationCommand.scala:                      121) 
    at org.apache.spark.sql.execution.datasources.BaseWriterContainer.driver                      SideSetup(WriterContainer.scala:105) 
    at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation                      Command$$anonfun$run$1.apply$mcV$sp(InsertIntoHadoopFsRelationCommand.scala:140) 
    at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation                      Command$$anonfun$run$1.apply(InsertIntoHadoopFsRelationCommand.scala:115) 
    at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation                      Command$$anonfun$run$1.apply(InsertIntoHadoopFsRelationCommand.scala:115) 
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLEx                      ecution.scala:57) 
    at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation                      Command.run(InsertIntoHadoopFsRelationCommand.scala:115) 
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffect                      Result$lzycompute(commands.scala:60) 
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffect                      Result(commands.scala:58) 
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(                      commands.scala:74) 
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(Spa                      rkPlan.scala:115) 
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(Spa                      rkPlan.scala:115) 
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.appl                      y(SparkPlan.scala:136) 
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.s                      cala:151) 
    at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala                      :133) 
    at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:114) 
    at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryE                      xecution.scala:86) 
    at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.sc                      ala:86) 
    at org.apache.spark.sql.execution.datasources.DataSource.write(DataSourc                      e.scala:487) 
    at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:211) 
    at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:194) 
    at org.apache.spark.sql.DataFrameWriter.csv(DataFrameWriter.scala:551) 
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.                      java:62) 
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAcces                      sorImpl.java:43) 
    at java.lang.reflect.Method.invoke(Method.java:498) 
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:237) 
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357) 
    at py4j.Gateway.invoke(Gateway.java:280) 
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:128) 
    at py4j.commands.CallCommand.execute(CallCommand.java:79) 
    at py4j.GatewayConnection.run(GatewayConnection.java:211) 
    at java.lang.Thread.run(Thread.java:745) 

I believe 'l2' is a DataFrame that contains one or more complex-type columns. Could you please post the output of 'l2.show()'? – ImDarrenG


+---+--------------------+ 
| _1|                  _2| 
+---+--------------------+ 
|  1|[[1,1,4.076836144...| 
|  2|[[2,6,4.933567648...| 
|  3|[[3,7,19.06817406...| 
+---+--------------------+ –

Answer


I got a similar error when writing a DataFrame with probabilities to CSV. You may want to try the following, which worked for me.

l2.toPandas().to_csv('l2.csv') 
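Note that toPandas() collects the entire DataFrame to the driver, so this only works when the result is small enough to fit in driver memory.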

This is a workaround, not a solution – thecheech
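The error message itself points at the cause: the CSV data source only supports flat, primitive column types, while each element of products_for_users is a (user, list of Rating) pair, so createDataFrame produces nested struct columns. A minimal sketch of an alternative (names like flat_rows and l2_flat are hypothetical; it assumes products_for_users is the collected list from the question) that flattens the recommendations into plain columns before writing:

# Hypothetical flattening step: turn each (user, [Rating, ...]) pair into 
# plain (user, product, rating) rows so every column is a primitive type 
# that the CSV writer supports. 
flat_rows = [(user, rec.product, rec.rating) 
             for user, recs in products_for_users 
             for rec in recs] 

l2_flat = sqlContext.createDataFrame(flat_rows, ["user", "product", "rating"]) 
l2_flat.show() 
l2_flat.write.csv('l2_flat.csv') 

With only long and double columns left, write.csv() should no longer reject the schema, and no conversion through pandas is needed.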