
I am completely lost in a weird situation. I have a list l from which I want to create a DataFrame with pyspark.sql. The list is built like this:

l = example_data.map(lambda x: get_labeled_prediction(w, x)).collect() 
print l, type(l) 

and the output looks like this:

[(0.0, 59.0), (0.0, 51.0), (0.0, 81.0), (0.0, 8.0), (0.0, 86.0), (0.0, 86.0), (0.0, 60.0), (0.0, 54.0), (0.0, 54.0), (0.0, 84.0)] <type 'list'> 

But when I try to create a DataFrame from this list:

m = sqlContext.createDataFrame(l, ["prediction", "label"]) 

it throws this error message:

TypeError         Traceback (most recent call last) 
<ipython-input-90-4a49f7f67700> in <module>() 
56 l = example_data.map(lambda x: get_labeled_prediction(w,x)).collect() 
57 print l, type(l) 
---> 58 m = sqlContext.createDataFrame(l, ["prediction", "label"]) 
59 ''' 
60 g = example_data.map(lambda x:gradient_summand(w, x)).sum() 

/databricks/spark/python/pyspark/sql/context.py in createDataFrame(self, data, schema, samplingRatio) 
423    rdd, schema = self._createFromRDD(data, schema, samplingRatio) 
424   else: 
--> 425    rdd, schema = self._createFromLocal(data, schema) 
426   jrdd = self._jvm.SerDeUtil.toJavaArray(rdd._to_java_object_rdd()) 
427   jdf = self._ssql_ctx.applySchemaToPythonRDD(jrdd.rdd(), schema.json()) 

/databricks/spark/python/pyspark/sql/context.py in _createFromLocal(self, data, schema) 
339 
340   if schema is None or isinstance(schema, (list, tuple)): 
--> 341    struct = self._inferSchemaFromList(data) 
342    if isinstance(schema, (list, tuple)): 
343     for i, name in enumerate(schema): 

/databricks/spark/python/pyspark/sql/context.py in _inferSchemaFromList(self, data) 
239    warnings.warn("inferring schema from dict is deprecated," 
240       "please use pyspark.sql.Row instead") 
--> 241   schema = reduce(_merge_type, map(_infer_schema, data)) 
242   if _has_nulltype(schema): 
243    raise ValueError("Some of types cannot be determined after inferring") 

/databricks/spark/python/pyspark/sql/types.py in _infer_schema(row) 
831   raise TypeError("Can not infer schema for type: %s" % type(row)) 
832 
--> 833  fields = [StructField(k, _infer_type(v), True) for k, v in items] 
834  return StructType(fields) 
835 

/databricks/spark/python/pyspark/sql/types.py in _infer_type(obj) 
808    return _infer_schema(obj) 
809   except TypeError: 
--> 810    raise TypeError("not supported type: %s" % type(obj)) 
811 
812 

TypeError: not supported type: <type 'numpy.float64'> 

But when I hard-code the same rows into the list:

tt = sqlContext.createDataFrame([(0.0, 59.0), (0.0, 51.0), (0.0, 81.0), (0.0, 8.0), (0.0, 86.0), (0.0, 86.0), (0.0, 60.0), (0.0, 54.0), (0.0, 54.0), (0.0, 84.0)], ["prediction", "label"]) 
tt.collect() 

it works fine:

[Row(prediction=0.0, label=59.0), 
Row(prediction=0.0, label=51.0), 
Row(prediction=0.0, label=81.0), 
Row(prediction=0.0, label=8.0), 
Row(prediction=0.0, label=86.0), 
Row(prediction=0.0, label=86.0), 
Row(prediction=0.0, label=60.0), 
Row(prediction=0.0, label=54.0), 
Row(prediction=0.0, label=54.0), 
Row(prediction=0.0, label=84.0)] 
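
Printing hides the difference here: the repr of a numpy.float64 looks exactly like that of a plain float, so the two lists appear identical on screen. A quick check along these lines (a sketch, reusing the l collected above) would expose the mismatch:

print type(l[0][0]) # expected to print <type 'numpy.float64'>, per the traceback above 
print type(0.0)  # <type 'float'> for the hard-coded literals 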

What is causing this problem, and how can I fix it? Any hint would be appreciated.

Answer


You have a list of float64, and it seems Spark does not support that type. When you hard-code the values, on the other hand, you get a plain list of float.
There is an existing question with answers covering how to convert from numpy data types to native Python types.
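
A minimal sketch of that conversion, assuming get_labeled_prediction returns numpy scalars (w and example_data are the asker's objects, not defined here). The built-in float() unwraps a numpy.float64 into a native Python float (numpy scalars also expose .item()), so converting before calling createDataFrame avoids the schema-inference error:

l = example_data.map(lambda x: get_labeled_prediction(w, x)).collect() 
# float() turns each numpy.float64 into a native Python float 
clean = [(float(p), float(lbl)) for (p, lbl) in l] 
m = sqlContext.createDataFrame(clean, ["prediction", "label"]) 

Alternatively, the conversion can happen inside get_labeled_prediction itself, so the collected list never contains numpy types in the first place.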


Thanks, limbo. This is exactly what I was looking for.


I followed the answer you suggested, but it does not work for me. I get TypeError: not supported type: '