
Concatenate two dataframes pyspark

I'm trying to concatenate two dataframes, which look like this:

df1: 

+---+---+ 
| a| b| 
+---+---+ 
| a| b| 
| 1| 2| 
+---+---+ 
only showing top 2 rows 

df2: 

+---+---+ 
| c| d| 
+---+---+ 
| c| d| 
| 7| 8| 
+---+---+ 
only showing top 2 rows 

They both have the same number of rows, and I would like to get this:

+---+---+---+---+     
| a| b| c| d|    
+---+---+---+---+   
| a| b| c| d|   
| 1| 2| 7| 8|  
+---+---+---+---+ 

I tried:

df1=df1.withColumn('c', df2.c).collect() 

df1=df1.withColumn('d', df2.d).collect() 

but it didn't work and gave me this error:

Traceback (most recent call last): 
    File "/usr/hdp/current/spark-client/python/pyspark/sql/utils.py", line 45, in deco 
    return f(*a, **kw) 
    File "/usr/hdp/current/spark-client/python/lib/py4j-0.9-src.zip/py4j/protocol.py", line 308, in get_return_value 
    format(target_id, ".", name), value) 
py4j.protocol.Py4JJavaError: An error occurred while calling o2804.withColumn. 

Is there a way to do this?

Thanks


ROWNUMBER() and a join would work here. – Suresh


I'm new to pyspark and I don't know how to do that – abdelkarim


Have you tried [this](https://stackoverflow.com/questions/37332434/concatenate-two-pyspark-dataframes)? – ChatterOne

Answer


Here is the example @Suresh suggested: add a row_number column to each dataframe and join on it. (withColumn can't reference a column from a different dataframe, which is why the attempt in the question fails.)

from pyspark.sql import functions as F 
from pyspark.sql.window import Window 

# add a row_number column to each dataframe (note: an empty 
# partitionBy() pulls all rows into a single partition) 
df1 = sqlctx.createDataFrame([('a','b'),('1','2')],['a','b']) \ 
    .withColumn("row_number", F.row_number().over(Window.partitionBy().orderBy("a"))) 
df2 = sqlctx.createDataFrame([('c','d'),('7','8')],['c','d']) \ 
    .withColumn("row_number", F.row_number().over(Window.partitionBy().orderBy("c"))) 

# join on the shared row_number, then keep only the original columns 
df3 = df1.join(df2, df1.row_number == df2.row_number, 'inner') \ 
    .select(df1.a, df1.b, df2.c, df2.d) 
df3.show() 
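
For reference: the pairing works out on this data only because sorting by a and by c happens to keep matching rows aligned, and the rows come back ordered by those sort keys, so df3.show() should print roughly:

+---+---+---+---+ 
| a| b| c| d| 
+---+---+---+---+ 
| 1| 2| 7| 8| 
| a| b| c| d| 
+---+---+---+---+ 

(the '1' row now comes first). That reordering is what prompts the comment below.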

Is there a way to do this without changing the order of the rows? – gannawag
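
A minimal sketch of one way to keep the original row order (my own addition, not from this thread; the _id and row_number helper column names are illustrative): tag each row with monotonically_increasing_id() before computing row_number, so the window orders by original position rather than by a data column:

from pyspark.sql import functions as F 
from pyspark.sql.window import Window 

# ids are unique and increase in each dataframe's row order (not consecutive) 
df1_idx = df1.withColumn("_id", F.monotonically_increasing_id()) 
df2_idx = df2.withColumn("_id", F.monotonically_increasing_id()) 

# turn the sparse ids into consecutive row numbers, in original order; 
# an empty partitionBy() still pulls everything into one partition 
w = Window.partitionBy().orderBy("_id") 
df1_idx = df1_idx.withColumn("row_number", F.row_number().over(w)) 
df2_idx = df2_idx.withColumn("row_number", F.row_number().over(w)) 

# join on row_number, restore the order, and drop the helper columns 
df3 = df1_idx.join(df2_idx, "row_number", "inner") \ 
    .orderBy("row_number") \ 
    .select("a", "b", "c", "d") 
df3.show()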