
How do I split data in an RDD in Spark? I have an RDD whose contents look like this:

scala> c_data 
res31: org.apache.spark.rdd.RDD[String] = /home/t_csv MapPartitionsRDD[26] at textFile at <console>:25 

scala> c_data.count() 
res29: Long = 45212                

scala> c_data.take(2).foreach(println) 
age;job;marital;education;default;balance;housing;loan;contact;day;month;duration;campaign;pdays;previous;poutcome;y 
58;management;married;tertiary;no;2143;yes;no;unknown;5;may;261;1;-1;0;unknown;no 

I want to split the data into another RDD, for which I use:

scala> val csv_data = c_data.map{x=> 
| val w = x.split(";") 
| val age = w(0) 
| val job = w(1) 
| val marital_stat = w(2) 
| val education = w(3) 
| val default = w(4) 
| val balance = w(5) 
| val housing = w(6) 
| val loan = w(7) 
| val contact = w(8) 
| val day = w(9) 
| val month = w(10) 
| val duration = w(11) 
| val campaign = w(12) 
| val pdays = w(13) 
| val previous = w(14) 
| val poutcome = w(15) 
| val Y = w(16) 
| } 

which returns:

csv_data: org.apache.spark.rdd.RDD[Unit] = MapPartitionsRDD[28] at map at <console>:27 

When I query csv_data, it returns Array((),...). How can I get the first row as the header and the remaining rows as data? Where am I going wrong?

Thanks in advance.


Your map function doesn't return anything, hence the Unit return type. –


Yes, I see it now. Thanks. – Arvind

Answer


Your map function returns Unit, so you are mapping to an RDD[Unit]. You can get a tuple of your values by changing your code to:

val csv_data = c_data.map{x=> 
    val w = x.split(";") 
    ... 
    val Y = w(16) 
    (w, age, job, marital_stat, education, default, balance, housing, loan, contact, day, month, duration, campaign, pdays, previous, poutcome, Y) 
} 
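If you also want to treat the first line as a header, one common approach is to grab it with first() and filter it out before parsing. A minimal sketch (not part of the original answer, assuming c_data is the RDD[String] from the question):

// Assumes c_data is the RDD[String] shown in the question.
val header = c_data.first()                 // the header line ("age;job;marital;...")
val rows   = c_data.filter(_ != header)     // every remaining data line

// Parse each data line into a tuple of its 17 fields.
val csv_data = rows.map { x =>
  val w = x.split(";")
  (w(0), w(1), w(2), w(3), w(4), w(5), w(6), w(7), w(8),
   w(9), w(10), w(11), w(12), w(13), w(14), w(15), w(16))
}

csv_data.take(2).foreach(println)

The filter does scan every line to compare it against the header, but for a 45,212-row file that cost is negligible.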

Thanks for the explanation. – Arvind