This is a slightly confusing question. If your data is already sitting in an Array[(String, Int)] collection (say, after a collect() to the driver), then you don't need any RDD transformations at all. In fact, there's a nifty trick you can run over the collection with fold*() to grab the average:
val average = arr.foldLeft(0.0) { case (sum: Double, (_, count: Int)) => sum + count } /
              arr.foldLeft(0.0) { case (sum: Double, (word: String, count: Int)) => sum + count/word.length }
Kind of a mouthful, but it essentially accumulates the total character count in the numerator and the number of word occurrences in the denominator. Running on your example, I see the following:
scala> val arr = Array(("I",1), ("have",4), ("a",1), ("cat",6), ("The", 3), ("looks", 5), ("very" ,4), ("cute",4))
arr: Array[(String, Int)] = Array((I,1), (have,4), (a,1), (cat,6), (The,3), (looks,5), (very,4), (cute,4))
scala> val average = ...
average: Double = 3.111111111111111
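The two foldLeft() passes traverse the collection twice; they can be fused into a single pass by carrying both running sums in a tuple. A minimal self-contained sketch, using the same `arr` data as the transcript above:

```scala
// One-pass variant of the fold trick: accumulate (totalChars, totalWords)
// in a single traversal instead of folding twice.
object OnePassAvg {
  def main(args: Array[String]): Unit = {
    val arr = Array(("I",1), ("have",4), ("a",1), ("cat",6),
                    ("The",3), ("looks",5), ("very",4), ("cute",4))
    val (chars, words) = arr.foldLeft((0.0, 0.0)) {
      case ((chars, words), (word, count)) =>
        // count is total characters for the word; count/word.length is its
        // number of occurrences (exact here, since count is a multiple of length)
        (chars + count, words + count.toDouble / word.length)
    }
    println(chars / words)  // 28 chars over 9 occurrences, ≈ 3.111
  }
}
```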
If instead you have your (String, Int) tuples distributed across an RDD[(String, Int)], you can use accumulators to solve this quite easily:
val chars = sc.accumulator(0.0)
val words = sc.accumulator(0.0)
wordsRDD.foreach { case (word: String, count: Int) =>
chars += count; words += count/word.length
}
val average = chars.value/words.value
When running on the example above (this time placed in an RDD), I see the following:
scala> val arr = Array(("I",1), ("have",4), ("a",1), ("cat",6), ("The", 3), ("looks", 5), ("very" ,4), ("cute",4))
arr: Array[(String, Int)] = Array((I,1), (have,4), (a,1), (cat,6), (The,3), (looks,5), (very,4), (cute,4))
scala> val wordsRDD = sc.parallelize(arr)
wordsRDD: org.apache.spark.rdd.RDD[(String, Int)] = ParallelCollectionRDD[0] at parallelize at <console>:14
scala> val chars = sc.accumulator(0.0)
chars: org.apache.spark.Accumulator[Double] = 0.0
scala> val words = sc.accumulator(0.0)
words: org.apache.spark.Accumulator[Double] = 0.0
scala> wordsRDD.foreach { case (word: String, count: Int) =>
| chars += count; words += count/word.length
| }
...
scala> val average = chars.value/words.value
average: Double = 3.111111111111111
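As a side note, sc.accumulator was deprecated in Spark 2.x (sc.doubleAccumulator is the newer API), and the same result can be had without any mutable accumulator state by mapping each pair to (chars, occurrences) and reducing. The sketch below runs on a plain Scala collection; since `map` and `reduce` have the same shape on RDDs, swapping `arr` for `wordsRDD` would give the distributed version:

```scala
// Accumulator-free alternative: map each (word, count) to a
// (totalChars, occurrences) pair, then reduce by summing component-wise.
object MapReduceAvg {
  def main(args: Array[String]): Unit = {
    val arr = Array(("I",1), ("have",4), ("a",1), ("cat",6),
                    ("The",3), ("looks",5), ("very",4), ("cute",4))
    val (chars, words) = arr
      .map { case (word, count) => (count.toDouble, count.toDouble / word.length) }
      .reduce { case ((c1, w1), (c2, w2)) => (c1 + c2, w1 + w2) }
    println(chars / words)  // ≈ 3.111, matching the accumulator version
  }
}
```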
I'm looking for the average length per word (not at the whole-text level), i.e. if a word occurs more times, it should count more toward the average. For example, the word "cat" occurs twice in my paragraph, so for that word it's 6/3 = 2; likewise for a word like "The", it's 3/3 = 1 – VRK