I've run into something very strange: I'm getting the same key in separate reduce calls. I simply printed and collected the keys and values. My reducer code is shown below.
public void reduce(Text key, Iterator<Text> values, OutputCollector<Text, Text> output, Reporter reporter) throws IOException {
    System.out.println("The key is " + key.toString());
    while (values.hasNext()) {
        Text value = values.next();
        key.set("");
        output.collect(key, value);
    }
}
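One plausible explanation (an assumption on my part, not something the logs alone prove) is the `key.set("")` inside the loop: the reducer mutates the same `Text` object the framework reuses while walking the sorted records, so the group-boundary check can stop matching and one logical key gets split across several `reduce()` calls. That would also explain `Reduce input groups=3` with only 2 distinct keys. The sketch below is not Hadoop code; it is a self-contained simulation of that mechanism, with hypothetical names (`KeyMutationDemo`, `countGroups`) invented for illustration:

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical simulation of a grouped-values driver that reuses one
// shared key buffer, the way Hadoop reuses Writable objects. Shows how
// clearing that buffer mid-group (like key.set("")) can split one
// logical key into several "reduce calls".
public class KeyMutationDemo {

    // Counts how many reduce groups the driver sees for sorted
    // (key, value) pairs. If mutateKey is true, the consumer clears
    // the shared key buffer after each value, as the posted reducer does.
    static int countGroups(List<String[]> sorted, boolean mutateKey) {
        int groups = 0;
        int i = 0;
        StringBuilder current = new StringBuilder(); // shared, reused key buffer
        while (i < sorted.size()) {
            groups++;                 // a new reduce() call begins here
            current.setLength(0);
            current.append(sorted.get(i)[0]);
            // consume values while the next record still matches the key buffer
            while (i < sorted.size() && sorted.get(i)[0].contentEquals(current)) {
                i++;
                if (mutateKey) {
                    current.setLength(0); // the "reducer" clears the key
                }
            }
        }
        return groups;
    }

    public static void main(String[] args) {
        List<String[]> records = Arrays.asList(
            new String[]{"111-00-1234195967001", "v1"},
            new String[]{"111-00-1234195967001", "v2"},
            new String[]{"1234529857009", "v3"},
            new String[]{"1234529857009", "v4"},
            new String[]{"1234529857009", "v5"});
        System.out.println("groups without mutation: " + countGroups(records, false));
        System.out.println("groups with mutation:    " + countGroups(records, true));
    }
}
```

If this is indeed the cause, the usual fix is to leave `key` alone and emit a fresh object instead, e.g. `output.collect(new Text(""), value);`. (Real Hadoop's group detection differs in its details, so treat this only as a sketch of the reuse pitfall.)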
The output on the console is:
The key is 111-00-1234195967001
The key is 1234529857009
The key is 1234529857009
14/01/06 20:11:16 INFO mapred.Task: Task:attempt_local_0001_r_000000_0 is done. And is in the process of commiting
14/01/06 20:11:16 INFO mapred.LocalJobRunner:
14/01/06 20:11:16 INFO mapred.Task: Task attempt_local_0001_r_000000_0 is allowed to commit now
14/01/06 20:11:16 INFO mapred.FileOutputCommitter: Saved output of task 'attempt_local_0001_r_000000_0' to hdfs://localhost:54310/user/hduser/joboutput11
14/01/06 20:11:18 INFO mapred.LocalJobRunner: reduce > reduce
14/01/06 20:11:18 INFO mapred.Task: Task 'attempt_local_0001_r_000000_0' done.
14/01/06 20:11:19 INFO mapred.JobClient: map 100% reduce 100%
14/01/06 20:11:19 INFO mapred.JobClient: Job complete: job_local_0001
14/01/06 20:11:19 INFO mapred.JobClient: Counters: 23
14/01/06 20:11:19 INFO mapred.JobClient: File Input Format Counters
14/01/06 20:11:19 INFO mapred.JobClient: Bytes Read=289074
14/01/06 20:11:19 INFO mapred.JobClient: File Output Format Counters
14/01/06 20:11:19 INFO mapred.JobClient: Bytes Written=5707
14/01/06 20:11:19 INFO mapred.JobClient: FileSystemCounters
14/01/06 20:11:19 INFO mapred.JobClient: FILE_BYTES_READ=19185
14/01/06 20:11:19 INFO mapred.JobClient: HDFS_BYTES_READ=1254215
14/01/06 20:11:19 INFO mapred.JobClient: FILE_BYTES_WRITTEN=270933
14/01/06 20:11:19 INFO mapred.JobClient: HDFS_BYTES_WRITTEN=5707
14/01/06 20:11:19 INFO mapred.JobClient: Map-Reduce Framework
14/01/06 20:11:19 INFO mapred.JobClient: Map output materialized bytes=5633
14/01/06 20:11:19 INFO mapred.JobClient: Map input records=5
14/01/06 20:11:19 INFO mapred.JobClient: Reduce shuffle bytes=0
14/01/06 20:11:19 INFO mapred.JobClient: Spilled Records=10
14/01/06 20:11:19 INFO mapred.JobClient: Map output bytes=5583
14/01/06 20:11:19 INFO mapred.JobClient: Total committed heap usage (bytes)=991539200
14/01/06 20:11:19 INFO mapred.JobClient: CPU time spent (ms)=0
14/01/06 20:11:19 INFO mapred.JobClient: Map input bytes=289074
14/01/06 20:11:19 INFO mapred.JobClient: SPLIT_RAW_BYTES=627
14/01/06 20:11:19 INFO mapred.JobClient: Combine input records=0
14/01/06 20:11:19 INFO mapred.JobClient: Reduce input records=5
14/01/06 20:11:19 INFO mapred.JobClient: Reduce input groups=3
14/01/06 20:11:19 INFO mapred.JobClient: Combine output records=0
14/01/06 20:11:19 INFO mapred.JobClient: Physical memory (bytes) snapshot=0
14/01/06 20:11:19 INFO mapred.JobClient: Reduce output records=7
14/01/06 20:11:19 INFO mapred.JobClient: Virtual memory (bytes) snapshot=0
14/01/06 20:11:19 INFO mapred.JobClient: Map output records=5
The key 1234529857009 is repeated twice, which is not normal. Any idea why this is happening?

Thanks
Could you check the values and tell us how many values each key provides, and how many of them are distinct? – Mehraban
Thanks. There are two distinct keys: 111-00-1234195967001 and 1234529857009. The first yields 2 values and the second yields 3 values. However, those three are split up: two of the values arrive in one reduce call and the third in another. Now simplefish says this is normal behavior, which is itself a problem; I explained the problem it creates for me in my reply to simplefish's comment. I am using a single node. – shujaat