Hadoop Streaming - external mapper script - file not found

I'm trying to run a MapReduce job on Hadoop using Streaming. I have two Ruby scripts, wcmapper.rb and wcreducer.rb, and I'm launching the job as follows:
hadoop jar hadoop/contrib/streaming/hadoop-streaming-1.2.1.jar -file wcmapper.rb -mapper wcmapper.rb -file wcreducer.rb -reducer wcreducer.rb -input test.txt -output output
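For reference, both scripts follow the usual Streaming word-count shape. This is only a sketch (the exact code isn't reproduced here, and the helper names are mine), but it shows the structure, including the `#!/usr/bin/env ruby` line at the top:

```ruby
#!/usr/bin/env ruby
# Sketch of the two Streaming scripts (helper names are illustrative).
# Each script reads lines on STDIN and writes tab-separated key/value
# pairs on STDOUT, which is the contract Hadoop Streaming expects.

# wcmapper.rb: emit "<word>\t1" for every word on every input line.
def map_line(line)
  line.split.map { |word| "#{word}\t1" }
end

# wcreducer.rb: sum the 1s per word. Streaming sorts mapper output by
# key before the reducer runs, so identical words arrive consecutively;
# a Hash is used here only to keep the sketch short.
def reduce_pairs(pairs)
  counts = Hash.new(0)
  pairs.each do |pair|
    word, count = pair.split("\t")
    counts[word] += count.to_i
  end
  counts.map { |word, total| "#{word}\t#{total}" }
end
```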
This produces the following error output on the console:
13/11/26 12:54:07 INFO streaming.StreamJob: map 0% reduce 0%
13/11/26 12:54:36 INFO streaming.StreamJob: map 100% reduce 100%
13/11/26 12:54:36 INFO streaming.StreamJob: To kill this job, run:
13/11/26 12:54:36 INFO streaming.StreamJob: /home/paul/bin/hadoop-1.2.1/libexec/../bin/hadoop job -Dmapred.job.tracker=localhost:9001 -kill job_201311261104_0009
13/11/26 12:54:36 INFO streaming.StreamJob: Tracking URL: http://localhost.localdomain:50030/jobdetails.jsp?jobid=job_201311261104_0009
13/11/26 12:54:36 ERROR streaming.StreamJob: Job not successful. Error: # of failed Map Tasks exceeded allowed limit. FailedCount: 1. LastFailedTask: task_201311261104_0009_m_000000
13/11/26 12:54:36 INFO streaming.StreamJob: killJob...
Streaming Command Failed!
Looking at any of the failed task attempts shows:
java.io.IOException: Cannot run program "/var/lib/hadoop/mapred/local/taskTracker/paul/jobcache/job_201311261104_0010/attempt_201311261104_0010_m_000001_3/work/./wcmapper.rb": error=2, No such file or directory
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1042)
As I understand it, Hadoop needs to copy the mapper and reducer scripts out to all the nodes, and I believe that is what the -file argument is for. However, the scripts don't seem to be copied to the location where Hadoop expects to find them. The console output suggests they are being packaged, I think:
packageJobJar: [wcmapper.rb, wcreducer.rb, /var/lib/hadoop/hadoop-unjar3547645655567272034/] [] /tmp/streamjob3978604690657430710.jar tmpDir=null
I have also tried the following:
hadoop jar hadoop/contrib/streaming/hadoop-streaming-1.2.1.jar -files wcmapper.rb,wcreducer.rb -mapper wcmapper.rb -reducer wcreducer.rb -input test.txt -output output
but this gives the same error.

Can anyone tell me what the problem is, or where I could look to diagnose it further?
Many thanks
Paul