
I am trying to run a clustering program using Mahout. Following is my Java code; I am getting an IOException while running K-means using the Mahout and Hadoop jars.
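My driver boils down to something like the following (a minimal sketch reconstructed from the paths and parameters visible in the log below, not my exact source; the KMeansDriver.run argument list shown matches the Mahout 0.5/0.6 API and may differ in other versions):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.mahout.clustering.kmeans.KMeansDriver;
import org.apache.mahout.common.distance.EuclideanDistanceMeasure;

public class ClusteringDemo {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Path input = new Path("/home/vishal/testdata/points");        // vectors to cluster
        Path clustersIn = new Path("/home/vishal/testdata/clusters"); // initial cluster centers
        Path output = new Path("/home/vishal/output");                // results are written here

        // convergence delta 0.001, at most 10 iterations, run the clustering step,
        // do not run sequentially (argument order as in Mahout 0.5/0.6)
        KMeansDriver.run(conf, input, clustersIn, output,
                new EuclideanDistanceMeasure(), 0.001, 10, true, false);
    }
}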

When I run it, it starts executing normally but gives me an error at the end. Below is the stack trace I get when running it.

13/05/30 09:49:22 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 
13/05/30 09:49:22 INFO kmeans.KMeansDriver: Input: /home/vishal/testdata/points Clusters In: /home/vishal/testdata/clusters Out: /home/vishal/output Distance: org.apache.mahout.common.distance.EuclideanDistanceMeasure 
13/05/30 09:49:22 INFO kmeans.KMeansDriver: convergence: 0.0010 max Iterations: 10 num Reduce Tasks: org.apache.mahout.math.VectorWritable Input Vectors: {} 
13/05/30 09:49:22 INFO kmeans.KMeansDriver: K-Means Iteration 1 
13/05/30 09:49:22 INFO common.HadoopUtil: Deleting /home/vishal/output/clusters-1 
13/05/30 09:49:23 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same. 
13/05/30 09:49:23 INFO input.FileInputFormat: Total input paths to process : 1 
13/05/30 09:49:23 INFO mapred.JobClient: Running job: job_local_0001 
13/05/30 09:49:23 INFO util.ProcessTree: setsid exited with exit code 0 
13/05/30 09:49:23 INFO mapred.Task: Using ResourceCalculatorPlugin : [email protected] 
13/05/30 09:49:23 INFO mapred.MapTask: io.sort.mb = 100 
13/05/30 09:49:23 INFO mapred.MapTask: data buffer = 79691776/99614720 
13/05/30 09:49:23 INFO mapred.MapTask: record buffer = 262144/327680 
13/05/30 09:49:23 INFO mapred.MapTask: Starting flush of map output 
13/05/30 09:49:23 INFO mapred.MapTask: Finished spill 0 
13/05/30 09:49:23 INFO mapred.Task: Task:attempt_local_0001_m_000000_0 is done. And is in the process of commiting 
13/05/30 09:49:24 INFO mapred.JobClient: map 0% reduce 0% 
13/05/30 09:49:26 INFO mapred.LocalJobRunner: 
13/05/30 09:49:26 INFO mapred.Task: Task 'attempt_local_0001_m_000000_0' done. 
13/05/30 09:49:26 INFO mapred.Task: Using ResourceCalculatorPlugin : [email protected] 
13/05/30 09:49:26 INFO mapred.LocalJobRunner: 
13/05/30 09:49:26 INFO mapred.Merger: Merging 1 sorted segments 
13/05/30 09:49:26 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 185 bytes 
13/05/30 09:49:26 INFO mapred.LocalJobRunner: 
13/05/30 09:49:26 INFO mapred.Task: Task:attempt_local_0001_r_000000_0 is done. And is in the process of commiting 
13/05/30 09:49:26 INFO mapred.LocalJobRunner: 
13/05/30 09:49:26 INFO mapred.Task: Task attempt_local_0001_r_000000_0 is allowed to commit now 
13/05/30 09:49:26 INFO output.FileOutputCommitter: Saved output of task 'attempt_local_0001_r_000000_0' to /home/vishal/output/clusters-1 
13/05/30 09:49:27 INFO mapred.JobClient: map 100% reduce 0% 
13/05/30 09:49:29 INFO mapred.LocalJobRunner: reduce > reduce 
13/05/30 09:49:29 INFO mapred.Task: Task 'attempt_local_0001_r_000000_0' done. 
13/05/30 09:49:30 INFO mapred.JobClient: map 100% reduce 100% 
13/05/30 09:49:30 INFO mapred.JobClient: Job complete: job_local_0001 
13/05/30 09:49:30 INFO mapred.JobClient: Counters: 21 
13/05/30 09:49:30 INFO mapred.JobClient: File Output Format Counters 
13/05/30 09:49:30 INFO mapred.JobClient:  Bytes Written=474 
13/05/30 09:49:30 INFO mapred.JobClient: Clustering 
13/05/30 09:49:30 INFO mapred.JobClient:  Converged Clusters=1 
13/05/30 09:49:30 INFO mapred.JobClient: FileSystemCounters 
13/05/30 09:49:30 INFO mapred.JobClient:  FILE_BYTES_READ=3328461 
13/05/30 09:49:30 INFO mapred.JobClient:  FILE_BYTES_WRITTEN=3422872 
13/05/30 09:49:30 INFO mapred.JobClient: File Input Format Counters 
13/05/30 09:49:30 INFO mapred.JobClient:  Bytes Read=443 
13/05/30 09:49:30 INFO mapred.JobClient: Map-Reduce Framework 
13/05/30 09:49:30 INFO mapred.JobClient:  Map output materialized bytes=189 
13/05/30 09:49:30 INFO mapred.JobClient:  Map input records=9 
13/05/30 09:49:30 INFO mapred.JobClient:  Reduce shuffle bytes=0 
13/05/30 09:49:30 INFO mapred.JobClient:  Spilled Records=6 
13/05/30 09:49:30 INFO mapred.JobClient:  Map output bytes=531 
13/05/30 09:49:30 INFO mapred.JobClient:  Total committed heap usage (bytes)=325713920 
13/05/30 09:49:30 INFO mapred.JobClient:  CPU time spent (ms)=0 
13/05/30 09:49:30 INFO mapred.JobClient:  SPLIT_RAW_BYTES=104 
13/05/30 09:49:30 INFO mapred.JobClient:  Combine input records=9 
13/05/30 09:49:30 INFO mapred.JobClient:  Reduce input records=3 
13/05/30 09:49:30 INFO mapred.JobClient:  Reduce input groups=3 
13/05/30 09:49:30 INFO mapred.JobClient:  Combine output records=3 
13/05/30 09:49:30 INFO mapred.JobClient:  Physical memory (bytes) snapshot=0 
13/05/30 09:49:30 INFO mapred.JobClient:  Reduce output records=3 
13/05/30 09:49:30 INFO mapred.JobClient:  Virtual memory (bytes) snapshot=0 
13/05/30 09:49:30 INFO mapred.JobClient:  Map output records=9 
13/05/30 09:49:30 INFO kmeans.KMeansDriver: K-Means Iteration 2 
13/05/30 09:49:30 INFO common.HadoopUtil: Deleting /home/vishal/output/clusters-2 
13/05/30 09:49:30 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same. 
13/05/30 09:49:30 INFO input.FileInputFormat: Total input paths to process : 1 
13/05/30 09:49:30 INFO mapred.JobClient: Running job: job_local_0002 
13/05/30 09:49:30 INFO mapred.Task: Using ResourceCalculatorPlugin : [email protected] 
13/05/30 09:49:30 INFO mapred.MapTask: io.sort.mb = 100 
13/05/30 09:49:30 INFO mapred.MapTask: data buffer = 79691776/99614720 
13/05/30 09:49:30 INFO mapred.MapTask: record buffer = 262144/327680 
13/05/30 09:49:30 INFO mapred.MapTask: Starting flush of map output 
13/05/30 09:49:30 INFO mapred.MapTask: Finished spill 0 
13/05/30 09:49:30 INFO mapred.Task: Task:attempt_local_0002_m_000000_0 is done. And is in the process of commiting 
13/05/30 09:49:31 INFO mapred.JobClient: map 0% reduce 0% 
13/05/30 09:49:33 INFO mapred.LocalJobRunner: 
13/05/30 09:49:33 INFO mapred.Task: Task 'attempt_local_0002_m_000000_0' done. 
13/05/30 09:49:33 INFO mapred.Task: Using ResourceCalculatorPlugin : [email protected] 
13/05/30 09:49:33 INFO mapred.LocalJobRunner: 
13/05/30 09:49:33 INFO mapred.Merger: Merging 1 sorted segments 
13/05/30 09:49:33 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 124 bytes 
13/05/30 09:49:33 INFO mapred.LocalJobRunner: 
13/05/30 09:49:33 INFO mapred.Task: Task:attempt_local_0002_r_000000_0 is done. And is in the process of commiting 
13/05/30 09:49:33 INFO mapred.LocalJobRunner: 
13/05/30 09:49:33 INFO mapred.Task: Task attempt_local_0002_r_000000_0 is allowed to commit now 
13/05/30 09:49:33 INFO output.FileOutputCommitter: Saved output of task 'attempt_local_0002_r_000000_0' to /home/vishal/output/clusters-2 
13/05/30 09:49:34 INFO mapred.JobClient: map 100% reduce 0% 
13/05/30 09:49:36 INFO mapred.LocalJobRunner: reduce > reduce 
13/05/30 09:49:36 INFO mapred.Task: Task 'attempt_local_0002_r_000000_0' done. 
13/05/30 09:49:37 INFO mapred.JobClient: map 100% reduce 100% 
13/05/30 09:49:37 INFO mapred.JobClient: Job complete: job_local_0002 
13/05/30 09:49:37 INFO mapred.JobClient: Counters: 20 
13/05/30 09:49:37 INFO mapred.JobClient: File Output Format Counters 
13/05/30 09:49:37 INFO mapred.JobClient:  Bytes Written=364 
13/05/30 09:49:37 INFO mapred.JobClient: FileSystemCounters 
13/05/30 09:49:37 INFO mapred.JobClient:  FILE_BYTES_READ=6658544 
13/05/30 09:49:37 INFO mapred.JobClient:  FILE_BYTES_WRITTEN=6844248 
13/05/30 09:49:37 INFO mapred.JobClient: File Input Format Counters 
13/05/30 09:49:37 INFO mapred.JobClient:  Bytes Read=443 
13/05/30 09:49:37 INFO mapred.JobClient: Map-Reduce Framework 
13/05/30 09:49:37 INFO mapred.JobClient:  Map output materialized bytes=128 
13/05/30 09:49:37 INFO mapred.JobClient:  Map input records=9 
13/05/30 09:49:37 INFO mapred.JobClient:  Reduce shuffle bytes=0 
13/05/30 09:49:37 INFO mapred.JobClient:  Spilled Records=4 
13/05/30 09:49:37 INFO mapred.JobClient:  Map output bytes=531 
13/05/30 09:49:37 INFO mapred.JobClient:  Total committed heap usage (bytes)=525074432 
13/05/30 09:49:37 INFO mapred.JobClient:  CPU time spent (ms)=0 
13/05/30 09:49:37 INFO mapred.JobClient:  SPLIT_RAW_BYTES=104 
13/05/30 09:49:37 INFO mapred.JobClient:  Combine input records=9 
13/05/30 09:49:37 INFO mapred.JobClient:  Reduce input records=2 
13/05/30 09:49:37 INFO mapred.JobClient:  Reduce input groups=2 
13/05/30 09:49:37 INFO mapred.JobClient:  Combine output records=2 
13/05/30 09:49:37 INFO mapred.JobClient:  Physical memory (bytes) snapshot=0 
13/05/30 09:49:37 INFO mapred.JobClient:  Reduce output records=2 
13/05/30 09:49:37 INFO mapred.JobClient:  Virtual memory (bytes) snapshot=0 
13/05/30 09:49:37 INFO mapred.JobClient:  Map output records=9 
13/05/30 09:49:37 INFO kmeans.KMeansDriver: K-Means Iteration 3 
13/05/30 09:49:37 INFO common.HadoopUtil: Deleting /home/vishal/output/clusters-3 
13/05/30 09:49:37 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same. 
13/05/30 09:49:37 INFO input.FileInputFormat: Total input paths to process : 1 
13/05/30 09:49:37 INFO mapred.JobClient: Running job: job_local_0003 
13/05/30 09:49:37 INFO mapred.Task: Using ResourceCalculatorPlugin : [email protected] 
13/05/30 09:49:37 INFO mapred.MapTask: io.sort.mb = 100 
13/05/30 09:49:37 INFO mapred.MapTask: data buffer = 79691776/99614720 
13/05/30 09:49:37 INFO mapred.MapTask: record buffer = 262144/327680 
13/05/30 09:49:37 INFO mapred.MapTask: Starting flush of map output 
13/05/30 09:49:37 INFO mapred.MapTask: Finished spill 0 
13/05/30 09:49:37 INFO mapred.Task: Task:attempt_local_0003_m_000000_0 is done. And is in the process of commiting 
13/05/30 09:49:38 INFO mapred.JobClient: map 0% reduce 0% 
13/05/30 09:49:40 INFO mapred.LocalJobRunner: 
13/05/30 09:49:40 INFO mapred.Task: Task 'attempt_local_0003_m_000000_0' done. 
13/05/30 09:49:40 INFO mapred.Task: Using ResourceCalculatorPlugin : [email protected] 
13/05/30 09:49:40 INFO mapred.LocalJobRunner: 
13/05/30 09:49:40 INFO mapred.Merger: Merging 1 sorted segments 
13/05/30 09:49:40 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 124 bytes 
13/05/30 09:49:40 INFO mapred.LocalJobRunner: 
13/05/30 09:49:40 INFO mapred.Task: Task:attempt_local_0003_r_000000_0 is done. And is in the process of commiting 
13/05/30 09:49:40 INFO mapred.LocalJobRunner: 
13/05/30 09:49:40 INFO mapred.Task: Task attempt_local_0003_r_000000_0 is allowed to commit now 
13/05/30 09:49:40 INFO output.FileOutputCommitter: Saved output of task 'attempt_local_0003_r_000000_0' to /home/vishal/output/clusters-3 
13/05/30 09:49:41 INFO mapred.JobClient: map 100% reduce 0% 
13/05/30 09:49:43 INFO mapred.LocalJobRunner: reduce > reduce 
13/05/30 09:49:43 INFO mapred.Task: Task 'attempt_local_0003_r_000000_0' done. 
13/05/30 09:49:44 INFO mapred.JobClient: map 100% reduce 100% 
13/05/30 09:49:44 INFO mapred.JobClient: Job complete: job_local_0003 
13/05/30 09:49:44 INFO mapred.JobClient: Counters: 21 
13/05/30 09:49:44 INFO mapred.JobClient: File Output Format Counters 
13/05/30 09:49:44 INFO mapred.JobClient:  Bytes Written=364 
13/05/30 09:49:44 INFO mapred.JobClient: Clustering 
13/05/30 09:49:44 INFO mapred.JobClient:  Converged Clusters=2 
13/05/30 09:49:44 INFO mapred.JobClient: FileSystemCounters 
13/05/30 09:49:44 INFO mapred.JobClient:  FILE_BYTES_READ=9988052 
13/05/30 09:49:44 INFO mapred.JobClient:  FILE_BYTES_WRITTEN=10265506 
13/05/30 09:49:44 INFO mapred.JobClient: File Input Format Counters 
13/05/30 09:49:44 INFO mapred.JobClient:  Bytes Read=443 
13/05/30 09:49:44 INFO mapred.JobClient: Map-Reduce Framework 
13/05/30 09:49:44 INFO mapred.JobClient:  Map output materialized bytes=128 
13/05/30 09:49:44 INFO mapred.JobClient:  Map input records=9 
13/05/30 09:49:44 INFO mapred.JobClient:  Reduce shuffle bytes=0 
13/05/30 09:49:44 INFO mapred.JobClient:  Spilled Records=4 
13/05/30 09:49:44 INFO mapred.JobClient:  Map output bytes=531 
13/05/30 09:49:44 INFO mapred.JobClient:  Total committed heap usage (bytes)=724434944 
13/05/30 09:49:44 INFO mapred.JobClient:  CPU time spent (ms)=0 
13/05/30 09:49:44 INFO mapred.JobClient:  SPLIT_RAW_BYTES=104 
13/05/30 09:49:44 INFO mapred.JobClient:  Combine input records=9 
13/05/30 09:49:44 INFO mapred.JobClient:  Reduce input records=2 
13/05/30 09:49:44 INFO mapred.JobClient:  Reduce input groups=2 
13/05/30 09:49:44 INFO mapred.JobClient:  Combine output records=2 
13/05/30 09:49:44 INFO mapred.JobClient:  Physical memory (bytes) snapshot=0 
13/05/30 09:49:44 INFO mapred.JobClient:  Reduce output records=2 
13/05/30 09:49:44 INFO mapred.JobClient:  Virtual memory (bytes) snapshot=0 
13/05/30 09:49:44 INFO mapred.JobClient:  Map output records=9 
Exception in thread "main" java.io.IOException: Target /home/vishal/output/clusters-3-final/clusters-3 is a directory 
    at org.apache.hadoop.fs.FileUtil.checkDest(FileUtil.java:359) 
    at org.apache.hadoop.fs.FileUtil.checkDest(FileUtil.java:361) 
    at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:211) 
    at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:163) 
    at org.apache.hadoop.fs.RawLocalFileSystem.rename(RawLocalFileSystem.java:287) 
    at org.apache.hadoop.fs.ChecksumFileSystem.rename(ChecksumFileSystem.java:425) 
    at org.apache.mahout.clustering.kmeans.KMeansDriver.buildClustersMR(KMeansDriver.java:322) 
    at org.apache.mahout.clustering.kmeans.KMeansDriver.buildClusters(KMeansDriver.java:239) 
    at org.apache.mahout.clustering.kmeans.KMeansDriver.run(KMeansDriver.java:154) 
    at com.ClusteringDemo.main(ClusteringDemo.java:80) 

What could be causing this?

Thanks


You really need to **start reading the error messages**. There is apparently a directory that exists when it should not. –

Answer


Here is what KMeansDriver is trying to do:

Path finalClustersIn = new Path(output, AbstractCluster.CLUSTERS_DIR + (iteration-1) + "-final"); 
FileSystem.get(conf).rename(new Path(output, AbstractCluster.CLUSTERS_DIR + (iteration-1)), finalClustersIn); 

As you can see, the clustering has converged after 3 iterations, and the driver is trying to rename the directory holding the results of the third iteration, clusters-3, to clusters-3-final to mark the run as finished.

The rename method checks, before actually renaming, that it is not being asked to rename onto a directory that already exists. And indeed, it looks like you already have a clusters-3-final directory, probably left over from a previous run.

Deleting this directory should fix your problem. You can do it from the command line with:

hadoop fs -rmr /home/vishal/output/clusters-3-final 

or, since it looks like you are running your job in local mode:

rm -rf /home/vishal/output/clusters-3-final 

To avoid this kind of problem, I recommend using a unique output directory for each run of your analysis, for example by appending the current timestamp to the output Path using System.currentTimeMillis().
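For example (a minimal sketch of the idea; the exact naming scheme is up to you):

import org.apache.hadoop.fs.Path;

// Every run writes to a fresh directory such as /home/vishal/output-1369912162000,
// so a leftover clusters-3-final from an earlier run can never collide with it.
Path output = new Path("/home/vishal/output-" + System.currentTimeMillis());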

EDIT: Regarding your second problem:

Exception in thread "main" java.io.IOException: wrong value class: 0.0: null is not class org.apache.mahout.clustering.WeightedPropertyVectorWritable 
    at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:1932) 
    at com.ClusteringDemo.main(ClusteringDemo.java:90) 

You are actually hitting a conflict between Mahout versions: older Mahout releases used WeightedVectorWritable, while more recent ones use WeightedPropertyVectorWritable. To fix it, simply change the declaration of your value variable from:

WeightedVectorWritable value = new WeightedVectorWritable(); 

to:

WeightedPropertyVectorWritable value = new WeightedPropertyVectorWritable(); 
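With that change, the loop that reads the clustered points back looks roughly like this (a sketch, assuming you iterate over output/clusteredPoints with a SequenceFile.Reader as in the usual Mahout examples; the class name and hypothetical file path are illustrative, and in recent Mahout versions the writable lives in the org.apache.mahout.clustering.classify package):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.mahout.clustering.classify.WeightedPropertyVectorWritable; // package may differ by Mahout version

public class ReadClusteredPoints {  // illustrative helper, not your existing class
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path clusteredPoints = new Path("/home/vishal/output/clusteredPoints/part-m-00000");

        SequenceFile.Reader reader = new SequenceFile.Reader(fs, clusteredPoints, conf);
        IntWritable key = new IntWritable();                                          // id of the cluster the point was assigned to
        WeightedPropertyVectorWritable value = new WeightedPropertyVectorWritable();  // the point, plus weight/distance properties
        while (reader.next(key, value)) {
            System.out.println(value + " belongs to cluster " + key);
        }
        reader.close();
    }
}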

Hi Charles... thanks. From your explanation I understood that it is caused by the directory already existing when the program finishes at the third cluster. So I deleted that directory and ran the code again, but the same problem is still there... I think I must be doing something else wrong. Could that be it? I am really just a beginner at this, hence the confusion. Thanks –


Actually, it looks like you are running your job in local mode; can you delete this directory on your local disk and try again? –


Thanks, that fixed it > but now it is throwing the following exception: –