I am using Mahout's command-line tools for K-means clustering and I am getting "ClassCastException: org.apache.hadoop.io.Text cannot be cast to org.apache.hadoop.io.IntWritable". The input file is "KMeansData.csv", with data in the following format:

John,M,30,Pepsi,US 
Jack,M,25,Coke,US 
David,M,34,Pepsi,UK 
Ted,M,37,Limca,CAN 
Robert,M,23,Limca,US 
Adrian,M,31,Pepsi,US 
Craig,M,37,Coke,UK 
Katie,F,23,Limca,UK 
Nancy,F,32,Pepsi,UK 

I am able to run the following steps without any problem:

./mahout seqdirectory -i /root/Mahout/Clustering/ -o /root/Mahout/temp/parsedtext-seqdir -c UTF-8 -chunk 1 

./mahout seq2sparse -i /root/Mahout/temp/parsedtext-seqdir -o /root/Mahout/temp/parsedtext-seqdir-sparse-kmeans --maxDFPercent 85 --namedVector 

./mahout kmeans -i /root/Mahout/temp/parsedtext-seqdir-sparse-kmeans/tfidf-vectors/ -c /root/Mahout/temp/parsedtext-kmeans-clusters -o /root/Mahout/reuters21578/root/Mahout/temp/parsedtext-kmeans -dm org.apache.mahout.common.distance.CosineDistanceMeasure -x 10 -k 5 -ow --clustering -cl 
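With --clustering (-cl) enabled, recent Mahout versions write both the per-iteration cluster directories (clusters-0, clusters-1, ..., clusters-N-final) and a clusteredPoints directory under the path given with -o, and the clusterdump step relies on that layout. As a quick sanity check it can help to list the output directory first and see what was actually produced; for example (use plain ls instead if Mahout is running against the local filesystem):

hadoop fs -ls /root/Mahout/reuters21578/root/Mahout/temp/parsedtext-kmeans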

But when I run clusterdump:

./mahout clusterdump -i /root/Mahout/temp/parsedtext-kmeans-clusters -d /root/Mahout/temp/parsedtext-seqdir-sparse-kmeans/dictionary.file-0 -dt sequencefile -b 100 -n 20 --evaluate -dm org.apache.mahout.common.distance.CosineDistanceMeasure --pointsDir /root/Mahout/temp/parsedtext-kmeans-clusters -o /root/Mahout/temp/cluster-output.txt 

it gives me the following error:

Exception in thread "main" java.lang.ClassCastException: org.apache.hadoop.io.Text cannot be cast to org.apache.hadoop.io.IntWritable 
at org.apache.mahout.utils.clustering.ClusterDumper.readPoints(ClusterDumper.java:298) 
at org.apache.mahout.utils.clustering.ClusterDumper.init(ClusterDumper.java:245) 
at org.apache.mahout.utils.clustering.ClusterDumper.run(ClusterDumper.java:152) 
at org.apache.mahout.utils.clustering.ClusterDumper.main(ClusterDumper.java:102) 
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) 
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) 
at java.lang.reflect.Method.invoke(Method.java:597) 
at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68) 
at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139) 
at org.apache.mahout.driver.MahoutDriver.main(MahoutDriver.java:195) 
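A note on where the cast comes from: the stack trace points at ClusterDumper.readPoints, which reads the directory given with --pointsDir as sequence files keyed by IntWritable cluster ids. One way to check which key class a given directory actually contains is Mahout's seqdumper utility, which reports the key and value classes at the top of its output (details vary slightly between Mahout versions), for example:

./mahout seqdumper -i /root/Mahout/temp/parsedtext-kmeans-clusters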

Can anyone give me an idea of how to get rid of this error using the commands alone? I do not have a Java program here that I could tweak.

Answer


The input to clusterdump should be

./mahout clusterdump -i /root/Mahout/temp/parsedtext-kmeans/clusteredPoints 

which is a sequence file, rather than the one you are passing in.
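Putting this together, a complete clusterdump invocation might look like the line below. This is only a sketch based on the paths in the question and the answer: in clusterdump's usual usage -i points at the final-iteration clusters directory while --pointsDir points at clusteredPoints (the directory read by readPoints, where the cast failed); clusters-2-final is a placeholder for whichever final directory your run actually produced; and the kmeans output path should match the -o you really used (the kmeans step in the question wrote to /root/Mahout/reuters21578/root/Mahout/temp/parsedtext-kmeans).

./mahout clusterdump -i /root/Mahout/temp/parsedtext-kmeans/clusters-2-final -d /root/Mahout/temp/parsedtext-seqdir-sparse-kmeans/dictionary.file-0 -dt sequencefile -b 100 -n 20 --evaluate -dm org.apache.mahout.common.distance.CosineDistanceMeasure --pointsDir /root/Mahout/temp/parsedtext-kmeans/clusteredPoints -o /root/Mahout/temp/cluster-output.txt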