ClassNotFoundException error when running KMeans on Spark

I am trying to submit a Spark job that uses Spark's KMeans. I packaged the Scala file correctly, but whenever I submit the job I get a ClassNotFoundException. Here is my sbt build file:

name := "sparkKmeans"

libraryDependencies += "org.apache.spark" %% "spark-core" % "1.1.1"
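(For orientation only: a complete build.sbt along these lines would usually also pin the Scala version next to these two settings. The value below is an illustrative assumption, not something stated in the question; Spark 1.1.x was typically built against Scala 2.10.)

// assumed addition, for illustration only -- pin the Scala version used to compile the jar
scalaVersion := "2.10.4"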

And here is my Scala class:

import org.apache.spark.SparkContext 
import org.apache.spark.SparkContext._ 
import org.apache.spark.SparkConf 
import org.apache.spark.mllib.clustering.{KMeans, KMeansModel} 
import org.apache.spark.mllib.linalg.Vectors 
object sparkKmeans { 
  def main(args: Array[String]) { 
    // Create the Spark context with a Spark configuration. 
    val sc = new SparkContext(new SparkConf().setAppName("SparkKmeans")) 
    //val threshold = args(1).toInt 
    // Load and parse the data; the source file is the first argument. 
    val data = sc.textFile(args(0)) 
    val parsedData = data.map(s => Vectors.dense(s.split(' ').map(_.toDouble))).cache() 
    // Cluster the data with KMeans. The number of iterations is fixed at 100 
    // and the number of clusters is taken from the second argument. 
    val numClusters = args(1).toInt 
    val numIterations = 100 
    val clusters = KMeans.train(parsedData, numClusters, numIterations) 

    // Evaluate the clustering by computing the Within Set Sum of Squared Errors. 
    val WSSSE = clusters.computeCost(parsedData) 
    println("Within Set Sum of Squared Errors = " + WSSSE) 

    // Save and load the model based on the third argument. 
    //clusters.save(sc, args(2)) 
    //val sameModel = KMeansModel.load(sc, args(2)) 
  } 
} 

I commented out the last two lines because I read in a few places that Spark has serialization problems with them, but the problem is still there.

And here is the error:

java.lang.ClassNotFoundException: sparkKmeans 
    at java.net.URLClassLoader$1.run(URLClassLoader.java:366) 
at java.net.URLClassLoader$1.run(URLClassLoader.java:355) 
at java.security.AccessController.doPrivileged(Native Method) 
at java.net.URLClassLoader.findClass(URLClassLoader.java:354) 
at java.lang.ClassLoader.loadClass(ClassLoader.java:425) 
at java.lang.ClassLoader.loadClass(ClassLoader.java:358) 
at java.lang.Class.forName0(Native Method) 
at java.lang.Class.forName(Class.java:278) 
at org.apache.spark.util.Utils$.classForName(Utils.scala:174) 
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:689) 
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181) 
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206) 
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121) 
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala) 

And I submit the job with:

./bin/spark-shell --class sparkKmeans ...... 

I would appreciate it if anyone could help me.


The 'spark-mllib' dependency is missing from your build definition. Not to mention that using Spark 1.1 is just crazy. We are currently at 1.6/2.0. – zero323


How do you package your application into a jar file? Can you write the full command after --class? – Abhi
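For reference, the spark-mllib dependency that the first comment says is missing would be added to the question's build roughly like this (the version is matched to the spark-core line in the question and is purely illustrative):

// sketch only -- pull in MLlib alongside spark-core; versions are assumptions
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core"  % "1.1.1",
  "org.apache.spark" %% "spark-mllib" % "1.1.1"
)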

Answer


Thanks for your comments. I did what you said. My build.sbt file:

name := "sparkKmeans"

libraryDependencies ++= Seq(
"org.apache.spark" %% "spark-core" % "1.6.1", 
"org.apache.spark" % "spark-mllib_2.10" % "1.6.1" 
) 

(I use Scala 2.11.8 and Spark 1.6.1, but I still get the same error.) And about the other question: I package my application with sbt compile package

and run it with:

./bin/spark-submit --class sparkKmeans k/kmeans/target/scala-2.10/sparkkmeans_2.10-0.1-SNAPSHOT.jar '/home/meysam/spark-1.6.1/kmeans/pima.csv' 3
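A generic way to double-check this kind of ClassNotFoundException (not something suggested in the thread, just a standard debugging step) is to list the jar's contents and confirm that the sparkKmeans class was actually packaged:

jar tf k/kmeans/target/scala-2.10/sparkkmeans_2.10-0.1-SNAPSHOT.jar | grep -i sparkkmeans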