I have a MapReduce .scala file, built with SBT, that looks like this:
import org.apache.spark._

object WordCount {
  def main(args: Array[String]): Unit = {
    val inputDir = args(0)
    //val inputDir = "/Users/eksi/Desktop/sherlock.txt"
    val outputDir = args(1)
    //val outputDir = "/Users/eksi/Desktop/out.txt"
    val cnf = new SparkConf().setAppName("Example MapReduce Spark Job")
    val sc = new SparkContext(cnf)
    val textFile = sc.textFile(inputDir)
    val counts = textFile.flatMap(line => line.split(" "))
      .map(word => (word, 1))
      .reduceByKey(_ + _)
    counts.saveAsTextFile(outputDir)
    sc.stop()
  }
}
When I run my code with setMaster("local[1]"), it works fine.
I want to package this code into a .jar and upload it to S3 so I can run it on AWS EMR. I'm using the following build.sbt to do that:
name := "word-count"

version := "0.0.1"

scalaVersion := "2.11.7"

// additional libraries
libraryDependencies ++= Seq(
  "org.apache.spark" % "spark-core_2.10" % "1.0.2"
)
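As an aside, `scalaVersion := "2.11.7"` conflicts with the `_2.10` Spark artifact above. A sketch of a consistent build.sbt, using `%%` so sbt appends the Scala binary suffix itself (the Spark version number here is only an example, pick whichever matches your EMR release):

```scala
name := "word-count"

version := "0.0.1"

scalaVersion := "2.11.7"

libraryDependencies ++= Seq(
  // %% resolves spark-core_2.11 automatically to match scalaVersion
  "org.apache.spark" %% "spark-core" % "1.6.2" % "provided"
)
```

On EMR the cluster already ships Spark, so marking the dependency `provided` keeps it out of the jar you upload.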
It generates a jar file, but my Scala code isn't in it. All I see when I extract the .jar is a manifest file.
This is what I get when I run sbt package:
[myMacBook-Pro] > sbt package
[info] Loading project definition from /Users/lele/bigdata/wordcount/project
[info] Set current project to word-count (in build file:/Users/lele/bigdata/wordcount/)
[info] Packaging /Users/lele/bigdata/wordcount/target/scala-2.11/word-count_2.11-0.0.1.jar ...
[info] Done packaging.
[success] Total time: 0 s, completed Jul 27, 2016 10:33:26 PM
What should I do to create a proper JAR file that I can run like
WordCount.jar WordCount
How did you create the jar? –
By calling `sbt clean compile package` from the terminal, in the directory where `build.sbt` lives –
And you don't see your object `WordCount`? Weird; I'd only expect the `spark-core` dependency to be absent. Have you looked at the `sbt package` build log? Anything special in it? –
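A common cause of a successful `sbt package` that yields a manifest-only jar is that the `.scala` file sits next to `build.sbt` instead of under `src/main/scala`, so sbt finds nothing to compile. A minimal sketch of the layout sbt expects (the `wordcount` directory name is just an example matching the paths in the question):

```shell
# sbt only compiles sources under src/main/scala by default
mkdir -p wordcount/src/main/scala
# WordCount.scala must live here, not next to build.sbt
touch wordcount/src/main/scala/WordCount.scala
touch wordcount/build.sbt
# show the resulting layout
find wordcount -type f
```

After moving the file, rerun `sbt package` and list the jar's entries with `jar tf target/scala-2.11/word-count_2.11-0.0.1.jar`; you should now see `WordCount*.class` entries alongside the manifest.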