
Compiling Scala Spark programs with Maven through Jenkins: "No primary artifact to install, installing attached artifacts instead"

I have a project with multiple Scala Spark programs. When I run mvn install through Eclipse, I get the correct jar generated, and it runs fine with the spark-submit command.

After pushing the code to Git, we are trying to build it with Jenkins, because we want to automatically push the jar file to our Hadoop cluster. We have a Jenkinsfile with the build goal "compile package install -X".
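
For reference, a minimal sketch of what such a Jenkinsfile stage might look like (the real Jenkinsfile is not shown in the question, so the pipeline structure, agent and stage name below are only placeholders):

    // Sketch only: agent and stage name are assumptions, not taken from the question.
    pipeline {
        agent any
        stages {
            stage('Build') {
                steps {
                    // same Maven goals as described above
                    sh 'mvn compile package install -X'
                }
            }
        }
    }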

The log shows -

[DEBUG] (f) artifact = com.esi.rxhome:PROJECT1:jar:0.0.1-SNAPSHOT 

[DEBUG] (f) attachedArtifacts = [com.esi.rxhome:PROJECT1:jar:jar-with-dependencies:0.0.1-SNAPSHOT, com.esi.rxhome:PROJECT1:jar:jar-with-dependencies:0.0.1-SNAPSHOT] 

[DEBUG] (f) createChecksum = false 

[DEBUG] (f) localRepository =  id: local 
    url: file:///home/jenkins/.m2/repository/ 
layout: default 
snapshots: [enabled => true, update => always] 
releases: [enabled => true, update => always] 

[DEBUG] (f) packaging = jar 

[DEBUG] (f) pomFile = /opt/jenkins-data/workspace/ng_datahub-pipeline_develop-JYTJLDEXV65VZWDCZAXG5Y7SHBG2534GFEF3OF2WC4543G6ANZYA/pom.xml 

[DEBUG] (s) skip = false 

[DEBUG] (f) updateReleaseInfo = false 

[DEBUG] -- end configuration -- 

[INFO] **No primary artifact to install, installing attached artifacts instead** 

I saw a similar post about this error -

Maven: No primary artifact to install, installing attached artifacts instead

But the answer there says to remove the automatic clean, and I am not sure how to stop that, since Jenkins is the one building the jar file.

Below is the pom.xml -

  <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
       xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd"> 
       <modelVersion>4.0.0</modelVersion> 
       <groupId>com.esi.rxhome</groupId> 
       <artifactId>PROJECT1</artifactId> 
       <version>0.0.1-SNAPSHOT</version> 
       <packaging>jar</packaging> 
       <name>${project.artifactId}</name> 
       <description>RxHomePreprocessing</description> 
       <inceptionYear>2015</inceptionYear> 
       <licenses> 
        <license> 
         <name>My License</name> 
         <url>http://....</url> 
         <distribution>repo</distribution> 
        </license> 
       </licenses> 

       <properties> 
        <maven.compiler.source>1.8</maven.compiler.source> 
        <maven.compiler.target>1.8</maven.compiler.target> 
        <encoding>UTF-8</encoding> 
        <scala.version>2.10.6</scala.version> 
        <scala.compat.version>2.10</scala.compat.version> 
       </properties> 

       <dependencies> 
        <dependency> 
         <groupId>org.scala-lang</groupId> 
         <artifactId>scala-library</artifactId> 
         <version>${scala.version}</version> 
        </dependency> 

        <!-- Test --> 
        <dependency> 
         <groupId>junit</groupId> 
         <artifactId>junit</artifactId> 
         <version>4.11</version> 
         <scope>test</scope> 
        </dependency> 
        <dependency> 
         <groupId>org.specs2</groupId> 
         <artifactId>specs2-core_${scala.compat.version}</artifactId> 
         <version>2.4.16</version> 
         <scope>test</scope> 
        </dependency> 
        <dependency> 
         <groupId>org.scalatest</groupId> 
         <artifactId>scalatest_${scala.compat.version}</artifactId> 
         <version>2.2.4</version> 
         <scope>test</scope> 
        </dependency> 
        <dependency> 
         <groupId>org.apache.hive</groupId> 
         <artifactId>hive-jdbc</artifactId> 
         <version>1.2.1000.2.6.0.3-8</version> 
        </dependency> 


        <!-- <dependency> 
       <groupId>org.apache.spark</groupId> 
       <artifactId>spark-core_2.10</artifactId> 
       <version>2.1.0</version> 
      </dependency> 

        <dependency> 
       <groupId>org.apache.spark</groupId> 
       <artifactId>spark-sql_2.10</artifactId> 
       <version>2.1.0</version> 
      </dependency> --> 


        <dependency> 
         <groupId>org.apache.spark</groupId> 
         <artifactId>spark-core_2.10</artifactId> 
         <version>1.6.3</version> 
        </dependency> 
        <dependency> 
         <groupId>org.apache.spark</groupId> 
         <artifactId>spark-sql_2.10</artifactId> 
         <version>1.6.3</version> 
        </dependency> 
        <dependency> 
         <groupId>org.apache.spark</groupId> 
         <artifactId>spark-hive_2.10</artifactId> 
         <version>1.6.3</version> 
        </dependency> 

        <dependency> 
         <groupId>com.databricks</groupId> 
         <artifactId>spark-csv_2.10</artifactId> 
         <version>1.5.0</version> 
        </dependency> 

       </dependencies> 

       <build> 
        <sourceDirectory>src/main/scala</sourceDirectory> 
        <testSourceDirectory>src/test/scala</testSourceDirectory> 
        <plugins> 
         <plugin> 
          <!-- see http://davidb.github.com/scala-maven-plugin --> 
          <groupId>net.alchim31.maven</groupId> 
          <artifactId>scala-maven-plugin</artifactId> 
          <version>3.2.0</version> 
          <executions> 
           <execution> 
            <goals> 
             <goal>compile</goal> 
             <goal>testCompile</goal> 
            </goals> 
            <configuration> 
             <args> 
              <arg>-make:transitive</arg> 
              <arg>-dependencyfile</arg> 
              <arg>${project.build.directory}/.scala_dependencies</arg> 
             </args> 
            </configuration> 
           </execution> 
          </executions> 
         </plugin> 
         <plugin> 
          <groupId>org.apache.maven.plugins</groupId> 
          <artifactId>maven-surefire-plugin</artifactId> 
          <version>2.18.1</version> 
          <configuration> 
           <useFile>false</useFile> 
           <disableXmlReport>true</disableXmlReport> 
           <!-- If you have classpath issue like NoDefClassError,... --> 
           <!-- useManifestOnlyJar>false</useManifestOnlyJar --> 
           <includes> 
            <include>**/*Test.*</include> 
            <include>**/*Suite.*</include> 
           </includes> 
          </configuration> 
         </plugin> 
         <plugin> 
          <groupId>org.apache.maven.plugins</groupId> 
          <artifactId>maven-jar-plugin</artifactId> 
          <version>2.4</version> 
          <configuration> 
           <skipIfEmpty>true</skipIfEmpty> 
          </configuration> 
          <executions> 
           <execution> 
            <goals> 
             <goal>jar</goal> 
            </goals> 
           </execution> 
          </executions> 
         </plugin> 

         <plugin> 
          <groupId>org.apache.maven.plugins</groupId> 
          <artifactId>maven-assembly-plugin</artifactId> 
          <version>3.0.0</version> 
          <configuration> 
           <descriptorRefs> 
            <descriptorRef>jar-with-dependencies</descriptorRef> 
           </descriptorRefs> 
           <archive> 
            <manifest> 
            <mainClass>com.esi.spark.storedprocedure.Test_jdbc_nospark</mainClass> 
            </manifest> 
           </archive> 
          </configuration> 
          <executions> 
           <execution> 
           <id>make-assembly</id> 
           <phase>package</phase> 
           <goals> 
            <goal>single</goal> 
           </goals> 
           </execution> 
          </executions> 
         </plugin> 

         <plugin> 
          <artifactId>maven-clean-plugin</artifactId> 
          <version>3.0.0</version> 
          <configuration> 
          <skip>true</skip> 
          </configuration> 
         </plugin> 

        </plugins> 
       </build> 
      </project> 

I tried specifying:

1 - "jar" as the packaging in the pom.xml.

2 - Changing the Maven goals to -

"install"

"clean install"

"compile package install"

But the attempts above did not help get rid of the message, and the jar that was created was useless.

When I try to execute the spark-submit command -

  spark-submit --driver-java-options -Djava.io.tmpdir=/home/EH2524/tmp --conf spark.local.dir=/home/EH2524/tmp --driver-memory 2G --executor-memory 2G --total-executor-cores 1 --num-executors 10 --executor-cores 10 --class com.esi.spark.storedprocedure.Test_jdbc_nospark --master yarn /home/EH2524/PROJECT1-0.0.1-20171124.213717-1-jar-with-dependencies.jar 
      Multiple versions of Spark are installed but SPARK_MAJOR_VERSION is not set 
      Spark1 will be picked by default 
      java.lang.ClassNotFoundException: com.esi.spark.storedprocedure.Test_jdbc_nospark 
        at java.net.URLClassLoader.findClass(URLClassLoader.java:381) 
        at java.lang.ClassLoader.loadClass(ClassLoader.java:424) 
        at java.lang.ClassLoader.loadClass(ClassLoader.java:357) 
        at java.lang.Class.forName0(Native Method) 
        at java.lang.Class.forName(Class.java:348) 
        at org.apache.spark.util.Utils$.classForName(Utils.scala:175) 
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:703) 
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181) 
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206) 
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121) 
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala) 

Here, Test_jdbc_nospark is a Scala object.
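
For context, an entry point passed to spark-submit via --class is normally a Scala object with a main method. The real contents of Test_jdbc_nospark are not shown in the question, so the body below is only a placeholder sketch; the ClassNotFoundException above means this compiled class is not present inside the jar that Jenkins produced:

    package com.esi.spark.storedprocedure

    // Placeholder sketch of the entry point; the actual JDBC/Spark logic is not shown in the question.
    object Test_jdbc_nospark {
      def main(args: Array[String]): Unit = {
        println("Test_jdbc_nospark started")
      }
    }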

Answers

0

This message appeared because the maven-jar-plugin had <skipIfEmpty> set to true. Once I removed this, the build no longer gave the message "No primary artifact to install, installing attached artifacts instead".

The empty jar was getting created because of an incorrect path in the pom.xml.

Initially -

<build> 
     <sourceDirectory>src/main/scala</sourceDirectory> 

Since Jenkins builds from the code in Git, where the project files sit inside a folder of the repository, this had to be changed to:

<build> 
    <sourceDirectory>folder_name_in_git/src/main/scala</sourceDirectory> 
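
A quick way to confirm whether a given jar actually contains the compiled classes is to list its contents; the jar path below is illustrative, not the exact file name from the build:

    # list the jar contents and look for the entry-point class (path is an example)
    jar tf target/PROJECT1-0.0.1-SNAPSHOT.jar | grep Test_jdbc_nospark
    # a correct jar should show entries such as
    #   com/esi/spark/storedprocedure/Test_jdbc_nospark.class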
0

I'm not sure, but your maven-jar-plugin configuration looks suspicious. Normally an execution would specify a phase, like

<execution> 
    <phase>package</phase> 
    <goals> 
     <goal>jar</goal> 
    </goals> 
</execution> 

(from this example). Maybe omitting this is causing your default jar not to get built? Certainly the error message sounds as if your default jar is not being built, but you don't actually say whether that is the case.
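
For comparison, a more conventional maven-jar-plugin configuration, with the execution bound to the package phase and without skipIfEmpty, might look like the sketch below (version number taken from the pom in the question; this is a sketch, not a verified fix):

    <plugin> 
     <groupId>org.apache.maven.plugins</groupId> 
     <artifactId>maven-jar-plugin</artifactId> 
     <version>2.4</version> 
     <executions> 
      <execution> 
       <phase>package</phase> 
       <goals> 
        <goal>jar</goal> 
       </goals> 
      </execution> 
     </executions> 
    </plugin> 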

+0

This message was because the maven-jar-plugin had <skipIfEmpty> set to true. Once I removed this, the build no longer gave the message "No primary artifact to install, installing attached artifacts instead". But another problem is that the created jar (the primary artifact) does not contain the required Scala objects, so I am unable to run the Scala Spark program. The created jar is only 3 KB in size. Also, just to give more information: when I run maven install from Eclipse, the created jar is correct and I am able to run the Scala Spark program. – Jim

+0

Digging further, I see that Jenkins prints a message that there was no source code to compile, which results in an empty jar being created ('[INFO] --- maven-compiler-plugin:3.1:compile (default-compile) @ PROJECT1 --- [INFO] No source to compile'). Trying to understand why it cannot find the sources to compile. @Joe – Jim
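
As an aside, one simple way to see why Maven reports no sources is to check where the Scala files actually sit relative to the pom inside the Jenkins workspace; the commands below are purely illustrative and the workspace path is a placeholder, not the real one:

    cd /opt/jenkins-data/workspace/<your-job-workspace>   # placeholder path
    find . -name "*.scala" | head    # where do the Scala sources actually live?
    find . -name "pom.xml"           # and where is the pom that Jenkins builds?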

+0

Can't help on the Eclipse front (I use IntelliJ), but it sounds like your Eclipse project may not be using the same defaults that Maven does. –