
I get an error when I try to connect to a Hive table (created through HBaseIntegration) from Spark: SparkSQL + Hive + HBase + HBaseIntegration does not work.

My steps are as follows. Hive table creation code:

CREATE TABLE test.sample(id string,name string) 
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler' WITH  
SERDEPROPERTIES ("hbase.columns.mapping" = ":key,details:name") 
TBLPROPERTIES ("hbase.table.name" = "sample"); 

DESCRIBE TEST;

col_name data_type comment 
id string from deserializer 
name string from deserializer 

Started the Spark shell with this command:

spark-shell --master local[2] --driver-class-path /usr/local/hive/lib/hive-hbase-handler-1.2.1.jar:/usr/local/hbase/lib/hbase-server-0.98.9-hadoop2.jar:/usr/local/hbase/lib/hbase-protocol-0.98.9-hadoo2.jar:/usr/local/hbase/lib/hbase-hadoop2-compat-0.98.9-hadoop2.jar:/usr/local/hbase/lib/hbase-hadoop-compat-0.98.9-hadoop2.jar:/usr/local/hbase/lib/hbase-client-0.98.9-hadoop2.jar:/usr/local/hbase/lib/hbase-common-0.98.9-hadoop2.jar:/usr/local/hbase/lib/htrace-core-2.04.jar:/usr/local/hbase/lib/hbase-common-0.98.9-hadoop2-tests.jar:/usr/local/hbase/lib/hbase-server-0.98.9-hadoop2-tests.jar:/usr/local/hive/lib/zookeeper-3.4.6.jar:/usr/local/hive/lib/guava-14.0.1.jar

In the Spark shell:

val sqlContext=new org.apache.spark.sql.hive.HiveContext(sc) 

sqlContext.sql("select count(*) from test.sample").collect()

Stack trace:

SQL context available as sqlContext.

scala> sqlContext.sql("select count(*) from test.sample").collect() 

16/09/02 04:49:28 INFO parse.ParseDriver: Parsing command: select count(*) from test.sample 
16/09/02 04:49:35 INFO parse.ParseDriver: Parse Completed 
16/09/02 04:49:40 INFO metastore.HiveMetaStore: 0: get_table : db=test tbl=sample 
16/09/02 04:49:40 INFO HiveMetaStore.audit: ugi=hdfs ip=unknown-ip-addr cmd=get_table : db=test tbl=sample 
java.lang.NoClassDefFoundError: org/apache/hadoop/hbase/util/Bytes 
    at org.apache.hadoop.hive.hbase.HBaseSerDe.parseColumnsMapping(HBaseSerDe.java:184) 
    at org.apache.hadoop.hive.hbase.HBaseSerDeParameters.<init>(HBaseSerDeParameters.java:73) 
    at org.apache.hadoop.hive.hbase.HBaseSerDe.initialize(HBaseSerDe.java:117) 
    at org.apache.hadoop.hive.serde2.AbstractSerDe.initialize(AbstractSerDe.java:53) 
    at org.apache.hadoop.hive.serde2.SerDeUtils.initializeSerDe(SerDeUtils.java:521) 
    at org.apache.hadoop.hive.metastore.MetaStoreUtils.getDeserializer(MetaStoreUtils.java:391) 
    at org.apache.hadoop.hive.ql.metadata.Table.getDeserializerFromMetaStore(Table.java:276) 
    at org.apache.hadoop.hive.ql.metadata.Table.getDeserializer(Table.java:258) 
    at org.apache.hadoop.hive.ql.metadata.Table.getCols(Table.java:605) 
    at org.apache.spark.sql.hive.client.ClientWrapper$$anonfun$getTableOption$1$$anonfun$3.apply(ClientWrapper.scala:331) 
    at org.apache.spark.sql.hive.client.ClientWrapper$$anonfun$getTableOption$1$$anonfun$3.apply(ClientWrapper.scala:326) 
    at scala.Option.map(Option.scala:145) 
    at org.apache.spark.sql.hive.client.ClientWrapper$$anonfun$getTableOption$1.apply(ClientWrapper.scala:326) 
    at org.apache.spark.sql.hive.client.ClientWrapper$$anonfun$getTableOption$1.apply(ClientWrapper.scala:321) 
    at org.apache.spark.sql.hive.client.ClientWrapper$$anonfun$withHiveState$1.apply(ClientWrapper.scala:279) 
    at org.apache.spark.sql.hive.client.ClientWrapper.liftedTree1$1(ClientWrapper.scala:226) 
    at org.apache.spark.sql.hive.client.ClientWrapper.retryLocked(ClientWrapper.scala:225) 
    at org.apache.spark.sql.hive.client.ClientWrapper.withHiveState(ClientWrapper.scala:268) 
    at org.apache.spark.sql.hive.client.ClientWrapper.getTableOption(ClientWrapper.scala:321) 
    at org.apache.spark.sql.hive.client.ClientInterface$class.getTable(ClientInterface.scala:122) 
    at org.apache.spark.sql.hive.client.ClientWrapper.getTable(ClientWrapper.scala:60) 
    at org.apache.spark.sql.hive.HiveMetastoreCatalog.lookupRelation(HiveMetastoreCatalog.scala:384) 
    at org.apache.spark.sql.hive.HiveContext$$anon$2.org$apache$spark$sql$catalyst$analysis$OverrideCatalog$$super$lookupRelation(HiveContext.scala:457) 
    at org.apache.spark.sql.catalyst.analysis.OverrideCatalog$class.lookupRelation(Catalog.scala:161) 
    at org.apache.spark.sql.hive.HiveContext$$anon$2.lookupRelation(HiveContext.scala:457) 
    at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.getTable(Analyzer.scala:303) 

I am using Hadoop 2.6.0, Spark 1.6.0, Hive 1.2.1, and HBase 0.98.9.

I added this setting to hadoop-env.sh:

export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$HBASE_HOME/lib/* 

Can somebody please suggest a solution?


'java.lang.NoClassDefFoundError: org/apache/hadoop/hbase/util/Bytes', check your classpath –


Thanks Alexander for the reply. I added the classpath as: export SPARK_HOME=/usr/local/spark; export PATH=$PATH:$SPARK_HOME/bin:$SPARK_HOME/sbin; export SPARK_CLASSPATH=$SPARK_HOME/lib:$HBASE_HOME/lib:$HIVE_HOME/lib. Please tell me if I have made any mistake. – user6608138


I am new to Spark. Now I am able to query Hive-managed tables through SparkSQL, but I don't know how to query Hive tables created with the HBase storage handler through SparkSQL. Could you please guide me? Thanks, Alexander. – user6608138

Answers

java.lang.NoClassDefFoundError: org/apache/hadoop/hbase/util/Bytes 

This happens because the HBase-related jars are not on the classpath.

export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:`hbase classpath`

This should include all of the HBase-related jars; or else see my answer here on using --jars.

Note: to verify the classpath, you can add the following code in the driver to print all the classpath resources.

Scala version:

// print every URL on the system classloader's classpath
val cl = ClassLoader.getSystemClassLoader
cl.asInstanceOf[java.net.URLClassLoader].getURLs.foreach(println)

Java version:

import java.net.URL;
import java.net.URLClassLoader;
...

// print every URL on the system classloader's classpath
ClassLoader cl = ClassLoader.getSystemClassLoader();
URL[] urls = ((URLClassLoader) cl).getURLs();
for (URL url : urls) {
    System.out.println(url.getFile());
}
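As an additional quick check (not part of the original answer), you can probe directly from the spark-shell prompt for the exact class named in the error; if the HBase jars are missing from the driver classpath, the probe fails. A minimal sketch:

// org.apache.hadoop.hbase.util.Bytes ships in hbase-common, so this throws
// ClassNotFoundException when the HBase jars are not on the driver classpath
Class.forName("org.apache.hadoop.hbase.util.Bytes")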

Hi, even I am facing the same issue. The above solution does not work. –


@RohanNayak: Ask a new question describing your environment and the issue; this question is already more than a year old. –


@RohanNayak: What is the output of this command? `hbase classpath`, with the backticks included as prefix and suffix –


I got it working. You have to use the following jars:

spark-shell --master yarn-client --executor-cores 10 --executor-memory 20G --num-executors 15 --driver-memory 2G --driver-class-path /usr/hdp/current/hbase-client/lib/hbase-common.jar:/usr/hdp/current/hbase-client/lib/hbase-client.jar:/usr/hdp/current/hbase-client/lib/hbase-server.jar:/usr/hdp/current/hbase-client/lib/hbase-protocol.jar:/usr/hdp/current/hbase-client/lib/guava-12.0.1.jar:/usr/hdp/current/hbase-client/lib/htrace-core-3.1.0-incubating.jar --jars /usr/hdp/current/hbase-client/lib/hbase-client.jar,/usr/hdp/current/hbase-client/lib/hbase-common.jar,/usr/hdp/current/hbase-client/lib/hbase-server.jar,/usr/hdp/current/hbase-client/lib/guava-12.0.1.jar,/usr/hdp/current/hbase-client/lib/hbase-protocol.jar,/usr/hdp/current/hbase-client/lib/htrace-core-3.1.0-incubating.jar,/usr/hdp/current/hive-client/lib/hive-hbase-handler.jar --files /etc/spark/conf/hbase-site.xml
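With those jars on the driver classpath and shipped via --jars (plus hbase-site.xml passed via --files), the original query from the question should run as-is. A minimal sketch, assuming the same test.sample table from the question:

// query the HBase-backed Hive table through the HiveContext, as in the question
val sqlContext = new org.apache.spark.sql.hive.HiveContext(sc)
sqlContext.sql("select count(*) from test.sample").collect().foreach(println)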