
Unable to start Hive CLI on Hadoop (MapR)

I am trying to use the Hive CLI, but it fails to start with the AccessControl problem below. Strangely enough, I am able to query Hive data from Hue without any AccessControl issue; it is only the Hive CLI that does not work. I am on a MapR cluster.

Any help is much appreciated.

[<user_name>@<edge_node> ~]$ hive 
SLF4J: Class path contains multiple SLF4J bindings. 
SLF4J: Found binding in [jar:file:/opt/mapr/hive/hive-2.1/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class] 
SLF4J: Found binding in [jar:file:/opt/mapr/lib/slf4j-log4j12-1.7.12.jar!/org/slf4j/impl/StaticLoggerBinder.class] 
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation. 
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory] 
Logging initialized using configuration in file:/opt/mapr/hive/hive-2.1/conf/hive-log4j2.properties Async: true 
2017-09-23 23:52:08,988 WARN [main] DataNucleus.General: Plugin (Bundle) "org.datanucleus.api.jdo" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/opt/mapr/spark/spark-2.1.0/jars/datanucleus-api-jdo-4.2.4.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/opt/mapr/hive/hive-2.1/lib/datanucleus-api-jdo-4.2.1.jar." 
2017-09-23 23:52:08,993 WARN [main] DataNucleus.General: Plugin (Bundle) "org.datanucleus" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/opt/mapr/spark/spark-2.1.0/jars/datanucleus-core-4.1.6.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/opt/mapr/hive/hive-2.1/lib/datanucleus-core-4.1.6.jar." 
2017-09-23 23:52:09,004 WARN [main] DataNucleus.General: Plugin (Bundle) "org.datanucleus.store.rdbms" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/opt/mapr/spark/spark-2.1.0/jars/datanucleus-rdbms-4.1.19.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/opt/mapr/hive/hive-2.1/lib/datanucleus-rdbms-4.1.7.jar." 
2017-09-23 23:52:09,038 INFO [main] DataNucleus.Persistence: Property datanucleus.cache.level2 unknown - will be ignored 
2017-09-23 23:52:09,039 INFO [main] DataNucleus.Persistence: Property hive.metastore.integral.jdo.pushdown unknown - will be ignored 
2017-09-23 23:52:14,2251 ERROR JniCommon fs/client/fileclient/cc/jni_MapRClient.cc:2172 Thread: 20235 mkdirs failed for /user/<user_name>, error 13 
Exception in thread "main" java.lang.RuntimeException: org.apache.hadoop.security.AccessControlException: User <user_name>(user id 50005586) has been denied access to create <user_name> 
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:617) 
at org.apache.hadoop.hive.ql.session.SessionState.beginStart(SessionState.java:531) 
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:714) 
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:646) 
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:641) 
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) 
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
at java.lang.reflect.Method.invoke(Method.java:606) 
at org.apache.hadoop.util.RunJar.run(RunJar.java:221) 
at org.apache.hadoop.util.RunJar.main(RunJar.java:136) 
Caused by: org.apache.hadoop.security.AccessControlException: User <user_name>(user id 50005586) has been denied access to create <user_name> 
at com.mapr.fs.MapRFileSystem.makeDir(MapRFileSystem.java:1256) 
at com.mapr.fs.MapRFileSystem.mkdirs(MapRFileSystem.java:1276) 
at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1913) 
at org.apache.hadoop.hive.ql.exec.tez.DagUtils.getDefaultDestDir(DagUtils.java:823) 
at org.apache.hadoop.hive.ql.exec.tez.DagUtils.getHiveJarDirectory(DagUtils.java:917) 
at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.createJarLocalResource(TezSessionState.java:616) 
at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.openInternal(TezSessionState.java:256) 
at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.beginOpen(TezSessionState.java:220) 
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:614) 
... 10 more 

Answer


The error is telling you that you lack permission to create a directory in the filesystem. The directory is most likely /user/<user_name>, which needs to be created for you by the HDFS/MapR FS superuser.
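As a sketch, the superuser would create the home directory and hand ownership to the user. The user and group names below are placeholders, and the superuser account name (often `mapr` on a MapR cluster) depends on your installation:

```shell
# Run these as the MapR/HDFS superuser (e.g. the 'mapr' account).
# Create the user's home directory, then transfer ownership so the
# Hive CLI can write its scratch/jar directories under it.
hadoop fs -mkdir -p /user/<user_name>
hadoop fs -chown <user_name>:<user_group> /user/<user_name>
```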

"I am able to query Hive data from Hue without AccessControl issues"

Hue communicates with HiveServer2 over Thrift.

The Hive CLI bypasses HiveServer2 and is deprecated.

You should use Beeline instead.

beeline -n $(whoami) -u jdbc:hive2://hiveserver:10000/default 

If you are on a Kerberized cluster, then you will need some extra options.
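A minimal sketch of a Kerberized connection, assuming HiveServer2 runs under the service principal `hive/_HOST@<REALM>` (the realm and user name are placeholders that must match your cluster's configuration):

```shell
# Obtain a Kerberos ticket first, then pass the HiveServer2 service
# principal in the JDBC URL instead of using -n for the user name.
kinit <user_name>@<REALM>
beeline -u "jdbc:hive2://hiveserver:10000/default;principal=hive/_HOST@<REALM>"
```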