
I am trying to create a Hive table using a shell script, where the table's columns come from a txt file.

My t_cols.txt file has the following data:

id string, name string, city string, lpd timestamp 

I want to create a Hive table whose columns come from this text file.

This is what my shell script looks like:

table_cols=`cat t_cols.txt` 
hive --hiveconf t_name=${table_cols} -e 'create table leap_frog_snapshot.LINKED_OBJ_TRACKING (\${hiveconf:t_name}) stored as orc tblproperties ("orc.compress"="SNAPPY");' 

For some reason this is not working.

I am getting the following error:

Logging initialized using configuration in file:/etc/hive/2.4.3.0-227/0/hive-log4j.properties 
NoViableAltException([email protected][]) 
     at org.apache.hadoop.hive.ql.parse.HiveParser.type(HiveParser.java:38618) 
     at org.apache.hadoop.hive.ql.parse.HiveParser.colType(HiveParser.java:38375) 
     at org.apache.hadoop.hive.ql.parse.HiveParser.columnNameType(HiveParser.java:38059) 
     at org.apache.hadoop.hive.ql.parse.HiveParser.columnNameTypeList(HiveParser.java:36183) 
     at org.apache.hadoop.hive.ql.parse.HiveParser.createTableStatement(HiveParser.java:5222) 
     at org.apache.hadoop.hive.ql.parse.HiveParser.ddlStatement(HiveParser.java:2648) 
     at org.apache.hadoop.hive.ql.parse.HiveParser.execStatement(HiveParser.java:1658) 
     at org.apache.hadoop.hive.ql.parse.HiveParser.statement(HiveParser.java:1117) 
     at org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:202) 
     at org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:166) 
     at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:432) 
     at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:316) 
     at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1202) 
     at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1250) 
     at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1139) 
     at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1129) 
     at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:216) 
     at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:168) 
     at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:379) 
     at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:314) 
     at org.apache.hadoop.hive.cli.CliDriver.processReader(CliDriver.java:412) 
     at org.apache.hadoop.hive.cli.CliDriver.processFile(CliDriver.java:428) 
     at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:717) 
     at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:684) 
     at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:624) 
     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
     at java.lang.reflect.Method.invoke(Method.java:498) 
     at org.apache.hadoop.util.RunJar.run(RunJar.java:221) 
     at org.apache.hadoop.util.RunJar.main(RunJar.java:136) 
FAILED: ParseException line 1:60 cannot recognize input near ')' 'stored' 'as' in column type 

Am I missing something? If this is not the right approach, what is the correct way to achieve this?

Answer


This happens because the variable is expanded without quotes, so the shell splits it at whitespace and only the first word of the file (everything before the first space) is passed to --hiveconf.
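To illustrate, here is a minimal sketch (using the same t_cols.txt from the question) that shows how the unquoted expansion breaks the column list into separate words while the quoted form keeps it intact:

#!/bin/bash
# Minimal sketch of the word-splitting problem (assumes t_cols.txt as in the question).
table_cols=$(cat t_cols.txt)   # id string, name string, city string, lpd timestamp

# Unquoted: the value is split at spaces into 8 separate words,
# so --hiveconf would only receive "t_name=id" and the rest become stray arguments.
set -- t_name=${table_cols}
echo "unquoted -> $# arguments, first: $1"

# Quoted: the whole column list stays in one argument.
set -- t_name="${table_cols}"
echo "quoted   -> $# arguments, first: $1"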

The following should work:

#!/bin/bash 

hive --hiveconf t_name="`cat t_cols.txt`" -e 'create table leap_frog_snapshot.LINKED_OBJ_TRACKING (${hiveconf:t_name}) stored as orc tblproperties ("orc.compress"="SNAPPY") ; ' 
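As a possible alternative (not part of the original answer, but using the same table and file names from the question): assemble the complete CREATE TABLE statement in the shell and pass it directly to hive -e, which avoids relying on --hiveconf substitution altogether.

#!/bin/bash
# Alternative sketch: build the full DDL string in the shell, then run it with hive -e.
# Assumption: t_cols.txt and the table name are the same as in the question.
table_cols=$(cat t_cols.txt)

ddl="create table leap_frog_snapshot.LINKED_OBJ_TRACKING (${table_cols})
     stored as orc tblproperties (\"orc.compress\"=\"SNAPPY\")"

hive -e "$ddl"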