I am trying to create a table in Hive from a txt file using a shell script in the format below.
My t_cols.txt contains the following data:
id string, name string, city string, lpd timestamp
I want to create a Hive table whose columns come from this text file.
Here is what my shell script looks like:
table_cols=`cat t_cols.txt`
hive --hiveconf t_name=${table_cols} -e 'create table leap_frog_snapshot.LINKED_OBJ_TRACKING (\${hiveconf:t_name}) stored as orc tblproperties ("orc.compress"="SNAPPY");'
This is not working, and I get the following error:
Logging initialized using configuration in file:/etc/hive/2.4.3.0-227/0/hive-log4j.properties
NoViableAltException(307@[])
at org.apache.hadoop.hive.ql.parse.HiveParser.type(HiveParser.java:38618)
at org.apache.hadoop.hive.ql.parse.HiveParser.colType(HiveParser.java:38375)
at org.apache.hadoop.hive.ql.parse.HiveParser.columnNameType(HiveParser.java:38059)
at org.apache.hadoop.hive.ql.parse.HiveParser.columnNameTypeList(HiveParser.java:36183)
at org.apache.hadoop.hive.ql.parse.HiveParser.createTableStatement(HiveParser.java:5222)
at org.apache.hadoop.hive.ql.parse.HiveParser.ddlStatement(HiveParser.java:2648)
at org.apache.hadoop.hive.ql.parse.HiveParser.execStatement(HiveParser.java:1658)
at org.apache.hadoop.hive.ql.parse.HiveParser.statement(HiveParser.java:1117)
at org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:202)
at org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:166)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:432)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:316)
at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1202)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1250)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1139)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1129)
at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:216)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:168)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:379)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:314)
at org.apache.hadoop.hive.cli.CliDriver.processReader(CliDriver.java:412)
at org.apache.hadoop.hive.cli.CliDriver.processFile(CliDriver.java:428)
at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:717)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:684)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:624)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
FAILED: ParseException line 1:60 cannot recognize input near ')' 'stored' 'as' in column type
Am I missing something? If this is not the right approach, what is the correct way to achieve this?
Answer 0 (score: 0)
This happens because the variable expansion is unquoted, so only the first word of the file's contents (everything up to the first space) reaches --hiveconf.
The following should work:
#!/bin/bash
hive --hiveconf t_name="`cat t_cols.txt`" -e 'create table leap_frog_snapshot.LINKED_OBJ_TRACKING (${hiveconf:t_name}) stored as orc tblproperties ("orc.compress"="SNAPPY") ; '
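For readability you can also split this over several lines. This is just a minimal sketch of the same approach (the table name, database, and t_cols.txt come from the question; the final describe statement is only an optional check):
#!/bin/bash
# Quoting the command substitution keeps the whole column list as one argument.
table_cols="$(cat t_cols.txt)"

# Single quotes stop the shell from expanding ${hiveconf:t_name},
# so Hive performs the substitution itself.
hive --hiveconf t_name="${table_cols}" -e '
  create table leap_frog_snapshot.LINKED_OBJ_TRACKING (${hiveconf:t_name})
  stored as orc
  tblproperties ("orc.compress"="SNAPPY");
'

# Optionally verify that the columns were picked up from the file.
hive -e 'describe leap_frog_snapshot.LINKED_OBJ_TRACKING;'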