Uploading .csv data with quote-enclosed fields into Hive

Date: 2014-07-23 09:52:21

Tags: csv hadoop hive

My .csv file has every field enclosed in double quotes.

    "13","9827259163","0","D","2"
    "13","9827961481","0","D","2"
    "13","9827202228","0","A","2"
    "13","9827529897","0","A","2"
    "13","9827700249","0","A","2"
    "12","9883219029","0","A","2"
    "17","9861065312","0","A","2"
    "17","9861220761","0","D","2"
    "13","9827438384","0","A","2"
    "13","9827336733","0","D","2"
    "13","9827380905","0","D","2"
    "13","9827115358","0","D","2"
    "17","9861475884","0","D","2"
    "17","9861511646","0","D","2"
    "17","9861310397","0","D","2"
    "13","9827035035","0","A","2"
    "13","9827304969","0","D","2"
    "13","9827355786","0","A","2"
    "13","9827702373","0","A","2"

As in MySQL, I tried using an "ENCLOSED BY" clause, as follows:

CREATE EXTERNAL TABLE dnd (ServiceAreaCode varchar(50), PhoneNumber varchar(15), Preferences varchar(15), Opstype varchar(15), PhoneType varchar(10))
ROW FORMAT DELIMITED
        FIELDS TERMINATED BY ',' ENCLOSED BY '"'
        LINES TERMINATED BY '\n'
LOCATION '/dnd';

However, it gives the following error:

NoViableAltException(26@[1704:103: ( tableRowFormatMapKeysIdentifier )?])
    at org.antlr.runtime.DFA.noViableAlt(DFA.java:158)
    at org.antlr.runtime.DFA.predict(DFA.java:144)
    at org.apache.hadoop.hive.ql.parse.HiveParser.rowFormatDelimited(HiveParser.java:30427)
    at org.apache.hadoop.hive.ql.parse.HiveParser.tableRowFormat(HiveParser.java:30662)
    at org.apache.hadoop.hive.ql.parse.HiveParser.createTableStatement(HiveParser.java:4683)
    at org.apache.hadoop.hive.ql.parse.HiveParser.ddlStatement(HiveParser.java:2144)
    at org.apache.hadoop.hive.ql.parse.HiveParser.execStatement(HiveParser.java:1398)
    at org.apache.hadoop.hive.ql.parse.HiveParser.statement(HiveParser.java:1036)
    at org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:199)
    at org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:166)
    at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:404)
    at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:322)
    at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:975)
    at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1040)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:911)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:901)
    at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:268)
    at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:220)
    at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:423)
    at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:359)
    at org.apache.hadoop.hive.cli.CliDriver.processReader(CliDriver.java:456)
    at org.apache.hadoop.hive.cli.CliDriver.processFile(CliDriver.java:466)
    at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:748)
    at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:686)
    at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:625)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
FAILED: ParseException line 5:33 cannot recognize input near 'ENCLOSED' 'BY' ''"'' in serde properties specification

Is there a way to import this file directly? Thanks in advance.

3 Answers:

Answer 0 (score: 1)

Take another route: the solution is a SerDe. Download the SerDe jar from this link: https://github.com/downloads/IllyaYalovyy/csv-serde/csv-serde-0.9.1.jar

Then execute the following steps from the Hive prompt:

add jar path/to/csv-serde.jar;

create table dnd (ServiceAreaCode varchar(50), PhoneNumber varchar(15), Preferences varchar(15), Opstype varchar(15), PhoneType varchar(10))
row format serde 'com.bizo.hive.serde.csv.CSVSerde'
with serdeproperties(
"separatorChar" = "\,",
"quoteChar" = "\"")
stored as textfile
;

Then load the data from the path you specify with the following query:

load data local inpath 'path/xyz.csv' into table dnd;

Then run:

select * from dnd;
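One caveat worth noting (not stated in the answer above): this CSVSerde exposes every column as a string, regardless of the types declared in the DDL, so numeric comparisons need an explicit cast. A minimal sketch against the dnd table defined above:

-- CSVSerde returns all columns as strings, so cast before
-- comparing numerically (a sketch, assuming the dnd table above).
select PhoneNumber, Opstype
from dnd
where cast(ServiceAreaCode as int) = 13;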

Answer 1 (score: 1)


Hey, this is how I got quoted CSV data into a Hive table: first download a CSV SerDe (I downloaded csv-serde-1.1.2.jar), then:

hive>add jar /opt/hive-1.1.1/lib/csv-serde-1.1.2.jar;
hive>create table t1(schema) row format serde 'com.bizo.hive.serde.csv.CSVSerde' with serdeproperties ("separatorChar" = ",") LOCATION '/user/hive/warehouse/dwb/ot1/';

Then we have to register the SerDe jar in hive-site.xml, as shown below, so that we can query the table from the Hive shell:

<property>
  <name>hive.aux.jars.path</name>
  <value>hdfs://master-ip:54310/hive-serde/csv-serde-1.1.2.jar</value>
</property>

Answer 2 (score: 0)

In Hive, we can use a jar file (a SerDe) to read data whose fields are enclosed in double quotes.

For your problem, see this link:

http://stackoverflow.com/questions/21156071/why-dont-hive-have-fields-enclosed-by-like-in-mysql
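A final note, not from the original answers: Hive 0.14 and later bundle org.apache.hadoop.hive.serde2.OpenCSVSerde, so no external jar is needed for quoted CSV. A minimal sketch, assuming Hive 0.14+ (OpenCSVSerde also returns every column as a string):

CREATE EXTERNAL TABLE dnd (
  ServiceAreaCode string,
  PhoneNumber string,
  Preferences string,
  Opstype string,
  PhoneType string)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
WITH SERDEPROPERTIES ("separatorChar" = ",", "quoteChar" = "\"")
LOCATION '/dnd';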