Selecting data into Hadoop with Hive

Date: 2013-06-10 08:09:41

Tags: hadoop hive hdfs

I created a table in Hive with the following command:

CREATE TABLE tweet_table(
    tweet STRING
)
ROW FORMAT
    DELIMITED
        FIELDS TERMINATED BY '\n'
        LINES TERMINATED BY '\n'

Then I load some data into it:

LOAD DATA LOCAL INPATH 'data.txt' INTO TABLE tweet_table

data.txt:

data1
data2
data3data4
data5

The command select * from tweet_table returns:

data1
data2
data3data4
data5

But select tweet from tweet_table gives me:

java.lang.RuntimeException: java.lang.ArrayIndexOutOfBoundsException: 0
    at org.apache.hadoop.hive.ql.exec.Utilities.getMapRedWork(Utilities.java:230)
    at org.apache.hadoop.hive.ql.io.HiveInputFormat.init(HiveInputFormat.java:255)
    at org.apache.hadoop.hive.ql.io.HiveInputFormat.pushProjectionsAndFilters(HiveInputFormat.java:381)
    at org.apache.hadoop.hive.ql.io.HiveInputFormat.pushProjectionsAndFilters(HiveInputFormat.java:374)
    at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getRecordReader(CombineHiveInputFormat.java:540)
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:338)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:307)
    at org.apache.hadoop.mapred.Child.main(Child.java:170)
Caused by: java.lang.ArrayIndexOutOfBoundsException: 0
    at java.beans.XMLDecoder.readObject(XMLDecoder.java:250)
    at org.apache.hadoop.hive.ql.exec.Utilities.deserializeMapRedWork(Utilities.java:542)
    at org.apache.hadoop.hive.ql.exec.Utilities.getMapRedWork(Utilities.java:222)
    ... 7 more


FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.MapRedTask
MapReduce Jobs Launched:
Job 0: Map: 1   HDFS Read: 0 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 msec

It looks like the data is stored in the table correctly, but not in the tweet field. Why?

1 Answer:

Answer 0 (score: 1):

Testing against Apache Hive 1.2.1, it appears this behavior no longer reproduces in exactly the same way. However, the original problem is most likely related to the CREATE TABLE statement using the same character, '\n', as both the field terminator and the line terminator:

CREATE TABLE tweet_table(
    tweet STRING
)
ROW FORMAT
    DELIMITED
        FIELDS TERMINATED BY '\n'
        LINES TERMINATED BY '\n'

This cannot produce predictable results, because you have effectively said that '\n' may indicate either the end of a field or the end of the whole row.

Here is what happens when I test against Apache Hive 1.2.1. The contents of data.txt are 3 rows of data, each containing 2 columns, with fields separated by a tab '\t' and rows terminated by '\n'.

key1    value1
key2    value2
key3    value3

First, let's test with both the field terminator and the line terminator set to '\n':

hive> CREATE TABLE data_table(
    >     key STRING,
    >     value STRING
    > )
    > ROW FORMAT
    >     DELIMITED
    >         FIELDS TERMINATED BY '\n'
    >         LINES TERMINATED BY '\n';
OK
Time taken: 2.322 seconds
hive> LOAD DATA LOCAL INPATH 'data.txt' INTO TABLE data_table;
Loading data to table default.data_table
Table default.data_table stats: [numFiles=1, totalSize=36]
OK
Time taken: 2.273 seconds
hive> SELECT * FROM data_table;
OK
key1    value1  NULL
key2    value2  NULL
key3    value3  NULL
Time taken: 1.387 seconds, Fetched: 3 row(s)
hive> SELECT key FROM data_table;
OK
key1    value1
key2    value2
key3    value3
Time taken: 1.254 seconds, Fetched: 3 row(s)
hive> SELECT value FROM data_table;
OK
NULL
NULL
NULL
Time taken: 1.384 seconds, Fetched: 3 row(s)

We can see that it interprets each entire "key\tvalue" string as the key from the table definition, and assumes nothing was specified for value. This is a valid interpretation, because the table definition says that fields are delimited by '\n', and there is no '\n' in the input until after both the key and the value.
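As an extra sanity check (not part of the original test output above), one could confirm that the entire "key\tvalue" string really did land in the key column, for example with Hive's built-in length() function:

hive> SELECT key, length(key) FROM data_table;

With the '\n' field terminator in effect, each key should come back as the full 11-character "key1\tvalue1" string rather than the 4-character "key1".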

Now let's repeat the same test with the field terminator set to '\t' and the line terminator set to '\n':

hive> CREATE TABLE data_table(
    >     key STRING,
    >     value STRING
    > )
    > ROW FORMAT
    >     DELIMITED
    >         FIELDS TERMINATED BY '\t'
    >         LINES TERMINATED BY '\n';
OK
Time taken: 2.247 seconds
hive> LOAD DATA LOCAL INPATH 'data.txt' INTO TABLE data_table;
Loading data to table default.data_table
Table default.data_table stats: [numFiles=1, totalSize=36]
OK
Time taken: 2.244 seconds
hive> SELECT * FROM data_table;
OK
key1    value1
key2    value2
key3    value3
Time taken: 1.308 seconds, Fetched: 3 row(s)
hive> SELECT key FROM data_table;
OK
key1
key2
key3
Time taken: 1.376 seconds, Fetched: 3 row(s)
hive> SELECT value FROM data_table;
OK
value1
value2
value3
Time taken: 1.281 seconds, Fetched: 3 row(s)

This time we see the expected results.
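Applying this back to the original tweet_table, which has only a single column, one way to avoid the conflict is to drop the FIELDS TERMINATED BY '\n' clause entirely (or to pick a delimiter that cannot occur in the data). A sketch of what that could look like, untested against the asker's original Hive version:

CREATE TABLE tweet_table(
    tweet STRING
)
ROW FORMAT
    DELIMITED
        LINES TERMINATED BY '\n';

LOAD DATA LOCAL INPATH 'data.txt' INTO TABLE tweet_table;

With this definition, each line of data.txt should load as one row in the tweet column, and SELECT tweet FROM tweet_table should return the same rows as SELECT *.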