I have integrated Elasticsearch and Hadoop using elasticsearch-hadoop-2.3.2.jar, and I query my Hive tables through Beeline. Creating and inserting into normal tables works fine. However, when I create an external table backed by Elasticsearch, as below,
CREATE EXTERNAL TABLE flogs2 (
name STRING,
city STRING,
status STRING)
STORED BY 'org.elasticsearch.hadoop.hive.EsStorageHandler'
TBLPROPERTIES('es.nodes' = '192.168.18.79','es.port' = '9200','es.index.auto.create' = 'true', 'es.resource' = 'mylog/log', 'es.query' = '?q=*');
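For reference, the elasticsearch-hadoop connector is typically made visible to Hive either via hive.aux.jars.path or by registering the jar in the session before running the DDL; a minimal sketch, assuming a hypothetical jar location:

-- register the connector jar for this session (path is hypothetical)
ADD JAR /path/to/elasticsearch-hadoop-2.3.2.jar;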
the table is created successfully. But when I insert data into it as below,
INSERT OVERWRITE TABLE flogs2 SELECT s.name,s.city,s.status FROM logs s;
the job gets stuck at the following lines:
0: jdbc:hive2://192.168.18.197:10000> INSERT OVERWRITE TABLE flogs2 SELECT s.name,s.city,s.status FROM logs s;
INFO : Number of reduce tasks is set to 0 since there's no reduce operator
INFO : number of splits:1
INFO : Submitting tokens for job: job_1464067651503_0014
INFO : The url to track the job: http://vasanthakumar-virtual-machine:8088/proxy/application_1464067651503_0014/
INFO : Starting Job = job_1464067651503_0014, Tracking URL = http://vasanthakumar-virtual-machine:8088/proxy/application_1464067651503_0014/
INFO : Kill Command = /home/vasanthakumar/Desktop/software/hadoop-2.7.1/bin/hadoop job -kill job_1464067651503_0014
INFO : Hadoop job information for Stage-0: number of mappers: 1; number of reducers: 0
INFO : 2016-05-24 16:26:51,866 Stage-0 map = 0%, reduce = 0%
INFO : 2016-05-24 16:27:52,372 Stage-0 map = 0%, reduce = 0%, Cumulative CPU 1.48 sec
INFO : 2016-05-24 16:28:52,498 Stage-0 map = 0%, reduce = 0%, Cumulative CPU 1.48 sec
INFO : 2016-05-24 16:29:52,562 Stage-0 map = 0%, reduce = 0%, Cumulative CPU 1.48 sec
INFO : 2016-05-24 16:30:52,884 Stage-0 map = 0%, reduce = 0%, Cumulative CPU 1.48 sec
INFO : 2016-05-24 16:31:53,103 Stage-0 map = 0%, reduce = 0%, Cumulative CPU 1.48 sec
Note:
1. I have tried both the Hive CLI and Beeline.
2. I increased the available memory.
3. Normal Hive queries work fine.
Please help me get past this.