Hive error when querying an external table fed by a Flume sink

Posted: 2015-10-06 11:09:34

Tags: hadoop twitter hive flume

On CDH 5.4, I am trying to build a Twitter-analytics demo using:

  1. Flume to capture tweets into an HDFS folder
  2. Hive with a JSON SerDe to query those tweets

Step 1 works: I can see the tweets being captured and written to the desired HDFS folder. I noticed that a temporary file is created first and then rolled into a permanent one:

    -rw-r--r--   3 root hadoop       7548 2015-10-06 06:39 /user/flume/tweets/FlumeData.1444127932782
    -rw-r--r--   3 root hadoop      10034 2015-10-06 06:39 /user/flume/tweets/FlumeData.1444127932783.tmp
    

I am using the following table declaration:

    CREATE EXTERNAL TABLE tweets(
        id bigint, 
        created_at string, 
        lang string, 
        source string, 
        favorited boolean, 
        retweet_count int, 
        retweeted_status 
        struct<text:string,user:struct<screen_name:string,name:string>>,
        entities struct<urls:array<struct<expanded_url:string>>,
        user_mentions:array<struct<screen_name:string,name:string>>,
        hashtags:array<struct<text:string>>>,
        text string,
        user struct<location:string,geo_enabled:string,screen_name:string,name:string,friends_count:int,followers_count:int,statuses_count:int,verified:boolean,utc_offset:int,time_zone:string>,
        in_reply_to_screen_name string)
    ROW FORMAT SERDE 'com.cloudera.hive.serde.JSONSerDe'
    STORED AS INPUTFORMAT 'org.apache.hadoop.mapred.TextInputFormat'
    OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
    LOCATION 'hdfs://master.ds.com:8020/user/flume/tweets';
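For reference, the SerDe expects one JSON object per line, with fields matching the columns above. The record below is a minimal, hand-written sketch of that shape (not actual Twitter API output, which carries many more fields; absent columns such as `retweeted_status` simply come back as NULL):

```python
import json

# Hand-written sketch of a tweet record matching the table's columns.
tweet = {
    "id": 651234567890123456,
    "created_at": "Tue Oct 06 06:39:00 +0000 2015",
    "lang": "en",
    "source": "web",
    "favorited": False,
    "retweet_count": 2,
    "entities": {
        "urls": [{"expanded_url": "http://example.com"}],
        "user_mentions": [{"screen_name": "someone", "name": "Some One"}],
        "hashtags": [{"text": "hadoop"}],
    },
    "text": "Testing #hadoop",
    "user": {
        "location": "",
        "geo_enabled": "false",
        "screen_name": "demo_user",
        "name": "Demo User",
        "friends_count": 10,
        "followers_count": 20,
        "statuses_count": 30,
        "verified": False,
        "utc_offset": 0,
        "time_zone": "UTC",
    },
    "in_reply_to_screen_name": None,
}

# Flume writes one such JSON object per line; the SerDe parses each line.
line = json.dumps(tweet)
print(json.loads(line)["user"]["screen_name"])
```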
    

But when I query this table, I get the following error:

    hive> select count(*) from tweets;
    
    Ended Job = job_1443526273848_0140 with errors
    ...
    Diagnostic Messages for this Task:
    Error: java.io.IOException: java.lang.reflect.InvocationTargetException
            at org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderCreation
            ... 11 more
    
    Caused by: java.io.FileNotFoundException: File does not exist: /user/flume/tweets/FlumeData.1444128601078.tmp
            at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:66)
            ...
    
    FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
    MapReduce Jobs Launched:
    
    Stage-Stage-1: Map: 2  Reduce: 1   Cumulative CPU: 1.19 sec   HDFS Read: 10492 HDFS Write: 0 FAIL
    

I think the problem is that the MapReduce job launched by the Hive query cannot read the temporary (.tmp) file. Is there a workaround or configuration change that handles this correctly?
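The suspected failure mode can be sketched as a race on the local filesystem (a toy simulation, not HDFS): the job computes its input splits from a directory listing, and Flume rolls (renames) the in-flight `.tmp` file before a mapper gets around to opening it.

```python
import os
import tempfile

d = tempfile.mkdtemp()
tmp_path = os.path.join(d, "FlumeData.1.tmp")
with open(tmp_path, "w") as f:
    f.write('{"text": "in-flight tweet"}\n')

# Split computation: the job snapshots the directory and sees the .tmp file.
snapshot = sorted(os.listdir(d))

# Flume rolls the file: the .tmp name the job recorded no longer exists.
os.rename(tmp_path, os.path.join(d, "FlumeData.1"))

# Mapper opens the stale path and fails, mirroring the HDFS
# FileNotFoundException in the stack trace above.
try:
    open(os.path.join(d, snapshot[0])).read()
except FileNotFoundError as e:
    print("mapper fails:", e.__class__.__name__)
```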

1 answer:

Answer 0 (score: 0)

I had the same problem and solved it by adding the HDFS sink settings below to my Flume configuration file:

    some_agent.hdfssink.hdfs.inUsePrefix = .
    some_agent.hdfssink.hdfs.inUseSuffix = .temp

Hope this helps.
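The reason a `.` prefix fixes the query: Hadoop MapReduce's default input-path filter treats files whose names start with `.` or `_` as hidden and excludes them from the input splits, so in-flight Flume files are never handed to a mapper and cannot disappear mid-job. A sketch of that filter logic:

```python
# Mirrors the behavior of FileInputFormat's default hidden-file filter:
# names beginning with "." or "_" are skipped when listing job inputs.
def is_visible(name: str) -> bool:
    return not (name.startswith(".") or name.startswith("_"))

print(is_visible("FlumeData.1444127932782"))        # rolled file: included
print(is_visible(".FlumeData.1444127932783.temp"))  # in-flight file: skipped
```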