Reading a Hive table from MapReduce

Date: 2013-04-24 12:01:20

Tags: hadoop mapreduce hive

I am currently using a MapReduce program to find the differences between two Hive tables. My Hive tables are partitioned on one or more columns, so the folder names contain the values of the partition columns.

Is there a way to read a partitioned Hive table?

Can it be read in the mapper?

2 Answers:

Answer 0 (score: 3)

Since, by default, the underlying HDFS data of a partitioned Hive table is organized as

 table/root/folder/x=1/y=1
 table/root/folder/x=1/y=2
 table/root/folder/x=2/y=1
 table/root/folder/x=2/y=2 ...,

you can build each input path in the driver and add them through multiple calls to FileInputFormat.addInputPath(job, path), one call per folder path.

Sample code is pasted below. Note how the paths are added for MyMapper.class. In this example I am using the MultipleInputs API. The table is partitioned by 'part' and 'xdate'.

import java.text.SimpleDateFormat;
import java.util.ArrayList;
import java.util.Calendar;
import java.util.Date;
import java.util.GregorianCalendar;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.InputFormat;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.MultipleInputs;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class MyDriver extends Configured implements Tool {
    public int run(String[] args) throws Exception {
        Configuration conf = getConf();
        conf.set("mapred.compress.map.output", "true");
        conf.set("mapred.output.compression.type", "BLOCK"); 

        Job job = new Job(conf);
        //set up various job parameters
        job.setJarByClass(MyDriver.class);
        job.setJobName(conf.get("job.name"));
        MultipleInputs.addInputPath(job, new Path(conf.get("root.folder")+"/xdate="+conf.get("start.date")), TextInputFormat.class, OneMapper.class);
        for (Path path : getPathList(job,conf)) {
            System.out.println("path: "+path.toString());
            MultipleInputs.addInputPath(job, path, Class.forName(conf.get("input.format")).asSubclass(FileInputFormat.class).asSubclass(InputFormat.class), MyMapper.class);
        }
        ...
        ...
        return job.waitForCompletion(true) ? 0 : -2;

    }

    private static ArrayList<Path> getPathList(Job job, Configuration conf) {
        String rootdir = conf.get("input.path.rootfolder");
        String partlist = conf.get("part.list");
        String startdate_s = conf.get("start.date");
        String enxdate_s = conf.get("end.date");
        ArrayList<Path> pathlist = new ArrayList<Path>();
        String[] partlist_split = partlist.split(",");
        SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd");
        Date startdate_d = null;
        Date enxdate_d = null;
        Path path = null;
        try {
            startdate_d = sdf.parse(startdate_s);
            enxdate_d = sdf.parse(enxdate_s);
            GregorianCalendar gcal = new GregorianCalendar();
            gcal.setTime(startdate_d);
            Date d = null;
            for (String part : partlist_split) {
                gcal.setTime(startdate_d);
                do {
                    d = gcal.getTime();
                    FileSystem fs = FileSystem.get(conf);
                    path = new Path(rootdir + "/part=" + part + "/xdate="
                            + sdf.format(d));
                    if (fs.exists(path)) {
                        pathlist.add(path);
                    }
                    gcal.add(Calendar.DAY_OF_YEAR, 1);
                } while (d.before(enxdate_d));
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
        return pathlist;
    }

    public static void main(String[] args) throws Exception {
        int res = ToolRunner.run(new Configuration(), new MyDriver(), args);
        System.exit(res);
    }
}
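To see just the date-range enumeration in isolation, here is a standalone sketch of the getPathList() loop without the HDFS fs.exists() check, so it runs with no Hadoop cluster. The root directory, part values, and dates below are illustrative, not from the original post.

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.ArrayList;
import java.util.Calendar;
import java.util.Date;
import java.util.GregorianCalendar;
import java.util.List;

public class PathEnumDemo {
    // Enumerates candidate partition paths for each part value and each day
    // in [startDate, endDate], mirroring the do/while loop in getPathList().
    public static List<String> candidatePaths(String rootdir, String[] parts,
            String startDate, String endDate) throws ParseException {
        SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd");
        Date start = sdf.parse(startDate);
        Date end = sdf.parse(endDate);
        List<String> paths = new ArrayList<String>();
        GregorianCalendar gcal = new GregorianCalendar();
        for (String part : parts) {
            gcal.setTime(start);         // restart the date walk for each part value
            Date d;
            do {
                d = gcal.getTime();
                paths.add(rootdir + "/part=" + part + "/xdate=" + sdf.format(d));
                gcal.add(Calendar.DAY_OF_YEAR, 1);
            } while (d.before(end));     // end date is inclusive, as in the original
        }
        return paths;
    }

    public static void main(String[] args) throws ParseException {
        for (String p : candidatePaths("/warehouse/t", new String[] { "A", "B" },
                "2013-04-23", "2013-04-24")) {
            System.out.println(p);
        }
    }
}
```

In the real driver each generated path would still be checked with fs.exists() before being added, since a partition folder may not exist for every (part, date) combination.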

Answer 1 (score: 0)

Yes, this can easily be read in the Mapper. This answer is based on the idea mentioned by @Daniel Koverman.

With the Context object that is passed to Mapper.map(), you can get the file split path this way:

// this gives you the path plus offsets hdfs://.../tablename/partition1=20/partition2=ABC/000001_0:0+12345678
ctx.getInputSplit().toString();

// or this gets you the path only
((FileSplit)ctx.getInputSplit()).getPath();

Here is a more complete solution that parses out the actual partition values:

import java.io.IOException;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

import org.apache.hadoop.mapreduce.Mapper;

class MyMapper extends Mapper<KEYIN, VALUEIN, KEYOUT, VALUEOUT> {

    // regex to parse out the /partitionName=partitionValue/ pairs
    private static Pattern partitionRegex = Pattern.compile("(?<=/)(?<name>[_\\-\\w]+)=(?<value>[^/]*)(?=/)");

    public static String parsePartitionValue(String path, String partitionName) throws IllegalArgumentException{
        Matcher m = partitionRegex.matcher(path);
        while(m.find()){
            if(m.group("name").equals(partitionName)){
                return m.group("value");
            }
        }
        throw new IllegalArgumentException(String.format("Partition [%s] not found", partitionName));
    }

    @Override
    public void map(KEYIN key, VALUEIN v, Context ctx) throws IOException, InterruptedException {
        String partitionVal = parsePartitionValue(ctx.getInputSplit().toString(), "my_partition_col");
    }
}
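The regex-based parsing above can be exercised on its own, without Hadoop, by feeding it a string shaped like InputSplit.toString(). The sample path below is hypothetical; the regex is the same one used in MyMapper.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class PartitionParseDemo {
    // Same regex as in MyMapper: captures each /name=value/ pair in the path.
    private static final Pattern PARTITION_REGEX =
            Pattern.compile("(?<=/)(?<name>[_\\-\\w]+)=(?<value>[^/]*)(?=/)");

    public static String parsePartitionValue(String path, String partitionName) {
        Matcher m = PARTITION_REGEX.matcher(path);
        while (m.find()) {
            if (m.group("name").equals(partitionName)) {
                return m.group("value");
            }
        }
        throw new IllegalArgumentException(
                String.format("Partition [%s] not found", partitionName));
    }

    public static void main(String[] args) {
        // Hypothetical split string, as returned by InputSplit.toString()
        String split = "hdfs://nn/warehouse/tablename/part=A/xdate=2013-04-24/000001_0:0+12345678";
        System.out.println(parsePartitionValue(split, "part"));   // A
        System.out.println(parsePartitionValue(split, "xdate"));  // 2013-04-24
    }
}
```

Note that the trailing `(?=/)` lookahead means a partition directory must be followed by another path segment (here the data file), which is always the case for a split path.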