Finding the sales breakdown by product category across all of our stores

Date: 2018-10-18 15:10:31

Tags: java hadoop mapreduce

I have a sales file containing information such as store name, location, sale price, and product name. The file format looks like this:

2012-01-01  09:00   San Jose    Men's Clothing  214.05  Amex
2012-01-01  09:00   Fort Worth  Women's Clothing    153.57  Visa
2012-01-01  09:00   San Diego   Music   66.08   Cash
2012-01-01  09:00   Pittsburgh  Pet Supplies    493.51  Discover
2012-01-01  09:00   Omaha   Children's Clothing 235.63  MasterCard
2012-01-01  09:00   Stockton    Men's Clothing  247.18  MasterCard  
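
One thing worth checking first is the field delimiter: the mapper below splits on "\t", which only works if the records really are tab-separated. A small standalone sketch (using a made-up sample line, not the real purchases.txt) of how the two cases behave:

```java
public class SplitDemo {
    public static void main(String[] args) {
        // A record as it would appear if fields are separated by single tabs
        String line = "2012-01-01\t09:00\tSan Jose\tMen's Clothing\t214.05\tAmex";

        String[] tabFields = line.trim().split("\t");
        System.out.println(tabFields.length);   // 6 -> passes the length-6 guard
        System.out.println(tabFields[3]);       // Men's Clothing
        System.out.println(tabFields[4]);       // 214.05

        // If the file were space-padded instead, splitting on "\t" would
        // return a single field, and a length-6 guard would silently drop
        // every line, producing an empty output.
        String spaced = "2012-01-01  09:00   San Jose    Men's Clothing  214.05  Amex";
        System.out.println(spaced.split("\t").length);   // 1
    }
}
```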

I want to write a MapReduce job to find the sales breakdown by product category across all of our stores. My code, including the mapper and reducer, is provided below:

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public final class P1Q1 {


    public static final class P1Q1Map extends Mapper<LongWritable, Text, Text, DoubleWritable> {

        private final Text word = new Text();

        @Override
        public final void map(final LongWritable key, final Text value, final Context context)
                throws IOException, InterruptedException {

            final String line = value.toString();
            final String[] data = line.trim().split("\t");

            if (data.length == 6) {

                final String product = data[3];
                final double sales = Double.parseDouble(data[4]);

                word.set(product);
                context.write(word, new DoubleWritable(sales));
            }
        }
    }


    public static final class P1Q1Reduce extends Reducer<Text, DoubleWritable, Text, DoubleWritable> {

        @Override
        public final void reduce(final Text key, final Iterable<DoubleWritable> values, final Context context)
                throws IOException, InterruptedException {

            double sum = 0.0;

            for (final DoubleWritable val : values) {
                sum += val.get();
            }

            context.write(key, new DoubleWritable(sum));
        }
    }


    public static void main(final String[] args) throws Exception {

        final Configuration conf = new Configuration();

        // new Job(conf, name) is deprecated; Job.getInstance is the current factory
        final Job job = Job.getInstance(conf, "P1Q1");
        job.setJarByClass(P1Q1.class);

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(DoubleWritable.class);

        job.setMapperClass(P1Q1Map.class);
        job.setCombinerClass(P1Q1Reduce.class);
        job.setReducerClass(P1Q1Reduce.class);

        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);

        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        job.waitForCompletion(true);
    }
}

The answers my code produces are incorrect and do not match the Udacity results.

Does anyone know whether this is the right approach, and if so, how to get it working?

Notes:

  1. The results in the output file are completely wrong:

    Baby    5.749180844000035E7
    Books   5.745075790999787E7
    CDs 5.741075304000156E7
    Cameras 5.7299046639999785E7
    Children's Clothing 5.762482094000117E7
    Computers   5.7315406319999576E7
    Consumer Electronics    5.745237412999948E7
    Crafts  5.7418154499999225E7
    DVDs    5.764921213999939E7
    Garden  5.7539833110000335E7
    Health and Beauty   5.748158956000019E7
    Men's Clothing  5.76212790400011E7
    Music   5.749548970000038E7
    Pet Supplies    5.71972502400004E7
    Sporting Goods  5.7599085889999546E7
    Toys    5.746347710999843E7
    Video Games 5.7513165580000155E7
    Women's Clothing    5.74344489699993E7
    
  2. I thought the combiner might be the culprit, so I commented it out, but that did not change the result either:

     // job.setCombinerClass(P1Q1Reduce.class);
    
  3. The full code is above, and the purchases.txt file is linked here. If anyone works through this and gets a solution accepted by Udacity, please let me know.

1 Answer:

Answer 0 (score: 1)

For the most part, I would say your code looks fine. A combiner is purely an optimization, so leaving it out should produce the same output as including it.
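
To see why a sum reducer is safe to reuse as a combiner, here is a small standalone sketch (plain Java, outside Hadoop, with made-up sale amounts): pre-summing any partition of the values and then summing the partial results gives the same grand total, because addition is associative and commutative (up to floating-point rounding):

```java
import java.util.Arrays;

public class CombinerCheck {
    // This job's reduce function is a plain sum of the sale amounts
    static double reduce(double[] values) {
        double sum = 0.0;
        for (double v : values) {
            sum += v;
        }
        return sum;
    }

    public static void main(String[] args) {
        // Hypothetical sale amounts for one product category
        double[] all = {214.05, 247.18, 66.08, 493.51};

        // No combiner: the reducer sees every value at once
        double direct = reduce(all);

        // With a combiner: each map task pre-sums its own partition,
        // and the reducer then sums the partial results
        double partial1 = reduce(Arrays.copyOfRange(all, 0, 2));
        double partial2 = reduce(Arrays.copyOfRange(all, 2, 4));
        double combined = reduce(new double[]{partial1, partial2});

        // The two totals agree up to floating-point rounding
        System.out.println(Math.abs(direct - combined) < 1e-9);   // true
    }
}
```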


I wrote my own MR job and, for the input you posted, got the following output:

Children's Clothing 235.63
Men's Clothing  461.23
Music   66.08
Pet Supplies    493.51
Women's Clothing    153.57

Obviously, if you have thousands of stores, you will end up with millions in currency units per category, as your output shows.

Code:

import java.io.IOException;
import java.text.DecimalFormat;
import java.util.Arrays;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class StoreSumRunner extends Configured implements Tool {

    private static final String APP_NAME = "StoreSum";

    public static void main(String[] args) throws Exception {
        System.exit(ToolRunner.run(new StoreSumRunner(), args));
    }

    @Override
    public int run(String[] args) throws Exception {
        Configuration conf = getConf();
        Job job = Job.getInstance(conf, APP_NAME);
        job.setJarByClass(StoreSumRunner.class);

        job.setMapperClass(TokenizerMapper.class);
        job.setReducerClass(CurrencyReducer.class);

        // Mapper and reducer emit different value types, so declare both:
        // the map output is a DoubleWritable, the final output a formatted Text.
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(DoubleWritable.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);

        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        return job.waitForCompletion(true) ? 0 : 1;
    }

    static class TokenizerMapper extends Mapper<LongWritable, Text, Text, DoubleWritable> {

        private final Text category = new Text();
        private final DoubleWritable sales = new DoubleWritable();

        @Override
        protected void map(LongWritable offset, Text value, Context context)
                throws IOException, InterruptedException {
            final String line = value.toString();
            // Split on runs of two or more whitespace characters, so that
            // multi-word fields such as "San Jose" stay in one column.
            final String[] data = line.trim().split("\\s\\s+");

            if (data.length < 6) {
                System.err.printf("mapper: not enough fields in %s%n", Arrays.toString(data));
                return;
            }

            category.set(data[3]);

            try {
                sales.set(Double.parseDouble(data[4]));
                context.write(category, sales);
            } catch (NumberFormatException ex) {
                System.err.printf("mapper: invalid value format %s%n", data[4]);
            }
        }
    }

    static class CurrencyReducer extends Reducer<Text, DoubleWritable, Text, Text> {

        private final Text output = new Text();
        private final DecimalFormat df = new DecimalFormat("#.00");

        @Override
        protected void reduce(Text category, Iterable<DoubleWritable> values, Context context)
                throws IOException, InterruptedException {
            double sum = 0;
            for (DoubleWritable value : values) {
                sum += value.get();
            }
            // Format the total as fixed-point text rather than emitting a raw double
            output.set(df.format(sum));
            context.write(category, output);
        }
    }
}
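
A side note on the scientific-notation output in the question: values like 5.749180844000035E7 are not garbage, they are just how Double.toString renders a total of roughly 57 million. The reducer above formats the sum with DecimalFormat("#.00") instead of emitting a DoubleWritable, so totals come out as plain fixed-point text. A standalone illustration (pinning the locale so the decimal separator is a dot):

```java
import java.text.DecimalFormat;
import java.text.DecimalFormatSymbols;
import java.util.Locale;

public class FormatDemo {
    public static void main(String[] args) {
        double sum = 5.749180844000035E7;

        // Double.toString switches to scientific notation for values >= 1e7
        System.out.println(Double.toString(sum));   // 5.749180844000035E7

        // DecimalFormat renders the same value as plain fixed-point text
        DecimalFormat df = new DecimalFormat("#.00",
                DecimalFormatSymbols.getInstance(Locale.US));
        System.out.println(df.format(sum));         // 57491808.44
    }
}
```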