Changing the default delimiter in a Hadoop map

Date: 2012-10-26 05:03:12

Tags: hadoop mapreduce

I am completely new to this Hadoop MapReduce concept.

My file is delimited by ':::' rather than by a space (" ").

Does Hadoop's map use whitespace as the delimiter by default? If so, how do I change it to accept a user-defined delimiter?

Thanks


Thanks to Praveen, 100gods, and Eric for guiding me. Another problem has come up, so I will post my code and the error below.

I think I may be doing something wrong, so please clear up my confusion.

Thanks again


package com;

import java.io.IOException;
import java.util.Iterator;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.KeyValueTextInputFormat;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.mapred.TextInputFormat;
import org.apache.hadoop.mapred.TextOutputFormat;

public class WordCount {

    public static class Map extends MapReduceBase implements
            Mapper<LongWritable, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(LongWritable key, Text value,
                OutputCollector<Text, IntWritable> output, Reporter reporter)
                throws IOException {
            String line = value.toString();

            StringTokenizer tokenizer = new StringTokenizer(line);
            while (tokenizer.hasMoreTokens()) {
                word.set(tokenizer.nextToken());
                output.collect(word, one);
            }
        }
    }

    public static class Reduce extends MapReduceBase implements
            Reducer<Text, IntWritable, Text, IntWritable> {
        public void reduce(Text key, Iterator<IntWritable> values,
                OutputCollector<Text, IntWritable> output, Reporter reporter)
                throws IOException {
            int sum = 0;
            while (values.hasNext()) {
                sum += values.next().get();
            }
            output.collect(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        /*
         * JobConf conf = new JobConf(WordCount.class);
         * conf.setJobName("wordcount");
         */
        Configuration configuration = new Configuration();
        JobConf conf = new JobConf(configuration);

        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(IntWritable.class);

        conf.setMapperClass(Map.class);
        conf.setCombinerClass(Reduce.class);
        conf.setReducerClass(Reduce.class);

        conf.setInputFormat(KeyValueTextInputFormat.class);
        conf.set("key.value.separator.in.input.line", ":::");

        // conf.setInputFormat(TextInputFormat.class);
        conf.setOutputFormat(TextOutputFormat.class);
        // conf.set("mapreduce.input.keyvaluelinerecordreader.key.value.separator",
        // ":::");

        FileInputFormat.setInputPaths(conf, "/home/vishal/note.txt");
        FileOutputFormat.setOutputPath(conf, new Path("/home/vishal/output"));

        JobClient.runJob(conf);
    }
}

Here is the error:

12/10/30 12:23:04 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
12/10/30 12:23:04 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
12/10/30 12:23:04 WARN mapred.JobClient: No job jar file set.  User classes may not be found. See JobConf(Class) or JobConf#setJar(String).
12/10/30 12:23:04 WARN snappy.LoadSnappy: Snappy native library not loaded
12/10/30 12:23:04 INFO mapred.FileInputFormat: Total input paths to process : 1
12/10/30 12:23:04 INFO mapred.JobClient: Running job: job_local_0001
12/10/30 12:23:04 INFO util.ProcessTree: setsid exited with exit code 0
12/10/30 12:23:04 INFO mapred.Task:  Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@193722c
12/10/30 12:23:04 INFO mapred.MapTask: numReduceTasks: 1
12/10/30 12:23:04 INFO mapred.MapTask: io.sort.mb = 100
12/10/30 12:23:05 INFO mapred.MapTask: data buffer = 79691776/99614720
12/10/30 12:23:05 INFO mapred.MapTask: record buffer = 262144/327680
12/10/30 12:23:05 WARN mapred.LocalJobRunner: job_local_0001


java.lang.ClassCastException: org.apache.hadoop.io.Text cannot be cast to org.apache.hadoop.io.LongWritable
    at com.WordCount$Map.map(WordCount.java:1)
    at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50)
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:436)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:372)
    at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:212)
12/10/30 12:23:05 INFO mapred.JobClient:  map 0% reduce 0%
12/10/30 12:23:05 INFO mapred.JobClient: Job complete: job_local_0001
12/10/30 12:23:05 INFO mapred.JobClient: Counters: 0
12/10/30 12:23:05 INFO mapred.JobClient: Job Failed: NA
Exception in thread "main" java.io.IOException: Job failed!
    at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1327)
    at com.WordCount.main(WordCount.java:84)

5 answers:

Answer 0 (score: 1)

A user-defined map function in Hadoop takes a key and a value as input. With FileInputFormat, the key is the line's byte offset within the file (usually ignored) and the value is one line of the input file. The mapper can split the input line (that is, the value) on any delimiter itself, as sketched below. Otherwise, KeyValueTextInputFormat can be used, as mentioned in the other answers.
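
For illustration, a minimal sketch of that first approach: splitting inside the mapper while keeping the question's old API and default input format. The class name DelimMap and the hard-coded ':::' are assumptions for the example, not the answerer's code:

    public static class DelimMap extends MapReduceBase implements
            Mapper<LongWritable, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private final Text word = new Text();

        public void map(LongWritable key, Text value,
                OutputCollector<Text, IntWritable> output, Reporter reporter)
                throws IOException {
            // Split the raw line on the custom ':::' delimiter instead of whitespace.
            for (String token : value.toString().split(":::")) {
                if (!token.isEmpty()) {
                    word.set(token);
                    output.collect(word, one);
                }
            }
        }
    }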

Answer 1 (score: 0)

Use KeyValueTextInputFormat, since it lets you choose the delimiter. You can then set the separator via the Configuration object:

conf.set("mapreduce.input.keyvaluelinerecordreader.key.value.separator", ":::");
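
Note that this property name belongs to the new API's KeyValueLineRecordReader; with the old org.apache.hadoop.mapred API used in the question's code, the equivalent property is key.value.separator.in.input.line (exactly what the question already sets).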

Answer 2 (score: 0)

You are using KeyValueTextInputFormat and have specified a delimiter, so the key => value pairs sent to the map task are both of type Text.

Try changing your Mapper class definition to:

public static class Map extends MapReduceBase implements
        Mapper<Text, Text, Text, IntWritable> {

           .....

    public void map(Text key, Text value,
            OutputCollector<Text, IntWritable> output, Reporter reporter)
            throws IOException {
           .....
    }
}

Note: you are using the old API. Try switching to the new one; it simplifies a lot of things (for example, OutputCollector and Reporter have been merged into Context, as sketched below).
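
To illustrate that point, here is a minimal sketch (an assumption-laden example, not the answerer's code) of the question's reducer rewritten against the new org.apache.hadoop.mapreduce API:

    public static class Reduce extends
            org.apache.hadoop.mapreduce.Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            // Context replaces both OutputCollector and Reporter here.
            for (IntWritable value : values) {
                sum += value.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }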

Answer 3 (score: 0)

When you use KeyValueTextInputFormat.class, the key should be Text rather than LongWritable:

public static class Map extends MapReduceBase implements
            Mapper<Text, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(Text key, Text value,
                OutputCollector<Text, IntWritable> output, Reporter reporter)
                throws IOException {

With the new API:

public class Map extends Mapper<Text, Text, Text, IntWritable> {
    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    public void map(Text key, Text value, Context context)
            throws IOException, InterruptedException {
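
For completeness, a minimal sketch of a matching new-API driver, assuming a Hadoop release whose new API includes org.apache.hadoop.mapreduce.lib.input.KeyValueTextInputFormat (see the caveat in the next answer). The paths and job name are carried over from the question:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.input.KeyValueTextInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCountDriver {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // New-API name of the separator property.
            conf.set("mapreduce.input.keyvaluelinerecordreader.key.value.separator", ":::");

            Job job = new Job(conf, "wordcount");
            job.setJarByClass(WordCountDriver.class);
            job.setMapperClass(Map.class);      // the new-API Mapper shown above
            job.setReducerClass(Reduce.class);  // plus a matching new-API Reducer
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            job.setInputFormatClass(KeyValueTextInputFormat.class);

            FileInputFormat.setInputPaths(job, new Path("/home/vishal/note.txt"));
            FileOutputFormat.setOutputPath(job, new Path("/home/vishal/output"));

            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }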

Answer 4 (score: -2)

You have to use the old Hadoop API to be able to use KeyValueTextInputFormat. The following code does the job:

Configuration configuration=new Configuration();
JobConf conf=new JobConf(configuration);
conf.setInputFormat(KeyValueTextInputFormat.class);
conf.set("key.value.separator.in.input.line", ":::");

Note: KeyValueTextInputFormat has been dropped from the new Hadoop API.