Java - Hadoop MapReduce, input error, omitting the CSV header

Date: 2017-11-29 14:42:12

Tags: java hadoop mapreduce

I am new to Hadoop and MapReduce, and while implementing this program I am running into a series of errors I do not understand.

The program uses a dataset with the following structure: Barrios.csv

"Codigo de barrio";"Codigo de distrito al que pertenece";"Nombre de barrio";"Nombre acentuado del barrio";"Superficie (m2)";"Perimetro (m)"
"01";"01";"PALACIO             ";"PALACIO             ";"001471085";"005754"
"01";"02";"IMPERIAL            ";"IMPERIAL            ";"000967500";"004557"
"01";"03";"PACIFICO            ";"PACÍFICO            ";"000750065";"004005"
"01";"04";"RECOLETOS           ";"RECOLETOS           ";"000870857";"003927"
"01";"05";"EL VISO             ";"EL VISO             ";"001708046";"005269"
"01";"06";"BELLAS VISTAS       ";"BELLAS VISTAS       ";"000716261";"003443"
"01";"07";"GAZTAMBIDE          ";"GAZTAMBIDE          ";"000506596";"002969"
"01";"08";"EL PARDO            ";"EL PARDO            ";"187642916";"087125"
"01";"09";"CASA DE CAMPO       ";"CASA DE CAMPO       ";"017470075";"019233"
"01";"10";"LOS CARMENES        ";"LOS CÁRMENES        ";"001292235";"006186"
"01";"11";"COMILLAS            ";"COMILLAS            ";"000665999";"004257"
"01";"12";"ORCASITAS           ";"ORCASITAS           ";"001356371";"004664"
"01";"13";"ENTREVIAS           ";"ENTREVÍAS           ";"005996932";"011057"
"01";"14";"PAVONES             ";"PAVONES             ";"001016979";"004134"
"01";"15";"VENTAS              ";"VENTAS              ";"003198045";"008207"
"01";"16";"PALOMAS             ";"PALOMAS             ";"001128602";"004988"
"01";"17";"SAN ANDRES          ";"SAN ANDRÉS          ";"009192451";"013710"
"01";"18";"CASCO H.VALLECAS    ";"CASCO H.VALLECAS    ";"049359337";"031924"
"01";"19";"CASCO H.VICALVARO   ";"CASCO H.VICÁLVARO   ";"032924620";"033326"
"01";"20";"SIMANCAS            ";"SIMANCAS            ";"002278418";"006678"
"01";"21";"ALAMEDA DE OSUNA    ";"ALAMEDA DE OSUNA    ";"001961904";"006043"

It lists the different neighborhoods of Madrid along with a series of figures such as perimeter, total area, and so on.

In my MapReduce program I want to obtain the average perimeter of the neighborhoods, grouped by "Codigo de barrio": that is, the average perimeter over all rows whose "Codigo de barrio" equals 1, then 2, and so on (the perimeter is the value in the last column).

Here is my code:

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    private static final String SEPARATOR = ";";

    public static class BarrioMapper extends Mapper<Object, Text, IntWritable, IntWritable> {

        public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
            final String[] values = value.toString().split(SEPARATOR);

            // Emit (neighborhood code, perimeter) for every input line.
            final int grupoBarrio = Integer.parseInt(values[0]);
            final int perimetro = Integer.parseInt(values[5]);

            context.write(new IntWritable(grupoBarrio), new IntWritable(perimetro));
        }
    }

    public static class BarrioReducer extends Reducer<IntWritable, IntWritable, IntWritable, IntWritable> {
        private IntWritable result = new IntWritable();

        public void reduce(IntWritable key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
            int sum = 0;
            int contador = 0;

            // Sum and count all perimeters seen for this neighborhood code.
            for (IntWritable value : values) {
                sum += value.get();
                contador++;
            }

            if (contador > 0) {
                result.set(sum / contador);
                context.write(key, result);
            }
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        Job job = new Job(conf, "wordcount");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(BarrioMapper.class);
        job.setCombinerClass(BarrioReducer.class);
        job.setReducerClass(BarrioReducer.class);
        job.setOutputKeyClass(IntWritable.class);
        job.setOutputValueClass(IntWritable.class);

        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

I read the values as IntWritable. My problem comes when I run the job on Hadoop, passing it the data file and the output directory with the following command:

yarn jar WordCount.jar uam.WordCount Barrios.csv outPutDir

I get this error:

INFO mapreduce.Job: Task Id : attempt_1487862618135_1006_m_000000_1, Status : FAILED
Error: java.lang.NumberFormatException: For input string: "Codigo de barrio"
    at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
    at java.lang.Integer.parseInt(Integer.java:592)
    at java.lang.Integer.parseInt(Integer.java:615)
    at uam.WordCount$BarrioMapper.map(WordCount.java:20)
    at uam.WordCount$BarrioMapper.map(WordCount.java:15)
    at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)

The error mentions the "Codigo de barrio" input data, and I do not understand what it means.

2 answers:

Answer 0 (score: 0)

This is where your parsing goes wrong:

final int grupoBarrio = Integer.parseInt(values[0]);

On the first line, values[0] is "Codigo de barrio", which is not a numeric value, so you should omit the header (the first line) of the CSV file.
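
One way to do that inside the job itself is to have the map method skip the header line. Note also that every field in this dataset is wrapped in double quotes ("01", "005754"), so the quotes would have to be stripped before Integer.parseInt can succeed. A minimal sketch of the map method (assuming the quoting is always this simple, with no separators inside the fields):

public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
    final String line = value.toString();

    // Skip the header: its first field is the literal column name.
    if (line.startsWith("\"Codigo de barrio\"")) {
        return;
    }

    final String[] values = line.split(SEPARATOR);

    // Strip the surrounding quotes before parsing the numbers.
    final int grupoBarrio = Integer.parseInt(values[0].replace("\"", "").trim());
    final int perimetro = Integer.parseInt(values[5].replace("\"", "").trim());

    context.write(new IntWritable(grupoBarrio), new IntWritable(perimetro));
}

Checking the line content, rather than assuming the header is the first record a mapper sees, keeps this correct even when the input is split across several mappers.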

Answer 1 (score: 0)

As the error says, the header cannot be parsed as an integer.

To skip that value, you can use a try-catch:

try {
    final int grupoBarrio = Integer.parseInt(values[0]);
    final int perimetro = Integer.parseInt(values[5]);

    context.write(new IntWritable(grupoBarrio), new IntWritable(perimetro));
} catch (NumberFormatException e) {
    // Not a data row (e.g. the header); ignore it.
}
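
Be aware that an empty catch block silently drops every line it cannot parse. In this dataset the fields themselves are quoted ("01"), so unless the quotes are stripped before parsing (as in the sketch in the previous answer), Integer.parseInt would throw on every line and the job would produce no output at all.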

Or you should overwrite the file in HDFS with a copy that has no header.
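
For example, assuming the file exists locally before being uploaded, the header could be dropped with standard shell tools before the put (the paths here are illustrative):

tail -n +2 Barrios.csv > Barrios_sin_cabecera.csv
hdfs dfs -put -f Barrios_sin_cabecera.csv Barrios.csv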