I am a beginner with Hadoop and Java, and I am writing Map and Reduce functions to cluster a set of latitude/longitude pairs into groups based on proximity, and to record a magnitude (the number of lat/long pairs in the cluster) and a representative lat/long pair (as of now, the first lat/long pair encountered).
Here is my code:
package org.myorg;
import java.io.IOException;
import java.util.*;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.conf.*;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapreduce.*;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
import util.hashing.*;
public class LatLong {

    public static class Map extends Mapper<Object, Text, Text, Text> {
        //private final static IntWritable one = new IntWritable(1);

        public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
            String line = value.toString();
            String[] longLatArray = line.split(",");
            double longi = Double.parseDouble(longLatArray[0]);
            double lat = Double.parseDouble(longLatArray[1]);
            //List<Double> origLatLong = new ArrayList<Double>(2);
            //origLatLong.add(lat);
            //origLatLong.add(longi);
            Geohash inst = Geohash.getInstance();
            //encode is the library's encoding function
            String hash = inst.encode(lat, longi);
            //Using the first 5 characters just for testing purposes
            //Need to find the right one later
            int accuracy = 4;
            //hash of the thing is shortened to whatever I figure out
            //to be the right size of each tile
            Text shortenedHash = new Text(hash.substring(0, accuracy));
            Text origHash = new Text(hash);
            context.write(shortenedHash, origHash);
        }
    }

    public static class Reduce extends Reducer<Text, Text, Text, Text> {
        private IntWritable totalTileElementCount = new IntWritable();
        private Text latlongimag = new Text();
        private Text dataSeparator = new Text();

        @Override
        public void reduce(Text key, Iterable<Text> values, Context context) throws IOException, InterruptedException {
            int elementCount = 0;
            boolean first = true;
            Iterator<Text> it = values.iterator();
            String lat = new String();
            String longi = new String();
            Geohash inst = Geohash.getInstance();
            while (it.hasNext()) {
                elementCount = elementCount + 1;
                if (first) {
                    lat = Double.toString((inst.decode(it.toString()))[0]);
                    longi = Double.toString((inst.decode(it.toString()))[1]);
                    first = false;
                }
                @SuppressWarnings("unused")
                String blah = it.next().toString();
            }
            totalTileElementCount.set(elementCount);
            //Geohash inst = Geohash.getInstance();
            String mag = totalTileElementCount.toString();
            latlongimag.set(lat + "," + longi + "," + mag + ",");
            dataSeparator.set("");
            context.write(latlongimag, dataSeparator);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = new Job(conf, "wordcount");
        job.setJarByClass(LatLong.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);
        job.setMapperClass(Map.class);
        job.setReducerClass(Reduce.class);
        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        job.waitForCompletion(true);
    }
}
I am getting an NPE. I do not know how to test this, and I cannot find the error in the code.
Hadoop error:
java.lang.NullPointerException
at util.hashing.Geohash.decode(Geohash.java:41)
at org.myorg.LatLong$Reduce.reduce(LatLong.java:67)
at org.myorg.LatLong$Reduce.reduce(LatLong.java:1)
at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:176)
at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:663)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:426)
at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1132)
at org.apache.hadoop.mapred.Child.main(Child.java:249)
The decode function in the Geohash library returns an array of doubles. Any pointers would be appreciated! Thanks for your time!
EDIT 1 (after testing):
I have realized that the problem is that the reduce function needs an it.next().toString() rather than just it.toString(), but when I made that change and tested it, I got the error below, and since I check hasNext() in the while loop condition, I do not understand why it occurs.
java.util.NoSuchElementException: iterate past last value
at org.apache.hadoop.mapreduce.ReduceContext$ValueIterator.next(ReduceContext.java:159)
at org.myorg.LatLong$Reduce.reduce(LatLong.java:69)
at org.myorg.LatLong$Reduce.reduce(LatLong.java:1)
at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:176)
at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:663)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:426)
at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1132)
at org.apache.hadoop.mapred.Child.main(Child.java:249)
EDIT 2 (further testing): Solution
I was calling it.next() more than once, and since it is an iterator, each call just advances it. On the last pass the loop condition was checked and the body was entered, but I then called it.next() twice, which caused the problem because only one next element was left (the last one).
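In code terms, the fix is to call it.next() exactly once per iteration and reuse the result; a minimal sketch of the repaired loop body:

    while (it.hasNext()) {
        // Consume the next value exactly once per iteration and cache it.
        String cords = it.next().toString();
        elementCount = elementCount + 1;
        if (first) {
            lat = Double.toString(inst.decode(cords)[0]);
            longi = Double.toString(inst.decode(cords)[1]);
            first = false;
        }
    }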
Answer 0 (score: 1)
You are still calling toString on it rather than on it.next(), so you should change

    lat = Double.toString((inst.decode(it.toString()))[0]);
    longi = Double.toString((inst.decode(it.toString()))[1]);

to

    String cords = it.next().toString();
    lat = Double.toString((inst.decode(cords))[0]);
    longi = Double.toString((inst.decode(cords))[1]);

Do not write it as inst.decode(it.next().toString()) in both lines, because that would call it.next() twice in a single while iteration.

After that change, do not keep the line String blah = it.next().toString();, or you will get java.util.NoSuchElementException: iterate past last value for the same reason as above.

When you remove String blah = it.next().toString();, remember that once first = false you will never enter if(first) again and so never call String cords = it.next().toString();, which means it.hasNext() will always return true and you will never leave the while loop. Add an appropriate conditional statement.
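Putting the answer's points together, one way to consume each value exactly once is an enhanced for loop over values; this is a sketch, assuming the question's Geohash API (getInstance() and a decode(String) that returns double[]):

    for (Text value : values) {
        // The for-each loop advances the underlying iterator exactly once
        // per pass, so next() can never be called twice by accident.
        elementCount = elementCount + 1;
        if (first) {
            double[] decoded = inst.decode(value.toString());
            lat = Double.toString(decoded[0]);
            longi = Double.toString(decoded[1]);
            first = false;
        }
    }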
Answer 1 (score: 0)
This means that your "it" is null, or that decode returns null. Add null checks for both.
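For example, a defensive version of the decode step might look like this (a sketch; whether decode can actually return null depends on the library):

    // Inside the loop, after obtaining the next value exactly once:
    String cords = it.next().toString();
    double[] decoded = inst.decode(cords);
    if (decoded == null) {
        continue; // skip hashes the library could not decode instead of dereferencing null
    }
    lat = Double.toString(decoded[0]);
    longi = Double.toString(decoded[1]);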