I want to write a MapReduce program that reads the cust_key and balance values from a .tbl file. I concatenate the two values into one string in the mapper and send it to the reducer, where I want to count the cust_keys and compute the average balance per segment. That is why I emit the segment as the key.
In the reducer I want to split the string back into its two values so I can count the cust_keys and sum the balances to compute the average. But the split array's element [0] gives me the whole string instead of the first value, and element [1] throws an ArrayIndexOutOfBoundsException. I hope this is clear.
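For example, this is the behaviour I expect from the split (a made-up illustration, not my real data):
String pair = "Customer#000008794,7779.3";
String[] parts = pair.split(",");
// I expect parts[0] = "Customer#000008794" and parts[1] = "7779.3",
// but in my reducer parts[0] is the whole string and parts[1] throws.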
The code is below:
public class MapReduceTest {
  public static class TokenizerMapper extends Mapper<Object, Text, Text, Text> {
    private Text segment = new Text();
    private Text word = new Text();
    private float balance = 0;

    public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
      String[] line = value.toString().split("\\|");
      balance = Float.parseFloat(line[5]);
      String cust_key = line[1];
      int nation = Integer.parseInt(line[3]);
      if ((balance > 8000) && (nation < 15) && (nation > 1)) {
        segment.set(line[6]);
        word.set(cust_key + "," + balance);
        context.write(segment, word);
      }
    }
  }
  public static class AvgReducer extends Reducer<Text, Text, Text, Text> {
    Text val = new Text();

    public void reduce(Text key, Iterable<Text> values, Context context) throws IOException, InterruptedException {
      String cust_key = "";
      float avg, sum = 0;
      int count = 0;
      for (Text v : values) {
        String[] a = v.toString().trim().split(",");
        cust_key += a[0];
      }
      val.set(cust_key); // was val.set(cust_count), which does not compile; cust_key is what the loop builds
      context.write(key, val);
    }
  }
Input data:
8794|Customer#000008794|6dnUgJZGX73Kx1idr6|18|28-434-484-9934|7779.30|HOUSEHOLD|deposits detect furiously even requests. furiously ironic packages are slyly into th
8795|Customer#000008795|oA1cLUtWOAIFz5Douypbq1jHv glSE|9|19-829-732-8102|9794.80|BUILDING|totes. blithely unusual theodolites integrate carefully ironic foxes. unusual excuses cajole carefully carefully fi
8796|Customer#000008796|CzCzpV7SDojXUzi4165j,xYJuRv wZzn grYsyZ|24|34-307-411-6825|4323.03|AUTOMOBILE|s. pending, bold accounts above the sometimes express accounts
8797|Customer#000008797|TOWDryHNNqp8bvgMW6 FAhRoLyG1ldu2bHcJCM6|2|12-517-522-5820|219.78|FURNITURE|ly bold pinto beans can nod blithely quickly regular requests. fluffily even deposits ru
8798|Customer#000008798|bIegyozQ5kzprN|15|25-472-647-6270|6832.96|AUTOMOBILE|es-- silent instructions nag blithely
Stack trace:
java.lang.Exception: java.lang.ArrayIndexOutOfBoundsException: 1
at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:529)
Caused by: java.lang.ArrayIndexOutOfBoundsException: 1
at MapReduceTest$AvgReducer.reduce(MapReduceTest.java:69)
at MapReduceTest$AvgReducer.reduce(MapReduceTest.java:1)
at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:171)
at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:627)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:389)
at org.apache.hadoop.mapred.LocalJobRunner$Job$ReduceTaskRunnable.run(LocalJobRunner.java:319)
at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
at java.util.concurrent.FutureTask.run(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
17/04/12 18:40:33 INFO mapreduce.Job: Job job_local806960399_0001 running in uber mode : false
17/04/12 18:40:33 INFO mapreduce.Job: map 100% reduce 0%
17/04/12 18:40:33 INFO mapreduce.Job: Job job_local806960399_0001 failed with state FAILED due to: NA
17/04/12 18:40:33 INFO mapreduce.Job: Counters: 35
UPDATE
The reducer:
public static class AvgReducer extends Reducer<Text, Text, Text, Text> {
  Logger log = Logger.getLogger(AvgReducer.class.getName());

  public void reduce(Text key, Iterable<Text> values, Context context) throws IOException, InterruptedException {
    float sumBalance = 0, avgBalance = 0;
    int cust_count = 1;
    for (Text v : values) {
      String[] a = v.toString().trim().split(",");
      sumBalance += Float.parseFloat(a[a.length - 1]);
      cust_count++;
    }
    avgBalance = sumBalance / cust_count;
    context.write(key, new Text(avgBalance + " " + cust_count));
  }
}
Stack trace:
java.lang.Exception: java.lang.NumberFormatException: For input string: "8991.715 289"
Thanks in advance.
Answer 0 (score: 2)
Pig runs on MapReduce (if configured that way). It is also far cleaner than raw MapReduce, and it comes installed on the major Hadoop distributions.
A = LOAD 'test.txt' USING PigStorage('|') AS (f1:int,customer_key:chararray,f3:chararray,nation:int,f5:chararray,balance:float,segment:chararray,f7:chararray);
filtered = FILTER A BY balance > 8000 AND (nation > 1 AND nation < 15);
X = FOREACH filtered GENERATE segment, customer_key, balance;
Output:
\d X
(BUILDING,Customer#000008795,9794.8)
Not sure you really want the average here with only one element, but you would need to GROUP BY on segment and customer_key, and then you can easily take the average.
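A sketch of that grouping in Pig (the aliases G and avgs are my own, and the dump output will depend on your data):
G = GROUP filtered BY (segment, customer_key);
avgs = FOREACH G GENERATE FLATTEN(group) AS (segment, customer_key), AVG(filtered.balance) AS avg_balance;
\d avgs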
If you are more familiar with SQL, then Hive may also be a more straightforward approach.
(It also runs over MapReduce, unless configured otherwise.)
CREATE EXTERNAL TABLE IF NOT EXISTS records (
  f1 INT,
  customer_key STRING,
  f3 STRING,
  nation INT,
  f5 STRING,
  balance FLOAT,
  segment STRING,
  f8 STRING
) ROW FORMAT DELIMITED
FIELDS TERMINATED BY '|'
LOCATION 'hdfs://path/';
Then the query is as simple as this:
SELECT segment, customer_key, AVG(balance)
FROM records
WHERE balance > 8000 AND nation > 1 AND nation < 15
GROUP BY segment, customer_key;
I would get into an Apache Spark example, but Spark SQL would be essentially the same as the Hive query above.
Answer 1 (score: 1)
If you really want to try this in Java MapReduce, try sanitizing the input and failing fast, returning to discard any problematic record.
public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
    float balance = 0.0f;
    String custKey = "";
    Text segment = new Text();
    Text word = new Text();
    int nation = 0;
    String[] line = value.toString().split("\\|");
    if (line.length < 7) {
        System.err.println("map: not enough fields");
        return;
    }
    custKey = line[1];
    try {
        nation = Integer.parseInt(line[3]);
        balance = Float.parseFloat(line[5]);
    } catch (NumberFormatException e) {
        e.printStackTrace();
        return;
    }
    if (balance > 8000 && (nation < 15 && nation > 1)) {
        segment.set(line[6]);
        word.set(custKey + "\t" + balance);
        context.write(segment, word);
    }
}
Then, since you are trying to write similar output records, the reducer should ideally produce the same format:
public void reduce(Text key, Iterable<Text> values, Context context) throws IOException, InterruptedException {
    float sumBalance = 0;
    int count = 0;
    for (Text v : values) {
        String[] a = v.toString().trim().split("\t");
        if (a.length < 2) {
            System.err.println("reduce: not enough fields");
            continue;
        }
        sumBalance += Float.parseFloat(a[1]);
        count++;
    }
    float avgBalance = count <= 1 ? sumBalance : sumBalance / count;
    context.write(key, new Text(avgBalance + "\t" + count));
}
(Code untested.)
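For completeness, a rough sketch of a driver that wires these together (not in the original answer; the job name and argument handling are assumptions). Note that it deliberately sets no combiner: reusing AvgReducer as a combiner would feed its "avg count" output back into reduce(), where Float.parseFloat would choke on a string like "8991.715 289", which is one way to get a NumberFormatException like the one above.
// Inside MapReduceTest, with these imports at the top of the file:
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "average balance per segment");
    job.setJarByClass(MapReduceTest.class);
    job.setMapperClass(TokenizerMapper.class);
    // No setCombinerClass here: the reducer's output ("avg\tcount") is not
    // the mapper's output format ("custKey\tbalance"), so running AvgReducer
    // as a combiner would break parsing in the reduce phase.
    job.setReducerClass(AvgReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(Text.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));   // e.g. the .tbl input
    FileOutputFormat.setOutputPath(job, new Path(args[1])); // must not already exist
    System.exit(job.waitForCompletion(true) ? 0 : 1);
}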