How to compute the total salary with the Spark Java API

Date: 2016-12-12 05:09:06

Tags: java apache-spark

I am new to Spark and I am working with the Spark Java API. I have a file:

1201, John, 2500
1202, Alex, 2800
1203, amith, 3900
1204, javed, 2300
1205, Saminga, 23000

Now I need to compute the total salary and store the result in a file. Since I am new to MR / the Spark Java API, I cannot figure it out. Could anyone please help me with this?

Sample code:

import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.FlatMapFunction;
import org.apache.spark.api.java.function.Function;

public class SalarySum {

    public static void main(String[] args)
    {
        // Both an input and an output path are read below, so require two arguments.
        if(args.length<2)
        {
            System.out.println("Please provide input and output paths for processing");
            System.exit(0);
        }
        else
        {
            String inputFile=args[0];
            String outputFile=args[1];
            SparkConf config=new SparkConf().setAppName("Total Salary Example");
            JavaSparkContext sparkContext=new JavaSparkContext(config);

            JavaRDD<String> inputReader=sparkContext.textFile(inputFile);

            // This flatMap is effectively an identity pass: each input line is
            // wrapped in a one-element list, so the RDD is unchanged.
            JavaRDD<String> map=inputReader.flatMap(new FlatMapFunction<String, String>() {
                @Override
                public Iterable<String> call(String t) throws Exception
                {
                    System.out.println("Flat Map Data: "+t);
                    return Arrays.asList(t);
                }
            });

            // groupBy keys each record by its parsed salary, so every distinct
            // salary value becomes its own group.
            JavaPairRDD<Integer, Iterable<String>> group=map.groupBy(new Function<String, Integer>() {

                @Override
                public Integer call(String s2) throws Exception
                {
                    String data=s2.split(",")[2].trim();
                    int value=Integer.parseInt(data);
                    System.out.println("Tuple: "+s2 +" : "+data);
                    return value;
                }
            });


            // Sums the salaries within each group. Because the grouping key is the
            // salary itself, this yields one sum per distinct salary, not a grand total.
            JavaPairRDD<Integer, Integer> totalSaleData = group.flatMapValues(new Function<Iterable<String>, Iterable<Integer>>() {

                @Override
                public Iterable<Integer> call(Iterable<String> v1)
                        throws Exception 
                {
                    int count=0;
                    for(String str:v1)
                    {
                        String data=str.split(",")[2].trim();
                        int value=Integer.parseInt(data);
                        System.out.println("Iterating Values : "+str);
                        System.out.println("Count: "+count);
                        count =count+value;
                    }
                    return Arrays.asList(count);
                }
            });

            totalSaleData.saveAsTextFile(outputFile);

        }
    }

}
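
Note: the groupBy above keys every record by its own salary, so this job emits one sum per distinct salary value rather than a single grand total. For the grand total, no grouping is needed at all; a minimal sketch (assuming Spark 1.x, with the sparkContext and inputFile defined above, and writing the output omitted):

// Sketch: parse the third column into a JavaDoubleRDD and let Spark sum it.
double totalSalary = sparkContext.textFile(inputFile)
        .mapToDouble(line -> Double.parseDouble(line.split(",")[2].trim()))
        .sum();
System.out.println("Total salary: " + totalSalary);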

1 Answer:

Answer 0 (score: 1)

You can do this with Spark 1.6:

import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

import scala.Tuple2;

public class SparkSalarySum {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("SparkSalarySum").setMaster("local[2]");
        JavaSparkContext jsc = new JavaSparkContext(conf);
        JavaRDD<String> lines = jsc.textFile("c:\\temp\\test.txt");
        // Parse the salary column, tag every value with the same "Total" key,
        // and reduce by key so all salaries collapse into a single sum.
        // (In Spark 1.6, FlatMapFunction returns an Iterable, so Arrays.asList works.)
        JavaPairRDD<String, Integer> total = lines
                .flatMap(line -> Arrays.asList(Integer.parseInt(line.split(",")[2].trim())))
                .mapToPair(sal -> new Tuple2<String, Integer>("Total", sal))
                .reduceByKey((x, y) -> x + y);
        total.foreach(data -> System.out.println(data._1() + "-" + data._2()));
        // coalesce(1) merges partitions so a single output file is written.
        total.coalesce(1).saveAsTextFile("c:\\temp\\testOut");
        jsc.stop();
    }
}
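
The flatMap lambda above compiles against the Spark 1.x FlatMapFunction, whose call returns an Iterable; in Spark 2.0+ that interface returns an Iterator instead, so the Arrays.asList call would need a trailing .iterator(). Since each input line carries exactly one salary anyway, a plain map sidesteps the difference entirely; a minimal sketch of that variant (reusing the lines RDD from above):

// Works on Spark 1.x and 2.x alike: one salary per line, so no flatMap is needed.
JavaPairRDD<String, Integer> total = lines
        .map(line -> Integer.parseInt(line.split(",")[2].trim()))
        .mapToPair(sal -> new Tuple2<>("Total", sal))
        .reduceByKey(Integer::sum);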