When I try to run a simple word count in Spark with Java from Eclipse, a pop-up "Java Virtual Machine Launcher" window appears with the error:

A Java Exception has occurred.

Output of java -version:

java version "1.7.0_80"
Java(TM) SE Runtime Environment (build 1.7.0_80-b15)
Java HotSpot(TM) 64-Bit Server VM (build 24.80-b11, mixed mode)
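Note that java -version on the PATH is not necessarily the JRE Eclipse launches the program with; that is chosen per Run Configuration. A minimal check (a hypothetical helper class, not part of the project) that can be run from the same Eclipse project to see which JVM the program itself gets:

public class JreCheck {
    public static void main(String[] args) {
        // Print the version and location of the JVM this Eclipse
        // Run Configuration actually launches.
        System.out.println(System.getProperty("java.version"));
        System.out.println(System.getProperty("java.home"));
    }
}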
Here is the code:
package com.fd.spark;

import java.util.Arrays;
import java.util.Iterator;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.FlatMapFunction;
import org.apache.spark.api.java.function.Function2;
import org.apache.spark.api.java.function.PairFunction;

import scala.Tuple2;

public class SparkWordCount {
    public static void main(String[] args) throws Exception {
        String inputFile = "/Spark/inp1";
        String outputFile = "/Spark/out1";

        // Create a Java Spark Context.
        SparkConf conf = new SparkConf().setAppName("wordCount").setMaster("local");
        JavaSparkContext sc = new JavaSparkContext(conf);

        // Load our input data.
        JavaRDD<String> input = sc.textFile(inputFile);

        // Split each line up into words. In Spark 2.x, FlatMapFunction.call
        // must return an Iterator; Arrays.asList returns a List, which is not
        // an Iterator, so convert it with iterator() instead of casting.
        JavaRDD<String> words = input.flatMap(
                new FlatMapFunction<String, String>() {
                    public Iterator<String> call(String x) {
                        return Arrays.asList(x.split(" ")).iterator();
                    }
                });

        // Transform each word into a (word, 1) pair, then sum the counts per word.
        JavaPairRDD<String, Integer> counts = words.mapToPair(
                new PairFunction<String, String, Integer>() {
                    public Tuple2<String, Integer> call(String x) {
                        return new Tuple2<String, Integer>(x, 1);
                    }
                }).reduceByKey(new Function2<Integer, Integer, Integer>() {
                    public Integer call(Integer x, Integer y) {
                        return x + y;
                    }
                });

        // Save the word count back out to a text file, causing evaluation.
        counts.saveAsTextFile(outputFile);
        sc.stop();
    }
}
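Separately from the launcher error, note that saveAsTextFile writes a directory of part files rather than a single file, and it throws an exception if the output path already exists. A minimal sketch of a helper (not in the original code) that could be called on outputFile before re-running the job:

import java.io.File;

// Hypothetical helper: delete the output directory before the job runs, so
// saveAsTextFile does not fail because the directory already exists.
class OutputCleaner {
    static void deleteRecursively(File f) {
        File[] children = f.listFiles();
        if (children != null) {
            for (File child : children) {
                deleteRecursively(child);
            }
        }
        f.delete();
    }
}

Calling OutputCleaner.deleteRecursively(new File(outputFile)) at the start of main makes the job safely re-runnable.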
Answer 0 (score: 1)
Use Java 8 and the following will work. Spark 2.2 and later require Java 8, so launching such a build on a 1.7 JVM fails before your code even runs, which matches the launcher error you are seeing. Below is my code snippet using lambda functions on Java 8.
import java.util.Arrays;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import scala.Tuple2;

public class WordCount {
    public static void main(String[] args) {
        SparkConf sparkConf = new SparkConf().setMaster("local").setAppName("JD Word Counter");
        JavaSparkContext sparkContext = new JavaSparkContext(sparkConf);
        // Replace *Path_To_TXT_file* with the path to your input text file.
        JavaRDD<String> inputFile = sparkContext.textFile(*Path_To_TXT_file*);
        // Split lines into words; flatMap expects an Iterator in Spark 2.x.
        JavaRDD<String> wordsFromFile = inputFile.flatMap(content -> Arrays.asList(content.split(" ")).iterator());
        // Pair each word with 1, then sum the counts per word.
        JavaPairRDD<String, Integer> countData = wordsFromFile.mapToPair(t -> new Tuple2<String, Integer>(t, 1)).reduceByKey((x, y) -> x + y);
        // Collect the results to the driver and print each (word, count) pair.
        countData.collect().forEach(t -> System.out.println(t._1 + " : " + t._2));
        sparkContext.close();
    }
}
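As a small extension (not part of the original answer), the same pair RDD can be sorted to show the most frequent words first, by swapping each (word, count) into (count, word) and sorting descending on the numeric key:

// Hedged extension: print the ten most frequent words, highest count first.
countData.mapToPair(Tuple2::swap)
        .sortByKey(false)
        .take(10)
        .forEach(t -> System.out.println(t._2 + " : " + t._1));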