Why does this code throw this exception, and how can I avoid it?
import java.util.ArrayList;
import java.util.List;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaSparkContext;

import scala.Tuple2;

SparkConf conf = new SparkConf().setAppName("startingSpark").setMaster("local[*]");
JavaSparkContext sc = new JavaSparkContext(conf);

// (userId, visitCount) pairs
List<Tuple2<Integer, Integer>> visitsRaw = new ArrayList<>();
visitsRaw.add(new Tuple2<>(4, 18));
visitsRaw.add(new Tuple2<>(6, 4));
visitsRaw.add(new Tuple2<>(10, 9));

// (userId, userName) pairs
List<Tuple2<Integer, String>> usersRaw = new ArrayList<>();
usersRaw.add(new Tuple2<>(1, "John"));
usersRaw.add(new Tuple2<>(2, "Bob"));
usersRaw.add(new Tuple2<>(3, "Alan"));
usersRaw.add(new Tuple2<>(4, "Doris"));
usersRaw.add(new Tuple2<>(5, "Marybelle"));
usersRaw.add(new Tuple2<>(6, "Raquel"));

JavaPairRDD<Integer, Integer> visits = sc.parallelizePairs(visitsRaw);
JavaPairRDD<Integer, String> users = sc.parallelizePairs(usersRaw);

// Inner join on the userId key: (userId, (visitCount, userName))
JavaPairRDD<Integer, Tuple2<Integer, String>> joinedRdd = visits.join(users);
joinedRdd.foreach(System.out::println); // this is the line that fails
sc.close();
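The question does not include the stack trace, but this code is the classic trigger for Spark's closure-serialization failure; the error is presumably of the form:

org.apache.spark.SparkException: Task not serializable
...
Caused by: java.io.NotSerializableException: java.io.PrintStream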
Answer (score: 1)
The method reference System.out::println is not serializable. Spark must serialize the function passed to foreach so it can ship it to the executors, and a bound method reference captures the System.out instance, a java.io.PrintStream, which does not implement Serializable. Replace it with a lambda, which only looks up System.out when it is invoked and therefore captures nothing non-serializable:

joinedRdd.foreach(v -> System.out.println(v));
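You can see the difference between the two closures outside Spark by trying to Java-serialize them directly. A minimal sketch, where SerializableConsumer is a hypothetical helper interface used only for this demo (it mimics what Spark requires of the closures it ships to executors):

import java.io.ByteArrayOutputStream;
import java.io.NotSerializableException;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.function.Consumer;

// Hypothetical helper: a Consumer that is also Serializable.
interface SerializableConsumer<T> extends Consumer<T>, Serializable {}

public class ClosureDemo {
    public static void main(String[] args) throws Exception {
        SerializableConsumer<Object> lambda = v -> System.out.println(v);
        SerializableConsumer<Object> methodRef = System.out::println;

        try (ObjectOutputStream out = new ObjectOutputStream(new ByteArrayOutputStream())) {
            out.writeObject(lambda); // succeeds: the lambda captures nothing
            try {
                out.writeObject(methodRef); // fails: the bound PrintStream is captured
            } catch (NotSerializableException e) {
                System.out.println("method reference not serializable: " + e.getMessage());
            }
        }
    }
}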
Alternatively, note that foreach runs on the executors, so in cluster mode its output goes to the executors' stdout, not the driver's. To print the values on the driver node, collect the RDD first (fine for small data like this, but beware of collecting large datasets into driver memory):

joinedRdd.collect().forEach(System.out::println);
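If you prefer a named function over a lambda, the same fix can be written as an explicit VoidFunction. This is a minimal sketch (the class name PrintVisit is just illustrative); it relies on the fact that Spark's VoidFunction interface already extends java.io.Serializable:

import org.apache.spark.api.java.function.VoidFunction;
import scala.Tuple2;

// VoidFunction extends java.io.Serializable, so Spark can serialize an
// instance of this class and ship it to the executors.
class PrintVisit implements VoidFunction<Tuple2<Integer, Tuple2<Integer, String>>> {
    @Override
    public void call(Tuple2<Integer, Tuple2<Integer, String>> row) {
        // System.out is resolved when call() runs on the executor, not
        // captured at serialization time, so nothing non-serializable
        // ends up in the closure.
        System.out.println(row);
    }
}

joinedRdd.foreach(new PrintVisit());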