I am trying out Spark Programming examples using Java 1.8 in Eclipse Luna and have the following code -
JavaPairRDD<String, Integer> counts = ones
    .reduceByKey(new Function2<Integer, Integer, Integer>() {
        @Override
        public Integer call(Integer i1, Integer i2) {
            return i1 + i2;
        }
    });
List<Tuple2<String, Integer>> output = counts.collect(); //Compilation Error
I am using M2Eclipse to build the jar and spark-submit to execute it locally. The jar works and prints the correct output, but Eclipse always flags the line above as a compilation error: The type Tuple2 is not generic; it cannot be parameterized with arguments <String, Integer>
Even the programming examples on the Spark web page use the same notation for Tuple2: https://spark.apache.org/docs/0.9.0/java-programming-guide.html
I cannot understand why Eclipse reports this as a compilation error, since the return type of the collect() call is List<Tuple2<String, Integer>>.
Any help is greatly appreciated.
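For reference, the Function2 above just sums counts per key. The same per-key aggregation can be sketched in plain Java 8 with streams (no Spark on the classpath, so it is easy to verify that the generics themselves are fine); the class and method names here are mine, not from the Spark API:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class WordCount {

    // Plain-Java equivalent of the per-key summing that reduceByKey performs:
    // map each word to 1, then merge duplicate keys with Integer::sum.
    public static Map<String, Integer> countWords(List<String> words) {
        return words.stream()
                .collect(Collectors.toMap(w -> w, w -> 1, Integer::sum));
    }

    public static void main(String[] args) {
        System.out.println(countWords(Arrays.asList("a", "b", "a")));
    }
}
```

Since this project is on Java 1.8, the anonymous Function2 could also be written as a lambda, e.g. `ones.reduceByKey((i1, i2) -> i1 + i2)`, though that does not change the Tuple2 error either way.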
Answer 0 (score: 0)
As @Holger mentioned in the comments, two scala-library jars had been added to the build path. After removing the earlier version, the compilation error disappeared.
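If the duplicate scala-library comes in transitively through Maven, one way to keep a single copy is to exclude it from the Spark dependency and declare the version you want explicitly. This is only a sketch: the artifact names and versions below are examples and must match your own Spark build.

```xml
<!-- Hypothetical pom.xml fragment: exclude the transitively pulled
     scala-library so only the explicitly declared version remains. -->
<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-core_2.10</artifactId>
  <version>1.6.0</version>
  <exclusions>
    <exclusion>
      <groupId>org.scala-lang</groupId>
      <artifactId>scala-library</artifactId>
    </exclusion>
  </exclusions>
</dependency>
<dependency>
  <groupId>org.scala-lang</groupId>
  <artifactId>scala-library</artifactId>
  <version>2.10.6</version>
</dependency>
```

Running `mvn dependency:tree -Dincludes=org.scala-lang:scala-library` shows which dependencies are pulling in which scala-library versions, so you can see whether more than one copy is on the path.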
Answer 1 (score: 0)
This also helped with my problem in IntelliJ IDEA. I was calling a function whose parameter was a generic type taking Tuple2 as an argument. It always showed an error, yet the build still compiled, which confused me for days. After removing a few related shaded jars (which contained scala-library classes), the error disappeared.