I have existing code in Scala and am trying to write the equivalent in Java, but I'm running into some problems.

Scala code:
import java.io.{BufferedReader, InputStreamReader}
import java.util.zip.ZipInputStream
import org.apache.spark.SparkContext
import org.apache.spark.input.PortableDataStream
import org.apache.spark.rdd.RDD
def readFile(path: String, minPartitions: Int): RDD[String] = {
  if (path.endsWith(".zip")) {
    sc.binaryFiles(path, minPartitions)
      .flatMap {
        case (name: String, content: PortableDataStream) =>
          val zis = new ZipInputStream(content.open)
          val entry = zis.getNextEntry
          val br = new BufferedReader(new InputStreamReader(zis))
          Stream.continually(br.readLine()).takeWhile(_ != null)
      }
  }
}
I wrote the Java code below:
import org.apache.spark.input.PortableDataStream;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.rdd.RDD;
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;
public RDD<String> readFile(String inputDir, int minPartitions) throws Exception {
    SparkSession sparkSession =
        SparkSession.builder().appName("zipPoc").config("spark.master", "yarn").getOrCreate();
    JavaSparkContext sc = new JavaSparkContext(sparkSession.sparkContext());
    if (inputDir.endsWith(".zip")) {
        sc.binaryFiles(inputDir, minPartitions).flatMap(
            (String name, PortableDataStream content) -> {
                ZipInputStream stream = new ZipInputStream(content.open());
                ZipEntry entry = stream.getNextEntry();
                BufferedReader br = new BufferedReader(new InputStreamReader(stream));
                scala.collection.immutable.Stream.continually(br.readLine()).takeWhile(_ != null);
            }
        );
    }
}
I'm getting the following error. Does anyone have a clue about this and could help with the correct code?
Answer 0 (score: 4):
continually expects a lambda that takes no parameters and returns a value. The Java equivalent would be:

() -> br.readLine()
There is also no _ in Java; you have to use an explicit parameter:

(line) -> line != null
So this should work:
Stream.continually(() -> {
    try {
        return br.readLine();
    } catch (IOException e) {
        throw new RuntimeException(e);
    }
}).takeWhile((line) -> line != null)
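As an aside, the same two lambda shapes (a no-arg supplier for continually, a one-arg predicate for takeWhile) also exist in plain Java's java.util.stream, which can make the distinction easier to see. A minimal, Spark-free sketch (takeWhile requires Java 9+; the class and method names are illustrative):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class LambdaShapes {
    // Reads every line from the reader into a list.
    static List<String> readAll(BufferedReader br) {
        return Stream.generate(() -> {          // no-arg lambda that returns a value
                    try {
                        return br.readLine();
                    } catch (IOException e) {
                        throw new RuntimeException(e); // wrap the checked exception
                    }
                })
                .takeWhile(line -> line != null) // explicit parameter instead of Scala's _
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        BufferedReader br = new BufferedReader(new StringReader("a\nb\nc"));
        System.out.println(readAll(br)); // [a, b, c]
    }
}
```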
====
As you noticed, readLine throws a checked exception. The quickest workaround is to wrap the call in a try/catch.
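For completeness: if the goal is simply to turn one zip entry into lines from Java, BufferedReader.lines() (Java 8+) sidesteps the Scala Stream interop entirely. A minimal, Spark-free sketch that builds a zip in memory so it is self-contained; the class and method names are illustrative, not part of the original code:

```java
import java.io.BufferedReader;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;
import java.util.List;
import java.util.stream.Collectors;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;
import java.util.zip.ZipOutputStream;

public class ZipLines {
    // Reads the first entry of a zip stream as a list of lines.
    static List<String> firstEntryLines(ZipInputStream zis) throws IOException {
        ZipEntry entry = zis.getNextEntry(); // position the stream at the first entry
        BufferedReader br = new BufferedReader(
                new InputStreamReader(zis, StandardCharsets.UTF_8));
        return br.lines().collect(Collectors.toList());
    }

    public static void main(String[] args) throws IOException {
        // Build a small zip in memory so the example needs no files on disk.
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ZipOutputStream zos = new ZipOutputStream(bos)) {
            zos.putNextEntry(new ZipEntry("data.txt"));
            zos.write("line1\nline2".getBytes(StandardCharsets.UTF_8));
            zos.closeEntry();
        }
        ZipInputStream zis = new ZipInputStream(new ByteArrayInputStream(bos.toByteArray()));
        System.out.println(firstEntryLines(zis)); // [line1, line2]
    }
}
```

Inside a Spark flatMap the same method could be applied to each PortableDataStream, returning the list's iterator.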