Extending DefaultCodec to support Zip compression of Hadoop files

Date: 2018-07-13 21:15:07

Tags: apache-spark hadoop compression hdfs rdd

I have some Spark code that reads two files from HDFS (a header file and a body file), coalesces the RDD[String] down to a single partition, and then writes the result out as a compressed file using the GZip codec:

spark.sparkContext.textFile("path_to_header.txt,path_to_body.txt")
.coalesce(1)
.saveAsTextFile("output_path", classOf[GzipCodec])

This works 100% as expected. We have now been asked to support zip compression for Windows users, who cannot natively decompress *.gzip files. The zip format obviously isn't supported out of the box, so I tried rolling my own compression codec.

When I run the code I hit a "ZipException: no current ZIP entry" exception:

Exception occured while exporting org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 16.0 failed 2 times, most recent failure: Lost task 0.1 in stage 16.0 (TID 675, xxxxxxx.xxxxx.xxx, executor 16): java.util.zip.ZipException: no current ZIP entry
    at java.util.zip.ZipOutputStream.write(Unknown Source)
    at io.ZipCompressorStream.write(ZipCompressorStream.java:23)
    at java.io.DataOutputStream.write(Unknown Source)
    at org.apache.hadoop.mapred.TextOutputFormat$LineRecordWriter.writeObject(TextOutputFormat.java:81)
    at org.apache.hadoop.mapred.TextOutputFormat$LineRecordWriter.write(TextOutputFormat.java:102)
    at org.apache.spark.SparkHadoopWriter.write(SparkHadoopWriter.scala:95)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13$$anonfun$apply$7.apply$mcV$sp(PairRDDFunctions.scala:1205)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13$$anonfun$apply$7.apply(PairRDDFunctions.scala:1203)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13$$anonfun$apply$7.apply(PairRDDFunctions.scala:1203)
    at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1348)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13.apply(PairRDDFunctions.scala:1211)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13.apply(PairRDDFunctions.scala:1190)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
    at org.apache.spark.scheduler.Task.run(Task.scala:86)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
    at java.lang.Thread.run(Unknown Source)

I created a ZipCodec class extending DefaultCodec:

public class ZipCodec extends DefaultCodec {

   @Override
   public CompressionOutputStream createOutputStream(final OutputStream out, final Compressor compressor) throws IOException {
      return new ZipCompressorStream(new ZipOutputStream(out));
   }
}

and a ZipCompressorStream extending CompressorStream:

public class ZipCompressorStream extends CompressorStream {

   public ZipCompressorStream(final ZipOutputStream out) {
      super(out);
   }

   @Override
   public void write(final int b) throws IOException {
      out.write(b);
   }

   @Override
   public void write(final byte[] data, final int offset, final int length) throws IOException {
      out.write(data, offset, length);
   }
}

We are currently using Spark 1.6.0 and Hadoop 2.6.0-cdh5.8.2.

Any ideas?

Thanks!

1 answer:

Answer 0 (score: 0)

ZIP is a container format, whereas GZip is just a stream-style format (used to store a single file). That's why, when creating a new ZIP file, you first need to start an entry (giving it a name), then write its content, and close the entry before closing the container. See examples here: https://www.programcreek.com/java-api-examples/?class=java.util.zip.ZipOutputStream&method=putNextEntry
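
As a rough illustration (a minimal sketch, not tested against your Spark/Hadoop versions), your ZipCompressorStream could lazily open a single entry before the first write and close it in finish(); the entry name "part" here is just a placeholder:

import java.io.IOException;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

import org.apache.hadoop.io.compress.CompressorStream;

public class ZipCompressorStream extends CompressorStream {

   private final ZipOutputStream zipOut;
   private boolean entryOpen = false;

   public ZipCompressorStream(final ZipOutputStream out) {
      super(out);
      this.zipOut = out;
   }

   // ZIP is a container: every byte written must belong to a named entry,
   // so open one lazily before the first write.
   private void ensureEntry() throws IOException {
      if (!entryOpen) {
         zipOut.putNextEntry(new ZipEntry("part"));
         entryOpen = true;
      }
   }

   @Override
   public void write(final int b) throws IOException {
      ensureEntry();
      zipOut.write(b);
   }

   @Override
   public void write(final byte[] data, final int offset, final int length) throws IOException {
      ensureEntry();
      zipOut.write(data, offset, length);
   }

   @Override
   public void finish() throws IOException {
      if (entryOpen) {
         zipOut.closeEntry();
         entryOpen = false;
      }
      // Writes the central directory so the archive is readable.
      zipOut.finish();
   }
}

With an entry opened before any bytes are written and closed in finish(), the "no current ZIP entry" error should go away, and the same saveAsTextFile call with classOf[ZipCodec] should produce a readable archive. You may also want to override getDefaultExtension() in your ZipCodec to return ".zip", since DefaultCodec's default extension is ".deflate".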