On my HDFS I have a bunch of gzip files that I want to decompress to their uncompressed form. Is there an API for doing this? Or how could I write a function to do it?
I don't want to use any command-line tools; instead, I want to accomplish this task by writing Java code.
Answer 0 (score: 5)
You need a CompressionCodec to decompress the file. The implementation for gzip is GzipCodec. You obtain a CompressionInputStream through the codec and write the result out with plain IO. Something like this, say you have a file file.gz:
import java.io.InputStream;
import java.io.OutputStream;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.CompressionCodecFactory;

// path of the file
String uri = "/uri/to/file.gz";
Configuration conf = new Configuration();
FileSystem fs = FileSystem.get(URI.create(uri), conf);
Path inputPath = new Path(uri);
CompressionCodecFactory factory = new CompressionCodecFactory(conf);
// the correct codec will be discovered by the extension of the file
CompressionCodec codec = factory.getCodec(inputPath);
if (codec == null) {
    System.err.println("No codec found for " + uri);
    System.exit(1);
}
// remove the .gz extension
String outputUri =
    CompressionCodecFactory.removeSuffix(uri, codec.getDefaultExtension());
InputStream is = codec.createInputStream(fs.open(inputPath));
OutputStream out = fs.create(new Path(outputUri));
IOUtils.copyBytes(is, out, conf);
// close streams
IOUtils.closeStream(is);
IOUtils.closeStream(out);
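As a side note, IOUtils.copyBytes also has a four-argument overload, IOUtils.copyBytes(is, out, conf, true), which closes both streams for you once the copy finishes, so you can skip the explicit closeStream calls.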
Update
If you need to get all the files in a directory, you should get the FileStatus[] entries, something like:
FileSystem fs = FileSystem.get(new Configuration());
FileStatus[] statuses = fs.listStatus(new Path("hdfs/path/to/dir"));
and then loop over them:
for (FileStatus status : statuses) {
    CompressionCodec codec = factory.getCodec(status.getPath());
    ...
    InputStream is = codec.createInputStream(fs.open(status.getPath()));
    ...
}
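Putting the two snippets together, here is a minimal self-contained sketch (the class and method names, GzipDirectoryDecompressor and decompressDirectory, are mine and not from the original answer) that walks an HDFS directory and decompresses every file whose extension matches a known codec:

import java.io.InputStream;
import java.io.OutputStream;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.CompressionCodecFactory;

public class GzipDirectoryDecompressor {

    public static void decompressDirectory(String dir) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        CompressionCodecFactory factory = new CompressionCodecFactory(conf);

        for (FileStatus status : fs.listStatus(new Path(dir))) {
            Path inputPath = status.getPath();
            CompressionCodec codec = factory.getCodec(inputPath);
            if (codec == null) {
                // no codec matches this file's extension; skip it
                continue;
            }
            // strip the codec's extension (".gz" for GzipCodec) to build the output path
            String outputUri = CompressionCodecFactory.removeSuffix(
                    inputPath.toString(), codec.getDefaultExtension());
            InputStream is = codec.createInputStream(fs.open(inputPath));
            OutputStream out = fs.create(new Path(outputUri));
            // the 'true' argument closes both streams when the copy finishes
            IOUtils.copyBytes(is, out, conf, true);
        }
    }

    public static void main(String[] args) throws Exception {
        decompressDirectory(args[0]);
    }
}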
Answer 1 (score: 1)
I use an identity-map Hadoop job that I wrote in Scalding to change the compression / change the split size / etc.
class IdentityMap(args: Args) extends ConfiguredJob(args) {
  CombineFileMultipleTextLine(args.list("in"): _*).read
    .mapTo[String, String]('line -> 'line)(identity)
    .write(if (args.boolean("compress")) TsvCompressed(args("out")) else TextLine(args("out")))
}
The general configuration abstract class:
abstract class ConfiguredJob(args: Args) extends Job(args) {
  override def config(implicit mode: Mode): Map[AnyRef, AnyRef] = {
    val Megabyte = 1024 * 1024
    val conf = super.config(mode)
    val splitSizeMax = args.getOrElse("splitSizeMax", "1024").toInt * Megabyte
    val splitSizeMin = args.getOrElse("splitSizeMin", "512").toInt * Megabyte
    val jobPriority = args.getOrElse("jobPriority", "NORMAL")
    val maxHeap = args.getOrElse("maxHeap", "512m")
    conf ++ Map(
      "mapred.child.java.opts" -> ("-Xmx" + maxHeap),
      "mapred.output.compress" -> (if (args.boolean("compress")) "true" else "false"),
      "mapred.min.split.size" -> splitSizeMin.toString,
      "mapred.max.split.size" -> splitSizeMax.toString,
      // "mapred.output.compression.codec" -> args.getOrElse("codec", "org.apache.hadoop.io.compress.BZip2Codec"), // does not work, has to be a -D flag
      "mapred.job.priority" -> jobPriority)
  }
}
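For completeness: assuming the job is packaged in a jar with Scalding on the classpath, a job like this is normally launched through com.twitter.scalding.Tool (the jar name and paths below are placeholders):

hadoop jar my-jobs.jar com.twitter.scalding.Tool IdentityMap --hdfs --in /path/to/in --out /path/to/out --compress

Since the job reads args.list("in"), --in may be given several input paths; passing --compress makes args.boolean("compress") true, which switches the output to TsvCompressed and sets mapred.output.compress in the config above.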