I'm using org.apache.parquet in a Java program to convert JSON files to Parquet format. However, no matter what I try, I can't disable Parquet's own logging to stdout. Is there a way to change Parquet's logging level, or to turn it off entirely?
Example log messages on stdout...
12-Feb-2017 18:12:21 INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 427B for [verb] BINARY: 2,890 values, 385B raw, 390B comp, 1 pages, encodings: [BIT_PACKED, PLAIN_DICTIONARY], dic { 2 entries, 17B raw, 2B comp}
12-Feb-2017 18:12:21 INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 3,256B for [postedTime] BINARY: 2,890 values, 3,585B raw, 3,180B comp, 1 pages, encodings: [BIT_PACKED, PLAIN_DICTIONARY], dic { 593 entries, 16,604B raw, 593B comp}
12-Feb-2017 18:12:21 INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 4,611B for [message] BINARY: 2,890 values, 4,351B raw, 4,356B comp, 1 pages, encodings: [BIT_PACKED, PLAIN_DICTIONARY], dic { 2,088 entries, 263,329B raw, 2,088B comp}
An example of how I invoke Parquet...
public void writeToParquet(List<GenericData.Record> recordsToWrite, Path fileToWrite) throws IOException {
    try (ParquetWriter<GenericData.Record> writer = AvroParquetWriter
            .<GenericData.Record>builder(fileToWrite)
            .withSchema(SCHEMA)
            .withConf(new Configuration())
            .withCompressionCodec(CompressionCodecName.SNAPPY)
            .build()) {
        for (GenericData.Record record : recordsToWrite) {
            writer.write(record);
        }
    }
}
Answer 0 (score: 0)
I know this is an old question, but I ran into this problem when using Parquet with Hive on CDH 5.x and found a workaround. See here: https://stackoverflow.com/a/45572400/14186
Maybe others will find it useful.
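For reference, a minimal sketch of that kind of workaround, assuming a Parquet 1.x release whose org.apache.parquet.Log class registers its own java.util.logging ConsoleHandler in a static initializer (newer releases log through SLF4J instead, where this does nothing). The ParquetLogMuter class name is my own; the logger name "org.apache.parquet" matches the package shown in the log output above:

import java.util.logging.Handler;
import java.util.logging.Level;
import java.util.logging.Logger;

public final class ParquetLogMuter {

    private ParquetLogMuter() {}

    /** Call once, before any Parquet classes write output. */
    public static void muteParquetLogging() {
        try {
            // Force org.apache.parquet.Log's static initializer to run first,
            // since it is what attaches the console handler when the class loads.
            Class.forName("org.apache.parquet.Log");
        } catch (ClassNotFoundException e) {
            // Class not on the classpath (e.g. a newer Parquet build that
            // uses SLF4J) -- nothing to mute this way.
            return;
        }
        Logger parquetLogger = Logger.getLogger("org.apache.parquet");
        for (Handler handler : parquetLogger.getHandlers()) {
            parquetLogger.removeHandler(handler);
        }
        // Also stop messages from propagating up to the root console handler.
        parquetLogger.setUseParentHandlers(false);
        parquetLogger.setLevel(Level.OFF);
    }
}

You would call ParquetLogMuter.muteParquetLogging() at the top of main() (or from a static block) before writeToParquet runs, so the handler is removed before the first record is written.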