I am currently working with RDF models. I query data from a database and use Apache Jena to build models and work with them. However, I don't want to query the database every time I need the models, so I would like to store them locally. The models are quite large, so I want to compress them with Apache Commons Compress. This works so far (try-catch blocks omitted):
public static void write(Map<String, Model> models, String file) {
    logger.info("Writing models to file " + file);
    TarArchiveOutputStream tarOutput = new TarArchiveOutputStream(
            new GzipCompressorOutputStream(new FileOutputStream(new File(file))));
    for (Map.Entry<String, Model> e : models.entrySet()) {
        logger.info("Packing model " + e.getKey());
        // Serialize the model into an in-memory buffer
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        RDFDataMgr.write(baos, e.getValue(), RDFFormat.RDFXML_PRETTY);
        // Prepare the entry (its size must be set before writing)
        TarArchiveEntry entry = new TarArchiveEntry(e.getKey());
        entry.setSize(baos.size());
        tarOutput.putArchiveEntry(entry);
        // Write the payload and close the entry
        tarOutput.write(baos.toByteArray());
        tarOutput.closeArchiveEntry();
    }
    tarOutput.close();
}
But when I try the other direction, I get a strange NullPointerException. Is this a bug in the GZip implementation, or is my understanding of the streams wrong?
public static Map<String, Model> read(String file) {
    logger.info("Reading models from file " + file);
    Map<String, Model> models = new HashMap<>();
    TarArchiveInputStream tarInput = new TarArchiveInputStream(
            new GzipCompressorInputStream(new FileInputStream(file)));
    for (TarArchiveEntry currentEntry = tarInput.getNextTarEntry();
            currentEntry != null;
            currentEntry = tarInput.getNextTarEntry()) {
        logger.info("Processing model " + currentEntry.getName());
        // Read the current model
        Model m = ModelFactory.createDefaultModel();
        m.read(tarInput, null);
        // And add it to the output
        models.put(currentEntry.getName(), m);
        tarInput.close();
    }
    return models;
}
This is the stack trace:
Exception in thread "main" java.lang.NullPointerException
at org.apache.commons.compress.compressors.gzip.GzipCompressorInputStream.read(GzipCompressorInputStream.java:271)
at java.io.InputStream.skip(InputStream.java:224)
at org.apache.commons.compress.utils.IOUtils.skip(IOUtils.java:106)
at org.apache.commons.compress.archivers.tar.TarArchiveInputStream.skipRecordPadding(TarArchiveInputStream.java:345)
at org.apache.commons.compress.archivers.tar.TarArchiveInputStream.getNextTarEntry(TarArchiveInputStream.java:272)
at de.mem89.masterthesis.rdfHydra.StorageHelper.read(StorageHelper.java:88)
at de.mem89.masterthesis.rdfHydra.StorageHelper.main(StorageHelper.java:124)
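For comparison, here is the pack/unpack pattern I expected to work, sketched with plain java.util.zip so it runs without Jena or Commons Compress (entry names and payloads are made up): each entry is closed inside the loop, but the stream itself only once, after the loop.

```java
import java.io.*;
import java.util.*;
import java.util.zip.*;

public class ZipRoundTrip {

    /** Pack every (name, payload) pair into a zip archive held in memory. */
    static byte[] write(Map<String, String> entries) throws IOException {
        ByteArrayOutputStream archive = new ByteArrayOutputStream();
        ZipOutputStream zipOut = new ZipOutputStream(archive);
        for (Map.Entry<String, String> e : entries.entrySet()) {
            zipOut.putNextEntry(new ZipEntry(e.getKey()));
            zipOut.write(e.getValue().getBytes("UTF-8"));
            zipOut.closeEntry();   // the ENTRY is closed inside the loop...
        }
        zipOut.close();            // ...the STREAM only once, after the loop
        return archive.toByteArray();
    }

    /** Read the archive back; again the stream is closed only after the loop. */
    static Map<String, String> read(byte[] archive) throws IOException {
        Map<String, String> entries = new LinkedHashMap<>();
        ZipInputStream zipIn = new ZipInputStream(new ByteArrayInputStream(archive));
        for (ZipEntry entry = zipIn.getNextEntry(); entry != null; entry = zipIn.getNextEntry()) {
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            byte[] chunk = new byte[4096];
            for (int n; (n = zipIn.read(chunk)) != -1; ) {
                buf.write(chunk, 0, n);   // read() stops at the end of the current entry
            }
            entries.put(entry.getName(), buf.toString("UTF-8"));
        }
        zipIn.close();
        return entries;
    }

    public static void main(String[] args) throws IOException {
        Map<String, String> models = new LinkedHashMap<>();
        models.put("model-a", "<rdf>a</rdf>");
        models.put("model-b", "<rdf>b</rdf>");
        System.out.println(read(write(models)).equals(models)); // prints "true"
    }
}
```

In this version every entry survives the round trip, which makes me suspect the stream handling in my read() above rather than the archive format itself.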