How to read line by line from a mapped file in Java using MappedByteBuffer

Time: 2019-05-31 10:28:45

Tags: java csv io memory-mapped-files mappedbytebuffer

I want to read a large file as fast as possible. I am using a MappedByteBuffer like this:

String line = "";

try (RandomAccessFile file2 = new RandomAccessFile(new File(filename), "r")) {

    FileChannel fileChannel = file2.getChannel();

    // Map the whole file into memory, read-only.
    MappedByteBuffer buffer = fileChannel.map(FileChannel.MapMode.READ_ONLY, 0, fileChannel.size());

    for (int i = 0; i < buffer.limit(); i++) {
        // Read one byte at a time and treat it as a character.
        char a = (char) buffer.get();
        if (a == '\n') {
            System.out.println(line);
            line = "";
        } else {
            line += Character.toString(a);
        }
    }
}

This does not work correctly. It changes the content of the file and prints the changed content. Is there a better way to read a line of the file with a MappedByteBuffer?

Ultimately I want to split each line and extract certain fields (it's a CSV), so this is just a minimal example that reproduces the problem.

1 Answer:

Answer 0: (score: 0)

I ran some tests with a 21 GB file filled with random strings, 20-40 characters per line. The built-in BufferedReader still appears to be the fastest approach.

File f = new File("sfs");
try (Stream<String> lines = Files.lines(f.toPath(), StandardCharsets.UTF_8)) {
    lines.forEach(System.out::println);
} catch (IOException e) {
    e.printStackTrace();
}

Reading the lines into a Stream ensures that they are read lazily, as needed, rather than reading the whole file at once.
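Since the goal is to pull fields out of a CSV, here is a minimal sketch built on the same Files.lines stream. The file name, the assumption of plain comma-separated fields (no quoting), and the column index are all placeholders, not from the question:

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.stream.Stream;

static void printThirdColumn() {
    // "data.csv" is a hypothetical file name.
    try (Stream<String> lines = Files.lines(Paths.get("data.csv"), StandardCharsets.UTF_8)) {
        lines.map(l -> l.split(",", -1))              // -1 keeps trailing empty fields
             .filter(f -> f.length > 2)
             .forEach(f -> System.out.println(f[2])); // hypothetical third column
    } catch (IOException e) {
        e.printStackTrace();
    }
}

Because the stream is lazy, the split and extraction also happen line by line, so memory use stays flat even for very large files.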

To squeeze out a bit more speed, you can increase the BufferedReader's buffer size. In my tests the larger buffer started to beat the default buffer size at around 10 million lines.

CharsetDecoder decoder = StandardCharsets.UTF_8.newDecoder();
int size = 8192 * 16; // 16x BufferedReader's default buffer size
// newInputStream is statically imported from java.nio.file.Files
try (BufferedReader br = new BufferedReader(new InputStreamReader(newInputStream(f.toPath()), decoder), size)) {
    br.lines().limit(LINES_TO_READ).forEach(s -> {
    });
} catch (IOException e) {
    e.printStackTrace();
}

The code I used for testing:

private static long LINES_TO_READ = 10_000_000;

private static void java8Stream(File f) {

    long startTime = System.nanoTime();

    try (Stream<String> lines = Files.lines(f.toPath(), StandardCharsets.UTF_8).limit(LINES_TO_READ)) {
        lines.forEach(line -> {
        });
    } catch (IOException e) {
        e.printStackTrace();
    }

    long endTime = System.nanoTime();
    System.out.println("no buffer took " + (endTime - startTime) + " nanoseconds");
}

private static void streamWithLargeBuffer(File f) {
    long startTime = System.nanoTime();

    CharsetDecoder decoder = StandardCharsets.UTF_8.newDecoder();
    int size = 8192 * 16;
    try (BufferedReader br = new BufferedReader(new InputStreamReader(newInputStream(f.toPath()), decoder), size)) {
        br.lines().limit(LINES_TO_READ).forEach(s -> {
        });
    } catch (IOException e) {
        e.printStackTrace();
    }

    long endTime = System.nanoTime();
    System.out.println("using large buffer took " + (endTime - startTime) + " nanoseconds");
}

private static void memoryMappedFile(File f) {
    CharsetDecoder decoder = StandardCharsets.UTF_8.newDecoder();

    long linesReadCount = 0;
    String line = "";
    long startTime = System.nanoTime();

    try (RandomAccessFile file2 = new RandomAccessFile(f, "r")) {

        FileChannel fileChannel = file2.getChannel();
        // map() cannot map more than Integer.MAX_VALUE bytes, so stay just under 2 GB
        MappedByteBuffer buffer = fileChannel.map(FileChannel.MapMode.READ_ONLY, 0L, Integer.MAX_VALUE - 10_000_000);
        CharBuffer decodedBuffer = decoder.decode(buffer);

        for (int i = 0; i < decodedBuffer.limit(); i++) {
            char a = decodedBuffer.get();
            if (a == '\n') {
                line = "";
                // count a line only when its terminating newline is seen
                if (++linesReadCount >= LINES_TO_READ) {
                    break;
                }
            } else {
                line += Character.toString(a);
            }
        }
    } catch (IOException e) {
        e.printStackTrace();
    }

    long endTime = System.nanoTime();

    System.out.println("using memory mapped files took " + (endTime - startTime) + " nanoseconds");

}

Incidentally, I noticed that FileChannel.map throws an exception if the mapped region is larger than Integer.MAX_VALUE, which makes the method unsuitable for reading very large files in a single mapping.
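
If you still want memory mapping on a file beyond that limit, one workaround is to map the file in successive windows, keeping each individual mapping under the 2 GB cap. A minimal sketch; the window size, file name, and per-byte processing are placeholders, not something from the tests above:

import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

static void mapInWindows() throws IOException {
    final long WINDOW = 1L << 30; // 1 GiB per mapping, safely below Integer.MAX_VALUE
    try (FileChannel ch = FileChannel.open(Paths.get("big.csv"), StandardOpenOption.READ)) {
        long fileSize = ch.size();
        for (long pos = 0; pos < fileSize; pos += WINDOW) {
            long len = Math.min(WINDOW, fileSize - pos);
            MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_ONLY, pos, len);
            while (buf.hasRemaining()) {
                byte b = buf.get();
                // process bytes here; note that a line can straddle two windows,
                // so real code must carry the partial line over into the next one
            }
        }
    }
}

The OS still pages each window in lazily, so this keeps the mapped-file approach usable without ever requesting a region the API cannot handle.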