Performance of MappedByteBuffer vs. ByteBuffer

Date: 2013-05-28 23:05:13

Tags: java nio memory-mapped-files

I am trying to make some performance improvements and want to use memory-mapped files for writing data. I ran some tests, and surprisingly, MappedByteBuffer appears to be slower than allocating a direct buffer. I cannot clearly understand why this is the case. Can someone hint at what is going on behind the scenes? Below are my test results:

I am allocating 32 KB buffers. Before starting the tests, I created the files with a size of 3 GB, so growing the file is not the issue.

[Chart: test results, DirectBuffer vs. MappedByteBuffer]

I am adding the code used for this performance test. Any input/explanation of this behavior is much appreciated.

import java.io.BufferedWriter;
import java.io.File;
import java.io.FileWriter;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.ByteBuffer;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.channels.FileChannel.MapMode;

public class MemoryMapFileTest {

    /**
     * @param args
     * @throws IOException 
     */
    public static void main(String[] args) throws IOException { 

        for (int i = 0; i < 10; i++) {
            runTest();
        }

    }   

    private static void runTest() throws IOException {  

        FileChannel ch1 = new RandomAccessFile(new File("S:\\MMapTest1.txt"), "rw").getChannel();
        FileChannel ch2 = new RandomAccessFile(new File("S:\\MMapTest2.txt"), "rw").getChannel();

        FileWriter fstream = new FileWriter("S:\\output.csv", true);
        BufferedWriter out = new BufferedWriter(fstream);


        int[] numberofwrites = {1,10,100,1000,10000,100000};
        //int n = 10000;
        try {
            for (int j = 0; j < numberofwrites.length; j++) {
                int n = numberofwrites[j];
                long estimatedTime = 0;
                long mappedEstimatedTime = 0;

                for (int i = 0; i < n ; i++) {
                    byte b = (byte)Math.random();
                    long allocSize = 1024 * 32;

                    estimatedTime += directAllocationWrite(allocSize, b, ch1);
                    mappedEstimatedTime += mappedAllocationWrite(allocSize, b, i, ch2);

                }

                double avgDirectEstTime = (double)estimatedTime/n;
                double avgMapEstTime = (double)mappedEstimatedTime/n;
                out.write(n + "," + avgDirectEstTime/1000000 + "," + avgMapEstTime/1000000);
                out.write("," + ((double)estimatedTime/1000000) + "," + ((double)mappedEstimatedTime/1000000));
                out.write("\n");
                System.out.println("Avg Direct alloc and write: " + estimatedTime);
                System.out.println("Avg Mapped alloc and write: " + mappedEstimatedTime);

            }


        } finally {
            out.write("\n\n"); 
            if (out != null) {
                out.flush();
                out.close();
            }

            if (ch1 != null) {
                ch1.close();
            } else {
                System.out.println("ch1 is null");
            }

            if (ch2 != null) {
                ch2.close();
            } else {
                System.out.println("ch2 is null");
            }

        }
    }


    private static long directAllocationWrite(long allocSize, byte b, FileChannel ch1) throws IOException {
        long directStartTime = System.nanoTime();
        ByteBuffer byteBuf = ByteBuffer.allocateDirect((int)allocSize);
        byteBuf.put(b);
        ch1.write(byteBuf);
        return System.nanoTime() - directStartTime;
    }

    private static long mappedAllocationWrite(long allocSize, byte b, int iteration, FileChannel ch2) throws IOException {
        long mappedStartTime = System.nanoTime();
        MappedByteBuffer mapBuf = ch2.map(MapMode.READ_WRITE, iteration * allocSize, allocSize);
        mapBuf.put(b);
        return System.nanoTime() - mappedStartTime;
    }

}

2 Answers:

Answer 0 (score: 6)

You are testing the wrong thing. This is not how the code would be written in either case. You should allocate the buffer once and keep updating its contents. You are including allocation time in the write time. Not valid.
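To illustrate the "allocate once, reuse" point, here is a minimal sketch of how the direct-buffer side of the benchmark could be restructured so that only the writes fall inside the loop. The class name, the `writeBlocks` helper, and the use of a temp file are illustrative assumptions, not part of the original test:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class ReuseBufferSketch {

    // Writes n 32 KB blocks through ONE reused direct buffer;
    // returns the total number of bytes written.
    static long writeBlocks(int n) throws IOException {
        Path tmp = Files.createTempFile("direct-write", ".bin");
        long total = 0;
        try (FileChannel ch = FileChannel.open(tmp, StandardOpenOption.WRITE)) {
            // Allocated once, outside the loop - allocation cost is paid once.
            ByteBuffer buf = ByteBuffer.allocateDirect(1024 * 32);
            for (int i = 0; i < n; i++) {
                buf.clear();             // reset position/limit for reuse
                buf.put(0, (byte) i);    // absolute put; position stays at 0
                total += ch.write(buf);  // writes the full 32 KB block
            }
        } finally {
            Files.deleteIfExists(tmp);
        }
        return total;
    }

    public static void main(String[] args) throws IOException {
        long start = System.nanoTime();
        long bytes = writeBlocks(1000);
        long elapsed = System.nanoTime() - start;
        System.out.println(bytes + " bytes written in " + elapsed + " ns");
    }
}
```

With this structure, timing the loop measures only the writes; in the original test, every iteration of `directAllocationWrite` paid for a fresh `allocateDirect`, and every iteration of `mappedAllocationWrite` paid for a fresh `map()` call.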

Answer 1 (score: 0)

Swapping data out to disk is the main reason MappedByteBuffer is slower than DirectByteBuffer here. Allocation and deallocation are expensive with direct buffers (and that includes MappedByteBuffer), and both examples pay that cost, so the only remaining difference is the write to disk, which happens with the MappedByteBuffer but not with the direct ByteBuffer.
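The intended usage pattern for a memory-mapped file is the same: map a region once and then write into it with plain puts, instead of calling `map()` once per iteration as the original `mappedAllocationWrite` does. A minimal sketch, again with illustrative names and a temp file standing in for the pre-created 3 GB file:

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.channels.FileChannel.MapMode;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class MapOnceSketch {

    // Maps ONE region covering all n 32 KB blocks, then touches each block
    // with a plain put; returns the size of the mapped region in bytes.
    static long writeMapped(int n) throws IOException {
        Path tmp = Files.createTempFile("map-once", ".bin");
        long regionSize = (long) n * 32 * 1024;
        try (FileChannel ch = FileChannel.open(tmp,
                StandardOpenOption.READ, StandardOpenOption.WRITE)) {
            // One map() call up front instead of one per iteration.
            MappedByteBuffer map = ch.map(MapMode.READ_WRITE, 0, regionSize);
            for (int i = 0; i < n; i++) {
                map.put(i * 32 * 1024, (byte) i); // first byte of each block
            }
            map.force(); // flush dirty pages to disk (optional; this is the I/O cost)
            return regionSize;
        } finally {
            Files.deleteIfExists(tmp);
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println("mapped and wrote " + writeMapped(1000) + " bytes");
    }
}
```

Note that without the `force()` call the puts only dirty pages in the page cache; the OS writes them back asynchronously, which is part of why per-iteration timings of mapped writes can look very different from channel writes.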