Windows API seems much faster than BinaryWriter - are my tests correct?

Time: 2013-04-30 11:30:03

Tags: c# optimization filestream

[EDIT]

Thanks to @VilleKrumlinde, I have fixed a bug that I accidentally introduced while trying to avoid a Code Analysis warning. I had inadvertently turned on "overlapped" file handling, which kept resetting the file length. That is now fixed, and you can call FastWrite() multiple times on the same stream without any problems.

[END EDIT]


Overview

I am doing some timing tests to compare two different ways of writing an array of structs to disk. I believe the perceived wisdom is that I/O costs are so high compared to everything else that it isn't worth spending much time optimising the other parts.

However, my timing tests seem to indicate otherwise. Either I am making a mistake (which is entirely possible), or my optimisation really does matter quite a lot.

History

First, a bit of history: this FastWrite() method was originally written several years ago to support writing structs to files consumed by a legacy C++ program, and we still use it for that purpose. (There is also a corresponding FastRead() method.) It was written primarily to make it easier to write arrays of blittable structs to a file; its speed was a secondary concern.

More than one person has told me that an optimisation like this isn't really much faster than using a BinaryWriter, so I finally bit the bullet and ran some timing tests. The results surprised me...

It appears that my FastWrite() method is 30 to 50 times faster than the equivalent using BinaryWriter. That seems ridiculous, so I'm posting my code here to see if anyone can spot a mistake.

System specification

  • Tested an x86 RELEASE build, run from OUTSIDE the debugger.
  • Running on Windows 8, x64, with 16 GB of RAM.
  • Running against an ordinary hard drive (not an SSD).
  • Using .NET 4 in Visual Studio 2012 (with .NET 4.5 installed)

Results

My results were:

SlowWrite() took 00:00:02.0747141
FastWrite() took 00:00:00.0318139
SlowWrite() took 00:00:01.9205158
FastWrite() took 00:00:00.0327242
SlowWrite() took 00:00:01.9289878
FastWrite() took 00:00:00.0321100
SlowWrite() took 00:00:01.9374454
FastWrite() took 00:00:00.0316074

As you can see, that appears to show FastWrite() being over 50 times faster in that run.

Here is my test code. After running the tests, I did a binary comparison of the two files to verify that they really are identical (i.e. FastWrite() and SlowWrite() produce identical files).

See what you make of it. :)

using System;
using System.ComponentModel;
using System.Diagnostics;
using System.IO;
using System.Runtime.InteropServices;
using System.Text;
using System.Threading;
using Microsoft.Win32.SafeHandles;

namespace ConsoleApplication1
{
    internal class Program
    {

        [StructLayout(LayoutKind.Sequential, Pack = 1)]
        struct TestStruct
        {
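            // With Pack = 1, each element marshals to 1 + 2 + 4 + 8 + 4 + 8 = 27 bytes.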
            public byte   ByteValue;
            public short  ShortValue;
            public int    IntValue;
            public long   LongValue;
            public float  FloatValue;
            public double DoubleValue;
        }

        static void Main()
        {
            Directory.CreateDirectory("C:\\TEST");
            string filename1 = "C:\\TEST\\TEST1.BIN";
            string filename2 = "C:\\TEST\\TEST2.BIN";

            int count = 1000;
            var array = new TestStruct[10000];

            for (int i = 0; i < array.Length; ++i)
                array[i].IntValue = i;

            var sw = new Stopwatch();

            for (int trial = 0; trial < 4; ++trial)
            {
                sw.Restart();

                using (var output = new FileStream(filename1, FileMode.Create))
                using (var writer = new BinaryWriter(output, Encoding.Default, true))
                {
                    for (int i = 0; i < count; ++i)
                    {
                        output.Position = 0;
                        SlowWrite(writer, array, 0, array.Length);
                    }
                }

                Console.WriteLine("SlowWrite() took " + sw.Elapsed);
                sw.Restart();

                using (var output = new FileStream(filename2, FileMode.Create))
                {
                    for (int i = 0; i < count; ++i)
                    {
                        output.Position = 0;
                        FastWrite(output, array, 0, array.Length);
                    }
                }

                Console.WriteLine("FastWrite() took " + sw.Elapsed);
            }
        }

        static void SlowWrite(BinaryWriter writer, TestStruct[] array, int offset, int count)
        {
            for (int i = offset; i < offset + count; ++i)
            {
                var item = array[i];  // I also tried just writing from array[i] directly with similar results.
                writer.Write(item.ByteValue);
                writer.Write(item.ShortValue);
                writer.Write(item.IntValue);
                writer.Write(item.LongValue);
                writer.Write(item.FloatValue);
                writer.Write(item.DoubleValue);
            }
        }

        static void FastWrite<T>(FileStream fs, T[] array, int offset, int count) where T: struct
        {
            int sizeOfT = Marshal.SizeOf(typeof(T));
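            // Pin the array so the GC cannot move it while the native WriteFile call reads from its address.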
            GCHandle gcHandle = GCHandle.Alloc(array, GCHandleType.Pinned);

            try
            {
                uint bytesWritten;
                uint bytesToWrite = (uint)(count * sizeOfT);

                if
                (
                    !WriteFile
                    (
                        fs.SafeFileHandle,
                        new IntPtr(gcHandle.AddrOfPinnedObject().ToInt64() + (offset*sizeOfT)),
                        bytesToWrite,
                        out bytesWritten,
                        IntPtr.Zero
                    )
                )
                {
                    throw new IOException("Unable to write file.", new Win32Exception(Marshal.GetLastWin32Error()));
                }

                Debug.Assert(bytesWritten == bytesToWrite);
            }

            finally
            {
                gcHandle.Free();
            }
        }

        [DllImport("kernel32.dll", SetLastError=true)]
        [return: MarshalAs(UnmanagedType.Bool)]
        private static extern bool WriteFile
        (
            SafeFileHandle hFile,
            IntPtr         lpBuffer,
            uint           nNumberOfBytesToWrite,
            out uint       lpNumberOfBytesWritten,
            IntPtr         lpOverlapped
        );
    }
}
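
For reference, the byte-for-byte file comparison mentioned above can be done with something along these lines (a minimal sketch added for illustration; this helper is hypothetical and not part of the original test program):

// Hypothetical helper: compares two files byte for byte.
static bool FilesAreIdentical(string path1, string path2)
{
    byte[] a = File.ReadAllBytes(path1);
    byte[] b = File.ReadAllBytes(path2);

    if (a.Length != b.Length)
        return false;

    for (int i = 0; i < a.Length; ++i)
        if (a[i] != b[i])
            return false;

    return true;
}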

Follow-up

I also tested the code proposed by @ErenErsönmez, shown below (and I verified at the end of the test that all three files are identical):

static void ErenWrite<T>(FileStream fs, T[] array, int offset, int count) where T : struct
{
    // Note: This doesn't use 'offset' or 'count', but it could easily be changed to do so,
    // and it doesn't change the results of this particular test program.

    int size = Marshal.SizeOf(typeof(TestStruct)) * array.Length;
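    // Note: this allocates a full managed copy of the struct array (about 263 KB for this test) before writing it.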
    var bytes = new byte[size];
    GCHandle gcHandle = GCHandle.Alloc(array, GCHandleType.Pinned);

    try
    {
        var ptr = new IntPtr(gcHandle.AddrOfPinnedObject().ToInt64());
        Marshal.Copy(ptr, bytes, 0, size);
        fs.Write(bytes, 0, size);
    }

    finally
    {
        gcHandle.Free();
    }
}

I added a test for that code, and at the same time removed the line output.Position = 0; so that the files now grow to 263K (which is a more reasonable size).

With those changes, the results are as follows.

Note how much slower the FastWrite() times are now that it is no longer resetting the file pointer back to zero!

SlowWrite() took 00:00:01.9929327
FastWrite() took 00:00:00.1152534
ErenWrite() took 00:00:00.2185131
SlowWrite() took 00:00:01.8877979
FastWrite() took 00:00:00.2087977
ErenWrite() took 00:00:00.2191266
SlowWrite() took 00:00:01.9279477
FastWrite() took 00:00:00.2096208
ErenWrite() took 00:00:00.2102270
SlowWrite() took 00:00:01.7823760
FastWrite() took 00:00:00.1137891
ErenWrite() took 00:00:00.3028128

So it looks like you can achieve almost the same speed using marshalling, without having to use the Windows API at all. The only drawback is that Eren's method has to make a copy of the entire array of structs, which could be an issue if memory is limited.
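
If that copy were a concern, the pinned data could presumably be written in fixed-size chunks instead of all at once. The sketch below is only an illustration (not code from the question or the answer, and the chunk size is arbitrary); it marshals and writes the array one slice at a time so only a small reusable buffer is ever allocated:

// Hypothetical variant: writes the struct array through a small reusable buffer
// instead of copying the whole array into one big byte[].
static void ChunkedWrite<T>(FileStream fs, T[] array, int offset, int count) where T : struct
{
    int sizeOfT = Marshal.SizeOf(typeof(T));
    const int chunkElements = 1024;                   // arbitrary chunk size
    var buffer = new byte[chunkElements * sizeOfT];

    GCHandle gcHandle = GCHandle.Alloc(array, GCHandleType.Pinned);

    try
    {
        long baseAddress = gcHandle.AddrOfPinnedObject().ToInt64();

        for (int done = 0; done < count; done += chunkElements)
        {
            int elements = Math.Min(chunkElements, count - done);
            int byteCount = elements * sizeOfT;
            var ptr = new IntPtr(baseAddress + (long)(offset + done) * sizeOfT);

            Marshal.Copy(ptr, buffer, 0, byteCount);  // copy one chunk into the managed buffer
            fs.Write(buffer, 0, byteCount);           // then write it with the ordinary FileStream API
        }
    }
    finally
    {
        gcHandle.Free();
    }
}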

1 Answer:

Answer 0 (score: 18)

I don't think the difference has much to do with BinaryWriter. I think it is because you are doing multiple file I/O operations in SlowWrite (10000 * 6 of them) versus a single I/O operation in FastWrite. Your FastWrite has the advantage of writing a single blob of bytes to the file, whereas in SlowWrite you are converting the structs to bytes one at a time.

To test that theory, I wrote a method that builds one big byte array of all the structs up front, and then had SlowWrite use that byte array:

static byte[] bytes;
static void Prep(TestStruct[] array)
{
    int size = Marshal.SizeOf(typeof(TestStruct)) * array.Length;
    bytes = new byte[size];
    GCHandle gcHandle = GCHandle.Alloc(array, GCHandleType.Pinned);
    var ptr = gcHandle.AddrOfPinnedObject();
    Marshal.Copy(ptr, bytes, 0, size);
    gcHandle.Free();
}

static void SlowWrite(BinaryWriter writer)
{
    writer.Write(bytes);
}
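
Presumably Prep(array) is called once before the timing loop, with the loop body otherwise the same as in the question's Main(). A sketch of that wiring (a reconstruction for illustration; the actual harness changes aren't shown above):

// Hypothetical wiring (not shown in the answer): convert the structs once, then time only the writes.
Prep(array);

sw.Restart();

using (var output = new FileStream(filename1, FileMode.Create))
using (var writer = new BinaryWriter(output, Encoding.Default, true))
{
    for (int i = 0; i < count; ++i)
    {
        output.Position = 0;
        SlowWrite(writer);
    }
}

Console.WriteLine("SlowWrite() took " + sw.Elapsed);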

Results:

SlowWrite() took 00:00:00.0360392
FastWrite() took 00:00:00.0385015
SlowWrite() took 00:00:00.0358703
FastWrite() took 00:00:00.0381371
SlowWrite() took 00:00:00.0373875
FastWrite() took 00:00:00.0367692
SlowWrite() took 00:00:00.0348295
FastWrite() took 00:00:00.0373931

Note that SlowWrite now performs very similarly to FastWrite, which I think shows that the performance difference is not due to the actual I/O performance, but rather to the binary conversion process.