I need to compute these two independent tasks. Previously I ran them one after the other:
string firstHash = CalculateMD5Hash("MyName");
string secondHash = CalculateMD5Hash("NoName");
This is what the CalculateMD5Hash method looks like. It is used to compute MD5 hashes for files of up to 16 GB:
private string CalculateMD5(string filename)
{
    using (var md5 = MD5.Create())
    {
        using (var stream = File.OpenRead(filename))
        {
            var hash = md5.ComputeHash(stream);
            return BitConverter.ToString(hash).Replace("-", string.Empty).ToLowerInvariant();
        }
    }
}
But since these two CalculateMD5Hash calls can run in parallel, I tried this:
Task<string> sequenceFileMd5Task = CalculateMD5("MyName");
Task<string> targetFileMD5task = CalculateMD5("NoName");
string firstHash = await sequenceFileMd5Task;
string secondHash = await targetFileMD5task;
My CalculateMD5 method now looks like this:
private async Task<string> CalculateMD5(string filename)
{
    using (var md5 = MD5.Create())
    {
        using (var stream = File.OpenRead(filename))
        {
            var hash = md5.ComputeHash(stream);
            return BitConverter.ToString(hash).Replace("-", string.Empty).ToLowerInvariant();
        }
    }
}
I expected the code to run asynchronously, but it still runs synchronously.
Answer 0 (score: 3)
This is likely to be I/O bound, so parallelizing it may not speed things up much (it may even slow things down).
That said, the problem with your code is that you never create any new tasks to run the work in the background (merely marking a method async does not create any threads).
The simplest solution is not to try to "force" async here, but to use AsParallel via PLINQ:
List<string> files = new List<string>()
{
    "MyName",
    "NoName"
};
var results = files.AsParallel().Select(CalculateMD5).ToList();
If you want to limit the number of threads used for this, you can use WithDegreeOfParallelism() as in the following example, which limits the number of parallel threads to 2:
var results = files.AsParallel().WithDegreeOfParallelism(2).Select(CalculateMD5).ToList();
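Note that AsParallel() does not guarantee output order by default, so if you need to know which hash belongs to which file, one option is to project file/hash pairs instead. A small sketch, assuming the question's original synchronous CalculateMD5; the hashesByFile name is just for illustration:

var hashesByFile = files
    .AsParallel()
    .WithDegreeOfParallelism(2)
    .Select(f => new { File = f, Hash = CalculateMD5(f) })  // hash each file on a worker thread
    .ToDictionary(x => x.File, x => x.Hash);                // look results up by file name

string firstHash = hashesByFile["MyName"];
string secondHash = hashesByFile["NoName"];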
Note, however, that if something like MD5.ComputeHashAsync() existed, you would definitely want to use it with async/await and Task.WhenAll(); but no such method exists.
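On newer runtimes (.NET 5 and later) HashAlgorithm does provide ComputeHashAsync, so that pattern becomes possible. A sketch only, assuming such a runtime; the CalculateMD5Async name is illustrative, not from the question:

// Sketch: assumes .NET 5+, where HashAlgorithm.ComputeHashAsync is available.
private static async Task<string> CalculateMD5Async(string filename)
{
    using (var md5 = MD5.Create())
    using (var stream = File.OpenRead(filename))
    {
        byte[] hash = await md5.ComputeHashAsync(stream); // reads and hashes without blocking the caller
        return BitConverter.ToString(hash).Replace("-", string.Empty).ToLowerInvariant();
    }
}

// Both hashes start immediately and are awaited together.
string[] hashes = await Task.WhenAll(
    CalculateMD5Async("MyName"),
    CalculateMD5Async("NoName"));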
Answer 1 (score: 2)
You can wrap the function body in a task and then await the result.
private async Task<string> CalculateMD5(string filename)
{
    return await Task.Run(() =>
    {
        using (var md5 = MD5.Create())
        {
            using (var stream = File.OpenRead(filename))
            {
                var hash = md5.ComputeHash(stream);
                return BitConverter.ToString(hash).Replace("-", string.Empty).ToLowerInvariant();
            }
        }
    });
}
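With this Task.Run-based version, the calling code from the question does run the two hashes concurrently; you can also await them together with Task.WhenAll. A small sketch reusing the question's variable names:

Task<string> sequenceFileMd5Task = CalculateMD5("MyName");
Task<string> targetFileMD5task = CalculateMD5("NoName");

// Both Task.Run bodies are already executing on thread-pool threads at this point;
// Task.WhenAll simply waits for both to complete.
string[] hashes = await Task.WhenAll(sequenceFileMd5Task, targetFileMD5task);
string firstHash = hashes[0];
string secondHash = hashes[1];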
Answer 2 (score: 2)
One way to speed this up is to use double buffering, so that one thread reads from the file into one buffer while the MD5 of the other buffer is being computed.
This lets you overlap the I/O with the computation.
The cleanest way to do this would be a single task responsible for hashing all of the blocks, but since that would make the code considerably more complicated (and is unlikely to produce better results), I create a new task for each block instead.
The code looks like this:
public static async Task<byte[]> ComputeMd5Async(string filename)
{
    using (var md5 = MD5.Create())
    using (var file = new FileStream(filename, FileMode.Open, FileAccess.Read, FileShare.Read, 16384, FileOptions.SequentialScan | FileOptions.Asynchronous))
    {
        const int BUFFER_SIZE = 16 * 1024 * 1024; // Adjust buffer size to taste.
        byte[] buffer1 = new byte[BUFFER_SIZE];
        byte[] buffer2 = new byte[BUFFER_SIZE];
        byte[] buffer  = buffer1; // Double-buffered, so use 'buffer' to switch between buffers.
        var task = Task.CompletedTask;

        while (true)
        {
            buffer = (buffer == buffer1) ? buffer2 : buffer1; // Swap buffers for double-buffering.
            int n = await file.ReadAsync(buffer, 0, buffer.Length);
            await task;
            task.Dispose();

            if (n == 0)
                break;

            var block = buffer;
            task = Task.Run(() => md5.TransformBlock(block, 0, n, null, 0));
        }

        md5.TransformFinalBlock(buffer, 0, 0);
        return md5.Hash;
    }
}
Here is a compilable test application:
using System;
using System.Diagnostics;
using System.IO;
using System.Security.Cryptography;
using System.Threading.Tasks;

namespace Demo
{
    class Program
    {
        static async Task Main()
        {
            string file = @"C:\ISO\063-2495-00-Rev 1.iso";
            Stopwatch sw = new Stopwatch();

            for (int i = 0; i < 4; ++i) // Try several times.
            {
                sw.Restart();
                var hash = await ComputeMd5Async(file);
                Console.WriteLine("ComputeMd5Async() Took " + sw.Elapsed);
                Console.WriteLine(string.Join(", ", hash));
                Console.WriteLine();

                sw.Restart();
                hash = ComputeMd5(file);
                Console.WriteLine("ComputeMd5() Took " + sw.Elapsed);
                Console.WriteLine(string.Join(", ", hash));
                Console.WriteLine();
            }
        }

        public static byte[] ComputeMd5(string filename)
        {
            using var md5 = MD5.Create();
            using var stream = File.OpenRead(filename);
            md5.ComputeHash(stream);
            return md5.Hash;
        }

        public static async Task<byte[]> ComputeMd5Async(string filename)
        {
            using (var md5 = MD5.Create())
            using (var file = new FileStream(filename, FileMode.Open, FileAccess.Read, FileShare.Read, 16384, FileOptions.SequentialScan | FileOptions.Asynchronous))
            {
                const int BUFFER_SIZE = 16 * 1024 * 1024; // Adjust buffer size to taste.
                byte[] buffer1 = new byte[BUFFER_SIZE];
                byte[] buffer2 = new byte[BUFFER_SIZE];
                byte[] buffer  = buffer1; // Double-buffered, so use 'buffer' to switch between buffers.
                var task = Task.CompletedTask;

                while (true)
                {
                    buffer = (buffer == buffer1) ? buffer2 : buffer1; // Swap buffers for double-buffering.
                    int n = await file.ReadAsync(buffer, 0, buffer.Length);
                    await task;
                    task.Dispose();

                    if (n == 0)
                        break;

                    var block = buffer;
                    task = Task.Run(() => md5.TransformBlock(block, 0, n, null, 0));
                }

                md5.TransformFinalBlock(buffer, 0, 0);
                return md5.Hash;
            }
        }
    }
}
The results I get for a file of about 2.5 GB:
ComputeMd5Async() Took 00:00:04.8066365
49, 54, 154, 19, 115, 198, 28, 163, 5, 182, 183, 91, 2, 5, 241, 253
ComputeMd5() Took 00:00:06.9654982
49, 54, 154, 19, 115, 198, 28, 163, 5, 182, 183, 91, 2, 5, 241, 253
ComputeMd5Async() Took 00:00:04.7018911
49, 54, 154, 19, 115, 198, 28, 163, 5, 182, 183, 91, 2, 5, 241, 253
ComputeMd5() Took 00:00:07.3552470
49, 54, 154, 19, 115, 198, 28, 163, 5, 182, 183, 91, 2, 5, 241, 253
ComputeMd5Async() Took 00:00:04.6536709
49, 54, 154, 19, 115, 198, 28, 163, 5, 182, 183, 91, 2, 5, 241, 253
ComputeMd5() Took 00:00:06.9807878
49, 54, 154, 19, 115, 198, 28, 163, 5, 182, 183, 91, 2, 5, 241, 253
ComputeMd5Async() Took 00:00:04.7271215
49, 54, 154, 19, 115, 198, 28, 163, 5, 182, 183, 91, 2, 5, 241, 253
ComputeMd5() Took 00:00:07.4089941
49, 54, 154, 19, 115, 198, 28, 163, 5, 182, 183, 91, 2, 5, 241, 253
So the asynchronous double-buffered version runs about 50% faster.
There may be faster approaches, but this one is fairly simple.
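To tie this back to the original question, the two files can then also be hashed concurrently. A sketch that combines the ComputeMd5Async above with the question's file names (whether the concurrency helps depends on the disk, as Answer 0 points out):

byte[][] hashes = await Task.WhenAll(
    ComputeMd5Async("MyName"),
    ComputeMd5Async("NoName"));

// Convert the raw hashes to the lower-case hex strings used in the question.
string firstHash = BitConverter.ToString(hashes[0]).Replace("-", string.Empty).ToLowerInvariant();
string secondHash = BitConverter.ToString(hashes[1]).Replace("-", string.Empty).ToLowerInvariant();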