The following code shows how to download blobs from Azure Blob Storage and save them into a DataTable:
foreach (var currIndexGroup in blobsGroupedByIndex)
{
    DataRow dr = dtResult.NewRow();
    foreach (var currIndex in currIndexGroup)
    {
        long fileByteLength = currIndex.Properties.Length;
        byte[] serializedAndCompressedResult = new byte[fileByteLength];
        currIndex.DownloadToByteArray(serializedAndCompressedResult, 0);
        dr[currIndex.Metadata["columnName"]] = DeflateStream.UncompressString(serializedAndCompressedResult);
    }
    dtResult.Rows.Add(dr);
}
The problem is that the download is slow: 1000 really small blobs take about 20 seconds. If I try to run it asynchronously with currIndex.DownloadToByteArrayAsync(serializedAndCompressedResult, 0); the subsequent decompression throws the exception Bad state (invalid stored block lengths).
What is the correct way to fill this DataTable asynchronously?
Answer 0 (score: 2)
//the plan here is to make a model that holds your currIndex and byte array so you can return that model from a task
public class MyModel
{
    public CloudBlockBlob CurrIndex { get; set; }
    public byte[] FileBytes { get; set; }
}
foreach (var currIndexGroup in blobsGroupedByIndex)
{
    var myTasks = new List<Task<MyModel>>();
    foreach (var currIndex in currIndexGroup)
    {
        myTasks.Add(Task<MyModel>.Factory.StartNew(() =>
        {
            var myModel = new MyModel();
            myModel.CurrIndex = currIndex;
            long fileByteLength = myModel.CurrIndex.Properties.Length;
            myModel.FileBytes = new byte[fileByteLength];
            currIndex.DownloadToByteArray(myModel.FileBytes, 0);
            return myModel;
        }));
    }
    Task.WaitAll(myTasks.ToArray());
    foreach (var task in myTasks)
    {
        MyModel myModel = task.Result;
        DataRow dr = dtResult.NewRow();
        dr[myModel.CurrIndex.Metadata["columnName"]] = DeflateStream.UncompressString(myModel.FileBytes);
        dtResult.Rows.Add(dr);
    }
}
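A variation on the same idea is to await the downloads with Task.WhenAll instead of blocking on Task.WaitAll, then fill the DataTable on a single thread so no locking is needed. The sketch below is self-contained and simulates the blob work: FakeBlob and FetchAsync are hypothetical stand-ins (not the Azure SDK) for a blob and for DownloadToByteArrayAsync plus decompression.

```csharp
using System;
using System.Data;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

public class WhenAllSketch
{
    // Hypothetical stand-in for a blob: a column name plus a payload.
    public record FakeBlob(string ColumnName, string Payload);

    // Stand-in for DownloadToByteArrayAsync: simulates I/O latency and
    // returns the "downloaded" bytes for one blob.
    public static async Task<(string Column, byte[] Bytes)> FetchAsync(FakeBlob blob)
    {
        await Task.Delay(10); // simulated network round-trip
        return (blob.ColumnName, Encoding.UTF8.GetBytes(blob.Payload));
    }

    // Download every blob concurrently, then fill one DataRow on a single
    // thread so the DataTable never needs locking.
    public static async Task<DataTable> BuildRowAsync(FakeBlob[] blobs)
    {
        var dtResult = new DataTable();
        foreach (var b in blobs)
            dtResult.Columns.Add(b.ColumnName, typeof(string));

        // Start all downloads, then await them together instead of
        // blocking the calling thread with Task.WaitAll.
        var results = await Task.WhenAll(blobs.Select(FetchAsync));

        DataRow dr = dtResult.NewRow();
        foreach (var (column, bytes) in results)
            dr[column] = Encoding.UTF8.GetString(bytes);
        dtResult.Rows.Add(dr);
        return dtResult;
    }

    public static async Task Main()
    {
        var table = await BuildRowAsync(new[]
        {
            new FakeBlob("colA", "hello"),
            new FakeBlob("colB", "world"),
        });
        Console.WriteLine($"{table.Rows[0]["colA"]} {table.Rows[0]["colB"]}");
        // prints "hello world"
    }
}
```

In the real code you would await DownloadToByteArrayAsync in FetchAsync; the important part is that the decompression only runs after the await completes, which avoids reading a buffer the download has not finished writing.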
You can use Parallel.ForEach on the outer foreach loop to increase the degree of parallelism. You would have to lock dtResult to keep it thread-safe.
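A minimal sketch of that suggestion, with the per-group blob work replaced by a hypothetical computation (in the real code: download and decompress): Parallel.ForEach runs the groups concurrently, the heavy work happens outside the lock, and the lock serializes NewRow/Rows.Add because DataTable is not thread-safe for writes.

```csharp
using System;
using System.Data;
using System.Linq;
using System.Threading.Tasks;

public class ParallelSketch
{
    public static DataTable Build(int groupCount)
    {
        var dtResult = new DataTable();
        dtResult.Columns.Add("value", typeof(int));
        var gate = new object();

        // Outer loop parallelized, as the answer suggests.
        Parallel.ForEach(Enumerable.Range(0, groupCount), i =>
        {
            int value = i * i; // hypothetical per-group work, done outside the lock

            lock (gate) // DataTable writes are not thread-safe: serialize them
            {
                DataRow dr = dtResult.NewRow();
                dr["value"] = value;
                dtResult.Rows.Add(dr);
            }
        });
        return dtResult;
    }

    public static void Main()
    {
        var table = Build(100);
        Console.WriteLine(table.Rows.Count); // prints 100
    }
}
```

Keeping the expensive work outside the lock is what makes the parallelism pay off; the locked section is only the cheap row insertion.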