So I can upload a large file to Azure Storage in blocks/chunks like this:
// upload one block
blockBlob.PutBlock(blockId: blockID,
                   blockData: new MemoryStream(buffer),
                   contentMD5: blockHash,
                   accessCondition: null,
                   options: new BlobRequestOptions()
                   {
                       StoreBlobContentMD5 = true,
                       UseTransactionalMD5 = true
                   },
                   operationContext: null);
// set hash for the entire blob
blockBlob.Properties.ContentMD5 = fileHash;

// commit all uploaded blocks
blockBlob.PutBlockList(blockList: blockIDs,
                       accessCondition: null,
                       options: new BlobRequestOptions()
                       {
                           StoreBlobContentMD5 = true,
                           UseTransactionalMD5 = true
                       },
                       operationContext: null);
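For context, here is a minimal, self-contained sketch of how the `buffer`, `blockID`, `blockHash`, and `blockIDs` values used above could be produced. The 4 MiB block size and the zero-padded counter scheme are assumptions on my part, not from the question; only the Base64 requirements (block IDs must be Base64-encoded and the same length within a blob) come from the service.

```csharp
using System;
using System.Collections.Generic;
using System.Security.Cryptography;
using System.Text;

class BlockPrep
{
    const int BlockSize = 4 * 1024 * 1024; // 4 MiB per block (assumed; well under the service limit)

    static void Main()
    {
        // Stand-in for the file contents.
        byte[] data = new byte[10 * 1024 * 1024];
        new Random(42).NextBytes(data);

        var blockIDs = new List<string>();
        using (var md5 = MD5.Create())
        {
            for (int offset = 0, i = 0; offset < data.Length; offset += BlockSize, i++)
            {
                int size = Math.Min(BlockSize, data.Length - offset);

                // Block IDs must be Base64 strings of equal length within a blob.
                string blockID = Convert.ToBase64String(
                    Encoding.UTF8.GetBytes(i.ToString("d6")));
                blockIDs.Add(blockID);

                // Transactional hash for this block (the blockHash above).
                string blockHash = Convert.ToBase64String(
                    md5.ComputeHash(data, offset, size));
                Console.WriteLine($"{blockID} -> {blockHash}");
            }
        }
        Console.WriteLine($"blocks: {blockIDs.Count}"); // 10 MiB in 4 MiB blocks -> 3 blocks
    }
}
```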
Now I'm trying to download this large file in blocks/chunks again, like this:
// download one block
blockBlob.DownloadRangeToByteArray(target: blobContents,
                                   index: 0,
                                   blobOffset: startPosition,
                                   length: blobContents.Length,
                                   accessCondition: null,
                                   options: new BlobRequestOptions()
                                   {
                                       UseTransactionalMD5 = true
                                   },
                                   operationContext: null);
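For what it's worth, `UseTransactionalMD5` asks the service to return a Content-MD5 header for each ranged GET (the service only computes it for ranges of 4 MiB or less), and the client library checks it on arrival. That check is equivalent to this standalone sketch (hypothetical names, using only the BCL): hash the received bytes and compare against the expected Base64 MD5.

```csharp
using System;
using System.Security.Cryptography;

class RangeCheck
{
    // Returns true if the downloaded range matches the expected Base64 MD5.
    static bool VerifyRange(byte[] blobContents, int index, int length, string expectedMD5)
    {
        using (var md5 = MD5.Create())
        {
            string actual = Convert.ToBase64String(
                md5.ComputeHash(blobContents, index, length));
            return actual == expectedMD5;
        }
    }

    static void Main()
    {
        byte[] range = { 1, 2, 3, 4 };
        string expected;
        using (var md5 = MD5.Create())
            expected = Convert.ToBase64String(md5.ComputeHash(range));

        Console.WriteLine(VerifyRange(range, 0, range.Length, expected)); // True
        range[0] ^= 0xFF; // corrupt a byte
        Console.WriteLine(VerifyRange(range, 0, range.Length, expected)); // False
    }
}
```

Note this only catches in-flight corruption of each range; it says nothing about whether the block needs to be downloaded at all.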
What I found is that the MD5 hash in the blob's properties is for the entire file, not for the individual blocks/chunks. So I can only verify it after downloading all the blocks/chunks, i.e.
if (fileHash == blockBlob.Properties.ContentMD5) { /* download was good */ }
To save download time, is there a way to get the MD5 hash of each individual block/chunk, so I can compare against it before downloading that block/chunk again?

Any thoughts?