Download and split a large file in blob storage into 100 MB chunks

Time: 2017-06-06 03:47:09

Tags: azure azure-storage-blobs

I have a 2 GB file in blob storage and am building a console application that downloads this file to the desktop. The requirement is to split it into 100 MB chunks and append a number to each file name. I do not need to re-combine the files afterwards; all I need are the chunks.

I currently have the code from Azure download blob part.

But I cannot figure out how to stop downloading into the current file once it reaches 100 MB and then create a new file.

Any help would be greatly appreciated.

Update: here is my code

CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
var blobClient = account.CreateCloudBlobClient();
var container = blobClient.GetContainerReference(containerName);
var file = uri;
var blob = container.GetBlockBlobReference(file);
//First fetch the size of the blob. We use this to create an empty file with size = blob's size
blob.FetchAttributes();
var blobSize = blob.Properties.Length;
long blockSize = (1 * 1024 * 1024);//1 MB chunk
blockSize = Math.Min(blobSize, blockSize);
//Create an empty file of blob size
using (FileStream fs = new FileStream(file, FileMode.Create))//Create empty file.
{
    fs.SetLength(blobSize);//Set its size
}
var blobRequestOptions = new BlobRequestOptions
{
    RetryPolicy = new ExponentialRetry(TimeSpan.FromSeconds(5), 3),
    MaximumExecutionTime = TimeSpan.FromMinutes(60),
    ServerTimeout = TimeSpan.FromMinutes(60)
};
long startPosition = 0;
long currentPointer = 0;
long bytesRemaining = blobSize;
do
{
    var bytesToFetch = Math.Min(blockSize, bytesRemaining);
    using (MemoryStream ms = new MemoryStream())
    {
        //Download range (by default 1 MB)
        blob.DownloadRangeToStream(ms, currentPointer, bytesToFetch, null, blobRequestOptions);
        ms.Position = 0;
        var contents = ms.ToArray();
        using (var fs = new FileStream(file, FileMode.Open))//Open that file
        {
            fs.Position = currentPointer;//Seek to the current offset
            fs.Write(contents, 0, contents.Length);//Write the downloaded range at that offset
        }
        startPosition += blockSize;
        currentPointer += contents.Length;//Update pointer
        bytesRemaining -= contents.Length;//Update bytes remaining

        Console.WriteLine(fileName + dateTimeStamp + ".csv " + (startPosition / 1024 / 1024) + "/" + (blob.Properties.Length / 1024 / 1024) + " MB downloaded...");
    }
}
while (bytesRemaining > 0);
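
For illustration, below is a minimal sketch of one way the loop above could be changed so that it closes the current output file once it reaches 100 MB and continues into a new file with a number appended to the name. The 100 MB constant and the naming via fileName and dateTimeStamp are assumptions, not part of the original code:

//Sketch only: roll over to a new, numbered file every 100 MB instead of writing one big file.
const long chunkFileSize = 100 * 1024 * 1024; //100 MB per output file (assumption)
long blockSize = 1 * 1024 * 1024;             //still download 1 MB per request
long currentPointer = 0;
long bytesRemaining = blobSize;
int chunkIndex = 0;

//Hypothetical naming helper: "<fileName><dateTimeStamp>_0.csv", "_1.csv", ...
Func<int, string> getChunkPath = i => fileName + dateTimeStamp + "_" + i + ".csv";

FileStream chunkStream = new FileStream(getChunkPath(chunkIndex), FileMode.Create);
try
{
    while (bytesRemaining > 0)
    {
        var bytesToFetch = Math.Min(blockSize, bytesRemaining);
        using (var ms = new MemoryStream())
        {
            blob.DownloadRangeToStream(ms, currentPointer, bytesToFetch, null, blobRequestOptions);
            var contents = ms.ToArray();
            chunkStream.Write(contents, 0, contents.Length);
            currentPointer += contents.Length;
            bytesRemaining -= contents.Length;
        }

        //When the current chunk file reaches 100 MB, close it and start the next numbered one.
        if (chunkStream.Length >= chunkFileSize && bytesRemaining > 0)
        {
            chunkStream.Dispose();
            chunkIndex++;
            chunkStream = new FileStream(getChunkPath(chunkIndex), FileMode.Create);
        }
    }
}
finally
{
    chunkStream.Dispose();
}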

1 Answer:

Answer 0 (score: 1)

As I understand it, you can break the blob file into the expected pieces (100 MB each) and then download each piece with CloudBlockBlob.DownloadRangeToStream. Here is my code snippet for your reference:

ParallelDownloadBlob

private static void ParallelDownloadBlob(Stream outPutStream, CloudBlockBlob blob, long startRange, long endRange)
{
    blob.FetchAttributes();
    int bufferLength = 1 * 1024 * 1024;//1 MB chunk for download
    long blobRemainingLength = endRange - startRange;
    //Build the queue of (offset, length) ranges to download
    Queue<KeyValuePair<long, long>> queues = new Queue<KeyValuePair<long, long>>();
    long offset = startRange;
    while (blobRemainingLength > 0)
    {
        long chunkLength = (long)Math.Min(bufferLength, blobRemainingLength);
        queues.Enqueue(new KeyValuePair<long, long>(offset, chunkLength));
        offset += chunkLength;
        blobRemainingLength -= chunkLength;
    }
    //Download up to 5 ranges in parallel and write each one at its position in the output stream
    Parallel.ForEach(queues,
        new ParallelOptions()
        {
            MaxDegreeOfParallelism = 5
        }, (queue) =>
        {
            using (var ms = new MemoryStream())
            {
                blob.DownloadRangeToStream(ms, queue.Key, queue.Value);
                lock (outPutStream)
                {
                    outPutStream.Position = queue.Key - startRange;
                    var bytes = ms.ToArray();
                    outPutStream.Write(bytes, 0, bytes.Length);
                }
            }
        });
}

Main program

var container = storageAccount.CreateCloudBlobClient().GetContainerReference(defaultContainerName);
var blob = container.GetBlockBlobReference("code.txt");
blob.FetchAttributes();
long blobTotalLength = blob.Properties.Length;
long chunkLength = 10 * 1024; //divide blob file into each file with 10KB in size
Directory.CreateDirectory(Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "resources")); //make sure the output folder exists
for (long i = 0; i < blobTotalLength; i += chunkLength)
{
    long startRange = i;
    long endRange = (i + chunkLength) > blobTotalLength ? blobTotalLength : (i + chunkLength);

    using (var fs = new FileStream(Path.Combine(AppDomain.CurrentDomain.BaseDirectory, $"resources\\code_[{startRange}]_[{endRange}].txt"), FileMode.Create))
    {
        Console.WriteLine($"\nParallelDownloadBlob from range [{startRange}] to [{endRange}] start...");
        Stopwatch sp = new Stopwatch();
        sp.Start();

        ParallelDownloadBlob(fs, blob, startRange, endRange);
        sp.Stop();
        Console.WriteLine($"download done, time cost:{sp.ElapsedMilliseconds / 1000.0}s");
    }
}
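
For the 100 MB requirement in the question, chunkLength would presumably be set to 100 * 1024 * 1024, and the byte-range suffix in the file name could be replaced by a running index (code_0.txt, code_1.txt, and so on); those adjustments are not part of the sample above.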

RESULT (screenshots omitted)


UPDATE

Per your requirement, I would suggest downloading the blob into a single file first, then using LumenWorks.Framework.IO to read that large file record by record, keeping track of how many bytes you have read, and saving the records to a new CSV file whenever the size reaches 100 MB. Here is a code snippet for your reference:

using (CsvReader csv = new CsvReader(new StreamReader("data.csv"), true))
{
    int fieldCount = csv.FieldCount;
    string[] headers = csv.GetFieldHeaders();
    while (csv.ReadNextRecord())
    {
        for (int i = 0; i < fieldCount; i++)
            Console.Write(string.Format("{0} = {1};",
                          headers[i],
                          csv[i] == null ? "MISSING" : csv[i]));
        //TODO: 
        //1.Read the current record, check the total bytes you have read;
        //2.Create a new csv file if the current total bytes up to 100MB, then save the current record to the current CSV file.
    }
}
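
To flesh out the TODO in the snippet above, here is a rough sketch that keeps a running size counter while reading with the LumenWorks CsvReader and starts a new output file once the current one reaches about 100 MB; the 100 MB constant, the data.csv input name, and the part_{n}.csv output naming are assumptions for illustration:

//Sketch only: split data.csv into pieces of roughly 100 MB while reading it record by record.
//Requires: using System.IO; using LumenWorks.Framework.IO.Csv;
long maxChunkSize = 100L * 1024 * 1024; //target size per output file (assumption)
int partIndex = 0;
long writtenChars = 0; //character count, used as a rough proxy for bytes
StreamWriter writer = null;

using (CsvReader csv = new CsvReader(new StreamReader("data.csv"), true))
{
    int fieldCount = csv.FieldCount;
    string[] headers = csv.GetFieldHeaders();
    string headerLine = string.Join(",", headers);

    while (csv.ReadNextRecord())
    {
        //Rebuild the current record as one CSV line (naive join; real data may need quoting)
        string[] fields = new string[fieldCount];
        for (int i = 0; i < fieldCount; i++)
            fields[i] = csv[i];
        string line = string.Join(",", fields);

        //Open the next numbered file when none is open yet or the current one is "full"
        if (writer == null || writtenChars + line.Length + 2 > maxChunkSize)
        {
            if (writer != null)
                writer.Dispose();
            writer = new StreamWriter($"part_{partIndex++}.csv");
            writer.WriteLine(headerLine);
            writtenChars = headerLine.Length + 2; //+2 for the line ending
        }

        writer.WriteLine(line);
        writtenChars += line.Length + 2;
    }
}
if (writer != null)
    writer.Dispose();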

In addition, you can refer to A Fast CSV Reader and CsvHelper for more details.

UPDATE2

Here is a code sample for splitting a large CSV file into small CSV files of a fixed byte size. I used CsvHelper 2.16.3 for the following snippet, which you can refer to:

string[] headers = new string[0];
using (var sr = new StreamReader(@"C:\Users\v-brucch\Desktop\BlobHourMetrics.csv")) //83.9KB
{
    using (CsvHelper.CsvReader csvReader = new CsvHelper.CsvReader(sr,
        new CsvHelper.Configuration.CsvConfiguration()
        {
            Delimiter = ",",
            Encoding = Encoding.UTF8
        }))
    {
        //check header
        if (csvReader.ReadHeader())
        {
            headers = csvReader.FieldHeaders;
        }

        TextWriter writer = null;
        CsvWriter csvWriter = null;
        long readBytesCount = 0;
        long chunkSize = 30 * 1024; //divide CSV file into each CSV file with byte size up to 30KB

        while (csvReader.Read())
        {
            var curRecord = csvReader.CurrentRecord;
            var curRecordByteCount = curRecord.Sum(r => Encoding.UTF8.GetByteCount(r)) + headers.Count() + 1;
            readBytesCount += curRecordByteCount;

            //check bytes you have read
            if (writer == null || readBytesCount > chunkSize)
            {
                readBytesCount = curRecordByteCount + headers.Sum(h => Encoding.UTF8.GetByteCount(h)) + headers.Count() + 1;
                if (writer != null)
                {
                    writer.Flush();
                    writer.Close();
                }
                string fileName = $"BlobHourMetrics_{Guid.NewGuid()}.csv";
                writer = new StreamWriter(Path.Combine(AppDomain.CurrentDomain.BaseDirectory, fileName), true);
                csvWriter = new CsvWriter(writer);
                csvWriter.Configuration.Encoding = Encoding.UTF8;
                //output header field
                foreach (var header in headers)
                {
                    csvWriter.WriteField(header);
                }
                csvWriter.NextRecord();
            }
            //output record field
            foreach (var field in curRecord)
            {
                csvWriter.WriteField(field);
            }
            csvWriter.NextRecord();
        }
        if (writer != null)
        {
            writer.Flush();
            writer.Close();
        }
    }
}

RESULT (screenshot omitted)