I wrote some code to process a large binary file (over 2 GB), reading it in 1024-byte chunks. The file contains data blocks, and consecutive blocks are separated by the two bytes 0x5D 0x5B.
The code works, but on a large file it takes more than 1.5 hours, while an equivalent Ruby script doing the same job finishes in under 15 minutes.
You can test the code with the file "input.txt" below and you will see that it prints every block correctly. You can create "input.txt" with the "File.WriteAllBytes()..." line, or create it in Notepad with the following content (the surrounding double quotes are not part of the file):
"][How][many][words][we][have][here?][6][or][more?]"
In this example I use the BinaryReader class and its Seek method to read 20-byte chunks (1024 bytes with the large files), because the sample file only contains 50 bytes. Inside each chunk I then look for the position where its last block starts and store it in the variable lastPos, because that last block may be incomplete.
Is there a way to improve my code so it runs faster?
I am not sure whether the problem is the BinaryReader or the thousands of seek operations. The end goal is to apply some parsing to each block, but most of the time seems to be spent just separating the blocks (see the timing sketch after the code below).
static void Main(string[] args)
{
    // Requires: using System; using System.IO;
    //           using System.Runtime.Remoting.Metadata.W3cXsd2001; (for SoapHexBinary)
    File.WriteAllBytes("C:/input.txt", new byte[] { 0x5d, 0x5b, 0x48, 0x6f, 0x77, 0x5d, 0x5b, 0x6d, 0x61, 0x6e,
                                                    0x79, 0x5d, 0x5b, 0x77, 0x6f, 0x72, 0x64, 0x73, 0x5d, 0x5b,
                                                    0x77, 0x65, 0x5d, 0x5b, 0x68, 0x61, 0x76, 0x65, 0x5d, 0x5b,
                                                    0x68, 0x65, 0x72, 0x65, 0x3f, 0x5d, 0x5b, 0x36, 0x5d, 0x5b,
                                                    0x6f, 0x72, 0x5d, 0x5b, 0x6d, 0x6f, 0x72, 0x65, 0x3f, 0x5d });

    using (BinaryReader br = new BinaryReader(File.Open("C:/input.txt", FileMode.Open)))
    {
        int lastPos = 0;
        int EachChunk = 20;
        long ReadFrom = 0;
        int c = 0;
        int count = 0;

        while (lastPos != -1)
        {
            lastPos = -1;
            br.BaseStream.Seek(ReadFrom, SeekOrigin.Begin);
            byte[] data = br.ReadBytes(EachChunk);

            // Loop to look for the position of the last block in the current chunk
            int k = data.Length - 1;
            while (k > 0 && lastPos == -1)
            {
                lastPos = (data[k] == 91 && data[k - 1] == 93 ? (k - 1) : (-1));
                k--;
            }
            if (lastPos != -1)
            {
                Array.Resize(ref data, lastPos); // Resizing array up to the last block position
            }
            // Storing position of pointer where the next chunk will begin
            ReadFrom += lastPos + 2;

            // Converting binary data to a string of hex numbers
            SoapHexBinary shb = new SoapHexBinary(data);
            // Replace the separator by a newline
            string str = shb.ToString().Replace("5D5B", Environment.NewLine);

            // Use StringReader to process each block as a line, using the newline as separator
            using (StringReader reader = new StringReader(str))
            {
                // Loop over the lines (blocks) in the string
                string Block;
                count = c;
                while ((Block = reader.ReadLine()) != null)
                {
                    if ((String.IsNullOrWhiteSpace(Block) ||
                         String.IsNullOrEmpty(Block)) == false)
                    {
                        // +++++ Further processing for each block +++++++++++++++++++++++++
                        count++;
                        Console.WriteLine("Block # {0}: {1}", count, Block);
                        // ++++++++++++++++++++++++++++++++++++++++++++++++++
                    }
                }
            }
            c = count;
        }
    }
    Console.ReadLine();
}
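To see where the time actually goes, here is a minimal timing sketch. The TimingProbe name and structure are mine and not part of the original code, and it advances a whole chunk at a time instead of jumping to lastPos, so it is only a rough probe: it does one Seek plus ReadBytes per chunk like the loop above, runs the same SoapHexBinary + Replace conversion, and reports each phase separately.

using System;
using System.Diagnostics;
using System.IO;
using System.Runtime.Remoting.Metadata.W3cXsd2001; // SoapHexBinary

static class TimingProbe
{
    public static void Run(string fileName, int chunkSize = 1024)
    {
        var readTime = new Stopwatch();
        var convertTime = new Stopwatch();
        long blocksSeen = 0;

        using (var br = new BinaryReader(File.Open(fileName, FileMode.Open, FileAccess.Read)))
        {
            long readFrom = 0;
            long length = br.BaseStream.Length;
            while (readFrom < length)
            {
                readTime.Start();
                br.BaseStream.Seek(readFrom, SeekOrigin.Begin);   // one seek per chunk, as in the question
                byte[] data = br.ReadBytes(chunkSize);
                readTime.Stop();
                if (data.Length == 0) break;
                readFrom += data.Length;

                convertTime.Start();
                // Same conversion the original loop performs on every chunk
                string str = new SoapHexBinary(data).ToString().Replace("5D5B", Environment.NewLine);
                blocksSeen += str.Split(new[] { Environment.NewLine }, StringSplitOptions.RemoveEmptyEntries).Length;
                convertTime.Stop();
            }
        }

        Console.WriteLine("seek+read: {0} ms, hex+replace: {1} ms, ~{2} blocks",
            readTime.ElapsedMilliseconds, convertTime.ElapsedMilliseconds, blocksSeen);
    }
}

Calling TimingProbe.Run on a big file should show whether the seeks or the hex-string round trip dominate.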
Update
I found a problem. In Mike Burdick's code the buffer starts growing when a 0x5B is found and is printed when a 0x5D is found. But since the blocks are separated by the sequence 0x5D 0x5B, if a lone 0x5D or a lone 0x5B appears inside a block, the code starts loading or flushing the buffer in the wrong places. The buffer should only be flushed and reloaded when the full sequence 0x5D 0x5B is found, not whenever a 0x5B alone shows up; otherwise the result is different.
You can test it with the input below, where I have added lone 0x5D and 0x5B bytes inside some blocks. I only want to cut the buffer when 0x5D 0x5B is found, because 0x5D 0x5B works like a "newline" separator (a compact sketch of this pair-only splitting follows the sample input below).
File.WriteAllBytes("C:/input1.txt", new byte[] {
0x5D, 0x5B, 0x48, 0x5D, 0x77, 0x5D, 0x5B, 0x6d, 0x5B, 0x6e,
0x5D, 0x5D, 0x5B, 0x77, 0x6f, 0x72, 0x64, 0x73, 0x5D, 0x5B,
0x77, 0x65, 0x5D, 0x5B, 0x68, 0x61, 0x76, 0x65, 0x5D, 0x5B,
0x68, 0x65, 0x72, 0x65, 0x3f, 0x5D, 0x5B, 0x36, 0x5D, 0x5B,
0x6f, 0x72, 0x5D, 0x5B, 0x6d, 0x6f, 0x72, 0x65, 0x3f, 0x5D });
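For reference, here is a minimal sketch of that pair-only splitting (my own illustration, not Mike Burdick's code and not the modification shown further below). It buffers bytes and only flushes a block when the exact byte pair 0x5D 0x5B is seen, so a lone ] or [ stays inside the block.

using System;
using System.Collections.Generic;
using System.IO;
using System.Text;

static class PairDelimiterScan
{
    // Emit one block per exact 0x5D 0x5B pair; lone 0x5D or 0x5B bytes stay inside the block.
    public static void Run(string fileName)
    {
        var block = new List<byte>();
        int blockNumber = 0;
        bool pendingCloseBracket = false; // saw 0x5D, waiting to see whether 0x5B follows

        using (var stream = File.OpenRead(fileName))
        {
            var chunk = new byte[1024];
            int read;
            while ((read = stream.Read(chunk, 0, chunk.Length)) > 0)
            {
                for (int i = 0; i < read; i++)
                {
                    byte b = chunk[i];
                    if (pendingCloseBracket)
                    {
                        pendingCloseBracket = false;
                        if (b == 0x5B)
                        {
                            // Exact 0x5D 0x5B pair: flush the current block
                            if (block.Count > 0)
                                Console.WriteLine("Block # {0}: {1}", ++blockNumber,
                                    Encoding.ASCII.GetString(block.ToArray()));
                            block.Clear();
                            continue;
                        }
                        block.Add(0x5D); // the 0x5D was part of the block after all
                    }
                    if (b == 0x5D)
                        pendingCloseBracket = true;
                    else
                        block.Add(b);
                }
            }
        }
        // Flush whatever is left at end of file, including a trailing 0x5D
        if (pendingCloseBracket) block.Add(0x5D);
        if (block.Count > 0)
            Console.WriteLine("Block # {0}: {1}", ++blockNumber,
                Encoding.ASCII.GetString(block.ToArray()));
    }
}

With input1.txt this should print H]w, m[n], words, we, have, here?, 6, or, more?] as separate blocks.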
Update 2:
I have tried Mike Burdick's code, but it does not give the correct output. For example, if you change the content of the input file to this:
82-F][How]]][ma[ny][words%][we][[have][here?]]
The output should be (it is shown below in ASCII to make it easier to see):
82-F
How]]
ma[ny]
words%
we
[have
here?]]
Apart from that, do you think BinaryReader is somehow slow? When I test with bigger files the execution is still slow.
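On the BinaryReader question: ReadBytes on a FileStream is normally buffered and cheap, so the reads themselves are rarely the slow part. If you want to rule out raw I/O, one small experiment (the 64 KB buffer size and the fileName variable below are assumptions, not from the original code) is to give the underlying FileStream a larger buffer so each physical read serves many 1024-byte chunks:

// Hypothetical variant: open the file with an explicit 64 KB FileStream buffer
using (var stream = new FileStream(fileName, FileMode.Open, FileAccess.Read,
                                   FileShare.Read, 64 * 1024))
using (var reader = new BinaryReader(stream))
{
    byte[] chunk = reader.ReadBytes(1024);   // same chunked reads as before
    // ... same scanning logic as in the snippets above ...
}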
Update #3:
I have kept testing Mike Burdick's code. This is probably not the best modification of it, since I changed it to handle a ] or [ that can appear in the middle of a block. It seems to work, although when the file ends with "]" it seems to only print that last "]".
For example, with the same content as before: "][How][many][words][we][have][here?][6][or][more?]"
My modification of Mike Burdick's code is:
static void OptimizedScan(string fileName)
{
    const byte startDelimiter = 0x5d;
    const byte endDelimiter = 0x5b;
    using (BinaryReader reader = new BinaryReader(File.Open(fileName, FileMode.Open)))
    {
        List<byte> buffer = new List<byte>();
        List<string> buffer1 = new List<string>(); // note: not used below
        bool captureBytes = false;
        bool foundStartDelimiter = false;
        int wordCount = 0;
        SoapHexBinary hex = new SoapHexBinary();
        while (true)
        {
            byte[] chunk = reader.ReadBytes(1024);
            if (chunk.Length > 0)
            {
                foreach (byte data in chunk)
                {
                    if (data == startDelimiter && foundStartDelimiter == false)
                    {
                        foundStartDelimiter = true;
                    }
                    else if (data == endDelimiter && foundStartDelimiter)
                    {
                        // DisplayWord is the helper from Mike Burdick's answer;
                        // its definition is not shown in this excerpt
                        wordCount = DisplayWord(buffer, wordCount, hex);
                        // Start capturing
                        captureBytes = true;
                        foundStartDelimiter = false;
                    }
                    else if ((data == startDelimiter && foundStartDelimiter) ||
                             (data == endDelimiter && foundStartDelimiter == false))
                    {
                        buffer.Add(data);
                    }
                    else if (captureBytes)
                    {
                        buffer.Add(data);
                    }
                }
            }
            else
            {
                break;
            }
        }
        if (foundStartDelimiter)
        {
            buffer.Add(startDelimiter);
        }
        DisplayWord(buffer, wordCount, hex);
    }
}
Answer 0: (score: 1)
I think this is faster and simpler, code-wise:
static void OptimizedScan(string fileName)
{
    const byte startDelimiter = 0x5d;
    const byte endDelimiter = 0x5b;
    using (BinaryReader reader = new BinaryReader(File.Open(fileName, FileMode.Open)))
    {
        List<byte> buffer = new List<byte>();
        bool captureBytes = false;
        bool foundStartDelimiter = false;
        int wordCount = 0;
        SoapHexBinary hex = new SoapHexBinary();
        while (true)
        {
            byte[] chunk = reader.ReadBytes(1024);
            if (chunk.Length > 0)
            {
                foreach (byte data in chunk)
                {
                    if (data == startDelimiter)
                    {
                        foundStartDelimiter = true;
                    }
                    else if (data == endDelimiter && foundStartDelimiter)
                    {
                        wordCount = DisplayWord(buffer, wordCount, hex);
                        // Start capturing
                        captureBytes = true;
                        foundStartDelimiter = false;
                    }
                    else if (captureBytes)
                    {
                        if (foundStartDelimiter)
                        {
                            buffer.Add(startDelimiter);
                        }
                        buffer.Add(data);
                    }
                }
            }
            else
            {
                break;
            }
        }
        if (foundStartDelimiter)
        {
            buffer.Add(startDelimiter);
        }
        DisplayWord(buffer, wordCount, hex);
    }
}
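The DisplayWord helper is referenced in both snippets above but its definition is not included in this excerpt. A minimal sketch of what such a helper could look like, inferred only from how it is called (a List<byte> buffer, a running count, and a reusable SoapHexBinary), might be:

// Hypothetical sketch of DisplayWord, not the original helper: it hex-prints the buffered
// block, clears the buffer, and returns the updated count, mirroring how it is called above.
// Assumes: using System; using System.Collections.Generic;
//          using System.Runtime.Remoting.Metadata.W3cXsd2001;
static int DisplayWord(List<byte> buffer, int wordCount, SoapHexBinary hex)
{
    if (buffer.Count == 0)
    {
        return wordCount;              // nothing captured since the last delimiter
    }
    hex.Value = buffer.ToArray();      // reuse the SoapHexBinary instance for the conversion
    wordCount++;
    Console.WriteLine("Block # {0}: {1}", wordCount, hex.ToString());
    buffer.Clear();                    // start the next block with an empty buffer
    return wordCount;
}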