I have a proprietary file format I use as an archive. It takes a folder full of files and packs them into a single uncompressed file. The first X bytes of that file are dedicated to a "directory": each file's path within the archive, its size in bytes, and its location (byte index). The remaining bytes hold each file's actual data.
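For reference, the directory layout my reader expects (each record: uint32 offset, uint32 size, then a uint16-length-prefixed UTF-8 path, all little-endian, terminated by a sentinel record named "endTable") can be modeled outside AS3. Here is a Python sketch of the same bytes, assuming the UTF length prefix follows the stream's little-endian setting (illustration only, not production code):

```python
import struct

def pack_entry(name: str, offset: int, size: int) -> bytes:
    """One directory record: uint32 offset, uint32 size, then a
    uint16-length-prefixed UTF-8 path, all little-endian."""
    encoded = name.encode("utf-8")
    return struct.pack("<II", offset, size) + struct.pack("<H", len(encoded)) + encoded

def read_entries(blob: bytes):
    """Walk records until the 'endTable' sentinel entry is hit."""
    entries, pos = [], 0
    while True:
        offset, size = struct.unpack_from("<II", blob, pos)
        (nlen,) = struct.unpack_from("<H", blob, pos + 8)
        name = blob[pos + 10 : pos + 10 + nlen].decode("utf-8")
        pos += 10 + nlen
        if name == "endTable":
            return entries, pos   # pos == directory size in bytes
        entries.append((name, offset, size))

table = pack_entry("a.txt", 0, 3) + pack_entry("b.bin", 3, 5) + pack_entry("endTable", 0, 0)
entries, table_size = read_entries(table)
```

Note that in the reader below, each file's data offset is relative to the end of the directory (`_tableSize + offset`), not to the start of the archive file.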
The format works, and has worked for years, except in a handful of cases I'm now trying to debug. In some situations files fail to unarchive correctly. In my experience this usually happens on laptops, which I believe have 5400 RPM hard drives. But sometimes it also fails on machines with SSDs (such as a Surface Book), and the failures aren't consistent either: if I unarchive the same "problem" file 10 times on one of these machines, it may fail only once or twice, or not at all.
In the language I unarchive this format in (AS3), the file stream reader has a property `readAhead`. The docs state it is "the minimum amount of data to read from disk when reading files asynchronously". Could this value be affecting my archives? My initial value was 8192; I've now changed it to 8192 / 4 to test on some of the newer machines. Does anyone have any thoughts on this? Is the `readAhead` value irrelevant?
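For context on what `readAhead` does and doesn't control: per the docs it only sets the minimum amount buffered per asynchronous read, so each PROGRESS event can deliver an arbitrary number of bytes, and parsing code has to accumulate across events rather than assume a record arrives whole. A minimal Python sketch of that accumulation pattern (hypothetical uint16 length-prefixed framing, not my actual format):

```python
import struct

class Accumulator:
    """Buffer bytes from arbitrarily sized 'progress' deliveries and emit
    complete length-prefixed records (uint16 length, little-endian).
    The point: correctness must not depend on the delivery chunk size."""
    def __init__(self):
        self.buf = bytearray()
        self.records = []

    def feed(self, chunk: bytes):
        self.buf += chunk
        while len(self.buf) >= 2:
            (n,) = struct.unpack_from("<H", self.buf, 0)
            if len(self.buf) < 2 + n:
                break                       # record not complete yet
            self.records.append(bytes(self.buf[2:2 + n]))
            del self.buf[:2 + n]

msg = struct.pack("<H", 5) + b"hello" + struct.pack("<H", 3) + b"xyz"
for chunk_size in (1, 4, 64):               # simulate different delivery sizes
    acc = Accumulator()
    for i in range(0, len(msg), chunk_size):
        acc.feed(msg[i:i + chunk_size])
    assert acc.records == [b"hello", b"xyz"]
```

The same two records come out whether the "stream" hands over 1, 4, or 64 bytes at a time, which is the property any async parser here needs.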
I realize this is vague. I'm not looking for a specific solution, just some feedback from people with more experience on how to better diagnose and solve this kind of problem.
Here is the class in question. I've tried to strip out anything not relevant to what I'm asking about:
/**
* ...
* @author Phil
*/
import flash.events.Event;
import flash.events.EventDispatcher;
import flash.events.IOErrorEvent;
import flash.events.ProgressEvent;
import flash.filesystem.File;
import flash.filesystem.FileMode;
import flash.filesystem.FileStream;
import flash.utils.ByteArray;
import flash.utils.Endian;
public class Archive extends EventDispatcher
{
public static const UNARCHIVING_COMPLETE:String = "unarchivingComplete";
public static const UNARCHIVING_PROGRESS:String = "unarchivingProgress";
public static const UNARCHIVE_CANCEL:String = "unarchiveCancel";
public static const ENDIAN:String = Endian.LITTLE_ENDIAN;
private var _inputFile:File;
private var _outputFile:File;
private var _inputStream:FileStream;
private var _outputStream:FileStream;
private var _files:Vector.<ArchiveItem>;
private var _readAheadValue:uint = 8192 / 4;
private var _maxTableSize:uint = 40960 * 30;
private var _tableData:ByteArray;
private var _curArchiveItem:ArchiveItem;
private var _currentArchiveItemBytesWritten:uint;
private var _pointerPosition:uint = 0;
private var _tableSize:uint = 0;
private var _totalSize:uint = 0;
private var _totalFiles:uint = 0;
public function Archive()
{
}
public function readArchive(archive:File, dest:File):void
{
_inputFile = archive;
_outputFile = dest;
createReadStream();
createTableData();
_inputStream.openAsync( _inputFile, FileMode.READ );
}
public function destroy():void
{
killStreams();
_inputFile = null;
_outputFile = null;
}
public function cancel():void
{
killStreams();
}
private function killStreams():void
{
killInStream();
killOutStream();
}
private function killInStream():void
{
if (!_inputStream) return;
_inputStream.removeEventListener(Event.COMPLETE, onFileReadComplete);
_inputStream.removeEventListener(ProgressEvent.PROGRESS, onFileReadProgress);
_inputStream.removeEventListener(Event.COMPLETE, onArhiveReadComplete);
_inputStream.removeEventListener(ProgressEvent.PROGRESS, onTableReadProgress);
_inputStream.removeEventListener(ProgressEvent.PROGRESS, onArchiveReadProgress);
_inputStream.removeEventListener(Event.CLOSE, onInputClosed);
_inputStream.removeEventListener(IOErrorEvent.IO_ERROR, onErrorReadingArchive);
_inputStream.close();
_inputStream = null;
}
private function killOutStream():void
{
if (!_outputStream) return;
_outputStream.removeEventListener(IOErrorEvent.IO_ERROR, onIOError);
_outputStream.removeEventListener(Event.CLOSE, onOutPutClosed);
_outputStream.close();
_outputStream = null;
}
private function createTableData():void
{
_files = new Vector.<ArchiveItem>();
_tableData = new ByteArray();
_tableData.endian = ENDIAN;
}
private function createReadStream():void
{
_inputStream = new FileStream();
_inputStream.endian = ENDIAN;
_inputStream.readAhead = _readAheadValue;
_inputStream.addEventListener(Event.CLOSE, onInputClosed);
_inputStream.addEventListener(Event.COMPLETE, onArhiveReadComplete);
_inputStream.addEventListener(ProgressEvent.PROGRESS, onTableReadProgress);
_inputStream.addEventListener(IOErrorEvent.IO_ERROR, onErrorReadingArchive);
}
private function onErrorReadingArchive(e:IOErrorEvent):void
{
dispatchEvent( new Event(Event.CANCEL) );
}
private function onArhiveReadComplete(e:Event):void
{
if (_tableData.length < _maxTableSize)
{
onTableReadProgress( null, true);
}
}
private function onTableReadProgress(e:ProgressEvent, force:Boolean = false):void
{
if (_tableData.length < _maxTableSize && force == false)
{
_inputStream.readBytes( _tableData, _tableData.length );
}else {
_inputStream.removeEventListener(Event.COMPLETE, onArhiveReadComplete);
_inputStream.removeEventListener(ProgressEvent.PROGRESS, onTableReadProgress);
populateTable( _tableData );
}
}
private function populateTable(tableData:ByteArray):void
{
var a:ArchiveItem;
var offset:uint = 0;
var size:uint = 0;
var fileName:String;
tableData.position = 0; // always a ByteArray; rewind before parsing
for (;;)
{
offset = tableData.readUnsignedInt();
size = tableData.readUnsignedInt();
fileName = tableData.readUTF();
if (fileName == "endTable")
{
_tableSize = tableData.position;
_totalFiles = _files.length;
_totalSize = _inputFile.size;
completeTableRead();
break;
}
a = new ArchiveItem();
a.filename = fileName;
a.offset = offset;
a.size = size;
_files.push(a);
}
}
private function completeTableRead():void
{
createFileOutputStream();
_inputStream.readAhead = _readAheadValue;
_inputStream.removeEventListener(Event.COMPLETE, onArhiveReadComplete);
_inputStream.removeEventListener(ProgressEvent.PROGRESS, onTableReadProgress);
_inputStream.addEventListener(ProgressEvent.PROGRESS, onArchiveReadProgress);
_inputStream.addEventListener(Event.COMPLETE, onArchiveReadProgress);
writeNextArchiveItemToFile();
}
private function onInputClosed(e:Event):void
{
completeUnarchiving();
}
private function completeUnarchiving():void
{
killStreams();
dispatchEvent( new Event(UNARCHIVING_COMPLETE) );
}
private function createFileOutputStream():void
{
_outputStream = new FileStream();
_outputStream.endian = ENDIAN;
_outputStream.addEventListener(Event.CLOSE, onOutPutClosed);
_outputStream.addEventListener(IOErrorEvent.IO_ERROR, onIOError);
}
private function onOutPutClosed(e:Event):void
{
completeUnarchiving();
}
private function onIOError(e:IOErrorEvent):void
{
dispatchEvent( new Event(Event.CANCEL) );
}
private function writeNextArchiveItemToFile():void
{
if (_files.length == 0)
{
endWriting();
return;
}
_curArchiveItem = _files.shift();
_currentArchiveItemBytesWritten = 0;
var dest:File = new File();
dest.nativePath = _outputFile.nativePath + File.separator + _curArchiveItem.filename;
_outputStream.open(dest, FileMode.WRITE);
movePointer();
}
private function endWriting():void
{
_inputStream.removeEventListener(ProgressEvent.PROGRESS, onArchiveReadProgress);
_inputStream.removeEventListener(Event.COMPLETE, onArchiveReadProgress);
_outputStream.removeEventListener(IOErrorEvent.IO_ERROR, onIOError);
_outputStream.close();
_inputStream.close();
}
private function movePointer():void
{
_inputStream.position = _tableSize + _curArchiveItem.offset;
_pointerPosition = _inputStream.position;
if (_curArchiveItem.size == 0)
{
writeNextArchiveItemToFile();
}
}
private function onArchiveReadProgress(e:Event):void
{
if (_currentArchiveItemBytesWritten >= _curArchiveItem.size)
{
writeNextArchiveItemToFile();
return;
}
writeBytesToDisk();
}
private function writeBytesToDisk():void
{
var bytes:ByteArray = new ByteArray();
var bytesRemaining:uint = _curArchiveItem.size - _currentArchiveItemBytesWritten;
var bytesToWrite:uint = _inputStream.bytesAvailable;
if (bytesToWrite > bytesRemaining)
{
bytesToWrite = bytesRemaining;
}
_inputStream.readBytes(bytes, 0, bytesToWrite);
try {
_outputStream.writeBytes(bytes, 0, bytes.length); //This throws an error on large files.
}catch (e:Error)
{
dispatchEvent( new Event(Event.CANCEL) );
return;
}
_currentArchiveItemBytesWritten += bytes.length;
_pointerPosition = _inputStream.position;
dispatchEvent( new Event(UNARCHIVING_PROGRESS) );
if (_currentArchiveItemBytesWritten >= _curArchiveItem.size)
{
writeNextArchiveItemToFile();
}
}
}
}
class ArchiveItem
{
public var offset:uint;
public var size:uint;
public var filename:String;
public function ArchiveItem()
{
}
}
Answer 0 (score: 0):

Too long for a comment.

Just my personal opinion, but I think you've over-engineered a simple task, and your use of `readAhead` is confusing. For the un-archiving part, why not:

(1) On startup, your app creates and fills entries from the parsed directory (myself, I would just make and update three arrays noting the file names, sizes, and byte indices within the archive file).

(2) To extract, say, the 5th file in the archive, you just check the expected output name string, `dest_Name = fileNames[4];`, and also note the expected length to extract, `dest_Size = size[4];`.

(3) Now just read the archive file (on disk) and save out that 5th file:

Archive_Stream.openAsync( URL_Archive_File, FileMode.READ );
Archive_Stream.position = dest_indexInArchive[4]; //starting index

The above will read the bytes into `temp_BA`, which can then be saved (when the READ completes, a handler function performs the WRITE to disk of that single extracted file). You can then reset and clear it with `temp_BA.clear();` before extracting the next file (or just to reclaim the used memory).
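The three steps above boil down to: seek to the directory size plus the entry's offset, read exactly `size` bytes, write them out. A synchronous Python sketch of that same idea, with hypothetical names (the asker's AS3 equivalent is `_tableSize + _curArchiveItem.offset`):

```python
import os

def extract(archive_path, table_size, entries, index, dest_dir):
    """Pull a single file out of the archive by seeking straight to it.
    entries is a list of (name, offset, size) tuples parsed from the
    directory; each file's data starts at table_size + offset."""
    name, offset, size = entries[index]
    with open(archive_path, "rb") as f:
        f.seek(table_size + offset)
        data = f.read(size)
    out_path = os.path.join(dest_dir, name)
    with open(out_path, "wb") as out:
        out.write(data)
    return out_path
```

Because the read is bounded by the entry's recorded size, nothing here depends on how the OS buffers the stream, which sidesteps the `readAhead` question entirely.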