Closing an HDF5 file on a separate thread throws an access violation exception

Time: 2017-08-28 16:12:49

Tags: c# multithreading hdf5

I am currently using the HDF.PInvoke package for .NET to write incoming data into sequential 1 GB files. Because of the application's high-throughput nature, I want to reduce the time spent closing one file and creating the next, to avoid any potential buffer overflows. When I run tests on a single thread, releasing a file's resources can take up to 2 seconds, so I thought I would run that part of the operation on a separate thread, leaving the main thread free to create a new file and start writing data again.

Here is the code:

// create dataspace
var dataspace_id = H5S.create_simple(dataSpaceRank, dataSpaceDimensions, null);

while (!cts.Token.IsCancellationRequested)
{
    // ****** OPEN FILE ******

    // update filename
    fileName = DateTime.Now.ToString("dd_MM_yy") + "_Sparrow_DUT" + fileHead.chipID + "_" + fileHead.chipType + "_" + DateTime.Now.ToString("HH_mm") + "_" + fileHead.fileNum + ".h5";

    // create new file
    var file_id = H5F.create(Path.Combine(capturePath, fileName), H5F.ACC_TRUNC);

    // create dataset
    var dataset_id = H5D.create(file_id, "/capture_data", H5T.NATIVE_INT16, dataspace_id,
                    H5P.DEFAULT, dcpl, H5P.DEFAULT);

    // ****** START WRITING DATA ******

    while (packetsWritten <= packetsPerFile)
    {
        // select the subset of the dataspace the packet is to be written to
        status = H5S.select_hyperslab(dataspace_id, H5S.seloper_t.SET, offset, stride, count, block);

        // write dataset
        status = H5D.write(dataset_id, H5T.NATIVE_INT16, memspace_id, dataspace_id, H5P.DEFAULT, dataPointer.AddrOfPinnedObject());

        // increment number of packets written
        packetsWritten++;

        // update the selection parameters
        offset[1] = packetsWritten * (ulong)fileHead.packetLen;
    }

    // ****** CLOSE FILE ******

    // close the file on a separate thread (this operation delays the application the most)
    Task cleanUp = Task.Run(() => CleanUpHDF5Writer(dataset_id, file_id));

    // reset the offset
    offset[1] = 0;

    // reset packets written for new file
    packetsWritten = 0;
}

where:

private void CleanUpHDF5Writer(long dataset_id, long file_id)
{
    // close dataset and release its resources
    H5D.close(dataset_id);

    // close file and release its resources
    H5F.close(file_id);
}

After a few files have been written, an access violation exception is thrown - usually inside the CleanUpHDF5Writer function, but sometimes also while selecting the hyperslab. I can't figure out why this happens, since the new file shouldn't be accessing the same resources as the old one (as far as I know).
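For context on why the two threads can clash even across different files: the native HDF5 library keeps global internal state, and unless it was built with the thread-safe option, calling into it from two threads at once (here, H5D.write / H5S.select_hyperslab on the new file while H5D.close / H5F.close run on the old one) can corrupt that state. A minimal sketch of one common workaround - funnelling every HDF5 call through a single dedicated thread - might look like this (the queue, worker, and counter are illustrative stand-ins, not part of HDF.PInvoke):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;

class SerializedHdf5Demo
{
    static void Main()
    {
        // Every call into the native library goes through this queue and is
        // executed by exactly one thread, so a close of the old file can
        // never overlap a write to the new one.
        var hdf5Calls = new BlockingCollection<Action>();

        var worker = new Thread(() =>
        {
            foreach (var call in hdf5Calls.GetConsumingEnumerable())
                call();   // e.g. H5D.write / H5D.close / H5F.close in the real app
        });
        worker.Start();

        // stand-in for queuing CleanUpHDF5Writer(dataset_id, file_id)
        int closedFiles = 0;
        for (int i = 0; i < 3; i++)
            hdf5Calls.Add(() => closedFiles++);

        hdf5Calls.CompleteAdding();
        worker.Join();

        Console.WriteLine(closedFiles); // prints 3
    }
}
```

Note the trade-off: because this serializes the writes too, the 2-second close would still delay the next file's writes on a non-thread-safe HDF5 build; genuine overlap would need a thread-safe build of the native library.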

Any help with this is much appreciated :)

0 answers:

No answers