Does locking guarantee that multiple modifications are atomic across threads in this case?

Date: 2016-08-29 16:52:08

Tags: c# multithreading concurrency

I am trying to work out a solution to the problem I outlined in this Programmers.SE question. The specific issue I am facing is that I need to make multiple modifications, as one atomic unit, to collections from the System.Collections.Concurrent namespace. As far as I can tell, there is no mechanism for that; the concurrent collections only guarantee that individual operations are atomic.
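For example (purely illustrative; these names are not from my project): each call below is atomic on its own, but nothing makes the pair of calls atomic, so another thread can observe or act in between them.

// Each individual call is thread-safe, but the sequence of the two calls is not atomic as a whole.
var waitQueues = new ConcurrentDictionary<string, ConcurrentQueue<Guid>>();
var pendingWork = new ConcurrentDictionary<string, Guid>();

var ticket = Guid.NewGuid();
waitQueues.GetOrAdd("resourceA", _ => new ConcurrentQueue<Guid>()).Enqueue(ticket); // atomic step 1
// <-- another thread can run here and see step 1 without step 2
pendingWork.TryAdd("workItem", ticket);                                             // atomic step 2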

One solution I would rather not implement is writing my own concurrent collection that provides some mechanism for performing multiple operations atomically. I think I have enough experience to write such a collection, but I would prefer something out-of-the-box and well developed.

With that in mind, I came up with another possible solution that uses existing, out-of-the-box classes. The idea is to use lock to control access to the sections of code that perform the multiple modifications, so that those sections cannot interleave with each other.

using System;
using System.Threading.Tasks;
using System.Collections.Concurrent;

public interface IWork { }
public interface IResource { }

public sealed class WorkPerformer
{
    public static WorkPerformer Instance { get { return lazyInstance.Value; } }
    public static readonly Lazy<WorkPerformer> lazyInstance = new Lazy<WorkPerformer>(() => new WorkPerformer());

    private ConcurrentDictionary<IResource, ConcurrentQueue<Guid>> IResourceWaitQueues { get; set; }
    private ConcurrentDictionary<IWork, ConcurrentDictionary<IResource, Guid>> IWorkToPerform { get; set; }

    private readonly object _LockObj = new object();

    private WorkPerformer()
    {
        IResourceWaitQueues = new ConcurrentDictionary<IResource, ConcurrentQueue<Guid>>();
        IWorkToPerform = new ConcurrentDictionary<IWork, ConcurrentDictionary<IResource, Guid>>();
    }

    private void ModifierTask_MultipleAdds(IWork workToDo)
    {
        Task.Run(() =>
        {
            lock(_LockObj)
            {
                // -- The point is here I am making multiple additions to IResourceWaitQueues and IWorkToPerform 
                // Find all IResource this IWork uses and generate a Guid for each
                // Enqueue these Guid into their respective ConcurrentQueue's within IResourceWaitQueues
                // Add this IWork and IResource -> Guid mapping into IWorkToPerform
            }
        });
    }

    public void ModifierTask_MultipleRemoves(IWork workThatsDone)
    {
        Task.Run(() =>
        {
            lock (_LockObj)
            {
                // -- The point is here I am making multiple deletions to IResourceWaitQueues and IWorkToPerform 
                // Find all IResource that this IWork used to perform its work
                // Dequeue from the ConcurrentQueue respective to each IResource used from IResourceWaitQueues
                // Remove this ITask KeyValuePair from IWorkToPerform
            }
        });
    }
}

What I would like to know is whether this solution actually makes the multiple operations on IResourceWaitQueues and IWorkToPerform in the example code above atomic as a group.

I have to assume it will sometimes be slow if there is a lot of contention for the lock. But beyond that, if I understand lock correctly, the multiple modifications I want to perform should not interleave with each other, because only one thread at a time is allowed inside the lock-ed code.

The only other problem I can see is that I think I would have to lock every other access to IResourceWaitQueues and IWorkToPerform in the example code above as well, unless of course that access is allowed to interleave with the lock-ed sections of code.

EDIT: Here is a more complete code example, with some hopefully useful comments, of the exact problem I am trying to solve. Again, for reference, an alternative wording of the problem and of the proposed solution is outlined in the Programmers.SE question linked above.

using System;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using System.Collections.Generic;

namespace WorkProcessorSandbox
{
    public interface IResource { }

    public interface IWork
    {
        void ProcessWork();
        List<IResource> NeededResources { get; set; }
    }

    // This class's purpose is to process IWork objects by calling their ProcessWork methods when it is found that
    // the IResources they need to process are free. The nature of IResource objects is that they are not threadsafe
    // (though some may be; some must be if an IResource appears in NeededResources multiple times). As a result
    // care must be taken to make sure two IWork do not try to use a distinct IResource simultaneously.
    // This is done by a sort of signalling/ticketing system. Each time a new IWork comes in to be processed it "lines
    // up" for the IResources it needs. Only when it is at the front of the line for all IResource it needs will it
    // move on to process. By forcing atomicity on the "lining up" of the IWork for IResources, deadlocks and race
    // conditions can be prevented because the order of an IWork "ticket" in a line can never interleave another's.
    public sealed class WorkProcessor
    {
        // Singleton class
        public static WorkProcessor Instance { get { return lazyInstance.Value; } }
        public static readonly Lazy<WorkProcessor> lazyInstance = new Lazy<WorkProcessor>(() => new WorkProcessor());

        // ResourceWaitQueues holds a Queue of Guids mapped to distinct
        // IResources representing the next IWork that is in line to use it
        private readonly object _Lock_ResourceDict = new object();
        private Dictionary<IResource, Queue<Guid>> ResourceWaitQueues { get; set; }

        // WorkToProcess holds a Dictionary of Guid mapped to IResources representing
        // the place in line this IWork (that said Dictionary is mapped to) is in for use of the IResources.
        private readonly object _Lock_WorkDict = new object();
        private Dictionary<IWork, Dictionary<IResource, Guid>> WorkToProcess { get; set; }

        private WorkProcessor()
        {
            Running = false;
        }

        private bool Running { get; set; }
        private CancellationToken ProcessingToken { get; set; }
        private CancellationTokenSource ProcessingTokenSource { get; set; }

        // Stops the processing of IWork from the WorkToProcess Dictionary
        public void StopProcessing()
        {
            if (Running)
            {
                ProcessingTokenSource.Cancel();
                Running = false;
            }
        }

        // Starts (Allows) the processing of IWork from the WorkToProcess Dictionary
        public void StartProcessing()
        {
            if (!Running)
            {
                // Instantiate to Empty
                ResourceWaitQueues = new Dictionary<IResource, Queue<Guid>>();
                WorkToProcess = new Dictionary<IWork, Dictionary<IResource, Guid>>();

                // Create CancellationToken for use in controlling Tasks
                ProcessingTokenSource = new CancellationTokenSource();
                ProcessingToken = ProcessingTokenSource.Token;

                Running = true;
            }
        }

        // The purpose of this method is to compare the list of Guids at the front of the Queues in ResourceWaitQueues
        // to the list of Guids that each IWork is waiting on for it to start processing.
        // If the Guids that an IWork needs to start processing are present in the list of Guids at the front of the
        // Queues then the IWork can start processing, otherwise it cannot.
        private void TryProcessWork()
        {
            if (Running)
            {
                // A Task that will go through all of the IWork waiting to be
                // processed and start processing the IWork objects that are ready.
                Task.Run(() =>
                {
                    // Here we need to lock on both the ResourceWaitQueues and WorkToProcess locks
                    lock (_Lock_ResourceDict)
                    {
                        lock (_Lock_WorkDict)
                        {
                            // Go through the Dictionary of IWork waiting to be processed
                            foreach (var waitingWork in WorkToProcess)
                            {
                                // Find the List<Guid> that are needed for this IWork to be processed
                                var worksGuids = waitingWork.Value.Select(x => x.Value).ToList();

                                // Find the List<Guid> that are currently ready to be processed
                                var guidsReadyToProcess = ResourceWaitQueues.Values.Select(x =>
                                {
                                    // If a Queue<T> is empty when it is Peek'd it throws an Exception!
                                    if (x.Count > 0)
                                        return x.Peek();
                                    return Guid.Empty;
                                }).ToList();

                                // If the List<Guid> needed by this IWork is contained within the List<Guid> ready to be processed
                                if (worksGuids.All(x => guidsReadyToProcess.Contains(x)))
                                {
                                    // This IWork is ready to be processed!
                                    ProcessWork(waitingWork);

                                    // Remove this IWork from WorkToProcess
                                    if (!WorkToProcess.Remove(waitingWork.Key))
                                    {
                                        Console.Out.WriteLine("Fatal error! Stopping work processing. Could not remove IWork from Dictionary that should contain it.");
                                        StopProcessing();
                                        break;
                                    }
                                }
                            }
                        }
                    }
                }, ProcessingToken);
            }
        }

        // The purpose of this function is to "enqueue" IWork for processing. First a list of all the IResources
        // that the IWork needs to process is created along with a Guid for each unique IResource it uses.
        // These Guids are then enqueued into the respective Queue in ResourceWaitQueues representing this IWork's
        // "spot in line" to use those specific IResources. Finally the IWork and its Guids are then added to the
        // WorkToProcess Dictionary so that TryProcessWork can determine if it is ready to run or not.
        // TryProcessWork is called at the end to see if this IWork is possibly ready to process right away.
        public void EnqueueWork(IWork workToDo)
        {
            if (Running)
            {
                // Get all distinct IResource in the IWork's NeededResources
                var worksResources = workToDo.NeededResources.Distinct().ToList();

                // Create the Guids this IWork object will wait on to start processing
                Dictionary<IResource, Guid> worksGuidResourceMap = new Dictionary<IResource, Guid>();
                worksResources.ForEach(x => worksGuidResourceMap.Add(x, Guid.NewGuid()));

                // Here we need to lock on both the ResourceWaitQueues and WorkToProcess locks
                lock (_Lock_ResourceDict)
                {
                    lock (_Lock_WorkDict)
                    {
                        // Find all of the IResources that are not currently present in the ResourceWaitQueues Dictionary
                        var toAddResources = worksResources.Where(x => !ResourceWaitQueues.Keys.Contains(x)).ToList();

                        // Create a new entry in ResourceWaitQueues for these IResources
                        toAddResources.ForEach(x => ResourceWaitQueues.Add(x, new Queue<Guid>()));

                        // Add each Guid for this work's IResources into the Queues in ResourceWaitQueues
                        foreach (var aGuidResourceMap in worksGuidResourceMap)
                        {
                            foreach (var resourceQueue in ResourceWaitQueues)
                            {
                                if (aGuidResourceMap.Key == resourceQueue.Key)
                                    resourceQueue.Value.Enqueue(aGuidResourceMap.Value);
                            }
                        }

                        // Add this IWork and its processing info to the Dictionary of awaiting IWork to be processed
                        WorkToProcess.Add(workToDo, worksGuidResourceMap);
                    }
                }

                // Go through the list of IWork waiting to be processed and start processing IWork that is ready
                TryProcessWork();
            }
        }

        // The purpose of this function is to create a Task in which the IWork passed to it can be processed.
        // Once the processing is complete the Task then dequeues a single Guid from the Queue respective to
        // each IResource it needed to process. It then calls TryProcessWork because it is most likely possible
        // there is some IWork that is now ready to process.
        private void ProcessWork(KeyValuePair<IWork, Dictionary<IResource, Guid>> workToProcess)
        {
            Task.Run(() =>
            {
                // Actually perform the work to be processed.
                workToProcess.Key.ProcessWork();

                // Get the list of the IResources that were used during processing
                var usedResources = workToProcess.Value.Select(x => x.Key).ToList();

                // We are removing multiple Guids from the ResourceWaitQueues. This must be atomic.
                // The ResourceWaitQueues could become incoherent if any other operations are performed on it during the dequeueing.
                // It is ok for WorkToProcess to be modified while this is happening.
                lock (_Lock_ResourceDict)
                {
                    // Get the Queues corresponding to these IResources
                    var resourceQueues = ResourceWaitQueues.Where(x => usedResources.Contains(x.Key)).Select(x => x.Value).ToList();

                    try
                    {
                        // Dequeue a Guid from each of these Queues, exposing the next Guid to be processed on each
                        resourceQueues.ForEach(x => x.Dequeue());
                    }
                    catch (InvalidOperationException ex)
                    {
                        Console.Out.WriteLine("Fatal error! Stopping work processing. Could not dequeue a Guid that should exist: " + ex.Message);
                        StopProcessing();
                    }
                }

                // Go through the list of IWork waiting to be processed and start processing IWork that is ready
                TryProcessWork();
            }, ProcessingToken);
        }
    }
}
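For context only (this usage code is not part of the design above, and MyWork/MyResource are placeholder implementations), I would expect to drive the class roughly like this:

// Hypothetical usage sketch of the WorkProcessor above.
var processor = WorkProcessor.Instance;
processor.StartProcessing();

IWork job = new MyWork();      // MyWork implements IWork and lists the MyResource objects it needs
processor.EnqueueWork(job);    // lines the work up for its resources; it runs once it is first in every line

// ... later, when shutting down:
processor.StopProcessing();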

2 answers:

Answer 0 (score: 2):

Without a good Minimal, Complete, and Verifiable code example that accurately illustrates your scenario, it is impossible to say for sure. But based on your description so far, it seems fairly clear that using lock will address your main concern (the atomicity of certain grouped operations).

Whether you also need to use lock for every other access to the same objects depends on what those accesses do and how they relate to the grouped operations you are protecting with lock. Certainly there is no need to use lock merely to keep the collections themselves consistent; their internal synchronization already ensures that.
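For example (a sketch against the question's IWorkToPerform field; finishedWork is just a placeholder variable), a single self-contained operation like this is already safe without any explicit lock:

// Internally synchronized by ConcurrentDictionary; no explicit lock needed for this single operation.
ConcurrentDictionary<IResource, Guid> removedMap;
IWorkToPerform.TryRemove(finishedWork, out removedMap);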

But if your grouped operations themselves represent some kind of coherency, such that it would be incorrect to allow other access to the objects while a grouped operation is in progress, then yes, you will also need to use lock, with the same _LockObj reference, around any other access that depends on the data structures being coherent, to ensure that a grouped operation cannot be in progress at the same time as that access.
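As a sketch of that situation (using the question's fields and lock object; the method itself is hypothetical), even a pure read that relies on the two collections agreeing with each other has to take the same _LockObj:

// Hypothetical read that depends on IResourceWaitQueues and IWorkToPerform being mutually consistent,
// so it must synchronize on the same _LockObj that the grouped modifications use.
public bool IsNextInLine(IWork work, IResource resource)
{
    lock (_LockObj)
    {
        ConcurrentDictionary<IResource, Guid> ticketMap;
        ConcurrentQueue<Guid> queue;
        Guid ticket, head;

        return IWorkToPerform.TryGetValue(work, out ticketMap)
            && ticketMap.TryGetValue(resource, out ticket)
            && IResourceWaitQueues.TryGetValue(resource, out queue)
            && queue.TryPeek(out head)
            && head == ticket;
    }
}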

If you need more specific advice, please improve the question so that the actual relationships between all of these operations are clear.


As an aside: you may want to consider following normal .NET coding conventions: restrict the use of Pascal casing to methods and properties, and use camel casing for fields. That will make it easier for readers to follow your code.

In particular, given the .NET convention for naming interfaces (that is, Pascal-cased identifiers that always begin with I), using that style for members that are not interfaces is an especially poor choice. When you do that, you genuinely make it hard for people to follow your code.

Answer 1 (score: 1):

To maximize performance, you should avoid locking a number of IResource objects for the entire duration of the IWork.ProcessWork method. The problem is that if you have an IWork object that needs 10 IResource objects, nine of those resources might take only milliseconds to process while the tenth might take minutes; in that case all 10 resource objects remain locked, and no other IWork object can use them, for the whole time the work takes to complete.

By creating a LockResource method and a ReleaseResource method, you can build the design around a ConcurrentDictionary without wrapping it in a lock, because you would only ever perform atomic operations: adding an IResource to the ResourceWaitQueues and removing an IResource from the ResourceWaitQueues. That would let your IWork objects execute efficiently, with the only bottleneck being the actual resources rather than the code.
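As a rough sketch of that idea (the field name and both method bodies here are assumptions based on this answer, not code from the question), each acquire and release is a single atomic call on a ConcurrentDictionary, so no surrounding lock block is needed:

// Illustrative only: per-resource ownership tracked in a ConcurrentDictionary,
// manipulated one atomic operation at a time.
private readonly ConcurrentDictionary<IResource, IWork> resourceOwners =
    new ConcurrentDictionary<IResource, IWork>();

// Returns true if the resource was free and is now held by 'work'.
private bool LockResource(IResource resource, IWork work)
{
    return resourceOwners.TryAdd(resource, work);    // single atomic operation
}

// Frees the resource (assumes the caller actually holds it).
private void ReleaseResource(IResource resource)
{
    IWork ignored;
    resourceOwners.TryRemove(resource, out ignored); // single atomic operation
}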