C++11: How to achieve fast, lightweight, and fair synchronized access to a resource

Date: 2014-02-14 19:35:24

Tags: windows multithreading c++11 synchronization

Question

How can I obtain a locking mechanism that offers minimal, consistent latency while guaranteeing that a thread cannot re-acquire a resource before another thread has acquired and released it?

In order of preference, an answer to this question would be:

  1. Some combination of built-in C++11 features that works under MinGW on Windows 7 (note that the <thread> and <mutex> libraries do not work on the Windows platform)

  2. Some combination of Windows API features

  3. A modification of the FairLock listed below, my own attempt at implementing such a mechanism

  4. Some feature of a free, open-source library that does not require a ./configure / make / make install process (getting that to work in MSYS is more of an adventure than I care for)

Background

I am writing an application that is essentially a multi-stage producer/consumer. One thread produces input that another thread consumes; that thread in turn produces output consumed by yet another thread. The application uses pairs of buffers so that, after an initial delay, all threads can work nearly simultaneously.

Since I am writing a Windows 7 application, I have been using CRITICAL_SECTIONs to guard the buffers. The problem with CRITICAL_SECTIONs (or, as far as I can tell, with any other Windows or C++11 built-in synchronization object) is that they make no provision to prevent a thread that has just released a lock from re-acquiring it before another thread gets its turn. As a result, my test driver for one of the intermediate threads (the Encoder) never gave the Encoder a chance to acquire the test input buffers, and it finished without the Encoder ever being tested. The end result was a ridiculous process of trying to find an artificial wait time that happened to work.
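For illustration, a minimal sketch of the pattern that caused the trouble (the names and the one-second run are hypothetical): nothing stops the driver thread from re-entering the CRITICAL_SECTION immediately after leaving it, so the encoder may rarely win the race for the buffer.

    #include <windows.h>

    CRITICAL_SECTION bufferCS;             // guards the shared input buffer (hypothetical)
    volatile LONG stop = 0;

    DWORD WINAPI testDriver(LPVOID) {
        while (!stop) {
            EnterCriticalSection(&bufferCS);
            // ... refill the input buffer ...
            LeaveCriticalSection(&bufferCS);
            // Nothing prevents an immediate re-acquisition here; CRITICAL_SECTIONs
            // are not fair, so the encoder below may rarely (or never) get a turn.
        }
        return 0;
    }

    DWORD WINAPI encoder(LPVOID) {
        while (!stop) {
            EnterCriticalSection(&bufferCS); // frequently loses the race to the driver
            // ... consume the input buffer ...
            LeaveCriticalSection(&bufferCS);
        }
        return 0;
    }

    int main() {
        InitializeCriticalSection(&bufferCS);
        HANDLE threads[2] = {
            CreateThread(nullptr, 0, testDriver, nullptr, 0, nullptr),
            CreateThread(nullptr, 0, encoder,    nullptr, 0, nullptr)
        };
        Sleep(1000);
        InterlockedExchange(&stop, 1);
        WaitForMultipleObjects(2, threads, TRUE, INFINITE);
        CloseHandle(threads[0]);
        CloseHandle(threads[1]);
        DeleteCriticalSection(&bufferCS);
        return 0;
    }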

Since the structure of my application requires each stage to wait until another stage has acquired, finished with, and released the necessary buffers before using them again, I need, for lack of a better term, a fair locking mechanism. I wrote one (source provided below). In testing, this FairLock allowed my test driver to run my Encoder at the same speed achievable with a CRITICAL_SECTION in perhaps 60% of the runs. The other 40% of the runs took anywhere from 10 to 100 ms longer, which is unacceptable for my application.

FairLock

    // FairLock.hpp
    #ifndef FAIRLOCK_HPP
    #define FAIRLOCK_HPP
    #include <atomic>
    #include <windows.h> // for DWORD
    using namespace std;
    class FairLock {
        private:
            atomic_bool owned {false};
            atomic<DWORD> lastOwner {0};
        public:
            FairLock(bool owned);
            bool inline hasLock() const;
            bool tryLock();
            void seizeLock();
            void tryRelease();
            void waitForLock();
    };
    #endif
    
    // FairLock.cpp
    #include <windows.h>
    #include "FairLock.hpp"
    #define ID GetCurrentThreadId()
    
    FairLock::FairLock(bool owned) {
        if (owned) {
            this->owned = true;
            this->lastOwner = ID;
        } else {
            this->owned = false;
            this->lastOwner = 0;
        }
    }
    
    bool inline FairLock::hasLock() const {
        return owned && lastOwner == ID;
    }
    
    bool FairLock::tryLock() {
        bool success = false;
        DWORD id = ID;
        if (owned) {
            success = lastOwner == id;
        } else if (
            lastOwner != id &&
            owned.compare_exchange_strong(success, true)
        ) {
            lastOwner = id;
            success = true;
        } else {
            success = false;
        }
        return success;
    }
    
    void FairLock::seizeLock() {
        bool success = false;
        DWORD id = ID;
        if (!(owned && lastOwner == id)) {
            while (!owned.compare_exchange_strong(success, true)) {
                success = false;
            }
            lastOwner = id;
        }
    }
    
    void FairLock::tryRelease() {
        if (hasLock()) {
            owned = false;
        }
    }
    
    void FairLock::waitForLock() {
        bool success = false;
        DWORD id = ID;
        if (!(owned && lastOwner == id)) {
            while (lastOwner == id); // spin
            while (!owned.compare_exchange_strong(success, true)) {
                success = false;
            }
            lastOwner = id;
        }
    }
    

EDIT

DO NOT USE THE FairLock CLASS; IT DOES NOT GUARANTEE MUTUAL EXCLUSION!

I went back over the code above, comparing it against The C++ Programming Language: 4th Edition (which I had not read as carefully as I should have) and the Synchronous Queue recommended by CouchDeveloper. I realized there are several sequences in which a thread that has just released the FairLock can be fooled into believing it still owns it. All it takes is an interleaving of instructions like the following:

    New owner: set owned to true
    Old owner: is owned true?  yes
    Old owner: am I the last owner? yes
    New owner: set me as the last owner
    

At this point, both the new owner and the old owner enter their critical sections.

I am looking into whether this problem has a solution and whether it is even worth trying to solve. In the meantime, do not use this class unless you see a fix.

3 Answers:

Answer 0 (score: 4)

I would implement this in C++11 using a condition-variable-per-thread setup, so that I can choose exactly which thread to wake up (Live demo at Coliru):

class FairMutex {
private:
  class waitnode {
    std::condition_variable cv_;
    waitnode* next_ = nullptr;
    FairMutex& fmtx_;
  public:
    waitnode(FairMutex& fmtx) : fmtx_(fmtx) {
      *fmtx.tail_ = this;
      fmtx.tail_ = &next_;
    }

    ~waitnode() {
      for (waitnode** p = &fmtx_.waiters_; *p; p = &(*p)->next_) {
        if (*p == this) {
          *p = next_;
          if (!next_) {
            fmtx_.tail_ = &fmtx_.waiters_;
          }
          break;
        }
      }
    }

    void wait(std::unique_lock<std::mutex>& lk) {
      while (fmtx_.held_ || fmtx_.waiters_ != this) {
        cv_.wait(lk);
      }
    }

    void notify() {
      cv_.notify_one();
    }
  };

  waitnode* waiters_ = nullptr;
  waitnode** tail_ = &waiters_;
  std::mutex mtx_;
  bool held_ = false;

public:
  void lock() {
    auto lk = std::unique_lock<std::mutex>{mtx_};
    if (held_ || waiters_) {
      waitnode{*this}.wait(lk);
    }
    held_ = true;
  }

  bool try_lock() {
    if (mtx_.try_lock()) {
      std::lock_guard<std::mutex> lk(mtx_, std::adopt_lock);
      if (!held_ && !waiters_) {
        held_ = true;
        return true;
      }
    }
    return false;
  }

  void unlock() {
    std::lock_guard<std::mutex> lk(mtx_);
    held_ = false;
    if (waiters_ != nullptr) {
      waiters_->notify();
    }
  }
};

FairMutex models the Lockable concept, so it can be used like any other standard library mutex type. In short, it achieves fairness by adding waiters to a list in arrival order and, on unlock, handing the mutex off to the first waiter in the list.
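For example, here is a minimal usage sketch (not part of the original answer; the worker and counter names are illustrative) showing FairMutex used with the standard lock wrappers, which only require the Lockable operations lock()/try_lock()/unlock():

#include <iostream>
#include <mutex>
#include <thread>
#include <vector>

// Assumes the FairMutex class defined above is in scope.
FairMutex fmtx;
int counter = 0;                           // shared state guarded by fmtx

void worker(int id) {
  for (int i = 0; i < 3; ++i) {
    std::lock_guard<FairMutex> lk(fmtx);   // FairMutex satisfies the Lockable requirements
    ++counter;
    std::cout << "thread " << id << " acquired the lock\n";
  } // on each unlock(), the mutex is handed to the longest-waiting thread, if any
}

int main() {
  std::vector<std::thread> threads;
  for (int i = 0; i < 4; ++i) threads.emplace_back(worker, i);
  for (auto& t : threads) t.join();
  std::cout << "counter = " << counter << '\n';
}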

Answer 1 (score: 1)

In case it is useful:

This demonstrates*) an implementation of a "synchronous queue" using semaphores as the synchronization primitive.

Note: the actual implementation uses semaphores implemented on top of GCD (Grand Central Dispatch):

using gcd::mutex;
using gcd::semaphore;


// A blocking queue in which each put must wait for a get, and vice 
// versa. A synchronous queue does not have any internal capacity, 
// not even a capacity of one. 

template <typename T>
class simple_synchronous_queue {
public:

    typedef T value_type;

    enum result_type {
        OK = 0,
        TIMEOUT_NOT_DELIVERED = -1,
        TIMEOUT_NOT_PICKED = -2,
        TIMEOUT_NOTHING_OFFERED = -3
    };

    simple_synchronous_queue() 
    : sync_(0), send_(1), recv_(0)
    {
    }

    void put(const T& v) {
        send_.wait();
        new (address()) T(v);
        recv_.signal();
        sync_.wait();
    }

    result_type put(const T& v, double timeout) {
        if (send_.wait(timeout)) {
            new (address()) T(v);
            recv_.signal();
            if (sync_.wait(timeout)) {
                return OK;
            }
            else {
                return TIMEOUT_NOT_PICKED;
            }
        }
        else {
            return TIMEOUT_NOT_DELIVERED;
        }        
    }

    T get() {
        recv_.wait();
        T result = *address();
        address()->~T();
        sync_.signal();
        send_.signal();
        return result;
    }

    std::pair<result_type, T> get(double timeout) {
        if (recv_.wait(timeout)) {
            std::pair<result_type, T> result = 
                std::pair<result_type, T>(OK, *address());
            address()->~T();
            sync_.signal();
            send_.signal();
            return result;
        }
        else {
            return std::pair<result_type, T>(TIMEOUT_NOTHING_OFFERED, T());
        }
    }    

private:
    using storage_t = typename std::aligned_storage<sizeof(T), std::alignment_of<T>::value>::type;

    T* address() { 
        return static_cast<T*>(static_cast<void*>(&storage_));
    }

    storage_t   storage_;
    semaphore   sync_;
    semaphore   send_;
    semaphore   recv_;
};

*) "demonstrates" meaning: think carefully about the potential issues; it can be improved, and so on... ;)
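The gcd::semaphore wrapper itself is not shown. As a rough, portable stand-in (its interface here is an assumption inferred only from how the queue above uses it: a constructor taking the initial count, a blocking wait(), a wait(timeout-in-seconds) returning false on timeout, and signal()), a counting semaphore built on std::mutex and std::condition_variable could look like this:

#include <chrono>
#include <condition_variable>
#include <mutex>

// Portable stand-in for gcd::semaphore (interface assumed from its use above).
class semaphore {
public:
    explicit semaphore(long count) : count_(count) {}

    // Block until the count is positive, then decrement it.
    void wait() {
        std::unique_lock<std::mutex> lock(mutex_);
        cv_.wait(lock, [this] { return count_ > 0; });
        --count_;
    }

    // Like wait(), but give up after `timeout` seconds; returns false on timeout.
    bool wait(double timeout) {
        std::unique_lock<std::mutex> lock(mutex_);
        auto deadline = std::chrono::steady_clock::now()
                      + std::chrono::duration<double>(timeout);
        if (!cv_.wait_until(lock, deadline, [this] { return count_ > 0; })) {
            return false;
        }
        --count_;
        return true;
    }

    // Increment the count and wake one waiter.
    void signal() {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            ++count_;
        }
        cv_.notify_one();
    }

private:
    std::mutex mutex_;
    std::condition_variable cv_;
    long count_;
};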

Answer 2 (score: 0)

I accepted CouchDeveloper's answer because it pointed me down the right path. I wrote a Windows-specific C++11 synchronous queue implementation and have added this answer so that others can consider/use it, if they wish.

// SynchronousQueue.hpp
#ifndef SYNCHRONOUSQUEUE_HPP
#define SYNCHRONOUSQUEUE_HPP

#include <atomic>
#include <exception>
#include <windows.h>

using namespace std;

class CouldNotEnterException: public exception {};
class NoPairedCallException: public exception {};

template <typename T>
class SynchronousQueue {
    private:
        atomic_bool valueReady {false};

        CRITICAL_SECTION getCriticalSection;
        CRITICAL_SECTION putCriticalSection;

        DWORD wait {0};

        HANDLE getSemaphore;
        HANDLE putSemaphore;

        const T* address {nullptr};

    public:
        SynchronousQueue(DWORD waitMS): wait {waitMS}, address {nullptr} {
            InitializeCriticalSection(&getCriticalSection);
            InitializeCriticalSection(&putCriticalSection);

            getSemaphore = CreateSemaphore(nullptr, 0, 1, nullptr);
            putSemaphore = CreateSemaphore(nullptr, 0, 1, nullptr);
        }

        ~SynchronousQueue() {
            EnterCriticalSection(&getCriticalSection);
            EnterCriticalSection(&putCriticalSection);

            CloseHandle(getSemaphore);
            CloseHandle(putSemaphore);

            DeleteCriticalSection(&putCriticalSection);
            DeleteCriticalSection(&getCriticalSection);
        }

        void put(const T& value) {
            if (!TryEnterCriticalSection(&putCriticalSection)) {
                throw CouldNotEnterException();
            }

            ReleaseSemaphore(putSemaphore, (LONG) 1, nullptr);

            if (WaitForSingleObject(getSemaphore, wait) != WAIT_OBJECT_0) {
                if (WaitForSingleObject(putSemaphore, 0) == WAIT_OBJECT_0) {
                    LeaveCriticalSection(&putCriticalSection);
                    throw NoPairedCallException();
                } else {
                    WaitForSingleObject(getSemaphore, 0);
                }
            }

            address = &value;
            valueReady = true;
            while (valueReady);

            LeaveCriticalSection(&putCriticalSection);
        }

        T get() {
            if (!TryEnterCriticalSection(&getCriticalSection)) {
                throw CouldNotEnterException();
            }

            ReleaseSemaphore(getSemaphore, (LONG) 1, nullptr);

            if (WaitForSingleObject(putSemaphore, wait) != WAIT_OBJECT_0) {
                if (WaitForSingleObject(getSemaphore, 0) == WAIT_OBJECT_0) {
                    LeaveCriticalSection(&getCriticalSection);
                    throw NoPairedCallException();
                } else {
                    WaitForSingleObject(putSemaphore, 0);
                }
            }

            while (!valueReady);
            T toReturn = *address;
            valueReady = false;

            LeaveCriticalSection(&getCriticalSection);

            return toReturn;
        }
};

#endif
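For reference, a minimal usage sketch (not part of the answer; the producer/consumer setup and the 100 ms timeout are arbitrary, and std::thread is used here purely for brevity) showing one producer and one consumer pairing up through the queue:

// SynchronousQueueDemo.cpp -- illustrative only.
#include <iostream>
#include <thread>

#include "SynchronousQueue.hpp"

int main() {
    SynchronousQueue<int> queue {100};   // each put()/get() waits up to 100 ms for its partner

    std::thread producer([&queue] {
        for (int i = 0; i < 5; ++i) {
            try {
                queue.put(i);            // blocks until a matching get() takes the value
            } catch (NoPairedCallException&) {
                std::cerr << "put(" << i << "): no consumer arrived in time\n";
            }
        }
    });

    std::thread consumer([&queue] {
        for (int i = 0; i < 5; ++i) {
            try {
                std::cout << "got " << queue.get() << '\n';
            } catch (NoPairedCallException&) {
                std::cerr << "get: no producer arrived in time\n";
            }
        }
    });

    producer.join();
    consumer.join();
}

With a single producer and a single consumer, CouldNotEnterException should not occur; it is thrown only when two threads call put() (or two call get()) concurrently, which is exactly the misuse the critical sections guard against.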