I want to implement a producer/consumer scenario that obeys roughly this interface:
class Consumer {
private:
    vector<char> read(size_t n) {
        // If the internal buffer has `n` elements, then dequeue them
        // Otherwise wait for more data and try again
    }
public:
    void run() {
        read(10);
        read(4839);
        // etc
    }
    void feed(const vector<char> &more) {
        // Safely queue the data
        // Notify `read` that there is now more data
    }
};
In this scenario, feed and run will be executing on separate threads, and read should be a blocking read (like recv or fread). Obviously, I will need some kind of mutual exclusion on my deque, and I will need some kind of notification system to tell read to try again.

I've heard condition variables are the way to go, but all my multithreading experience lies with Windows, and I'm having a hard time wrapping my head around them.

Thanks for the help!

(Yes, I know it's inefficient to return vectors. Let's not get into that.)
Answer 0 (score: 8)
This code is not production ready. No error checking is done on the results of any library calls.

I have wrapped the lock/unlock of the mutex in LockThread so it is exception safe. But that's about it.

In addition, if I were doing this seriously I would wrap the mutex and condition variable inside objects so they can not be abused inside other methods of Consumer. But as long as you take note that the lock must be acquired before using the condition variable (in any way), this simple situation can stand as is.

Out of interest, have you checked out the Boost thread library?
#include <algorithm>
#include <iterator>
#include <iostream>
#include <vector>
#include <pthread.h>

class LockThread
{
public:
    LockThread(pthread_mutex_t& m)
        : mutex(m)
    {
        pthread_mutex_lock(&mutex);
    }
    ~LockThread()
    {
        pthread_mutex_unlock(&mutex);
    }
private:
    pthread_mutex_t& mutex;
};

class Consumer
{
    pthread_mutex_t   lock;
    pthread_cond_t    cond;
    std::vector<char> unreadData;

public:
    Consumer()
    {
        pthread_mutex_init(&lock, NULL);
        pthread_cond_init(&cond, NULL);
    }
    ~Consumer()
    {
        pthread_cond_destroy(&cond);
        pthread_mutex_destroy(&lock);
    }

private:
    std::vector<char> read(size_t n)
    {
        LockThread locker(lock);
        while (unreadData.size() < n)
        {
            // Must wait until we have n chars.
            // This is a while loop because feed may not put enough in.

            // pthread_cond_wait() releases the lock.
            // The thread will not be allowed to continue until
            // signal is called and this thread re-acquires the lock.
            pthread_cond_wait(&cond, &lock);

            // Once released from the condition you will have re-acquired the lock.
            // Thus feed() must have exited and released the lock first.
        }

        /*
         * Not sure if this is exactly what you wanted,
         * but the data is copied out of the thread-safe buffer
         * into something that can be returned.
         */
        std::vector<char> result(n); // init result with size n
        std::copy(unreadData.begin(),
                  unreadData.begin() + n,
                  result.begin());
        unreadData.erase(unreadData.begin(),
                         unreadData.begin() + n);
        return result;
    }

public:
    void run()
    {
        read(10);
        read(4839);
        // etc
    }
    void feed(const std::vector<char>& more)
    {
        LockThread locker(lock);

        // Once we acquire the lock we can safely modify the buffer.
        std::copy(more.begin(), more.end(), std::back_inserter(unreadData));

        // Only signal the condition while you hold the lock,
        // otherwise race conditions happen.
        pthread_cond_signal(&cond);

        // The LockThread destructor releases the lock and thus allows the read thread to continue.
    }
};

int main()
{
    Consumer c;
}
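As the preamble notes, a more careful version would bundle the mutex and condition variable into one object so that other Consumer methods cannot misuse them. A minimal sketch of what such a wrapper could look like (my own illustration on top of pthreads, not part of the answer above; error checking again omitted):

#include <pthread.h>

// Bundles a mutex with its condition variable; the only way to wait or
// signal is through a Guard, which always acquires the lock first.
class Monitor
{
public:
    Monitor()
    {
        pthread_mutex_init(&mutex, 0);
        pthread_cond_init(&cond, 0);
    }
    ~Monitor()
    {
        pthread_cond_destroy(&cond);
        pthread_mutex_destroy(&mutex);
    }

    class Guard
    {
    public:
        explicit Guard(Monitor& m)
            : monitor(m)
        {
            pthread_mutex_lock(&monitor.mutex);
        }
        ~Guard()
        {
            pthread_mutex_unlock(&monitor.mutex);
        }
        // Both calls are only possible while the Guard (and thus the lock) is held.
        void wait()   { pthread_cond_wait(&monitor.cond, &monitor.mutex); }
        void signal() { pthread_cond_signal(&monitor.cond); }
    private:
        Monitor& monitor;
    };

private:
    friend class Guard;        // Guard needs the raw pthread handles
    pthread_mutex_t mutex;
    pthread_cond_t  cond;
};

Consumer would then hold a single Monitor: read() would construct a Guard and loop on Guard::wait() until enough data has arrived, while feed() would construct a Guard, append the data, and call Guard::signal().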
Answer 1 (score: 2)
I tend to use what I call a "Synchronized Queue". I wrap the normal queue and use a Semaphore class for locking and to make read block just as you desire:
#ifndef SYNCQUEUE_20061005_H_
#define SYNCQUEUE_20061005_H_

#include <queue>
#include "Semaphore.h"

// similar, but slightly simpler, interface to std::queue
// this queue implementation will serialize pushes and pops
// and block on a pop while empty (as opposed to throwing an exception)
// it also locks as necessary on insertion and removal to avoid race
// conditions

template <class T, class C = std::deque<T> > class SyncQueue {
protected:
    std::queue<T, C> m_Queue;
    Semaphore        m_Semaphore;
    Mutex            m_Mutex;

public:
    typedef typename std::queue<T, C>::value_type value_type;
    typedef typename std::queue<T, C>::size_type  size_type;

    explicit SyncQueue(const C& a = C()) : m_Queue(a), m_Semaphore(0) {}

    bool empty() const     { return m_Queue.empty(); }
    size_type size() const { return m_Queue.size(); }
    void push(const value_type& x);
    value_type pop();
};

template <class T, class C>
void SyncQueue<T, C>::push(const typename SyncQueue<T, C>::value_type& x) {
    // atomically push the item
    m_Mutex.lock();
    m_Queue.push(x);
    m_Mutex.unlock();

    // let the blocking semaphore know another item has arrived
    m_Semaphore.v();
}

template <class T, class C>
typename SyncQueue<T, C>::value_type SyncQueue<T, C>::pop() {
    // block until we have at least one item
    m_Semaphore.p();

    // atomically read and pop the front item
    m_Mutex.lock();
    value_type ret = m_Queue.front();
    m_Queue.pop();
    m_Mutex.unlock();

    return ret;
}

#endif
You can implement the semaphore and mutex with the appropriate primitives in your threading implementation.
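The SyncQueue above assumes a Semaphore.h that supplies the Semaphore and Mutex classes it uses. As a rough sketch of what those could look like on top of pthreads (my own guess at their interface, inferred from the p()/v() and lock()/unlock() calls above; error checking omitted):

#include <pthread.h>

// Minimal pthread-based Mutex matching the lock()/unlock() calls above.
class Mutex {
public:
    Mutex()     { pthread_mutex_init(&m, 0); }
    ~Mutex()    { pthread_mutex_destroy(&m); }
    void lock()   { pthread_mutex_lock(&m); }
    void unlock() { pthread_mutex_unlock(&m); }
private:
    pthread_mutex_t m;
};

// Minimal counting semaphore: p() blocks until the count is positive,
// v() increments the count and wakes one waiter.
class Semaphore {
public:
    explicit Semaphore(int initial) : count(initial) {
        pthread_mutex_init(&m, 0);
        pthread_cond_init(&c, 0);
    }
    ~Semaphore() {
        pthread_cond_destroy(&c);
        pthread_mutex_destroy(&m);
    }
    void p() {                    // wait / acquire
        pthread_mutex_lock(&m);
        while (count == 0)
            pthread_cond_wait(&c, &m);
        --count;
        pthread_mutex_unlock(&m);
    }
    void v() {                    // signal / release
        pthread_mutex_lock(&m);
        ++count;
        pthread_cond_signal(&c);
        pthread_mutex_unlock(&m);
    }
private:
    pthread_mutex_t m;
    pthread_cond_t  c;
    int count;
};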
Note: this implementation works on single elements in the queue, but you could easily wrap it with a function that buffers results until N have been supplied. Something like this, if it is a queue of chars:
std::vector<char> func(int size) {
    std::vector<char> result;
    while (result.size() != size) {
        result.push_back(my_sync_queue.pop());
    }
    return result;
}
Answer 2 (score: 1)
I'll throw out some semi-pseudocode. Here are my comments:

1) Very large grains of locking here. If you need faster access, you'll want to rethink the data structures. The STL is not thread-safe.

2) The lock will block until the mutex lets it through. The mutex construct lets one thread through at a time via the lock/unlock mechanism. No polling or exception-style structure is needed.

3) This is a pretty syntactically hacky cut at the problem. I'm not being precise with the API or C++ syntax, but I believe it gives a semantically correct solution.

4) Edited in response to comments.
class piper
{
    pthread_mutex queuemutex;
    pthread_mutex readymutex;
    bool isReady; //init to false by constructor
    //whatever else
};

piper::read()
{
    //whatever
    pthread_mutex_lock(&queuemutex)
    if(myqueue.size() >= n)
    {
        return_queue_vector.push_back(/* you know what to do here */)
        pthread_mutex_lock(&readymutex)
        isReady = false;
        pthread_mutex_unlock(&readymutex)
    }
    pthread_mutex_unlock(&queuemutex)
}

piper::push_em_in()
{
    //more whatever
    pthread_mutex_lock(&queuemutex)
    //push push push
    if(myqueue.size() >= n)
    {
        pthread_mutex_lock(&readymutex)
        isReady = true;
        pthread_mutex_unlock(&readymutex)
    }
    pthread_mutex_unlock(&queuemutex)
}
Answer 3 (score: 1)
Just for fun, here is a quick and dirty implementation using Boost. It uses pthreads under the hood on platforms that support them, and Windows operations on Windows.
boost::mutex access;
boost::condition cond;

// consumer
data read()
{
    boost::mutex::scoped_lock lock(access);
    // this blocks until the data is ready
    // (checking the predicate in a loop guards against spurious wakeups
    //  and against data that was pushed before we started waiting)
    while (!queue_has_enough_data())
        cond.wait(lock);
    // queue is ready
    return data_from_queue();
}

// producer
void push(data d)
{
    boost::mutex::scoped_lock lock(access);
    // add data to queue

    if (queue_has_enough_data())
        cond.notify_one();
}
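To make the fragment concrete for the question's scenario, here is one way it might be fleshed out. This is only a sketch under my own assumptions: the shared std::deque<char>, the variable names, and the read(n) semantics are mine, not part of the original answer.

#include <cstddef>
#include <deque>
#include <vector>
#include <boost/thread/mutex.hpp>
#include <boost/thread/condition.hpp>

boost::mutex     queue_mutex;
boost::condition queue_cond;
std::deque<char> queue;   // shared buffer between producer and consumer

// consumer: block until at least n bytes are available, then take them
std::vector<char> read(std::size_t n)
{
    boost::mutex::scoped_lock lock(queue_mutex);
    while (queue.size() < n)          // loop also guards against spurious wakeups
        queue_cond.wait(lock);
    std::vector<char> out(queue.begin(), queue.begin() + n);
    queue.erase(queue.begin(), queue.begin() + n);
    return out;
}

// producer: append data and wake the waiting consumer
void push(const std::vector<char>& more)
{
    boost::mutex::scoped_lock lock(queue_mutex);
    queue.insert(queue.end(), more.begin(), more.end());
    queue_cond.notify_one();
}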
Answer 4 (score: 1)
For even more fun, here is my final version. STL-ized for no good reason. :-)
#include <algorithm>
#include <deque>
#include <pthread.h>

template<typename T>
class MultithreadedReader {
    std::deque<T>   buffer;
    pthread_mutex_t moreDataMutex;
    pthread_cond_t  moreDataCond;

protected:
    template<typename OutputIterator>
    void read(size_t count, OutputIterator result) {
        pthread_mutex_lock(&moreDataMutex);
        while (buffer.size() < count) {
            pthread_cond_wait(&moreDataCond, &moreDataMutex);
        }
        std::copy(buffer.begin(), buffer.begin() + count, result);
        buffer.erase(buffer.begin(), buffer.begin() + count);
        pthread_mutex_unlock(&moreDataMutex);
    }

public:
    MultithreadedReader() {
        pthread_mutex_init(&moreDataMutex, 0);
        pthread_cond_init(&moreDataCond, 0);
    }
    ~MultithreadedReader() {
        pthread_cond_destroy(&moreDataCond);
        pthread_mutex_destroy(&moreDataMutex);
    }

    template<typename InputIterator>
    void feed(InputIterator first, InputIterator last) {
        pthread_mutex_lock(&moreDataMutex);
        buffer.insert(buffer.end(), first, last);
        pthread_cond_signal(&moreDataCond);
        pthread_mutex_unlock(&moreDataMutex);
    }
};
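A hypothetical way to use this class for the original scenario (the CharConsumer subclass, its run() method, and the sizes are my own illustration, not part of the answer):

#include <iterator>
#include <iostream>
#include <vector>

// Consumer built on the reader above; read() blocks until feed() has
// supplied enough elements from another thread.
class CharConsumer : public MultithreadedReader<char> {
public:
    void run() {
        std::vector<char> chunk;
        read(10, std::back_inserter(chunk));    // waits for the first 10 chars
        read(4839, std::back_inserter(chunk));  // waits for the next 4839 chars
        std::cout << "consumed " << chunk.size() << " chars\n";
    }
};

A producer thread would call feed(buf.begin(), buf.end()) on the same object; each feed() signals the condition and read() re-checks whether enough data has accumulated.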
Answer 5 (score: 0)
Glib Asynchronous Queues provide the locking and the sleep-on-reading-an-empty-queue behaviour you are looking for. See http://library.gnome.org/devel/glib/2.20/glib-Asynchronous-Queues.html. You can combine them with gthreads or gthread pools.
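A minimal sketch of the basic calls involved, shown sequentially for brevity; in a real program the push would run on a producer gthread while the pop blocks on the consumer thread:

#include <glib.h>

int main()
{
    GAsyncQueue *queue = g_async_queue_new();

    // producer side: push a pointer to some data (must not be NULL)
    char chunk[] = "more data";
    g_async_queue_push(queue, chunk);

    // consumer side: g_async_queue_pop() blocks (sleeps) until an item arrives
    char *data = static_cast<char *>(g_async_queue_pop(queue));
    g_print("got: %s\n", data);

    g_async_queue_unref(queue);
    return 0;
}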