I am implementing a simple thread pool mechanism for my Ubuntu server (for my multi-client anonymous chat program), and I need my worker threads to sleep until a job (in the form of a function pointer and a parameter) needs to be performed.
The system I have in place now is going out the window for this. Right now the workers (threads) ask the manager whether any work is available, and if not, sleep for 5 ms. If there is, the job is added to the worker's queue and the function is run. A pathetic waste of cycles.
What I would like to do is build a simple event-like system. I was thinking of having a vector of mutexes (one per worker) and passing the mutex in as a parameter when each worker is created. Then, in my manager class (which holds and hands out the jobs), lock the mutex whenever a thread is created. When a job needs to be performed, unlock the next mutex, wait for it to be locked and unlocked, and relock it. But I wonder whether there is a better way to achieve this.
tldr; So my question is this: what is the most efficient, effective and safest way to make a thread wait for work from a managing class? Is polling a technique I should even be considering (more than 1000 clients at a time), is mutex locking decent? Or are there other techniques?
Answer 0 (score: 6)
What you need is a condition variable. All the worker threads call wait(), which suspends them.
The parent thread then puts a work item on a queue and calls signal on the condition variable. This wakes one of the sleeping threads, which can remove the job from the queue, execute it, and then call wait on the condition variable to go back to sleep.
Try this:
#include <pthread.h>
#include <memory>
#include <list>
#include <vector>
// Use RAII to do the lock/unlock
struct MutexLock
{
MutexLock(pthread_mutex_t& m) : mutex(m) { pthread_mutex_lock(&mutex); }
~MutexLock() { pthread_mutex_unlock(&mutex); }
private:
pthread_mutex_t& mutex;
};
// The base class of all work we want to do.
struct Job
{
virtual void doWork() = 0;
};
// pthreads is a C library the call back must be a C function.
extern "C" void* threadPoolThreadStart(void*);
// The very bare minimum of a thread pool.
// The constructor creates the worker threads; each one ends up in
// workerStart() via the C callback threadPoolThreadStart().
class ThreadPool
{
public:
ThreadPool(unsigned int threadCount=1);
~ThreadPool();
void addWork(std::auto_ptr<Job> job);
private:
friend void* threadPoolThreadStart(void*);
void workerStart();
std::auto_ptr<Job> getJob();
bool finished; // Workers exit once this is set to true.
pthread_mutex_t mutex; // A lock so that we can sequence accesses.
pthread_cond_t cond; // The condition variable that is used to hold worker threads.
std::list<Job*> workQueue; // A queue of jobs.
std::vector<pthread_t> threads; // Handles of the worker threads.
};
// Create the thread pool
ThreadPool::ThreadPool(unsigned int threadCount)
: finished(false)
, threads(threadCount)
{
// If we fail creating either pthread object then throw a fit.
if (pthread_mutex_init(&mutex, NULL) != 0)
{ throw int(1);
}
if (pthread_cond_init(&cond, NULL) != 0)
{
pthread_mutex_destroy(&mutex);
throw int(2);
}
for(unsigned int loop=0; loop < threadCount;++loop)
{
if (pthread_create(&threads[loop], NULL, threadPoolThreadStart, this) != 0)
{
// One thread failed: clean up
for(unsigned int kill = loop -1; kill < loop /*unsigned will wrap*/;--kill)
{
pthread_cancel(threads[kill]);
}
throw int(3);
}
}
}
// Clean up any leftovers.
// The destructor wakes every worker (finished is now true), joins them,
// destroys the pthread objects and deletes any jobs still in the queue.
ThreadPool::~ThreadPool()
{
{
// Take the lock so the change to 'finished' is visible to any worker
// that is about to wait on the condition variable.
MutexLock lock(mutex);
finished = true;
for(std::vector<pthread_t>::iterator loop = threads.begin();loop != threads.end(); ++loop)
{
// Send enough signals to free all threads.
pthread_cond_signal(&cond);
}
}
for(std::vector<pthread_t>::iterator loop = threads.begin();loop != threads.end(); ++loop)
{
// Wait for all threads to exit (they will as finished is true and
// we sent enough signals to make sure
// they are running).
void* result;
pthread_join(*loop, &result);
}
// Destroy the pthread objects.
pthread_cond_destroy(&cond);
pthread_mutex_destroy(&mutex);
// Delete all remaining jobs.
// Notice how we took ownership of the jobs.
for(std::list<Job*>::const_iterator loop = workQueue.begin(); loop != workQueue.end();++loop)
{
delete *loop;
}
}
// Add a new job to the queue
// Signal the condition variable. This will flush a waiting worker
// otherwise the job will wait for a worker to finish processing its current job.
void ThreadPool::addWork(std::auto_ptr<Job> job)
{
MutexLock lock(mutex);
workQueue.push_back(job.release());
pthread_cond_signal(&cond);
}
// Start a thread.
// Make sure no exceptions escape as that is bad.
void* threadPoolThreadStart(void* data)
{
ThreadPool* pool = reinterpret_cast<ThreadPool*>(data);
try
{
pool->workerStart();
}
catch(...){}
return NULL;
}
// This is the main worker loop.
void ThreadPool::workerStart()
{
while(!finished)
{
std::auto_ptr<Job> job = getJob();
if (job.get() != NULL)
{
job->doWork();
}
}
}
// The workers come here to get a job.
// If there are none in the queue they are suspended waiting on cond
// until a new job is added above.
std::auto_ptr<Job> ThreadPool::getJob()
{
MutexLock lock(mutex);
while((workQueue.empty()) && (!finished))
{
pthread_cond_wait(&cond, &mutex);
// The wait releases the mutex lock and suspends the thread (until a signal).
// When a thread wakes up it is held until it can acquire the mutex so when we
// get here the mutex is again locked.
//
// Note: You must use while() here. This is because of the situation.
// Two workers: Worker A processing job A.
// Worker B suspended on condition variable.
// Parent adds a new job and calls signal.
// This wakes up thread B. But it is possible for Worker A to finish its
// work and lock the mutex before the Worker B is released from the above call.
//
// If that happens then Worker A will see that the queue is not empty
// and grab the work item in the queue and start processing. Worker B will
// then lock the mutex and proceed here. If the above is not a while then
// it would try and remove an item from an empty queue. With a while it sees
// that the queue is empty and re-suspends on the condition variable above.
}
std::auto_ptr<Job> result;
if (!finished)
{ result.reset(workQueue.front());
workQueue.pop_front();
}
return result;
}
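For completeness, here is a rough usage sketch of the pool above. PrintJob, the thread count and the sleep(1) are made up for illustration; a real program would hand the pool its chat jobs instead.

#include <cstdio>
#include <unistd.h>

struct PrintJob : Job
{
    PrintJob(int id) : id(id) {}
    virtual void doWork() { std::printf("job %d done\n", id); }
private:
    int id;
};

int main()
{
    ThreadPool pool(4);                 // four workers waiting on the condition variable
    for (int i = 0; i < 10; ++i)
    {
        std::auto_ptr<Job> job(new PrintJob(i));
        pool.addWork(job);              // wakes one sleeping worker per job
    }
    sleep(1);                           // crude: give the workers time to drain the queue
    return 0;                           // ~ThreadPool() wakes, joins and cleans up
}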
Answer 1 (score: 3)
The usual way to implement this is to have a queue (queue) of outstanding work, a mutex (mutex) protecting the queue, and a wait condition (queue_not_empty). Then each worker thread does the following (using a pseudo-API):
while (true) {
    Work * work = 0;
    mutex.lock();
    while ( queue.empty() )
        if ( !queue_not_empty.wait( &mutex, timeout ) )
            return; // timeout - exit the worker thread
    work = queue.front();
    queue.pop_front();
    mutex.unlock();
    work->perform();
}
The wait( &mutex, timeout ) call blocks until the wait condition is signalled or the call times out. The mutex passed in is atomically unlocked inside wait() and locked again before the call returns, so that all participants get a consistent view of the queue. The timeout would be chosen fairly large (seconds) and causes the thread to exit when it fires (the thread pool starts new threads if more work comes in).
Meanwhile, the thread pool's work-insertion function does this:
Work * work = ...;
mutex.lock();
queue.push_back( work );
if ( worker.empty() )
    start_a_new_worker();
queue_not_empty.wake_one();
mutex.unlock();
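For reference, here is one concrete way the pseudo-API above could look with C++11 primitives. The WorkQueue name and the 30-second idle timeout are illustrative choices, not part of the answer.

#include <chrono>
#include <condition_variable>
#include <deque>
#include <mutex>

struct Work { void perform() { /* do the job */ } };

class WorkQueue
{
public:
    // Worker loop: sleep until there is work, run it, repeat; give up after 30 idle seconds.
    void workerLoop()
    {
        for (;;)
        {
            Work* work = 0;
            {
                std::unique_lock<std::mutex> lock(mutex);
                while (queue.empty())
                {
                    if (notEmpty.wait_for(lock, std::chrono::seconds(30)) == std::cv_status::timeout
                        && queue.empty())
                        return;                    // idle timeout: exit this worker
                }
                work = queue.front();
                queue.pop_front();
            }                                      // drop the lock before doing the work
            work->perform();
            delete work;
        }
    }
    // Producer side: push a work item and wake exactly one sleeping worker.
    void addWork(Work* work)
    {
        std::lock_guard<std::mutex> lock(mutex);
        queue.push_back(work);
        notEmpty.notify_one();
    }
private:
    std::mutex mutex;
    std::condition_variable notEmpty;
    std::deque<Work*> queue;
};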
Answer 2 (score: 2)
This is classic producer-consumer synchronization with multiple consumers (the worker threads consume the work requests). The well-known technique is to have a semaphore that every worker thread does a down() on, and that you do an up() on every time there is a work request. The request is then picked from a mutex-protected work queue. Since one up() only wakes one down(), contention on the mutex will actually be quite small.
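A minimal sketch of that scheme with POSIX primitives (the JobQueue and Request names are made up for this example):

#include <pthread.h>
#include <semaphore.h>
#include <deque>

struct Request { void (*fn)(void*); void* arg; };

class JobQueue
{
public:
    JobQueue()  { sem_init(&available, 0, 0); pthread_mutex_init(&mutex, NULL); }
    ~JobQueue() { sem_destroy(&available); pthread_mutex_destroy(&mutex); }

    // Manager side: enqueue a request and up() the semaphore.
    void push(const Request& r)
    {
        pthread_mutex_lock(&mutex);
        queue.push_back(r);
        pthread_mutex_unlock(&mutex);
        sem_post(&available);              // wakes exactly one waiting worker
    }
    // Worker side: down() the semaphore, then pick a request off the queue.
    Request pop()
    {
        sem_wait(&available);              // blocks while the count is zero
        pthread_mutex_lock(&mutex);
        Request r = queue.front();
        queue.pop_front();
        pthread_mutex_unlock(&mutex);
        return r;
    }
private:
    sem_t available;                       // counts queued requests
    pthread_mutex_t mutex;                 // protects 'queue' only
    std::deque<Request> queue;
};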
Alternatively, you can do the same with a condition variable: wait in each thread and wake one up when you have work. The queue itself is still protected by a mutex (a condvar needs one anyway).
Finally, I'm not completely sure, but I think you could actually use a pipe as the queue, synchronization included (the worker threads simply try to read(sizeof(request))). A bit hacky, but it results in fewer context switches.
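A rough sketch of what that could look like. The Request, submit and workerLoop names are made up; this leans on small pipe writes being atomic and skips error handling, so it is as hacky as the answer admits.

#include <unistd.h>

struct Request { void (*fn)(void*); void* arg; };

static int jobPipe[2];   // jobPipe[0] = read end (workers), jobPipe[1] = write end (manager);
                         // call pipe(jobPipe) once at start-up.

// Manager side: one write per job. Writes this small stay below PIPE_BUF,
// so requests do not interleave even with many workers reading.
void submit(void (*fn)(void*), void* arg)
{
    Request r = { fn, arg };
    write(jobPipe[1], &r, sizeof r);
}

// Worker side: read() blocks in the kernel while there is no work,
// and the kernel hands each request to a single reader.
void* workerLoop(void*)
{
    Request r;
    while (read(jobPipe[0], &r, sizeof r) == (ssize_t)sizeof r)
        r.fn(r.arg);
    return NULL;
}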
Answer 3 (score: 2)
Since a network chat program is presumably I/O-bound rather than CPU-bound, you don't really need threads at all. You can handle all of your I/O in a single thread using facilities such as Boost.Asio or the GLib main loop. These are portable abstractions over platform-specific functions that let a program block waiting for activity on any of a (potentially large) set of open files or sockets, and then wake up and respond promptly when activity occurs.
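A bare-bones sketch of that single-threaded, event-driven idea, using the underlying poll(2) call directly (Boost.Asio and the GLib main loop wrap this kind of loop for you); listenFd, the 512-byte buffer and the missing error handling are all simplifications:

#include <poll.h>
#include <sys/socket.h>
#include <unistd.h>
#include <vector>

void eventLoop(int listenFd)
{
    std::vector<pollfd> fds;
    pollfd listener = { listenFd, POLLIN, 0 };
    fds.push_back(listener);
    for (;;)
    {
        poll(&fds[0], fds.size(), -1);             // block: no busy polling, no 5 ms naps
        for (size_t i = 0; i < fds.size(); ++i)
        {
            if (!(fds[i].revents & POLLIN))
                continue;
            if (fds[i].fd == listenFd)             // a new client is connecting
            {
                pollfd client = { accept(listenFd, NULL, NULL), POLLIN, 0 };
                fds.push_back(client);
            }
            else                                   // a client sent a message
            {
                char buf[512];
                ssize_t n = read(fds[i].fd, buf, sizeof buf);
                if (n <= 0)
                    close(fds[i].fd);              // disconnected; real code would also drop it from fds
                // real code would broadcast buf[0..n) to the other clients here
            }
        }
    }
}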
Answer 4 (score: 1)
The easiest way to do this is with semaphores. Here is how a semaphore works:
A semaphore is basically a variable that takes zero or positive values. Processes can interact with it in two ways: incrementing or decrementing the semaphore.
Incrementing the semaphore adds 1 to this magic variable, and that's it. It's decrementing the count where things get interesting: if the count reaches zero and a process tries to lower it again, since it cannot take negative values, it will block until the variable goes up.
If multiple processes are blocked waiting to lower the semaphore's value, only one is woken up per unit added to the count.
This makes it very easy to build a worker/task system: your manager process queues up tasks and increments the semaphore's value to match the number of outstanding items, and your worker processes try to decrement the count, fetching tasks continuously. When no tasks are available they block and consume no CPU time, and when one appears, only one of the sleeping processes wakes up. Insta-sync magic.
Unfortunately, at least in the Unix world, the semaphore API isn't very friendly, since for some reason it deals with arrays of semaphores rather than single ones. But you're a simple wrapper away from a nice interface!
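For example, a tiny wrapper over the System V calls might look like this (the Semaphore class name and the error handling are mine, not part of the answer):

#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/sem.h>
#include <stdexcept>

class Semaphore
{
public:
    explicit Semaphore(int initial = 0)
    {
        // One private semaphore in a set of size 1.
        id = semget(IPC_PRIVATE, 1, IPC_CREAT | 0600);
        if (id == -1)
            throw std::runtime_error("semget failed");
        while (initial-- > 0)
            up();
    }
    ~Semaphore() { semctl(id, 0, IPC_RMID); }
    void up()   { change(+1); }   // increase the count
    void down() { change(-1); }   // decrease the count, blocking at zero
private:
    void change(short delta)
    {
        sembuf op;
        op.sem_num = 0;
        op.sem_op  = delta;
        op.sem_flg = 0;
        semop(id, &op, 1);
    }
    int id;
};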
Cheers!