Event/task queue multithreading in C++

Date: 2009-05-29 00:43:08

Tags: c++ multithreading queue pthreads

I want to create a class whose methods can be called from multiple threads. Instead of executing a method in the thread from which it was called, though, the class should execute them all in a thread of its own. No results need to be returned, and it must not block the calling thread.

My first attempt at an implementation is below. The public methods push a function pointer and its data onto a job queue, which the worker thread then picks up. However, it is not particularly nice code, and adding new methods is cumbersome.

Ideally I would like to use this as a base class to which I can easily add methods (with a variable number of arguments) with minimal hassle and code duplication.

What is a better way to do this? Is there any existing code that does something similar? Thanks.

#include <queue>

using namespace std;

class GThreadObject
{
    class event
    {
        public:
        void (GThreadObject::*funcPtr)(void *);
        void * data;
    };

public:
    void functionOne(char * argOne, int argTwo);

private:
    void workerThread();
    queue<GThreadObject::event*> jobQueue;
    void functionOneProxy(void * buffer);
    void functionOneInternal(char * argOne, int argTwo);

};



#include <iostream>
#include <cstdlib>   // malloc/free
#include <cstring>   // memcpy
#include "GThreadObject.h"

using namespace std;

/* On a continuous loop, reading tasks from queue
 * When a new event is received it executes the attached function pointer
 * It should block on a condition, but Thread code removed to decrease clutter
 */
void GThreadObject::workerThread()
{
    //New Event added, process it
    GThreadObject::event * receivedEvent = jobQueue.front();

    //Execute the function pointer with the attached data
    (*this.*receivedEvent->funcPtr)(receivedEvent->data);
}

/*
 * This is the public interface, Can be called from child threads
 * Instead of executing the event directly it adds it to a job queue
 * Then the workerThread picks it up and executes all tasks on the same thread
 */
void GThreadObject::functionOne(char * argOne, int argTwo)
{

    //Malloc an object the size of the function arguments
    int argumentSize = sizeof(char*)+sizeof(int);
    void * myData = malloc(argumentSize);
    //Copy the data passed to this function into the buffer
    //(relies on argOne and argTwo being laid out contiguously on the stack)
    memcpy(myData, &argOne, argumentSize);

    //Create the event and push it on to the queue
    GThreadObject::event * myEvent = new event;
    myEvent->data = myData;
    myEvent->funcPtr = &GThreadObject::functionOneProxy;
    jobQueue.push(myEvent);

    //This would be send a thread condition signal, replaced with a simple call here
    this->workerThread();
}

/*
 * This handles the actual event
 */
void GThreadObject::functionOneInternal(char * argOne, int argTwo)
{
    cout << "We've made it to functionTwo char*:" << argOne << " int:" << argTwo << endl;

    //Now do the work
}

/*
 * This is the function I would like to remove if possible
 * Split the void * buffer into arguments for the internal Function
 */
void GThreadObject::functionOneProxy(void * buffer)
{
    char * cBuff = (char*)buffer;
    //Reinterpret the buffer as the original argument layout: a char* followed by an int
    functionOneInternal(*((char**)cBuff), *((int*)(cBuff+sizeof(char*))));
};

int main()
{
    GThreadObject myObj;

    myObj.functionOne("My Message", 23);

    return 0;
}

8 Answers:

Answer 0 (score: 6):

The Futures library is making its way into Boost and the C++ standard library. There is also something similar in ACE, but I would hate to recommend it to anyone (as @lothar already pointed out, it is the Active Object).
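
For illustration only, here is a minimal sketch of that style using the std::future/std::async interface that eventually landed in C++11 (in 2009 the Boost equivalents would play the same role). Note that this answers the "don't block the caller" part of the question but, unlike the design above, it does not serialize all calls onto one dedicated thread:

#include <future>
#include <iostream>

void functionOne(const char * argOne, int argTwo)
{
    std::cout << "char*: " << argOne << " int: " << argTwo << std::endl;
}

int main()
{
    // The call returns immediately; the work runs asynchronously.
    std::future<void> done = std::async(std::launch::async, functionOne,
                                        "My Message", 23);

    // ...the calling thread is free to do other work here...

    done.wait();   // join point, only needed if completion matters
    return 0;
}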

Answer 1 (score: 2):

The POCO libraries have something along these lines in their Threading section called ActiveMethod (along with some related functionality, such as ActiveResult). The source code is readily available and easy to understand.
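
For illustration, here is a rough sketch of what an ActiveMethod looks like, based on the POCO documentation (the Echoer class and its method names are made up for this example):

#include "Poco/ActiveMethod.h"
#include "Poco/ActiveResult.h"
#include <iostream>
#include <string>

// Calls to 'echo' return immediately; the work runs asynchronously on a
// thread provided by POCO, and ActiveResult lets the caller wait if needed.
class Echoer
{
public:
    Echoer(): echo(this, &Echoer::echoImpl) {}

    Poco::ActiveMethod<std::string, std::string, Echoer> echo;

private:
    std::string echoImpl(const std::string& arg)
    {
        return arg;   // the lengthy work would go here
    }
};

int main()
{
    Echoer e;
    Poco::ActiveResult<std::string> result = e.echo("My Message");
    // ...the calling thread is free to do other work here...
    result.wait();
    std::cout << result.data() << std::endl;
    return 0;
}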

Answer 2 (score: 2):

You can solve this with Boost's Thread library. Something like this (semi-pseudocode):


class GThreadObject
{
        ...

        public:
                GThreadObject()
                : _done(false)
                , _newJob(false)
                , _thread(boost::bind(&GThreadObject::workerThread, this))
                {
                }

                ~GThreadObject()
                {
                        _done = true;

                        _thread.join();
                }

                void functionOne(char *argOne, int argTwo)
                {
                        ...

                        _jobQueue.push(myEvent);

                        {
                                boost::lock_guard<boost::mutex> l(_mutex);

                                _newJob = true;
                        }

                        _cond.notify_one();
                }

        private:
                void workerThread()
                {
                        while (!_done) {
                                boost::unique_lock<boost::mutex> l(_mutex);

                                while (!_newJob) {
                                        _cond.wait(l);
                                }

                                Event *receivedEvent = _jobQueue.front();

                                ...
                        }
                }

        private:
                volatile bool             _done;
                volatile bool             _newJob;
                boost::thread             _thread;
                boost::mutex              _mutex;
                boost::condition_variable _cond;
                std::queue<Event*>        _jobQueue;
};

Also, note how RAII lets us keep this code smaller and easier to manage.

Answer 3 (score: 1):

You might be interested in the Active Object pattern in the ACE framework (see the ACE Patterns documentation).

As Nikolai pointed out, futures are planned for standard C++ at some point in the future (pun intended).

Answer 4 (score: 1):

For extensibility and maintainability (among other things) you could define an abstract class (or interface) for the "jobs" the thread is to execute. Users of your thread pool would then implement this interface and pass a reference to their object to the thread pool. This is very similar to the Symbian Active Object design: every AO subclasses CActive and must implement methods such as Run() and Cancel().

For simplicity, your interface (abstract class) could be as simple as:

class IJob
{
public:
    virtual ~IJob() {}
    virtual void Run() = 0;
};

The thread pool, or a single thread accepting requests, would then have something like:

class CThread
{
   <...>
public:
   void AddJob(IJob* iTask);
   <...>
};

Of course you would have many such job classes, which could have all sorts of extra setters/getters/attributes and whatever else you need. The only requirement, however, is to implement the method Run(), which performs the lengthy computation:

class CDumbLoop : public IJob
{
public:
    CDumbLoop(int iCount) : m_Count(iCount) {};
    ~CDumbLoop() {};
    void Run()
    {
        // Do anything you want here
    }
private:
    int m_Count;
};
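
To complete the picture, here is a minimal sketch (not part of the answer) of how the consuming side might look: a worker that blocks on a condition until a job arrives, then calls its Run() method. It assumes Boost.Thread and a std::queue, and it is shown as a self-contained CJobThread so it does not clash with the CThread declaration above; all member names are illustrative:

#include <queue>
#include <boost/thread/mutex.hpp>
#include <boost/thread/condition.hpp>

class CJobThread
{
public:
    void AddJob(IJob* aJob)
    {
        boost::mutex::scoped_lock lock(iMutex);
        iJobs.push(aJob);
        iCondition.notify_one();            // wake the worker thread
    }

private:
    void WorkerLoop()                        // body of the dedicated thread
    {
        for (;;)
        {
            IJob* job = 0;
            {
                boost::mutex::scoped_lock lock(iMutex);
                while (iJobs.empty())
                    iCondition.wait(lock);   // releases the mutex while waiting
                job = iJobs.front();
                iJobs.pop();
            }
            job->Run();                      // execute the user-supplied job
        }
    }

    std::queue<IJob*> iJobs;
    boost::mutex      iMutex;
    boost::condition  iCondition;
};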

Answer 5 (score: 1):

Here is a class I wrote for a similar purpose (I use it for event handling, but you could of course rename it to ActionQueue, and rename its methods as well).

You use it like this:

With the function you would like to call: void foo (const int x, const int y) { /*...*/ }

And: EventQueue q;

q.AddEvent(boost::bind(foo, 10, 20));

In the worker thread:

q.PlayOutEvents();

Note: it should be fairly easy to add a condition wait so the worker does not burn CPU cycles (a sketch of such a change follows the code below).

Code (Visual Studio 2003 with Boost 1.34.1):

#pragma once

#include <boost/thread/recursive_mutex.hpp>
#include <boost/function.hpp>
#include <boost/signals.hpp>
#include <boost/bind.hpp>
#include <boost/foreach.hpp>
#include <vector>
#include <string>
#include <windows.h> // for Sleep()
using std::string;


// Records & plays out actions (closures) in a thread-safe manner.

class EventQueue
{
    typedef boost::function <void ()> Event;

public:

    const bool PlayOutEvents ()
    {
        // The copy is there to ensure there are no deadlocks.
        const std::vector<Event> eventsCopy = PopEvents ();

        BOOST_FOREACH (const Event& e, eventsCopy)
        {
            e ();
            Sleep (0);
        }

        return eventsCopy.size () > 0;
    }

    void AddEvent (const Event& event)
    {
        Mutex::scoped_lock lock (myMutex);

        myEvents.push_back (event);
    }

protected:

    const std::vector<Event> PopEvents ()
    {
        Mutex::scoped_lock lock (myMutex);

        const std::vector<Event> eventsCopy = myEvents;
        myEvents.clear ();

        return eventsCopy;
    }

private:

    typedef boost::recursive_mutex Mutex;
    Mutex myMutex;

    std::vector <Event> myEvents;

};
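
Picking up the note above about not burning CPU cycles, here is a minimal sketch (an illustration, not part of the original class) of how a condition wait could be added, using boost::condition from Boost.Thread and a hypothetical WaitAndPlayOutEvents method. If boost::condition does not accept the recursive mutex's scoped_lock in your Boost version, switch the class to a plain boost::mutex:

#include <boost/thread/condition.hpp>

// Sketch only: add a 'boost::condition myCondition;' member next to myMutex,
// change AddEvent to notify it, and add a blocking variant of PlayOutEvents.

void EventQueue::AddEvent (const Event& event)
{
    {
        Mutex::scoped_lock lock (myMutex);
        myEvents.push_back (event);
    }
    myCondition.notify_one ();           // wake the worker thread
}

void EventQueue::WaitAndPlayOutEvents ()
{
    std::vector<Event> eventsCopy;
    {
        Mutex::scoped_lock lock (myMutex);
        while (myEvents.empty ())
            myCondition.wait (lock);     // releases myMutex while waiting
        eventsCopy.swap (myEvents);
    }

    BOOST_FOREACH (const Event& e, eventsCopy)
        e ();
}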

I hope this helps. :)

Martin Bilski

Answer 6 (score: 1):

Below is an implementation that does not require a "functionProxy" method. Although it makes adding new methods easier, it is still messy.

Boost::Bind and "Futures" do look like they would tidy a lot of this up. I guess I'll have a look at the Boost code and see how it works. Thanks for everyone's suggestions.

GThreadObject.h

#include <queue>

using namespace std;

class GThreadObject
{

    template <int size>
    class VariableSizeContainter
    {
        char data[size];
    };

    class event
    {
        public:
        void (GThreadObject::*funcPtr)(void *);
        int dataSize;
        char * data;
    };

public:
    void functionOne(char * argOne, int argTwo);
    void functionTwo(int argOne, int argTwo);


private:
    void newEvent(void (GThreadObject::*)(void*), unsigned int argStart, int argSize);
    void workerThread();
    queue<GThreadObject::event*> jobQueue;
    void functionTwoInternal(int argOne, int argTwo);
    void functionOneInternal(char * argOne, int argTwo);

};

GThreadObject.cpp

#include <iostream>
#include <cstdlib>   // malloc/free
#include <cstring>   // memcpy
#include "GThreadObject.h"

using namespace std;

/* On a continuous loop, reading tasks from queue
 * When a new event is received it executes the attached function pointer
 * Thread code removed to decrease clutter
 */
void GThreadObject::workerThread()
{
    //New Event added, process it
    GThreadObject::event * receivedEvent = jobQueue.front();

    /* Create an object the size of the stack the function is expecting, then cast the function to accept this object as an argument.
     * This is the bit i would like to remove
     * Only supports 8 byte argument size e.g 2 int's OR pointer + int OR myObject8bytesSize
     * Subsequent data sizes would need to be added with an else if
     * */
    if (receivedEvent->dataSize == 8)
    {
        const int size = 8;

        void (GThreadObject::*newFuncPtr)(VariableSizeContainter<size>);
        newFuncPtr = (void (GThreadObject::*)(VariableSizeContainter<size>))receivedEvent->funcPtr;

        //Execute the function
        (*this.*newFuncPtr)(*((VariableSizeContainter<size>*)receivedEvent->data));
    }

    //Clean up
    free(receivedEvent->data);
    delete receivedEvent;

}

void GThreadObject::newEvent(void (GThreadObject::*funcPtr)(void*), unsigned int argStart, int argSize)
{

    //Malloc an object the size of the function arguments
    void * myData = malloc(argSize);
    //Copy the data passed to this function into the buffer
    memcpy(myData, (char*)argStart, argSize);

    //Create the event and push it on to the queue
    GThreadObject::event * myEvent = new event;
    myEvent->data = (char*)myData;
    myEvent->dataSize = argSize;
    myEvent->funcPtr = funcPtr;
    jobQueue.push(myEvent);

    //This would be send a thread condition signal, replaced with a simple call here
    this->workerThread();

}

/*
 * This is the public interface, Can be called from child threads
 * Instead of executing the event directly it adds it to a job queue
 * Then the workerThread picks it up and executes all tasks on the same thread
 */
void GThreadObject::functionOne(char * argOne, int argTwo)
{
    newEvent((void (GThreadObject::*)(void*))&GThreadObject::functionOneInternal, (unsigned int)&argOne, sizeof(char*)+sizeof(int));
}

/*
 * This handles the actual event
 */
void GThreadObject::functionOneInternal(char * argOne, int argTwo)
{
    cout << "We've made it to functionOne Internal char*:" << argOne << " int:" << argTwo << endl;

    //Now do the work
}

void GThreadObject::functionTwo(int argOne, int argTwo)
{
    newEvent((void (GThreadObject::*)(void*))&GThreadObject::functionTwoInternal, (unsigned int)&argOne, sizeof(int)+sizeof(int));
}

/*
 * This handles the actual event
 */
void GThreadObject::functionTwoInternal(int argOne, int argTwo)
{
    cout << "We've made it to functionTwo Internal arg1:" << argOne << " int:" << argTwo << endl;
}

main.cpp

#include <iostream>
#include "GThreadObject.h"

int main()
{

    GThreadObject myObj;

    myObj.functionOne("My Message", 23);
    myObj.functionTwo(456, 23);


    return 0;
}

Edit: For completeness I also did an implementation using Boost::bind. The key differences:

queue<boost::function<void ()> > jobQueue;

void GThreadObjectBoost::functionOne(char * argOne, int argTwo)
{
    jobQueue.push(boost::bind(&GThreadObjectBoost::functionOneInternal, this, argOne, argTwo));

    workerThread();
}

void GThreadObjectBoost::workerThread()
{
    boost::function<void ()> func = jobQueue.front();
    func();
}

Using the Boost implementation, 10,000,000 iterations of functionOne() took about 19 seconds, whereas the non-Boost implementation took only about 6.5 seconds, so roughly 3x slower. I would guess that finding a good non-locking queue would be the biggest performance win here. Still, it is quite a difference.

Answer 7 (score: 0):

You should check out the Boost ASIO library. It is designed to dispatch events asynchronously, and it can be paired with the Boost Thread library to build the system you described.

You would instantiate a single boost::asio::io_service object and schedule a series of asynchronous events on it (boost::asio::io_service::post or boost::asio::io_service::dispatch). Then you call the run member function from n threads. The io_service object is thread-safe and guarantees that your asynchronous handlers are dispatched only in threads from which you called io_service::run.
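
A minimal, single-shot sketch of that pattern (a long-lived service would additionally hold a boost::asio::io_service::work object so that run() does not return once the queue drains):

#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/thread.hpp>
#include <iostream>

void functionOne(const char * argOne, int argTwo)
{
    std::cout << "char*: " << argOne << " int: " << argTwo << std::endl;
}

int main()
{
    boost::asio::io_service io;

    // Any thread can post; the call returns immediately and the handler
    // runs later in whichever thread calls io.run().
    io.post(boost::bind(functionOne, "My Message", 23));

    // A dedicated worker thread processes the queued handlers; run()
    // returns once there is no more work.
    boost::thread worker(boost::bind(&boost::asio::io_service::run, &io));
    worker.join();
    return 0;
}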

The boost::asio::strand object is also useful for simple thread synchronization.

For what it's worth, I think the ASIO library is a very elegant solution to this problem.