I want to download stock price spreadsheets from Yahoo Finance using Go. I'll be making an HTTP request for each stock in its own goroutine. I have a list of roughly 2500 symbols, but rather than making 2500 requests in parallel, I'd prefer to make 250 at a time. In Java I would create a thread pool and reuse the threads as they become free. I was looking for something similar, a goroutine pool if you will, but couldn't find any resources. I'd appreciate it if someone could tell me how to accomplish the task at hand or point me to resources for the same. Thanks!
Answer 0 (score: 46)
The simplest way, I think, is to create 250 goroutines and pass them a channel that you use to hand links from the main goroutine to the children, which listen on that channel.
When all links have been passed to the goroutines, you close the channel and all the goroutines finish their work.
To guard against the main goroutine finishing before the children have processed the data, you can use sync.WaitGroup.
Here is some code (not a final working version, but it shows the point) for what I described above:
func worker(linkChan chan string, wg *sync.WaitGroup) {
    // Decreasing internal counter for wait-group as soon as goroutine finishes
    defer wg.Done()

    for url := range linkChan {
        // Analyze value and do the job here
        _ = url // placeholder so the snippet compiles
    }
}

func main() {
    lCh := make(chan string)
    wg := new(sync.WaitGroup)

    // Adding routines to workgroup and running them
    for i := 0; i < 250; i++ {
        wg.Add(1)
        go worker(lCh, wg)
    }

    // Processing all links by spreading them to `free` goroutines
    for _, link := range yourLinksSlice {
        lCh <- link
    }

    // Closing channel (the range loops in the workers will finish)
    close(lCh)

    // Waiting for all goroutines to finish (otherwise they die as main routine dies)
    wg.Wait()
}
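For the question's actual task (one HTTP request per symbol), the worker body above could be filled in along these lines. This is only a sketch: it drops into the example above, additionally assumes the standard library's net/http, io, and log packages are imported, and leaves open what you do with the downloaded data.

// worker fetches each URL it receives on linkChan; failures are only
// logged so that one bad symbol does not stop the other workers.
func worker(linkChan chan string, wg *sync.WaitGroup) {
    defer wg.Done()

    for url := range linkChan {
        resp, err := http.Get(url)
        if err != nil {
            log.Printf("fetch %s: %v", url, err)
            continue
        }
        body, err := io.ReadAll(resp.Body)
        resp.Body.Close()
        if err != nil {
            log.Printf("read %s: %v", url, err)
            continue
        }
        // Save or parse the spreadsheet here (e.g. write the CSV to disk).
        _ = body
    }
}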
Answer 1 (score: 2)
You can use the thread pool implementation library for Go from this git repo.
Here is a good blog post on how to use channels as a thread pool.
Excerpt from the blog:
var (
    MaxWorker = os.Getenv("MAX_WORKERS")
    MaxQueue  = os.Getenv("MAX_QUEUE")
)

// Job represents the job to be run
type Job struct {
    Payload Payload
}

// A buffered channel that we can send work requests on.
var JobQueue chan Job

// Worker represents the worker that executes the job
type Worker struct {
    WorkerPool chan chan Job
    JobChannel chan Job
    quit       chan bool
}

func NewWorker(workerPool chan chan Job) Worker {
    return Worker{
        WorkerPool: workerPool,
        JobChannel: make(chan Job),
        quit:       make(chan bool)}
}

// Start method starts the run loop for the worker, listening for a quit channel in
// case we need to stop it
func (w Worker) Start() {
    go func() {
        for {
            // register the current worker into the worker queue.
            w.WorkerPool <- w.JobChannel

            select {
            case job := <-w.JobChannel:
                // we have received a work request.
                if err := job.Payload.UploadToS3(); err != nil {
                    log.Errorf("Error uploading to S3: %s", err.Error())
                }

            case <-w.quit:
                // we have received a signal to stop
                return
            }
        }
    }()
}

// Stop signals the worker to stop listening for work requests.
func (w Worker) Stop() {
    go func() {
        w.quit <- true
    }()
}
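The excerpt shows only the worker side. A dispatcher that pulls jobs off JobQueue and hands them to whichever worker is currently idle might look roughly like this; this is a sketch of the pattern the excerpt implies, not a verbatim continuation of the blog:

// Dispatcher owns the pool of per-worker job channels and hands
// incoming jobs from JobQueue to whichever worker is currently idle.
type Dispatcher struct {
    WorkerPool chan chan Job
    maxWorkers int
}

func NewDispatcher(maxWorkers int) *Dispatcher {
    return &Dispatcher{
        WorkerPool: make(chan chan Job, maxWorkers),
        maxWorkers: maxWorkers,
    }
}

// Run starts the workers and then dispatches jobs from the global JobQueue.
func (d *Dispatcher) Run() {
    for i := 0; i < d.maxWorkers; i++ {
        worker := NewWorker(d.WorkerPool)
        worker.Start()
    }
    go d.dispatch()
}

func (d *Dispatcher) dispatch() {
    for job := range JobQueue {
        // An idle worker has registered its JobChannel in the pool;
        // take that channel and hand the job over.
        jobChannel := <-d.WorkerPool
        jobChannel <- job
    }
}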
Answer 2 (score: 1)
This example uses two channels, one for input and one for output. The workers can be scaled to any size; each goroutine works off the input queue and saves all output to the output channel. Feedback on simpler approaches is very welcome.
package main

import (
    "fmt"
    "sync"
)

var wg sync.WaitGroup

func worker(input chan string, output chan string) {
    defer wg.Done()

    // Consumer: Process items from the input channel and send results to output channel
    for value := range input {
        output <- value + " processed"
    }
}

func main() {
    var jobs = []string{"one", "two", "three", "four", "two", "three", "four", "two", "three", "four", "two", "three", "four", "two", "three", "four", "two"}

    input := make(chan string, len(jobs))
    output := make(chan string, len(jobs))

    workers := 250

    // Increment waitgroup counter and create go routines
    for i := 0; i < workers; i++ {
        wg.Add(1)
        go worker(input, output)
    }

    // Producer: load up input channel with jobs
    for _, job := range jobs {
        input <- job
    }

    // Close input channel since no more jobs are being sent to input channel
    close(input)

    // Wait for all goroutines to finish processing
    wg.Wait()

    // Close output channel since all workers have finished processing
    close(output)

    // Read from output channel
    for result := range output {
        fmt.Println(result)
    }
}
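One design detail worth noting: both channels are created with capacity len(jobs). The output channel in particular must be buffered for all results, because it is only drained after wg.Wait() returns; if it were unbuffered, the workers would block on their sends and wg.Wait() would never return.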
Answer 3 (score: 1)
You can take a look at this.
We created a thread pool in Go and use it in our production systems.
I am quoting from here:
It is very simple to use, and it also has a Prometheus client that tells you how many workers are in use.
To initialize, just create a dispatcher instance:
dispatcher = workerpool.NewDispatcher(
    "DispatcherName",
    workerpool.SetMaxWorkers(10),
)
Create an object (say, job) that implements this interface, i.e. it should implement the Process method:
// IJob : Interface for the Job to be processed
type IJob interface {
    Process() error
}
Then send the job to the dispatcher:
dispatcher.JobQueue <- job //object of job
That's it.
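As a rough illustration of how the question's download task could be plugged into this interface, a job type might look like the following. The downloadJob type and its URL field are invented for the example and it assumes net/http is imported; only the IJob interface and JobQueue usage quoted above are taken from the library, and details such as how to wait for completion are not shown in the excerpt, so they are omitted here.

// downloadJob is a hypothetical job for one stock symbol's spreadsheet.
type downloadJob struct {
    URL string
}

// Process satisfies IJob: fetch the URL and handle the response.
func (j *downloadJob) Process() error {
    resp, err := http.Get(j.URL)
    if err != nil {
        return err
    }
    defer resp.Body.Close()
    // Save or parse the downloaded spreadsheet here.
    return nil
}

Each symbol's download then goes onto the queue as dispatcher.JobQueue <- &downloadJob{URL: url}, with url built from the symbol.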