How do I implement a timeout when using sync.WaitGroup.Wait?

Asked: 2015-12-21 11:52:19

Tags: go timeout

I have run into a situation where I want to track a number of goroutines and synchronize them at a certain point, for example once all the URLs have been fetched. Then we can collect the results and display them in a specific order.

I believe this is a barrier, and in Go it is provided by sync.WaitGroup. In practice, though, we cannot be sure that every fetch will succeed within a reasonable time, so I would like to introduce a timeout while waiting for the fetches.

I am new to Golang; can anyone give me some advice?

What I am looking for is something like this:

wg := &sync.WaitGroup{}
select {
case <-wg.Wait():
    // All done!
case <-time.After(500 * time.Millisecond):
    // Hit timeout.
}

I know that Wait does not return a channel.
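
For reference, here is a minimal sketch of the plain WaitGroup barrier described above, without any timeout. The URL list and the use of http.Get are placeholders of my own, not from the original question:

package main

import (
    "fmt"
    "net/http"
    "sync"
)

func main() {
    // hypothetical URLs, stand-ins for whatever is actually being fetched
    urls := []string{"https://example.com/a", "https://example.com/b"}
    results := make([]string, len(urls))

    var wg sync.WaitGroup
    for i, url := range urls {
        wg.Add(1)
        go func(i int, url string) {
            defer wg.Done()
            resp, err := http.Get(url)
            if err != nil {
                results[i] = err.Error()
                return
            }
            resp.Body.Close()
            results[i] = resp.Status
        }(i, url)
    }

    // This is the barrier: it blocks until every fetch calls Done,
    // which is exactly the step that needs a timeout.
    wg.Wait()

    // Display the results in the original order, regardless of completion order.
    for i, r := range results {
        fmt.Println(urls[i], r)
    }
}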

4 Answers:

Answer 0 (score: 18)

If all you want is your neat select, you can easily turn a blocking function into a channel by spawning a goroutine that calls the method and closes (or sends on) a channel once it is done.

done := make(chan struct{})
go func() {
   wg.Wait()
   close(done)
}()

select {
case <-done:
// All done!
case <-time.After(500 * time.Millisecond):
// Hit timeout.
}
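
If this pattern is needed in more than one place, it can be wrapped in a small helper. Here is a minimal sketch along the lines of the code above; the waitTimeout name and the demo in main are my own additions, not part of the standard library or of the original answer:

package main

import (
    "fmt"
    "sync"
    "time"
)

// waitTimeout waits on the WaitGroup and reports whether it finished before
// the timeout expired. Note that on timeout the worker goroutines keep
// running; only the waiting is abandoned.
func waitTimeout(wg *sync.WaitGroup, timeout time.Duration) bool {
    done := make(chan struct{})
    go func() {
        wg.Wait()
        close(done)
    }()
    select {
    case <-done:
        return true
    case <-time.After(timeout):
        return false
    }
}

func main() {
    var wg sync.WaitGroup
    wg.Add(1)
    go func() {
        defer wg.Done()
        time.Sleep(100 * time.Millisecond) // pretend to fetch a URL
    }()

    if waitTimeout(&wg, 500*time.Millisecond) {
        fmt.Println("All done!")
    } else {
        fmt.Println("Hit timeout.")
    }
}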

Answer 1 (score: 1)

Another way to do it is to monitor things internally. Your question is a bit limited, so I will assume you are starting your goroutines in a loop; even if you are not, you can refactor this to fit. You can take one of the two approaches below: the first times out each request individually, and the second times out the entire batch of requests and moves on if too much time has passed.

var wg sync.WaitGroup
wg.Add(1)
go func() {
    success := make(chan struct{}, 1)
    go func() {
        // send your request and wait for a response
        // pretend response was received
        time.Sleep(5 * time.Second)
        success <- struct{}{}
        // goroutine will close gracefully after return     
        fmt.Println("Returned Gracefully")
    }()

    select {
    case <-success:
        break
    case <-time.After(1 * time.Second):
        break
    }

    wg.Done()
    // everything should be garbage collected and no longer take up space
}()

wg.Wait()

// do whatever with what you got    
fmt.Println("Done")
time.Sleep(10 * time.Second)
fmt.Println("Checking to make sure nothing throws errors after limbo goroutine is done")

Or, if you just want a simple way to time out all of the requests, you can do something like this:
var wg sync.WaitGroup
waiter := make(chan int)
wg.Add(1)
go func() {
    success := make(chan struct{}, 1)
    go func() {
        // send your request and wait for a response
        // pretend response was received
        time.Sleep(5 * time.Second)
        success <- struct{}{}
        // goroutine will close gracefully after return     
        fmt.Println("Returned Gracefully")
    }()

    select {
    case <-success:
        break
    case <-time.After(1 * time.Second):
        // control the timeouts for each request individually to make sure that wg.Done gets called and will let the goroutine holding the .Wait close
        break
    }
    wg.Done()
    // everything should be garbage collected and no longer take up space
}()

// sync.Once guarantees that only one of the two goroutines below sends on the
// unbuffered channel, so neither of them can block forever or race on a flag
var once sync.Once
go func() {
    // Unblock as soon as the WaitGroup is done
    wg.Wait()
    once.Do(func() { waiter <- 1 })
    fmt.Println("Returned Two")
}()

go func() {
    // or wait however long the overall timeout should be
    time.Sleep(time.Second * 5)
    once.Do(func() { waiter <- 1 })
    fmt.Println("Returned One")
}()


// block until it either times out or .Wait stops blocking
<-waiter

// do whatever with what you got    
fmt.Println("Done")
time.Sleep(10 * time.Second)
fmt.Println("Checking to make sure nothing throws errors after limbo goroutine is done")

This way your WaitGroup will stay synchronized and you will not have any goroutines stuck in limbo.

Try it here: http://play.golang.org/p/g0J_qJ1BUT. You can change the variables around to see how the behavior differs.

Edit: I am on mobile; if anyone could fix the formatting, it would be much appreciated.

Answer 2 (score: 1)

Send the results into a buffered channel, large enough to hold all of the results without blocking, and read them in a for-select loop on the main goroutine:

func work(msg string, d time.Duration, ret chan<- string) {
    time.Sleep(d) // Work emulation.
    select {
    case ret <- msg:
    default:
    }
}

// ...

const N = 2
ch := make(chan string, N)

go work("printed", 100*time.Millisecond, ch)
go work("not printed", 1000*time.Millisecond, ch)

timeout := time.After(500 * time.Millisecond)
loop:
for received := 0; received < N; received++ {
    select {
    case msg := <-ch:
        fmt.Println(msg)
    case <-timeout:
        fmt.Println("timeout!")
        break loop
    }
}

Playground: http://play.golang.org/p/PxeEEJo2dz

See also: Go Concurrency Patterns: Timing out, moving on

Answer 3 (score: 0)

If you would like to avoid mixing concurrency logic with your business logic, I wrote this library https://github.com/shomali11/parallelizer to help with that. It encapsulates the concurrency logic so you do not have to worry about it.

So in your example:

package main

import (
    "fmt"
    "time"

    "github.com/shomali11/parallelizer"
)

func main() {
    urls := []string{ ... }
    results := make([]*HttpResponse, len(urls))

    options := &Options{ Timeout: time.Second }
    group := parallelizer.NewGroup(options)
    for index, url := range urls {
        // pass the slice by value; it shares the underlying array, so writes
        // by index are visible to the caller without a pointer to the slice
        group.Add(func(index int, url string, results []*HttpResponse) func() {
            return func() {
                ...

                results[index] = &HttpResponse{url, response, err}
            }
        }(index, url, results))
    }

    err := group.Run()

    fmt.Println("Done")
    fmt.Println(fmt.Sprintf("Results: %v", results))
    fmt.Printf("Error: %v", err) // nil if it completed, err if it timed out
}