How do I limit the number of connections when running many copy operations concurrently?

Time: 2018-04-09 03:36:42

Tags: azure go tcp azure-storage

I have code that runs 200 goroutines concurrently. Each goroutine issues Copy operations serially, and each Copy issues two requests:

  1. StartCopy (a PUT request)
  2. WaitForCopy (a HEAD request)

After each HTTP request, readAndCloseBody is called, which is important to ensure the connection is reused (sketched below). I have also set the transport to http.DefaultTransport with MaxIdleConnsPerHost set to 200, which matters because it defaults to DefaultMaxIdleConnsPerHost == 2.
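
For reference, the drain-and-close idiom that readAndCloseBody is meant to implement looks roughly like the following in plain net/http. This is a minimal sketch; doAndDrain and the placeholder URL are illustrative only and are not part of the Azure SDK:

    package main

    import (
        "io"
        "io/ioutil"
        "log"
        "net/http"
    )

    // doAndDrain is a hypothetical helper showing the drain-and-close idiom:
    // the response body must be read to EOF and closed before net/http can
    // return the keep-alive connection to the idle pool for reuse.
    func doAndDrain(client *http.Client, req *http.Request) (int, error) {
        resp, err := client.Do(req)
        if err != nil {
            return 0, err
        }
        defer resp.Body.Close()

        // Discard any remaining body bytes so the connection becomes reusable.
        if _, err := io.Copy(ioutil.Discard, resp.Body); err != nil {
            return 0, err
        }
        return resp.StatusCode, nil
    }

    func main() {
        // Example usage against a placeholder URL.
        req, err := http.NewRequest("HEAD", "https://example.com/", nil)
        if err != nil {
            log.Fatal(err)
        }
        status, err := doAndDrain(http.DefaultClient, req)
        if err != nil {
            log.Fatal(err)
        }
        log.Println("status:", status)
    }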

Despite this, when I run the code I frequently see well over 200 TCP (ESTABLISHED) entries, sometimes as many as 1000. Here is an example lsof entry:

    main 24217 jsmith 377u IPv4 0x364ecf4bed03d3d 0t0 TCP :50248->51.4.208.24:https (ESTABLISHED)

Any idea where the connections might be leaking, or how I can limit the number of connections?
    

I'm running go1.10 on OSX 10.3.2 and using github.com/Azure/azure-sdk-for-go at 56332fe:

    package main

    import (
        "net/http"
        "sync"
        "time"

        "github.com/Azure/azure-sdk-for-go/storage"
    )

    func main() {
        // create and initialize a client
        client, _ := storage.NewBasicClient(ACCOUNT_NAME, ACCOUNT_KEY)

        // use DefaultTransport with configs that limit conns
        client.HTTPClient.Transport = http.DefaultTransport
        transp, _ := client.HTTPClient.Transport.(*http.Transport)
        transp.MaxIdleConnsPerHost = 200
        transp.MaxIdleConns = 200

        rootClient := client.GetBlobService().GetContainerReference("foo")

        numRoutines := 200
        numRequestsPerRoutine := 100

        var wg sync.WaitGroup
        for i := 0; i < numRoutines; i++ {
            wg.Add(1)
            go func() {
                for j := 0; j < numRequestsPerRoutine; j++ {
                    sourceBlob := rootClient.GetBlobReference("fileA")
                    destBlob := rootClient.GetBlobReference("dirX/fileA")
                    destBlob.Copy(sourceBlob.GetURL(), nil)
                    time.Sleep(250 * time.Millisecond)
                }
                wg.Done()
            }()
        }
        wg.Wait()
    }
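
As an aside on the "how to limit the number of connections" part: independent of the transport's idle-connection settings, the number of in-flight requests can be capped with a buffered-channel semaphore. The following is a minimal sketch, where doCopy and maxInFlight are placeholders for the Copy call and a chosen cap, not part of the original code:

    package main

    import (
        "sync"
        "time"
    )

    // doCopy is a placeholder for destBlob.Copy(sourceBlob.GetURL(), nil).
    func doCopy() {}

    func main() {
        const numRoutines = 200
        const numRequestsPerRoutine = 100
        const maxInFlight = 50 // assumed cap; tune to the connection budget

        // Buffered channel used as a counting semaphore.
        sem := make(chan struct{}, maxInFlight)
        var wg sync.WaitGroup

        for i := 0; i < numRoutines; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                for j := 0; j < numRequestsPerRoutine; j++ {
                    sem <- struct{}{} // acquire a slot before issuing the copy
                    doCopy()
                    <-sem // release the slot
                    time.Sleep(250 * time.Millisecond)
                }
            }()
        }
        wg.Wait()
    }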

1 Answer:

Answer 0 (score: 0)

There is a connection leak in the azure library. The fix has been merged into dev and will likely be included in the next release.
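
As a side note beyond the original answer: Go 1.11 and later also expose http.Transport.MaxConnsPerHost, which enforces a hard per-host connection cap that the go1.10 setup in the question cannot use. A minimal sketch:

    package main

    import "net/http"

    func main() {
        // Sketch only: on Go 1.11+ the transport can enforce a hard per-host
        // connection cap in addition to the idle-connection limits the
        // question already configures.
        transport := &http.Transport{
            MaxIdleConns:        200,
            MaxIdleConnsPerHost: 200,
            MaxConnsPerHost:     200, // requires Go 1.11 or later
        }
        client := &http.Client{Transport: transport}
        _ = client
    }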