Limit gocolly to process a maximum number of URLs at a time

Date: 2018-06-29 03:02:37

Tags: go web-scraping

I am trying to use gocolly's Parallelism setting to limit the maximum number of URLs being fetched at a time.

With the code pasted below, I get the following output:

Visiting https://www.google.com/search?q=GrkZmM
Visiting https://www.google.com/search?q=eYSGmF
Visiting https://www.google.com/search?q=MtYvWU
Visiting https://www.google.com/search?q=yMDfIa
Visiting https://www.google.com/search?q=sQuKLv
Done visiting https://www.google.com/search?q=MtYvWU
Done visiting https://www.google.com/search?q=GrkZmM
Done visiting https://www.google.com/search?q=eYSGmF
Done visiting https://www.google.com/search?q=yMDfIa
Done visiting https://www.google.com/search?q=sQuKLv

This suggests that visits are not being blocked at the configured maximum parallelism. When I add more URLs, they are all sent at once, and the server bans me.

How can I configure the library so that I get output like the following:

Visiting https://www.google.com/search?q=GrkZmM
Visiting https://www.google.com/search?q=eYSGmF
Done visiting https://www.google.com/search?q=MtYvWU
Done visiting https://www.google.com/search?q=GrkZmM
Visiting https://www.google.com/search?q=MtYvWU
Visiting https://www.google.com/search?q=yMDfIa
Done visiting https://www.google.com/search?q=eYSGmF
Done visiting https://www.google.com/search?q=yMDfIa
Visiting https://www.google.com/search?q=sQuKLv
Done visiting https://www.google.com/search?q=sQuKLv

Here is the code:

package main

import (
    "fmt"
    "math/rand"
    "time"

    "github.com/gocolly/colly"
)

const (
    letterBytes = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"
    URL         = "https://www.google.com/search?q="
)

// RandStringBytes streams five random n-letter query strings over a channel.
func RandStringBytes(n int) chan string {
    out := make(chan string)

    go func() {
        for i := 1; i <= 5; i++ {
            b := make([]byte, n)
            for j := range b {
                b[j] = letterBytes[rand.Intn(len(letterBytes))]
            }
            out <- string(b)
        }
        // Closing out ends the range loop in main.
        close(out)
    }()
    return out
}

func main() {
    c := RandStringBytes(6) 
    collector := colly.NewCollector(
        colly.AllowedDomains("www.google.com"),
        colly.Async(true),
        colly.UserAgent("Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.99 Safari/537.36"),
    )   

    collector.Limit(&colly.LimitRule{
        DomainRegexp: "www.google.com",
        Parallelism:  2,
        RandomDelay:  5 * time.Second,
    })
    collector.OnResponse(func(r *colly.Response) {
        url := r.Ctx.Get("url")
        fmt.Println("Done visiting", url)
    })
    collector.OnRequest(func(r *colly.Request) {
        r.Ctx.Put("url", r.URL.String())
        fmt.Println("Visiting", r.URL.String())
    })
    collector.OnError(func(r *colly.Response, err error) {
        fmt.Println(err)
    })

    for w := range c {
        collector.Visit(URL+w)
    }

    collector.Wait()
}



1 Answer:

Answer 0 (score: 1)

OnRequest runs before the request is actually sent to the server. Your debug statement is misleading: fmt.Println("Visiting", r.URL.String()) should really be fmt.Println("Preparing request for:", r.URL.String())

I thought your question was interesting, so I set up a local test case using python's http.server, like this:

$ cd $(mktemp -d) # make temp dir
$ for n in {0..99}; do touch $n; done # make 100 empty files
$ python3 -m http.server # start up test server

Then I modified your code from above:

package main

import (
    "fmt"
    "strconv"
    "time"

    "github.com/gocolly/colly"
)

const URL = "http://127.0.0.1:8000/"

func main() {
    collector := colly.NewCollector(
        colly.AllowedDomains("127.0.0.1:8000"),
        colly.Async(true),
        colly.UserAgent("Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.99 Safari/537.36"),
    )

    collector.Limit(&colly.LimitRule{
        DomainRegexp: "127.0.0.1:8000",
        Parallelism:  2,
        Delay:        5 * time.Second,
    })

    collector.OnResponse(func(r *colly.Response) {
        url := r.Ctx.Get("url")
        fmt.Println("Done visiting", url)
    })

    collector.OnRequest(func(r *colly.Request) {
        r.Ctx.Put("url", r.URL.String())
        fmt.Println("Creating request for:", r.URL.String())
    })

    collector.OnError(func(r *colly.Response, err error) {
        fmt.Println(err)
    })

    for i := 0; i < 100; i++ {
        collector.Visit(URL + strconv.Itoa(i))
    }

    collector.Wait()
}

Note that I changed RandomDelay to a plain Delay, which makes the test case easier to reason about, and I changed OnRequest's debug statement.

Now if you go run this file, you will see:

  1. It immediately prints Creating request for: http://127.0.0.1:8000/ plus a number, 100 times
  2. It prints Done visiting http://127.0.0.1:8000/ plus a number, twice
  3. The Python HTTP server logs 2 GET requests, one for each number from #2
  4. It pauses for 5 seconds
  5. Steps #2-#4 repeat for the remaining numbers

So it looks to me like colly is behaving as expected. If you are still seeing unexpected rate-limiting errors, consider verifying that your limit rule is matching the domain.