I'm trying to get full book reviews from The New York Times so I can run sentiment analysis on them. I know about the NY Times API and am using it to get the book review URLs, but I need to build a scraper to fetch the full article text, since the API only returns a snippet. I believe nytimes.com has bot protection against scraping, but I know there are ways around it.
I found a Python scraper that works and can extract the full text from nytimes.com, but I would rather implement my solution in Go. Should I port it to Go, or is that solution unnecessarily complex? I have already experimented with changing the User-Agent header, but everything I try in Go ends in an infinite redirect loop error.
Code:
package main

import (
    //"fmt"
    "io/ioutil"
    "log"
    "math/rand"
    "net/http"
    "time"
    //"net/url"
)

func main() {
    rand.Seed(time.Now().Unix())

    userAgents := [5]string{
        "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2228.0 Safari/537.36",
        "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.102 Safari/537.36",
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.116 Safari/537.36",
        "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:46.0) Gecko/20100101 Firefox/46.0",
        "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:41.0) Gecko/20100101 Firefox/41.0",
    }

    url := "http://www.nytimes.com/2015/10/25/books/review/the-tsar-of-love-and-techno-by-anthony-marra.html"
    client := &http.Client{}

    req, err := http.NewRequest("GET", url, nil)
    if err != nil {
        log.Fatalln(err)
    }
    req.Header.Set("User-Agent", userAgents[rand.Intn(len(userAgents))])

    resp, err := client.Do(req)
    if err != nil {
        log.Fatalln(err)
    }
    defer resp.Body.Close()

    body, err := ioutil.ReadAll(resp.Body)
    if err != nil {
        log.Fatalln(err)
    }
    log.Println(string(body))
}
Result:
2016/12/05 21:57:53 Get http://www.nytimes.com/2015/10/25/books/review/the-tsar-of-love-and-techno-by-anthony-marra.html?_r=4: stopped after 10 redirects
exit status 1
Any help is appreciated! Thanks!
Answer 0 (score: 0)
You just need to add a cookie jar to your client:
// requires adding "net/http/cookiejar" to your imports
var cookieJar, _ = cookiejar.New(nil)
var client = &http.Client{Jar: cookieJar}

resp, err := client.Do(req)
if err != nil {
    log.Fatalln(err)
}
// the response now contains everything you need, and
// you can print it to the console or save it to a file
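For completeness, here is a minimal sketch that puts the cookie jar together with the request from your question (same article URL, one of the same User-Agent strings). I have not verified it against the live site, so treat it as a starting point rather than a finished scraper:

package main

import (
    "io/ioutil"
    "log"
    "net/http"
    "net/http/cookiejar"
)

func main() {
    // The jar lets the client keep the cookies nytimes.com sets during
    // the redirect chain, which is what stops the infinite redirect loop.
    jar, err := cookiejar.New(nil)
    if err != nil {
        log.Fatalln(err)
    }
    client := &http.Client{Jar: jar}

    url := "http://www.nytimes.com/2015/10/25/books/review/the-tsar-of-love-and-techno-by-anthony-marra.html"
    req, err := http.NewRequest("GET", url, nil)
    if err != nil {
        log.Fatalln(err)
    }
    // Same idea as in the question: present a browser-like User-Agent.
    req.Header.Set("User-Agent", "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.116 Safari/537.36")

    resp, err := client.Do(req)
    if err != nil {
        log.Fatalln(err)
    }
    defer resp.Body.Close()

    body, err := ioutil.ReadAll(resp.Body)
    if err != nil {
        log.Fatalln(err)
    }
    log.Println(string(body))
}

From there you can feed the HTML into whatever parser you like to pull out the article text before running your sentiment analysis.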