I'm new to web scraping with Python, so I don't know whether I'm doing this correctly.
I'm using a script that calls BeautifulSoup to parse the URLs from the first 10 pages of a Google search. Tested against stackoverflow.com, it worked out of the box. Then I tested it a few times against another site, trying to see whether the script really worked with higher Google page requests, and it gave me a 503. I switched to another URL to test; it worked for a few low page requests, then also 503'd. Now every URL I pass it returns a 503. Any suggestions?
import sys     # used to add the BeautifulSoup folder to the import path
import urllib2 # used to read the HTML document

if __name__ == "__main__":
    ### Import Beautiful Soup
    ### Here, the BeautifulSoup folder sits at the same level as this
    ### Python script, so I need to tell Python where to look.
    sys.path.append("./BeautifulSoup")
    from BeautifulSoup import BeautifulSoup

    ### Create opener with Google-friendly user agent
    opener = urllib2.build_opener()
    opener.addheaders = [('User-agent', 'Mozilla/5.0')]

    ### Open page & generate soup
    ### the "start" variable is used to iterate through 10 pages.
    for start in range(0, 10):
        url = "http://www.google.com/search?q=site:stackoverflow.com&start=" + str(start * 10)
        page = opener.open(url)
        soup = BeautifulSoup(page)

        ### Parse and find
        ### Google appears to put result URLs in <cite> tags,
        ### so for each <cite> tag on each page (10), print its contents (the URL).
        for cite in soup.findAll('cite'):
            print cite.text
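The escalating 503 responses are consistent with rate limiting: the script fires ten requests back to back, which looks like automated traffic. A minimal Python 3 sketch of a gentler approach, pausing between page fetches (the `fetch_politely` helper and the five-second delay are illustrative assumptions, not anything Google documents as sufficient):

```python
import time
import urllib.parse
import urllib.request

def build_search_url(query, page):
    # Each Google results page holds 10 results; "start" is the result offset.
    params = urllib.parse.urlencode({"q": query, "start": page * 10})
    return "http://www.google.com/search?" + params

def fetch_politely(urls, delay_seconds=5.0):
    # Hypothetical helper: fetch each URL with a pause between requests
    # to reduce (not eliminate) the chance of being rate-limited with a 503.
    pages = []
    for url in urls:
        req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
        with urllib.request.urlopen(req) as resp:
            pages.append(resp.read())
        time.sleep(delay_seconds)
    return pages

urls = [build_search_url("site:stackoverflow.com", page) for page in range(10)]
```

Note that slowing down only makes the scraping less aggressive; as the answers below point out, it doesn't make it permitted.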
Answer 0 (score: 5)
Google's Terms of Service do not allow automated queries. For more information, see this article: Unusual traffic from your computer, and also the Google Terms of Service.
Answer 1 (score: 0)
As Ettore said, scraping the search results is a violation of our ToS. But check out the WebSearch API, specifically the bottom section of the documentation, which should give you a hint about how to access the API from non-JavaScript environments.
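For a non-JavaScript environment, accessing such a search API typically comes down to building a query URL and decoding the JSON response. A hedged Python 3 sketch (the endpoint shown is the old Google Web Search JSON endpoint as it existed at the time; it has since been retired, so treat the URL as illustrative):

```python
import json
import urllib.parse

def build_api_url(query):
    # Assumed endpoint shape: base URL plus a version parameter "v" and
    # the query "q", URL-encoded. Shown for illustration only.
    base = "http://ajax.googleapis.com/ajax/services/search/web"
    return base + "?" + urllib.parse.urlencode({"v": "1.0", "q": query})

def parse_api_response(body):
    # The response body is JSON; json.loads turns it into a dict,
    # so no HTML parsing (and no BeautifulSoup) is needed.
    return json.loads(body)
```

The practical upshot versus scraping: the API returns structured JSON you can walk directly, instead of HTML whose `<cite>`-tag layout Google can change at any time.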