Why can I download a Google search page but not a Google Scholar search page?

Date: 2012-07-14 13:42:17

Tags: python python-3.x urllib google-scholar

I am using Python 3.2.3's urllib.request module to download Google search results, and I have run into a strange error: urlopen works with links to Google search results, but not with Google Scholar. In this example, I am searching for "JOHN SMITH". This code successfully prints the HTML:

from urllib.request import urlopen, Request
from urllib.error import URLError

# Google
try:
    page_google = '''http://www.google.com/#hl=en&sclient=psy-ab&q=%22JOHN+SMITH%22&oq=%22JOHN+SMITH%22&gs_l=hp.3..0l4.129.2348.0.2492.12.10.0.0.0.0.154.890.6j3.9.0...0.0...1c.gjDBcVcGXaw&pbx=1&bav=on.2,or.r_gc.r_pw.r_qf.,cf.osb&fp=dffb3b4a4179ca7c&biw=1366&bih=649'''
    req_google = Request(page_google)
    # note: the header name is 'User-Agent' (hyphenated)
    req_google.add_header('User-Agent', 'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:15.0) Gecko/20120427 Firefox/15.0a1')
    html_google = urlopen(req_google).read()
    print(html_google[0:10])
except URLError as e:
    print(e)

But this code, which performs the same operation against Google Scholar, raises a URLError exception:

from urllib.request import urlopen, Request
from urllib.error import URLError

# Google Scholar
try:
    page_scholar = '''http://scholar.google.com/scholar?hl=en&q=%22JOHN+SMITH%22&btnG=&as_sdt=1%2C14'''
    req_scholar = Request(page_scholar)
    # note: the header name is 'User-Agent' (hyphenated)
    req_scholar.add_header('User-Agent', 'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:15.0) Gecko/20120427 Firefox/15.0a1')
    html_scholar = urlopen(req_scholar).read()
    print(html_scholar[0:10])
except URLError as e:
    print(e)

Traceback:

Traceback (most recent call last):
  File "/home/ak5791/Desktop/code-sandbox/scholar/crawler.py", line 6, in <module>
    html = urlopen(page).read()
  File "/usr/lib/python3.2/urllib/request.py", line 138, in urlopen
    return opener.open(url, data, timeout)
  File "/usr/lib/python3.2/urllib/request.py", line 369, in open
    response = self._open(req, data)
  File "/usr/lib/python3.2/urllib/request.py", line 387, in _open
    '_open', req)
  File "/usr/lib/python3.2/urllib/request.py", line 347, in _call_chain
    result = func(*args)
  File "/usr/lib/python3.2/urllib/request.py", line 1155, in http_open
    return self.do_open(http.client.HTTPConnection, req)
  File "/usr/lib/python3.2/urllib/request.py", line 1138, in do_open
    raise URLError(err)
urllib.error.URLError: <urlopen error [Errno -5] No address associated with hostname>

I obtained these links by searching in Chrome and copying the links from there. One commenter reported a 403 error, which I sometimes get as well; I presume that is because Google does not support scraping of Scholar. However, changing the user-agent string fixes neither this nor the original problem, since I get URLErrors most of the time.
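Note that [Errno -5] ("No address associated with hostname") is a DNS-level failure, not an HTTP response, so the hostname is apparently not resolving at all. A quick sanity check independent of urllib (a minimal sketch; the hostname is taken from the Scholar URL above):

import socket

# [Errno -5] corresponds to EAI_NODATA: the resolver returned no address.
# Resolving the hostname directly shows whether DNS itself is the problem.
try:
    print(socket.getaddrinfo('scholar.google.com', 80))
except socket.gaierror as e:
    print('DNS lookup failed:', e)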

1 Answer:

Answer 0 (score: 3):

This PHP script seems to indicate that you need to set some cookies before Google will give you results:

/*

 Need a cookie file (scholar_cookie.txt) like this:

# Netscape HTTP Cookie File
# http://curl.haxx.se/rfc/cookie_spec.html
# This file was generated by libcurl! Edit at your own risk.

.scholar.google.com     TRUE    /       FALSE   2147483647      GSP     ID=353e8f974d766dcd:CF=2
.google.com     TRUE    /       FALSE   1317124758      PREF    ID=353e8f974d766dcd:TM=1254052758:LM=1254052758:S=_biVh02e4scrJT1H
.scholar.google.co.uk   TRUE    /       FALSE   2147483647      GSP     ID=f3f18b3b5a7c2647:CF=2
.google.co.uk   TRUE    /       FALSE   1317125123      PREF    ID=f3f18b3b5a7c2647:TM=1254053123:LM=1254053123:S=UqjRcTObh7_sARkN

*/

This Python recipe for Google Scholar comment confirms this, and includes a warning that Google detects scripts and will disable you if you use it too much.
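For reference, here is a minimal sketch of how such a Netscape-format cookie file could be used from Python with the standard library's http.cookiejar; the filename scholar_cookie.txt comes from the PHP comment above, and the rest is an untested assumption rather than a confirmed fix:

from urllib.request import build_opener, HTTPCookieProcessor, Request
from http.cookiejar import MozillaCookieJar

# MozillaCookieJar understands the Netscape cookie-file format quoted above.
# Assumes scholar_cookie.txt already exists and holds valid Google cookies.
jar = MozillaCookieJar('scholar_cookie.txt')
jar.load()

# Route requests through a cookie-aware opener so the cookies are sent.
opener = build_opener(HTTPCookieProcessor(jar))
req = Request('http://scholar.google.com/scholar?hl=en&q=%22JOHN+SMITH%22&btnG=&as_sdt=1%2C14')
req.add_header('User-Agent',
               'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:15.0) Gecko/20120427 Firefox/15.0a1')
print(opener.open(req).read()[0:10])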