I am using the following snippet to fetch links from Google search results for a "keyword" I supply.
import mechanize
from bs4 import BeautifulSoup
import re

def googlesearch():
    br = mechanize.Browser()
    br.set_handle_robots(False)
    br.set_handle_equiv(False)
    br.addheaders = [('User-agent', 'Mozilla/5.0')]
    br.open('http://www.google.com/')
    # do the query
    br.select_form(name='f')
    br.form['q'] = 'scrapy'  # query
    data = br.submit()
    soup = BeautifulSoup(data.read())
    for a in soup.find_all('a', href=True):
        print "Found the URL:", a['href']

googlesearch()
Since I am parsing the search results HTML page to get the links, it fetches all the 'a' tags. But what I need is to get only the links of the actual results. Another thing is, when you look at the output of the href attribute, it gives something like this:
Found the URL: /search?q=scrapy&hl=en-IN&gbv=1&prmd=ivns&source=lnt&tbs=li:1&sa=X&ei=DT8HU9SlG8bskgWvqIHQAQ&ved=0CBgQpwUoAQ
But the actual link inside the href attribute is http://scrapy.org/.
Can someone point me to a solution for the two problems mentioned above?
Thanks in advance.
Answer 0 (score: 4)
The links you are interested in are inside h3 tags (with class r):
<li class="g">
<h3 class="r">
<a href="/url?q=http://scrapy.org/&sa=U&ei=XdIUU8DOHo-ElAXuvIHQDQ&ved=0CBwQFjAA&usg=AFQjCNHVtUrLoWJ8XWAROG-a4G8npQWXfQ">
<b>Scrapy</b> | An open source web scraping framework for Python
</a>
</h3>
..
You can find the links using a CSS selector:
soup.select('.r a')
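As a minimal, self-contained sketch, here is that selector run against the sample HTML above (Python 3 and BeautifulSoup 4 are assumed; the href value is shortened for readability):

```python
from bs4 import BeautifulSoup

# The result markup from above, with a shortened href for readability.
html = '''
<li class="g">
  <h3 class="r">
    <a href="/url?q=http://scrapy.org/&sa=U&ei=X&ved=0&usg=A">
      <b>Scrapy</b> | An open source web scraping framework for Python
    </a>
  </h3>
</li>
'''

soup = BeautifulSoup(html, 'html.parser')
# '.r a' matches anchors nested inside any element with class "r",
# so it skips navigation links and only hits the result titles.
for a in soup.select('.r a'):
    print(a['href'])
# prints: /url?q=http://scrapy.org/&sa=U&ei=X&ved=0&usg=A
```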
The URLs come in the following format:
/url?q=http://scrapy.org/&sa=U&ei=s9YUU9TZH8zTkQWps4BY&ved=0CBwQFjAA&usg=AFQjCNE-2uiVSl60B9cirnlWz2TMv8KMyQ
^^^^^^^^^^^^^^^^^^^^
The actual URL is in the q parameter. To get the whole query string, use urlparse.urlparse:
>>> url = '/url?q=http://scrapy.org/&sa=U&ei=s9YUU9TZH8zTkQWps4BY&ved=0CBwQFjAA&usg=AFQjCNE-2uiVSl60B9cirnlWz2TMv8KMyQ'
>>> urlparse.urlparse(url).query
'q=http://scrapy.org/&sa=U&ei=s9YUU9TZH8zTkQWps4BY&ved=0CBwQFjAA&usg=AFQjCNE-2uiVSl60B9cirnlWz2TMv8KMyQ'
Then, parse the query string with urlparse.parse_qs and extract the q parameter value:
>>> urlparse.parse_qs(urlparse.urlparse(url).query)['q']
['http://scrapy.org/']
>>> urlparse.parse_qs(urlparse.urlparse(url).query)['q'][0]
'http://scrapy.org/'
import urlparse

for a in soup.select('.r a'):
    print urlparse.parse_qs(urlparse.urlparse(a['href']).query)['q'][0]
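The snippet above is Python 2 (the urlparse module and the print statement). If you are on Python 3, the same extraction step can be sketched with urllib.parse, which absorbed the old urlparse module:

```python
from urllib.parse import urlparse, parse_qs

# A sample href in the format Google returns.
href = ('/url?q=http://scrapy.org/&sa=U&ei=s9YUU9TZH8zTkQWps4BY'
        '&ved=0CBwQFjAA&usg=AFQjCNE-2uiVSl60B9cirnlWz2TMv8KMyQ')

# urlparse splits off the query string; parse_qs turns it into a
# dict mapping each parameter name to a list of its values.
params = parse_qs(urlparse(href).query)
print(params['q'][0])  # prints: http://scrapy.org/
```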
Output:
http://scrapy.org/
http://doc.scrapy.org/en/latest/intro/tutorial.html
http://doc.scrapy.org/
http://scrapy.org/download/
http://doc.scrapy.org/en/latest/intro/overview.html
http://scrapy.org/doc/
http://scrapy.org/companies/
https://github.com/scrapy/scrapy
http://en.wikipedia.org/wiki/Scrapy
http://www.youtube.com/watch?v=1EFnX1UkXVU
https://pypi.python.org/pypi/Scrapy
http://pypix.com/python/build-website-crawler-based-upon-scrapy/
http://scrapinghub.com/scrapy-cloud
Answer 1 (score: 0)
Alternatively, you could use https://code.google.com/p/pygoogle/, which basically does the same thing, and also gives you the result links.
A snippet of the output for the example query "stackoverflow":
*Found 3940000 results*
[Stack Overflow]
Stack Overflow is a question and answer site for professional and enthusiast
programmers. It's 100% free, no registration required. Take the 2-minute tour
http://stackoverflow.com/