Python: get only the href links from Google search results

Date: 2016-02-13 12:16:45

Tags: python selenium hyperlink beautifulsoup

How can I output only a list of the LINKS? I've tried other solutions using both BeautifulSoup and Selenium, but they still give me a result very similar to what I currently get, which is the href plus the anchor text. I tried using urlparse, as some older answers suggested, but that module seems to have been reorganized and I'm confused by the whole thing. Here is my code, which currently outputs both the link and the anchor text, which is not what I want:

import requests, re
from bs4 import BeautifulSoup
headers = {'User-agent':'Mozilla/5.0'}
page = requests.get('https://www.google.com/search?q=Tesla',headers=headers)
soup = BeautifulSoup(page.content,'lxml')
global serpUrls
serpUrls = []
links = soup.findAll('a')
for link in soup.find_all("a",href=re.compile("(?<=/url\?q=)(htt.*://.*)")):
    #print(re.split(":(?=http)",link["href"].replace("/url?q=","")))
    serpUrls.append(link)

print(serpUrls[0:2])

xmasRegex = re.compile(r"""((?:[a-z][\w-]+:(?:/{1,3}|[a-z0-9%])|www\d{0,3}[.]|[a-z0-9.\-]+[.][a-z]{2,4}/)(?:[^\s()<>]+|(([^\s()<>]+|(([^\s()<>]+)))*))+(?:(([^\s()<>]+|(([^\s()<>]+)))*)|[^\s`!()[]{};:'".,<>?«»“”‘’]))""", re.DOTALL)
mo = xmasRegex.findall('[<a href="/url?q=https://www.teslamotors.com/&amp;sa=U&amp;ved=0ahUKEwjvzrTyxvTKAhXHWRoKHUjlBxwQFggUMAA&amp;usg=AFQjCNG1nvN_Z0knKTtEah3whTIObUAhcg"><b>Tesla</b> Motors | Premium Electric Vehicles</a>, <a class="_Zkb" href="/url?q=http://webcache.googleusercontent.com/search%3Fq%3Dcache:rzPQodkDKYYJ:https://www.teslamotors.com/%252BTesla%26gws_rd%3Dcr%26hl%3Des%26%26ct%3Dclnk&amp;sa=U&amp;ved=0ahUKEwjvzrTyxvTKAhXHWRoKHUjlBxwQIAgXMAA&amp;usg=AFQjCNEZ40VWO_fFDjXH09GakUOgODNlHg">En caché</a>]')
print(mo)

I just want "http://urloflink.com", not the whole line of markup. Is there any way to do this? Thanks!

The output looks like this:

[<a href="/url?q=https://www.teslamotors.com/&amp;sa=U&amp;ved=0ahUKEwjI39vl2_TKAhXFWxoKHRX-CFgQFggUMAA&amp;usg=AFQjCNG1nvN_Z0knKTtEah3whTIObUAhcg"><b>Tesla</b> Motors | Premium Electric Vehicles</a>, <a class="_Zkb" href="/url?q=http://webcache.googleusercontent.com/search%3Fq%3Dcache:rzPQodkDKYYJ:https://www.teslamotors.com/%252BTesla%26gws_rd%3Dcr%26hl%3Des%26%26ct%3Dclnk&amp;sa=U&amp;ved=0ahUKEwjI39vl2_TKAhXFWxoKHRX-CFgQIAgXMAA&amp;usg=AFQjCNEZ40VWO_fFDjXH09GakUOgODNlHg">En caché</a>]
[('https://www.teslamotors.com/&amp;sa=U&amp;ved=0ahUKEwjvzrTyxvTKAhXHWRoKHUjlBxwQFggUMAA&amp;usg=AFQjCNG1nvN_Z0knKTtEah3whTIObUAhcg"', '', '', '', '', '', '', '', ''), ('http://webcache.googleusercontent.com/search%3Fq%3Dcache:rzPQodkDKYYJ:https://www.teslamotors.com/%252BTesla%26gws_rd%3Dcr%26hl%3Des%26%26ct%3Dclnk&amp;sa=U&amp;ved=0ahUKEwjvzrTyxvTKAhXHWRoKHUjlBxwQIAgXMAA&amp;usg=AFQjCNEZ40VWO_fFDjXH09GakUOgODNlHg"', '', '', '', '', '', '', '', '')]

1 answer:

Answer 0 (score: 0)

Never use regular expressions to parse HTML.

If you perform the find_all correctly, you should be able to access each result's href attribute directly, instead of keeping the whole tag.
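A minimal sketch of that idea (parsing a saved snippet of the markup from the question rather than a live Google page, since Google's markup changes often): take each anchor's href, and since Google wraps the real destination in the `q` query parameter of a `/url?...` redirect, pull it out with the standard library's `urllib.parse` instead of a regex. The sample HTML and the shortened `sa`/`ved`/`usg` parameter values are stand-ins, not real Google data.

```python
from urllib.parse import urlparse, parse_qs
from bs4 import BeautifulSoup

# A sample anchor in the shape shown in the question's output
# (parameter values shortened; BeautifulSoup decodes the &amp; entities).
html = ('<a href="/url?q=https://www.teslamotors.com/'
        '&amp;sa=U&amp;ved=0ahUK&amp;usg=AFQ">'
        '<b>Tesla</b> Motors | Premium Electric Vehicles</a>')

soup = BeautifulSoup(html, 'html.parser')

serp_urls = []
for a in soup.find_all('a', href=True):
    href = a['href']                       # e.g. "/url?q=https://...&sa=U&..."
    if href.startswith('/url?'):
        # The real URL is the value of the "q" query parameter.
        params = parse_qs(urlparse(href).query)
        if 'q' in params:
            serp_urls.append(params['q'][0])

print(serp_urls)  # ['https://www.teslamotors.com/']
```

Appending `a['href']` (or the parsed `q` value) instead of the tag object itself is the key difference from the question's loop, which appends the whole `<a>` element and therefore prints the anchor text too.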