Write all hrefs into a list using BeautifulSoup

Asked: 2018-06-05 18:35:36

Tags: python list web-scraping beautifulsoup

I want to scrape the links from this page and put them into a list.

I have this code:

import bs4 as bs
import urllib.request

source = urllib.request.urlopen('http://www.gcoins.net/en/catalog/236').read()
soup = bs.BeautifulSoup(source,'lxml')

links = soup.find_all('a', attrs={'class': 'view'})
print(links)

It produces the following output:

[<a class="view" href="/en/catalog/view/514">
<img alt="View details" height="32" src="/img/actions/file.png" title="View details" width="32"/>
</a>, 

     """There are 28 lines more"""

      <a class="view" href="/en/catalog/view/565">
<img alt="View details" height="32" src="/img/actions/file.png" title="View details" width="32"/>
</a>]

What I need is: ['/en/catalog/view/514', ... , '/en/catalog/view/565']

Then I went on to add href_value = links.get('href'), but I got an error message.
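(A minimal sketch of why this fails, assuming the code above: find_all returns a ResultSet, which behaves like a list, so it has no single href to get; the exact error text depends on the bs4 version.)

links = soup.find_all('a', attrs={'class': 'view'})
# Raises AttributeError: a ResultSet is a list of tags, not a single tag,
# so .get('href') has to be called on each element instead
href_value = links.get('href')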

3 Answers:

Answer 0 (score: 1):

Try:

soup = bs.BeautifulSoup(source, 'lxml')

# Pull the href attribute out of each matched <a class="view"> tag
links = [i.get("href") for i in soup.find_all('a', attrs={'class': 'view'})]
print(links)

Output:

['/en/catalog/view/514', '/en/catalog/view/515', '/en/catalog/view/179080', '/en/catalog/view/45518', '/en/catalog/view/521', '/en/catalog/view/111429', '/en/catalog/view/522', '/en/catalog/view/182223', '/en/catalog/view/168153', '/en/catalog/view/523', '/en/catalog/view/524', '/en/catalog/view/60228', '/en/catalog/view/525', '/en/catalog/view/539', '/en/catalog/view/540', '/en/catalog/view/31642', '/en/catalog/view/553', '/en/catalog/view/558', '/en/catalog/view/559', '/en/catalog/view/77672', '/en/catalog/view/560', '/en/catalog/view/55377', '/en/catalog/view/55379', '/en/catalog/view/32001', '/en/catalog/view/561', '/en/catalog/view/562', '/en/catalog/view/72185', '/en/catalog/view/563', '/en/catalog/view/564', '/en/catalog/view/565']
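If you prefer CSS selectors, the same extraction can be written with select; a sketch equivalent to the comprehension above, assuming the same soup object:

# Every <a> tag with class "view", matched via a CSS selector
links = [a.get('href') for a in soup.select('a.view')]
print(links)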

Answer 1 (score: 1):

Your links is currently a Python list (a BeautifulSoup ResultSet), so it has no .get() of its own. What you want to do is loop over that list and pull the href from each element, as below.

final_hrefs = []
for each_link in links:
    final_hrefs.append(each_link['href'])  # each_link is already the <a> tag

Or as a one-liner:

final_hrefs = [each_link['href'] for each_link in links]

print(final_hrefs)
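If some matched tags might lack an href (an assumption, not the case on this particular page), a slightly defensive variant skips them:

# Keep only anchors that actually carry an href attribute
final_hrefs = [each_link['href'] for each_link in links if each_link.has_attr('href')]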

Answer 2 (score: 0):

Try the code below. It gets the list from the HTML in one step:

import bs4 as bs
import urllib.request

source = urllib.request.urlopen('http://www.gcoins.net/en/catalog/236').read()
soup = bs.BeautifulSoup(source,'lxml')

links = [i.get("href") for i in soup.find_all('a', attrs={'class': 'view'})]
for link in links:
    print('http://www.gcoins.net' + link)  # prepend the site root to each relative href
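A sketch of a more robust way to build the absolute URLs, using urljoin from the standard library instead of string concatenation (it handles relative and absolute hrefs alike):

from urllib.parse import urljoin

base = 'http://www.gcoins.net/en/catalog/236'
for link in links:
    print(urljoin(base, link))  # e.g. http://www.gcoins.net/en/catalog/view/514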