Reading a list of links with Beautiful Soup

Date: 2019-04-21 17:42:29

Tags: python web-scraping beautifulsoup

I have successfully extracted a list of URLs, and now I am trying to read each link in that list. My problem is that when I try to read the whole list, I get a TypeError (Traceback (most recent call last)). However, when I read a single link, the urlopen(urls).read() line executes without any problem.

import requests
from urllib.request import urlopen
from bs4 import BeautifulSoup

response = requests.get('some_website')
doc = BeautifulSoup(response.text, 'html.parser')
headlines = doc.find_all('h3')

# Collect every <a rel="bookmark"> element
links = doc.find_all('a', {'rel': 'bookmark'})
for link in links:
    print(link['href'])

for urls in links:
    raw_html = urlopen(urls).read()  # <----- this row raises the TypeError
    articles = BeautifulSoup(raw_html, "html.parser")
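For context, the TypeError most likely comes from passing the bs4.Tag object itself to urlopen, which expects a URL string (or a Request object). A minimal sketch of the corrected loop, continuing from the snippet above:

from urllib.request import urlopen
from bs4 import BeautifulSoup

for link in links:
    # Pass the href string, not the Tag, to urlopen
    raw_html = urlopen(link['href']).read()
    articles = BeautifulSoup(raw_html, 'html.parser')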

1 Answer:

Answer 0 (score: 0)

Consider using BeautifulSoup with requests.Session(), which re-uses the underlying connection for efficiency and makes it easy to add headers:

import requests
from bs4 import BeautifulSoup

with requests.Session() as s:
    url = 'https://newspunch.com/category/news/us/'
    headers = {'User-Agent': 'Mozilla/5'}
    r = s.get(url, headers=headers)
    soup = BeautifulSoup(r.text, 'lxml')

    # Gather the href of every element with rel="bookmark"
    links = [item['href'] for item in soup.select('[rel=bookmark]')]

    # Re-use the same session (and connection) for each article page
    for link in links:
        r = s.get(link)
        soup = BeautifulSoup(r.text, 'lxml')
        print(soup.select_one('.entry-title').text)
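One detail worth noting: in the code above, the User-Agent header is only sent with the first request; the per-link s.get(link) calls fall back to the default requests headers. If the site filters on user agent, the header can be set once on the session so that every request carries it (a small variation, not part of the original answer):

import requests

with requests.Session() as s:
    # Headers set on the session are merged into every request it makes
    s.headers.update({'User-Agent': 'Mozilla/5'})
    r = s.get('https://newspunch.com/category/news/us/')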