I need to scrape content (just the titles) from a website. I have done it for a single page, but I need to do it for every page on the site. This is what I currently have:
import bs4, requests
import pandas as pd
import re
headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3163.100 Safari/537.36'}
r = requests.get(website, headers=headers)
soup = bs4.BeautifulSoup(r.text, 'html')
title=soup.find_all('h2')
I know that when I move to the next page, the URL changes like this:
website/page/2/
website/page/3/
...
website/page/49/
...
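For example, a page URL could be built like this (just an illustrative helper I put together, assuming the /page/N/ pattern holds and that page 1 is the base URL itself):

def page_url(base_url, page_num):
    # Page 1 is the base URL; later pages append /page/N/
    return base_url if page_num == 1 else f"{base_url}/page/{page_num}/"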
I tried to build a recursive function using next_page_url = base_url + next_page_partial, but it never moves on to the next page.
if soup.find("span", text=re.compile("Next")):
    page = "https://catania.liveuniversity.it/notizie-catania-cronaca/cronacacatenesesicilina/page/".format(page_num)
    page_num += 10  # I should scrape all the pages, so maybe this number should change; I do not know in advance how many pages there are for this section
    print(page_num)
else:
    break
I followed this question (and its answer): Moving to next page for scraping using BeautifulSoup
Let me know if you need more information. Thanks a lot.
Updated code:
import bs4, requests
import pandas as pd
import re

headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3163.100 Safari/537.36'}
page_num = 1
website = "https://catania.liveuniversity.it/notizie-catania-cronaca/cronacacatenesesicilina"

while True:
    r = requests.get(website, headers=headers)
    soup = bs4.BeautifulSoup(r.text, 'html')
    title = soup.find_all('h2')
    if soup.find("span", text=re.compile("Next")):
        page = f"https://catania.liveuniversity.it/notizie-catania-cronaca/cronacacatenesesicilina/page/{page_num}".format(page_num)
        page_num += 10
    else:
        break
Answer (score: 0)
If you use f"url/{page_num}", remove the .format(page_num).
You can use whichever of the following you prefer:
page = f"https://catania.liveuniversity.it/notizie-catania-cronaca/cronacacatenesesicilina/page/{page_num}"
or
page = "https://catania.liveuniversity.it/notizie-catania-cronaca/cronacacatenesesicilina/page/{}".format(page_num)
Good luck!
The final answer would be something like this:
import bs4, requests
import pandas as pd
import re

headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3163.100 Safari/537.36'}
page_num = 2  # the base URL is already page 1, so the next request is /page/2/
website = "https://catania.liveuniversity.it/notizie-catania-cronaca/cronacacatenesesicilina"
titles = []

while True:
    r = requests.get(website, headers=headers)
    soup = bs4.BeautifulSoup(r.text, 'html.parser')
    titles.extend(soup.find_all('h2'))              # accumulate titles instead of overwriting them
    if soup.find("span", text=re.compile("Next")):  # a "Next" link means there is another page
        website = f"https://catania.liveuniversity.it/notizie-catania-cronaca/cronacacatenesesicilina/page/{page_num}"
        page_num += 1
    else:
        break
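Since pandas is imported but never used, here is an optional sketch of what you could do with the results. This is my own assumption about the desired output, not part of the original code: it takes the titles list collected by the loop above, keeps only the text of each h2 tag, and writes it to a CSV file.

# Assumes `titles` holds the <h2> tags collected by the loop above
df = pd.DataFrame({'title': [t.get_text(strip=True) for t in titles]})
df.to_csv('titles.csv', index=False)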