Scrape web pages until the "Next" page button is disabled

Asked: 2019-03-12 20:59:19

Tags: python web-scraping beautifulsoup pagination

import requests
from bs4 import BeautifulSoup

url = 'https://www.tripadvisor.ie/Attraction_Review-g295424-d2038312-Reviews-Global_Village-Dubai_Emirate_of_Dubai.html'
response = requests.get(url)
soup = BeautifulSoup(response.text, 'html.parser')

def get_links():
    # Collect the href of every review-title link on this page
    review_links = []
    for review_link in soup.find_all('a', {'class': 'title'}, href=True):
        review_links.append(review_link['href'])
    return review_links

link = 'https://www.tripadvisor.ie'
review_urls = []
for i in get_links():
    review_url = link + i  # the hrefs are relative, so prepend the domain
    print(review_url)
    review_urls.append(review_url)  # append inside the loop so every URL is kept

This code saves all the review hyperlinks present on this one web page, but I want to scrape the hyperlinks from all pages, up to page 319. I can't manage that once the "Next" pagination control is disabled.
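(For reference, the pattern named in the title, following the "Next" link until it goes away or is disabled, would look roughly like the sketch below. The `a.nav.next` selector and the `disabled` class are assumptions about TripAdvisor's markup, not verified against the live page.)

import requests
from bs4 import BeautifulSoup

base = 'https://www.tripadvisor.ie'
url = base + '/Attraction_Review-g295424-d2038312-Reviews-Global_Village-Dubai_Emirate_of_Dubai.html'
review_urls = []
while True:
    soup = BeautifulSoup(requests.get(url).text, 'html.parser')
    for a in soup.find_all('a', {'class': 'title'}, href=True):
        review_urls.append(base + a['href'])
    # Stop when the "Next" control is missing or carries a disabled class
    # (both the selector and the class name are assumed, not confirmed)
    next_button = soup.select_one('a.nav.next')
    if next_button is None or 'disabled' in next_button.get('class', []):
        break
    url = base + next_button['href']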

1 Answer:

Answer 0 (score: 0)

There is a parameter in the URL (the `or{offset}` segment) that you can change in order to loop through and fetch all the reviews. So I just added a loop and requested all the URLs:

def get_page(index):
    # The or{index} segment is the review offset of the page being requested
    url = "https://www.tripadvisor.ie/Attraction_Review-g295424-d2038312-Reviews-or{}-Global_Village-Dubai_Emirate_of_Dubai.html".format(index)
    html = requests.get(url)
    page = soup(html.text, 'html.parser')
    return page

nb_review = 3187
for i in range(0, nb_review, 10):  # each results page lists 10 reviews
    page = get_page(i)

The full code, using your snippet, is:

from bs4 import BeautifulSoup as soup
import requests

def get_page(index):
    # Request the results page whose first review is at offset `index`
    url = "https://www.tripadvisor.ie/Attraction_Review-g295424-d2038312-Reviews-or{}-Global_Village-Dubai_Emirate_of_Dubai.html".format(index)
    html = requests.get(url)
    page = soup(html.text, 'html.parser')
    return page

def get_links(page):
    # Collect the href of every review-title link on one results page
    review_links = []
    for review_link in page.find_all('a', {'class': 'title'}, href=True):
        review_links.append(review_link['href'])
    return review_links

link = 'https://www.tripadvisor.ie'
review_urls = []
nb_review = 3187
for index in range(0, nb_review, 10):  # 10 reviews per page
    page = get_page(index)
    for href in get_links(page):       # the hrefs are relative paths
        review_urls.append(link + href)
print(len(review_urls))

Output:

3187

Edit:

Obviously, you could scrape the first page to get the review count, upgrading the code to make it more customizable.
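For example, a minimal sketch, assuming the total appears on the first page in an element such as <span class="reviews_header_count">(3,187)</span>; that class name is a guess and may need adjusting:

import re

first_page = get_page(0)
count_tag = first_page.find('span', {'class': 'reviews_header_count'})  # assumed class name
if count_tag:
    # Strip everything but digits, e.g. "(3,187)" -> 3187
    nb_review = int(re.sub(r'\D', '', count_tag.text))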