Python BeautifulSoup - can't read site pagination

Asked: 2018-11-16 14:59:12

Tags: python web-scraping beautifulsoup

I'm trying to extract the div with class='no-selected-number extreme-number', which contains the site's pagination, but I'm not getting the expected result. Can anyone help me?

Here is my code:

import requests
from bs4 import BeautifulSoup

URL = "https://www.falabella.com.pe/falabella-pe/category/cat40703/Perfumes-de-Mujer/"
headers = {'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538 Safari/537.36'}

r = requests.get(URL, headers=headers, timeout=5)
html = r.content

soup = BeautifulSoup(html, 'lxml')
box_3 = soup.find_all('div', 'fb-filters-sort')
for div in box_3:
    # attrs filter must be a dict ({"class": ...}), not a set ({"class", ...})
    last_page = div.find_all("div", {"class": "no-selected-number extreme-number"})
    print(last_page)

1 Answer:

Answer 0 (score: 1)

You probably need a method that allows time for the page to load, for example selenium. I don't think requests will return the data you are querying for, because the pagination is rendered by JavaScript.

from selenium import webdriver
from selenium.webdriver.chrome.options import Options

chrome_options = Options()
chrome_options.add_argument("--headless")
url = "https://www.falabella.com.pe/falabella-pe/category/cat40703/Perfumes-de-Mujer/"
d = webdriver.Chrome(chrome_options=chrome_options)
d.get(url)
print(d.find_element_by_css_selector('.content-items-number-list .no-selected-number.extreme-number:last-child').text)
d.quit()
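For completeness: once the rendered HTML is in hand (e.g. via selenium's page_source), the same selector logic works in BeautifulSoup itself. A minimal sketch below, run against a hypothetical pagination snippet that mirrors the site's markup (the HTML string is an assumption for illustration, not fetched from the site):

```python
from bs4 import BeautifulSoup

# Hypothetical pagination markup mimicking the site's structure (assumption)
html = """
<div class="content-items-number-list">
  <div class="no-selected-number extreme-number">1</div>
  <div class="number-selected">2</div>
  <div class="no-selected-number extreme-number">24</div>
</div>
"""

soup = BeautifulSoup(html, "html.parser")
# select() takes a CSS selector; the last match is the final page number
last_page = soup.select("div.no-selected-number.extreme-number")[-1].text
print(last_page)  # → 24
```

In real use you would pass d.page_source (after the page has loaded) to BeautifulSoup instead of the literal string above.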