BeautifulSoup find_all() returns nothing []

Date: 2019-06-05 06:29:33

Tags: python web-scraping beautifulsoup

I am trying to scrape all the offers from this page and want to iterate over the <p class="white-strip"> elements, but page_soup.find_all("p", "white-strip") returns an empty list [].

My code so far -

from urllib.request import urlopen as uReq
from bs4 import BeautifulSoup as soup

my_url = 'https://www.sbicard.com/en/personal/offers.page#all-offers'

# Opening up connection, grabbing the page
uClient = uReq(my_url)
page_html = uClient.read()
uClient.close()

# html parsing
page_soup = soup(page_html, "lxml")

# this returns an empty list []
page_soup.find_all("p", "white-strip")

Edit: I got it working using Selenium, and the code I used is below. However, I could not figure out any other way this could be done.

from bs4 import BeautifulSoup
from selenium import webdriver

driver = webdriver.Chrome("C:\chromedriver_win32\chromedriver.exe")
driver.get('https://www.sbicard.com/en/personal/offers.page#all-offers')

# html parsing
page_soup = BeautifulSoup(driver.page_source, 'lxml')

# grabs each offer
containers = page_soup.find_all("p", {'class':"white-strip"})

filename = "offers.csv"
f = open(filename, "w", encoding="utf-8")  # utf-8 so offer text with non-ASCII characters writes cleanly

header = "offer-list\n"

f.write(header)

for container in containers:
    offer = container.span.text
    f.write(offer + "\n")

f.close()
driver.close()

2 Answers:

Answer 0 (score: 1)

The website renders its data dynamically. You should try the Selenium automation library; it lets you scrape pages whose content is rendered dynamically (via JS or AJAX).

from bs4 import BeautifulSoup
from selenium import webdriver

driver = webdriver.Chrome("/usr/bin/chromedriver")
driver.get('https://www.sbicard.com/en/personal/offers.page#all-offers')

# Parse the fully rendered DOM produced by the browser
page_soup = BeautifulSoup(driver.page_source, 'lxml')
p_list = page_soup.find_all("p", {'class': "white-strip"})

print(p_list)

where '/usr/bin/chromedriver' is the path to the Selenium web driver.
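Note that newer Selenium releases (4+) pass the driver path through a Service object rather than as the first positional argument; a minimal sketch of the same call under that assumption:

from selenium import webdriver
from selenium.webdriver.chrome.service import Service

# Selenium 4+ style: wrap the chromedriver path in a Service object
driver = webdriver.Chrome(service=Service("/usr/bin/chromedriver"))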

Download the Selenium web driver for the Chrome browser:

http://chromedriver.chromium.org/downloads

Install the web driver for the Chrome browser:

https://christopher.su/2015/selenium-chromedriver-ubuntu/

Selenium tutorial:

https://selenium-python.readthedocs.io/
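If the offers are still missing from driver.page_source because the page has not finished rendering when you grab it, an explicit wait helps. A minimal sketch, assuming the offers are the <p class="white-strip"> elements from the question and that chromedriver is discoverable on the PATH:

from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()  # assumes chromedriver is on the PATH
driver.get('https://www.sbicard.com/en/personal/offers.page#all-offers')

# Block for up to 10 seconds until at least one offer strip is present in the DOM
WebDriverWait(driver, 10).until(
    EC.presence_of_element_located((By.CSS_SELECTOR, "p.white-strip"))
)

page_soup = BeautifulSoup(driver.page_source, 'lxml')
print(len(page_soup.find_all("p", {'class': "white-strip"})))
driver.quit()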

Answer 1 (score: 1)

The items you are looking for can all be found in a script tag containing var offerData. To get the desired content out of that script, you can try the following.

import re
import json
import requests

url = "https://www.sbicard.com/en/personal/offers.page#all-offers"

res = requests.get(url)
# Capture the JSON literal assigned to var offerData in the page source
p = re.compile(r"var offerData=(.*?);", re.DOTALL)
script = p.findall(res.text)[0].strip()
items = json.loads(script)
for item in items['offers']['offer']:
    print(item['text'])

The output is similar to:

Upto Rs 8000 off on flights at Yatra
Electricity Bill payment – Phonepe Offer
25% off on online food ordering
Get 5% cashback at Best Price stores
Get 5% cashback
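To also produce the offers.csv file the question was building, here is a sketch that reuses the extraction above (filename and header taken from the question's code); the csv module handles quoting in case an offer text contains commas:

import re
import json
import csv
import requests

url = "https://www.sbicard.com/en/personal/offers.page#all-offers"
res = requests.get(url)

# Pull the JSON literal assigned to var offerData out of the page source
script = re.search(r"var offerData=(.*?);", res.text, re.DOTALL).group(1).strip()
items = json.loads(script)

# Write one offer per row; csv.writer takes care of quoting
with open("offers.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["offer-list"])
    for item in items['offers']['offer']:
        writer.writerow([item['text']])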