Scraping a dynamically loaded page with BeautifulSoup

Date: 2019-08-11 17:13:26

Tags: python html web-scraping beautifulsoup screen-scraping

I'm new to the BeautifulSoup package. I'm trying to scrape all the recipe names, links, and ingredients from https://indianrecipes.com/new_and_popular. The problem is that the site only loads more recipes as you scroll down. I referred to the question Beautifulsoup - scrape webpage - dynamically loading page, but it didn't help much. I checked the Network tab in the browser inspector and found that every time I scroll down, an XHR request is sent:

api?tm=1565542062069
api?tm=1565542065302
api?tm=1565542073116
api?tm=1565542075617

Is it possible to simulate such requests in Python so I can extract all the recipes from this page?

3 Answers:

Answer 0 (score: 1)

You have to use Selenium to load the page's JavaScript-rendered content into HTML, and then use Selenium's scrolling code:

import requests
from bs4 import BeautifulSoup
from selenium import webdriver
import pandas as pd
import time
from selenium.webdriver.common.keys import Keys

driver = webdriver.Chrome('/home/sush/Downloads/Compressed/chromedriver_linux64/chromedriver')

driver.get('https://indianrecipes.com/new_and_popular')


heights = []
for i in range(1, 300):
    # press End to jump to the bottom of the page and trigger lazy loading
    bg = driver.find_element_by_css_selector('body')
    time.sleep(0.1)
    bg.send_keys(Keys.END)
    # record the page height after each scroll
    heights.append(driver.execute_script("return document.body.scrollHeight"))
    try:
        bottom = heights[i - 16]
    except IndexError:
        pass
    # every 16 scrolls, compare the current height with the height from 16
    # scrolls earlier; if it has stopped growing, all recipes are loaded
    if i % 16 == 0:
        new_bottom = heights[i - 1]
        if bottom == new_bottom:
            break

Then use BeautifulSoup to scrape the required data:

soup = BeautifulSoup(driver.page_source, 'lxml')
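
From there, the recipe names and links can be pulled out of the parsed HTML. A minimal sketch, assuming the recipe pages are linked with URLs containing /recipe/ (the selector is a guess and should be checked against the actual markup):

# collect every anchor whose href looks like a recipe link (assumed pattern)
for a in soup.find_all('a', href=True):
    if '/recipe/' in a['href']:
        print(a.get_text(strip=True), a['href'])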

Answer 1 (score: 1)

The number in api?tm=1565542075617 is an epoch timestamp in milliseconds. It is probably not required for the request.
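
If you do want to send one anyway, a millisecond epoch timestamp like the one in the URL can be generated in Python (a small sketch; as noted, the endpoint probably does not need it):

import time

# current Unix time in milliseconds, e.g. 1565542075617
tm = int(time.time() * 1000)
url = 'https://indianrecipes.com/api?tm={}'.format(tm)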

What matters is the data the request sends to the server. In the XHR request, scroll down to Request Payload to see the payload.

Below is Python code that loads recipes_per_page recipes after the first offset recipes.

import requests

offset = 50            # skip the first `offset` recipes
recipes_per_page = 50  # number of recipes to return
# JSON-RPC payload matching what the page sends in its XHR requests
data = [{'jsonrpc': '2.0', 'method': 'recipe.get_trending', 'id': 1,
         'params': [offset, recipes_per_page, None, False]}]
response = requests.post('https://indianrecipes.com/api', json=data)

recipes = response.json()[0]['result']['recipes']
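
recipes is then a list of dictionaries; judging by the sample response in the answer below, each entry carries fields such as name and link, so the results can be printed like this (a sketch, field names taken from that output):

for recipe in recipes:
    # 'name' and 'link' are fields visible in the API response
    print(recipe['name'], 'https:' + recipe['link'])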

Answer 2 (score: 1)

I made a simple script where you can specify the number of recipes per page and the number of pages to scrape. It returns the data in JSON format:

from itertools import count, islice
import requests
import json

url = 'https://indianrecipes.com/api'
# JSON-RPC payload: params are [offset, recipes_per_page, None, False]
data = {"id": 1, "jsonrpc": "2.0", "method": "recipe.get_trending", "params": [50, 50, None, False]}

per_page = 50
num_pages = 2

# iterate over offsets 0, per_page, 2*per_page, ... for num_pages pages
for i, c in enumerate(islice(count(0, per_page), 0, num_pages), 1):
    print('Page no.{} :'.format(i))
    print('-' * 80)
    data['params'][0] = c          # offset of the first recipe on this page
    data['params'][1] = per_page   # number of recipes to fetch
    json_data = requests.post(url, json=data).json()
    print(json.dumps(json_data, indent=4))
    print('-' * 80)

Prints:

Page no.1 :
--------------------------------------------------------------------------------
{
    "id": 1,
    "jsonrpc": "2.0",
    "result": {
        "recipes": [
            {
                "has_video": false,
                "id": 8630002,
                "image_url": "//lh3.googleusercontent.com/zgZHuLeSg_lKRc66RycpaDoSVMULp3puzoignsoEH40DJBQtOpQi0Ub1L1ET52VFhd3ZUF8r8ZEiD_kEsZNQPloO3_T1KW9sbBE",
                "link": "//indianrecipes.com/recipe/Dahi-Vada_Ad3A",
                "name": "Dahi Vada",
                "rating": 5.0,
                "score": 0.0
            },
            {
                "has_video": false,
                "id": 9330018,
                "image_url": "//lh3.googleusercontent.com/HXd-CD3P0U_v4ItJplGsT5oKZ8mKAAA0AXRsgeOoeLeH4ggvyGRdx-6Y_J1H1EdRLv5De7b5oYqeHkBts4VwIpqBAHNA_OYP8g",
                "link": "//indianrecipes.com/recipe/French-Egg-Casserole_D9aa",
                "name": "French Egg Casserole",
                "rating": 0.0,
                "score": 0.0
            },

...and so on
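
If you only need the recipe entries rather than the full response, the same loop can collect them across pages (a sketch; the result/recipes structure is taken from the output above):

all_recipes = []
for c in islice(count(0, per_page), 0, num_pages):
    data['params'][0] = c
    json_data = requests.post(url, json=data).json()
    # keep just the recipe dictionaries from each page
    all_recipes.extend(json_data['result']['recipes'])

print(len(all_recipes), 'recipes collected')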