How do I download and save all PDFs from a dynamic website?

Date: 2019-06-04 11:10:21

Tags: python pdf web-scraping href

I am trying to download all the PDFs from a web page that has dynamic elements and save them in a folder, for example: https://www.bankinter.com/banca/nav/documentos-datos-fundamentales

Every PDF at this URL has a similar href. Here are two of them:

https://bancaonline.bankinter.com/publico/DocumentacionPrixGet?doc=workspace://SpacesStore/fb029023-dd29-47d5-8927-31021d834757;1.0&nameDoc=ISIN_ES0213679FW7_41-Bonos_EstructuradosGarantizad_19.16_es.pdf

https://bancaonline.bankinter.com/publico/DocumentacionPrixGet?doc=workspace://SpacesStore/852a7524-f21c-45e8-a8d9-1a75ce0f8286;1.1&nameDoc=20-Dep.Estruc.Cont.Financieros_18.1_es.pdf
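For reference, each of these links carries two query parameters: doc (an internal document id ending in a version suffix such as ;1.0) and nameDoc (the PDF file name). A minimal sketch of pulling them apart, splitting only on & so the version suffix stays inside the doc value:

from urllib.parse import urlparse

href = ("https://bancaonline.bankinter.com/publico/DocumentacionPrixGet"
        "?doc=workspace://SpacesStore/fb029023-dd29-47d5-8927-31021d834757;1.0"
        "&nameDoc=ISIN_ES0213679FW7_41-Bonos_EstructuradosGarantizad_19.16_es.pdf")

# Split only on '&' so the ';1.0' version suffix survives inside doc
params = dict(part.split("=", 1) for part in urlparse(href).query.split("&"))
print(params["doc"])      # workspace://SpacesStore/fb029023-...;1.0
print(params["nameDoc"])  # ISIN_ES0213679FW7_41-..._es.pdf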

This is what I did for another website, and that code works as intended:

import os
import requests

link = 'https://www.bankia.es/estaticos/documentosPRIIPS/json/jsonSimple.txt'
base = 'https://www.bankia.es/estaticos/documentosPRIIPS/{}'

# Create the target folder if it does not exist, then work from it
dirf = os.path.join(os.environ['USERPROFILE'], "Documents", "TFM", "PdfFolder")
if not os.path.exists(dirf): os.makedirs(dirf)
os.chdir(dirf)

res = requests.get(link, headers={"User-Agent": "Mozilla/5.0"})
for item in res.json():
    if 'nombre_de_fichero' not in item: continue
    link = base.format(item['nombre_de_fichero'])
    filename_bankia = item['nombre_de_fichero'].split('.')[-2] + ".PDF"
    with open(filename_bankia, 'wb') as f:
        f.write(requests.get(link).content)
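Note that requests.get(link).content loads each whole PDF into memory before writing it, which is fine for small documents. As a sketch of an alternative for larger files (same requests API, the 60-second timeout is an arbitrary assumption), a streamed download writes the response in chunks:

import requests

def download_pdf(link, filename):
    # Stream the response so a large PDF is written in chunks
    # instead of being held in memory all at once
    with requests.get(link, stream=True, timeout=60) as r:
        r.raise_for_status()
        with open(filename, 'wb') as f:
            for chunk in r.iter_content(chunk_size=8192):
                f.write(chunk)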

1 Answer:

Answer 0 (score: 0):

You need to issue a POST HTTP request with the appropriate JSON payload. Once you receive the response, parse the two fields objectId and nombreFichero and use them to build the correct link to each PDF. The following should work:

import os
import requests

url = 'https://bancaonline.bankinter.com/publico/rs/documentacionPrix/list'
base = 'https://bancaonline.bankinter.com/publico/DocumentacionPrixGet?doc={}&nameDoc={}'
payload = {"cod_categoria": 2, "cod_familia": 3, "divisaDestino": None, "vencimiento": None, "edadActuarial": None}

# Create the target folder if it does not exist, then work from it
dirf = os.path.join(os.environ['USERPROFILE'], "Desktop", "PdfFolder")
if not os.path.exists(dirf): os.makedirs(dirf)
os.chdir(dirf)

# POST the JSON payload; the response lists the available documents
r = requests.post(url, json=payload)
for item in r.json():
    objectId = item['objectId']
    nombreFichero = item['nombreFichero'].replace(" ", "_")
    filename = nombreFichero.split('.')[-2] + ".PDF"
    link = base.format(objectId, nombreFichero)
    with open(filename, 'wb') as f:
        f.write(requests.get(link).content)

After running the script above, give it a moment, as the site is really slow.
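Because the site is slow, it can also help to reuse one connection and retry transient server errors. A sketch using requests' Session together with urllib3's Retry, with the url and payload from the answer above (the retry counts and the 60-second timeout are arbitrary assumptions):

import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

url = 'https://bancaonline.bankinter.com/publico/rs/documentacionPrix/list'
payload = {"cod_categoria": 2, "cod_familia": 3, "divisaDestino": None,
           "vencimiento": None, "edadActuarial": None}

# One session reuses the TCP connection; Retry backs off on 5xx responses
session = requests.Session()
retries = Retry(total=3, backoff_factor=2, status_forcelist=[500, 502, 503, 504])
session.mount("https://", HTTPAdapter(max_retries=retries))

r = session.post(url, json=payload, timeout=60)
r.raise_for_status()
print(len(r.json()), "documents listed")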