Web scraping: downloading a PDF document in Python when its URL changes

Asked: 2019-10-03 07:31:39

Tags: python web-scraping beautifulsoup web-crawler scrapinghub

import os

import requests
from bs4 import BeautifulSoup

desktop = os.path.expanduser("~/Desktop")

url = 'https://www.ici.org/research/stats'

response = requests.get(url)
soup = BeautifulSoup(response.text, 'html.parser')

# Collect every link whose href contains "xls"
excel_files = soup.select('a[href*=xls]')

for each in excel_files:
    if 'Supplement: Worldwide Public Tables' in each.text:
        link = 'https://www.ici.org' + each['href']
        filename = each['href'].split('/')[-1]
        filepath = os.path.join(desktop, filename)

        # Skip files that were already downloaded
        if os.path.isfile(filepath):
            print('*** File already exists: %s ***' % filename)
            continue

        resp = requests.get(link)
        with open(filepath, 'wb') as output:
            output.write(resp.content)
        print('Saved: %s' % filename)

I'm new to web scraping, and I want to automatically download PDF documents from a list of websites.

The document is updated monthly, and its URL on the site changes. For example, from https://fundcentres.lgim.com/fund-centre/OEIC/Sterling-Liquidity-Fund I want to download the "Fact sheet" PDF document. Ideally the code would follow the fact-sheet link and save the file somewhere on the drive. The difficulty is that the URL changes!
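One way around a changing URL is to locate the link by its visible text ("Fact sheet") on each visit, rather than hard-coding the href, in the same style as the ICI script above. A minimal sketch, assuming the page exposes an ordinary `<a>` element whose text contains "Fact sheet" (the sample HTML and the `2019-09` filename below are hypothetical):

```python
import os
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup


def find_pdf_link(page_url, html, link_text='Fact sheet'):
    """Return the absolute URL of the first <a> whose text contains link_text."""
    soup = BeautifulSoup(html, 'html.parser')
    for a in soup.find_all('a', href=True):
        if link_text.lower() in a.get_text().lower():
            # urljoin handles both relative and absolute hrefs
            return urljoin(page_url, a['href'])
    return None


def download_fact_sheet(page_url, dest_dir, link_text='Fact sheet'):
    """Fetch the page, resolve the current fact-sheet URL, and save the PDF."""
    html = requests.get(page_url).text
    link = find_pdf_link(page_url, html, link_text)
    if link is None:
        return None
    filename = link.split('/')[-1]
    filepath = os.path.join(dest_dir, filename)
    if not os.path.isfile(filepath):           # skip already-downloaded files
        resp = requests.get(link)
        with open(filepath, 'wb') as f:
            f.write(resp.content)
    return filepath


if __name__ == '__main__':
    # Hypothetical snippet standing in for the fetched fund-centre page
    sample = '<a href="/docs/2019-09_factsheet.pdf">Fact sheet</a>'
    page = 'https://fundcentres.lgim.com/fund-centre/OEIC/Sterling-Liquidity-Fund'
    print(find_pdf_link(page, sample))
```

Because the link is re-resolved on every run, the script keeps working when the monthly URL changes, as long as the anchor text stays the same. If the site renders the link with JavaScript, `requests` alone won't see it and a browser-driven tool such as Selenium would be needed instead.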

0 answers:

There are no answers yet.