How can I download a complete web page with a Python program?

时间:2015-07-03 11:15:27

标签: python request beautifulsoup

At the moment I have a program that only downloads the HTML of a given page. Now I want a program that downloads all the files of a web page, including the HTML, CSS, JS and image files (the same as Ctrl-S on any website).

My current code is:

import urllib
urllib.urlretrieve("https://en.wikipedia.org/wiki/Python_%28programming_language%29", "t3.html")
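
(Under Python 3 the same function lives in urllib.request; the equivalent call would be:)

from urllib.request import urlretrieve

urlretrieve("https://en.wikipedia.org/wiki/Python_%28programming_language%29", "t3.html")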

I have looked at many questions like this on Stack Overflow, but they all only download the HTML file.

4 Answers:

Answer 0 (score: 7)

The following implementation lets you collect the sub-HTML pages of the main website. It can be developed further to fetch the other files you need (a sketch of that follows the code below). The depth variable sets the maximum number of levels of sub_websites to parse.

import urllib2
from BeautifulSoup import *
from urlparse import urljoin


def crawl(pages, depth=None):
    indexed_url = [] # a list for the main and sub-HTML websites in the main website
    for i in range(depth):
        for page in pages:
            if page not in indexed_url:
                indexed_url.append(page)
                try:
                    c = urllib2.urlopen(page)
                except:
                    print "Could not open %s" % page
                    continue
                soup = BeautifulSoup(c.read())
                links = soup('a') #finding all the sub_links
                for link in links:
                    if 'href' in dict(link.attrs):
                        url = urljoin(page, link['href'])
                        if url.find("'") != -1:
                            continue
                        url = url.split('#')[0]
                        if url[0:4] == 'http':
                            indexed_url.append(url)
        pages = indexed_url
    return indexed_url


pagelist=["https://en.wikipedia.org/wiki/Python_%28programming_language%29"]
urls = crawl(pagelist, depth=2)
print urls
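
As noted above, crawl only collects URLs. A minimal sketch of how it could be extended to also download the images, CSS and JS referenced by each crawled page (the download_assets function and the assets folder name are illustrative, not part of the original code):

import os
import urllib2
from BeautifulSoup import BeautifulSoup
from urlparse import urljoin


def download_assets(page_url, folder='assets'):
    # fetch one page and save every img/script src and link href into `folder`
    if not os.path.exists(folder):
        os.makedirs(folder)
    try:
        soup = BeautifulSoup(urllib2.urlopen(page_url).read())
    except:
        print "Could not open %s" % page_url
        return
    for tag, attr in [('img', 'src'), ('script', 'src'), ('link', 'href')]:
        for node in soup(tag):
            if attr not in dict(node.attrs):
                continue
            asset_url = urljoin(page_url, node[attr])
            filename = os.path.basename(asset_url.split('?')[0]) or 'index'
            try:
                data = urllib2.urlopen(asset_url).read()
                with open(os.path.join(folder, filename), 'wb') as f:
                    f.write(data)
            except:
                print "Could not download %s" % asset_url


for url in urls:
    download_assets(url)

Filenames here are just the last path segment, so assets sharing a name overwrite each other; a real implementation would mirror the URL path instead.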

Answer 1 (score: 2)

You can do this easily with the simple Python library pywebcopy.

For the current version, 5.0.1:


from pywebcopy import save_webpage

url = 'http://some-site.com/some-page.html'
download_folder = '/path/to/downloads/'    

kwargs = {'bypass_robots': True, 'project_name': 'recognisable-name'}

save_webpage(url, download_folder, **kwargs)

Your download_folder will then contain the html, css and js files, and the copy works just like the original site.
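
If you need to mirror an entire site rather than a single page, pywebcopy also provides a save_website function; a sketch assuming it accepts the same arguments as save_webpage above:

from pywebcopy import save_website

url = 'http://some-site.com/'
download_folder = '/path/to/downloads/'

kwargs = {'bypass_robots': True, 'project_name': 'recognisable-name'}

# crawls the site and saves every reachable page plus its assets
save_website(url, download_folder, **kwargs)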

Answer 2 (score: 1)

Try the Python library Scrapy instead. You can program Scrapy to recursively scan a website by downloading its pages, scanning them and following links:

An open source and collaborative framework for extracting the data you need from websites. In a fast, simple, yet extensible way.
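
A minimal sketch of such a recursive spider (the spider name, domain and the way pages are written to disk are illustrative choices, not prescribed by Scrapy); run it with scrapy runspider:

import scrapy


class SiteSpider(scrapy.Spider):
    name = 'site'
    allowed_domains = ['en.wikipedia.org']
    start_urls = ['https://en.wikipedia.org/wiki/Python_%28programming_language%29']

    def parse(self, response):
        # save the raw HTML of each visited page
        filename = response.url.rstrip('/').split('/')[-1] or 'index'
        with open(filename + '.html', 'wb') as f:
            f.write(response.body)
        # follow in-page links and parse them the same way
        for href in response.css('a::attr(href)').getall():
            yield response.follow(href, callback=self.parse)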

Answer 3 (score: 1)

Using Python 3+, Requests, and other standard libraries.

The function savePage receives a requests.Response and the pagefilename to save it to.

  • Saves the pagefilename.html in the current folder.
  • Downloads the javascripts, css and images based on the tags script, link and img, and saves them in a folder named pagefilename_files.
  • Any exception is printed on sys.stderr; returns a BeautifulSoup object.
  • The requests session must be a global variable, unless someone writes cleaner code here for us.

You can surely do better than this.


import os, sys
import requests
from urllib.parse import urljoin
from bs4 import BeautifulSoup

def soupfindAllnSave(pagefolder, url, soup, tag2find='img', inner='src'):
    if not os.path.exists(pagefolder): # create only once
        os.mkdir(pagefolder)
    for res in soup.findAll(tag2find):   # images, css, etc..
        try:
            filename = os.path.basename(res[inner])  
            fileurl = urljoin(url, res.get(inner))
            # point the tag at the local copy so the saved HTML keeps working
            # (res[inner] may or may not exist, hence the surrounding try/except)
            filepath = os.path.join(pagefolder, filename)
            res[inner] = filepath
            if not os.path.isfile(filepath): # was not downloaded
                with open(filepath, 'wb') as file:
                    filebin = session.get(fileurl)
                    file.write(filebin.content)
        except Exception as exc:      
            print(exc, file=sys.stderr)
    return soup

def savePage(response, pagefilename='page'):
    url = response.url
    soup = BeautifulSoup(response.text, 'html.parser')
    pagefolder = pagefilename+'_files' # page contents
    soup = soupfindAllnSave(pagefolder, url, soup, 'img', inner='src')
    soup = soupfindAllnSave(pagefolder, url, soup, 'link', inner='href')
    soup = soupfindAllnSave(pagefolder, url, soup, 'script', inner='src')
    with open(pagefilename+'.html', 'w') as file:
        file.write(soup.prettify())
    return soup

Example: saving the Google page and its contents (google_files folder):

session = requests.Session()
#... whatever requests config you need here
response = session.get('https://www.google.com')
savePage(response, 'google')