How do I download images with BeautifulSoup?

Date: 2016-05-11 09:22:17

Tags: python python-2.7 beautifulsoup scrape

Image: http://i.imgur.com/OigSBjF.png

import requests
from bs4 import BeautifulSoup

r = requests.get("xxxxxxxxx")
soup = BeautifulSoup(r.content, "html.parser")
links = soup.find_all("img")

for link in links:
    if "http" in link.get('src'):
        print link.get('src')

I get the URLs printed, but I don't know what to do with them.

2 answers:

Answer 0 (score: 5)

You need to download each image and write it to disk:

import requests
from bs4 import BeautifulSoup
from os.path import basename

r = requests.get("xxx")
soup = BeautifulSoup(r.content, "html.parser")

for link in soup.find_all("img"):
    if "http" in link.get('src'):
        lnk = link.get('src')
        with open(basename(lnk), "wb") as f:
            f.write(requests.get(lnk).content)

You can also filter with a CSS select so you only get the tags whose src starts with http:

for link in soup.select("img[src^=http]"):
    lnk = link["src"]
    with open(basename(lnk), "wb") as f:
        f.write(requests.get(lnk).content)
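
A minimal, self-contained check of that selector, using an inline HTML snippet with made-up example URLs rather than a live page:

```python
from bs4 import BeautifulSoup

html = """
<img src="http://example.com/a.png">
<img src="/relative/b.png">
<img src="https://example.com/c.jpg">
"""

soup = BeautifulSoup(html, "html.parser")
# only the absolute http/https sources match the attribute prefix selector
srcs = [img["src"] for img in soup.select("img[src^=http]")]
print(srcs)
```

The relative `/relative/b.png` is skipped, which is exactly the filtering the `if "http" in link.get('src')` check was doing by hand.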

Answer 1 (score: 1)

The other answer is completely correct.

However, I found the downloads very slow, with no way to see progress on really high-resolution images.

So I made this:

from bs4 import BeautifulSoup
import requests
import subprocess

url = "https://example.site/page/with/images"
html = requests.get(url).text # get the html
soup = BeautifulSoup(html, "lxml") # give the html to soup

# get all the anchor links with the custom class 
# the element or the class name will change based on your case
imgs = soup.findAll("a", {"class": "envira-gallery-link"})
for img in imgs:
    imgUrl = img['href'] # get the href from the tag
    cmd = ['wget', imgUrl] # just download it using wget
    subprocess.Popen(cmd) # run the command to download (in parallel)
    # if you don't want the downloads to run in parallel,
    # wait for each image to finish instead:
    # subprocess.Popen(cmd).communicate()

Warning: because this uses wget, it won't work on Windows/Mac.

Bonus: if you don't use communicate, you can see each image's progress as it downloads.
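
For a cross-platform alternative that keeps the progress display without shelling out to wget, here is a sketch using requests' streaming mode. The `filename_from_url` helper and the percent display are my own additions, not part of the original answer:

```python
import os
import requests

def filename_from_url(url):
    """Derive a local filename from a URL, ignoring any query string."""
    name = os.path.basename(url.split("?")[0])
    return name or "download"

def download_with_progress(url, dest_dir="."):
    """Stream a file to disk in chunks, printing percent complete
    when the server sends a Content-Length header."""
    path = os.path.join(dest_dir, filename_from_url(url))
    with requests.get(url, stream=True) as r:
        r.raise_for_status()
        total = int(r.headers.get("Content-Length", 0))
        done = 0
        with open(path, "wb") as f:
            for chunk in r.iter_content(chunk_size=8192):
                f.write(chunk)
                done += len(chunk)
                if total:
                    print("\r%3d%%" % (100 * done // total), end="")
    print()
    return path
```

Because nothing leaves Python, this works the same on Windows, macOS, and Linux; if you still want parallel downloads, each call can be pushed onto a thread pool.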