Using BeautifulSoup

Date: 2017-02-09 09:40:36

Tags: python beautifulsoup

I am using BeautifulSoup to extract images, which works fine for ordinary pages. Now I want to extract the picture of the Chromebook from a page like this one:

https://twitter.com/banprada/statuses/829102430017187841

The page apparently contains a link to another page that holds the image. Below is my code for downloading the images from the link above, but all I get is the avatar of the person who posted the link.

import urllib.request
import os
from bs4 import BeautifulSoup

URL = "http://twitter.com/banprada/statuses/829102430017187841"
list_dir = "D:\\"
default_dir = os.path.join(list_dir, "Pictures_neu")
opener = urllib.request.build_opener()
urllib.request.install_opener(opener)
soup = BeautifulSoup(urllib.request.urlopen(URL).read(), "html.parser")
# Download every <img> in the static HTML that has both alt and src attributes
imgs = soup.find_all("img", {"alt": True, "src": True})
for img in imgs:
    img_url = img["src"]
    filename = os.path.join(default_dir, img_url.split("/")[-1])
    img_data = opener.open(img_url)
    with open(filename, "wb") as f:
        f.write(img_data.read())

Is there any way to download that picture somehow?

Many thanks and regards, Andi

1 Answer:

Answer 0 (score: 1)

Here is how you can get just the mentioned picture using Selenium + requests:

from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait as wait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
import requests

link = 'https://twitter.com/banprada/statuses/829102430017187841'
driver = webdriver.PhantomJS()
driver.get(link)
# Wait for the embedded media iframe to become available, then switch into it
wait(driver, 10).until(EC.frame_to_be_available_and_switch_to_it((By.XPATH, "//iframe[starts-with(@id, 'xdm_default')]")))
# The tweet photo is the <img> inside that iframe
image_src = driver.find_element_by_tag_name('img').get_attribute('src')
response = requests.get(image_src).content
with open('C:\\Users\\You\\Desktop\\Image.jpeg', 'wb') as f:
    f.write(response)
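
The point here is that the tweet's photo is not in the top-level HTML your BeautifulSoup code sees; it is rendered inside an embedded iframe (located above by an id starting with xdm_default), so the script first waits for that frame, switches into it, and only then reads the src of the <img> element.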

If you want all the images from all the iframes on the page (excluding the images in the initial page source, which you can already get with your code):

from selenium import webdriver
from selenium.common.exceptions import WebDriverException
import requests
import time

link = 'https://twitter.com/banprada/statuses/829102430017187841'
driver = webdriver.Chrome()
driver.get(link)
time.sleep(5)  # Wait until all iframes are completely rendered. Might need to be increased
iframe_counter = 0
while True:
    try:
        # Switch into the next iframe; raises a WebDriverException when none is left
        driver.switch_to.frame(iframe_counter)
        pictures = driver.find_elements_by_xpath('//img[@src and @alt]')
        for pic_index, pic in enumerate(pictures):
            response = requests.get(pic.get_attribute('src')).content
            with open('C:\\Users\\You\\Desktop\\Images\\%s.jpeg' % (str(iframe_counter) + str(pic_index)), 'wb') as f:
                f.write(response)
        # Return to the top-level document before trying the next iframe
        driver.switch_to.default_content()
        iframe_counter += 1
    except WebDriverException:
        break
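
As a side note, here is a small variation of the loop above (my own sketch, not part of the original answer): it counts the iframes first and switches by index, so it stops cleanly instead of relying on a WebDriverException. The output path is a placeholder just like above.

from selenium import webdriver
import requests
import time

link = 'https://twitter.com/banprada/statuses/829102430017187841'
driver = webdriver.Chrome()
driver.get(link)
time.sleep(5)  # crude wait for the iframes to render

# Count the iframes in the top-level document and visit each one by index
frame_count = len(driver.find_elements_by_tag_name('iframe'))
for index in range(frame_count):
    driver.switch_to.frame(index)
    for pic_index, pic in enumerate(driver.find_elements_by_xpath('//img[@src and @alt]')):
        data = requests.get(pic.get_attribute('src')).content
        with open('C:\\Users\\You\\Desktop\\Images\\%s_%s.jpeg' % (index, pic_index), 'wb') as f:
            f.write(data)
    driver.switch_to.default_content()  # back to the top document before the next frame
driver.quit()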

Note that you can use any webdriver.
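
For example, if PhantomJS is not available in your environment, a headless Chrome instance can be dropped in instead. This is only a sketch under the assumption that chromedriver is installed and on your PATH and that your Selenium/Chrome versions support the --headless flag; none of this is stated in the original answer.

from selenium import webdriver

# Assumption: chromedriver is available on PATH
options = webdriver.ChromeOptions()
options.add_argument('--headless')            # run Chrome without a visible window
driver = webdriver.Chrome(options=options)    # drop-in replacement for webdriver.PhantomJS()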