Unable to extract images from a web page with Beautiful Soup in Python

Date: 2019-05-10 10:30:03

Tags: python web-scraping beautifulsoup urllib

I'm trying to extract all the images from the URL below, but I get HTTP Error 403: Forbidden. Can this be worked around with error handling, or is the URL simply not scrapable because of restrictions?

from bs4 import BeautifulSoup
from urllib.request import urlopen
import urllib.request


def make_soup(url):
    html = urlopen(url).read()
    return BeautifulSoup(html, 'html.parser')

def get_images(url):
    soup = make_soup(url)
    # this makes a list of bs4 element tags
    images = [img for img in soup.findAll('img')]
    print(str(len(images)) + " images found.")
    print("downloading to current directory ")

    # compile our list of image links
    image_links = [each.get('src') for each in images]
    for each in image_links:
        filename = each.split('/')[-1]
        urllib.request.urlretrieve(each, filename)
    return image_links

get_images("https://opensignal.com/reports/2019/04/uk/mobile-network-experience")

2 answers:

Answer 0 (score: 0)

Some websites require you to specify a User-Agent header.

from bs4 import BeautifulSoup
from urllib.request import Request, urlopen


def make_soup(url):
    hdr = {'User-Agent': 'Mozilla/5.0'}
    req = Request(url, headers=hdr)
    page = urlopen(req)
    return BeautifulSoup(page, 'html.parser')
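Note that this only patches make_soup; the plain urlretrieve call in the question's get_images still sends Python's default User-Agent, so the image downloads themselves can come back 403 as well. A minimal sketch of a header-aware download helper (the name download_image and the default User-Agent string are illustrative, not part of the answer):

```python
from urllib.request import Request, urlopen

def download_image(url, filename, user_agent='Mozilla/5.0'):
    # Send the same User-Agent on the image request itself,
    # otherwise the download can still be rejected with 403.
    req = Request(url, headers={'User-Agent': user_agent})
    with urlopen(req) as resp, open(filename, 'wb') as out:
        out.write(resp.read())
```

In get_images you would then call download_image(each, filename) instead of urllib.request.urlretrieve(each, filename).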

Answer 1 (score: 0)

You can use this function for image scraping. Relying on the img tag alone is no longer enough these days; the approach below does not depend on any particular tag, so wherever an image link exists, it grabs it.

import re

def extract_ImageUrl(soup_chunk):
    urls_found = []
    # Walk every tag and inspect every attribute value for image links.
    for tag in soup_chunk.find_all():
        attributes = tag.attrs
        if 'http' in str(attributes):
            for links in attributes.values():
                # Parenthesize the alternation; a bare 'http.*\.jpg|png'
                # would match any value containing 'png'.
                if re.match(r'http.*\.(jpg|png)', str(links)):
                    if len(str(links).split()) <= 1:
                        urls_found.append(links)
                    else:
                        link = [i.strip() for i in str(links).split()
                                if re.match(r'http.*\.(jpg|png)', i)]
                        urls_found = urls_found + link
    print("Found {} image links".format(len(urls_found)))
    return urls_found

This is a preliminary idea and needs further work to make it better.
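When you only have the raw HTML as a string, the same tag-agnostic idea can be sketched with a plain regular expression and no parser at all (the function name find_image_urls and the sample HTML are illustrative):

```python
import re

def find_image_urls(html_text):
    # Tag-agnostic scan: grab any absolute http(s) link that
    # ends in .jpg or .png, regardless of which attribute holds it.
    pattern = re.compile(r'https?://[^\s"\'<>]+\.(?:jpg|png)', re.IGNORECASE)
    return pattern.findall(html_text)

html = '<img src="http://example.com/a.jpg"><div data-bg="https://cdn.example.com/b.PNG">'
print(find_image_urls(html))
# → ['http://example.com/a.jpg', 'https://cdn.example.com/b.PNG']
```

Because it works on text rather than a parse tree, this also catches links in non-standard attributes (like the data-bg above), at the cost of occasionally picking up URLs from scripts or comments.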