I have the following code that downloads all the images from a web link.
from BeautifulSoup import BeautifulSoup as bs
import urlparse
from urllib2 import urlopen
from urllib import urlretrieve
import os
import sys

def main(url, out_folder="/test/"):
    """Downloads all the images at 'url' to /test/"""
    soup = bs(urlopen(url))
    parsed = list(urlparse.urlparse(url))

    for image in soup.findAll("img"):
        print "Image: %(src)s" % image
        filename = image["src"].split("/")[-1]
        parsed[2] = image["src"]
        outpath = os.path.join(out_folder, filename)
        if image["src"].lower().startswith("http"):
            urlretrieve(image["src"], outpath)
        else:
            urlretrieve(urlparse.urlunparse(parsed), outpath)

def _usage():
    print "usage: python dumpimages.py http://example.com [outpath]"

if __name__ == "__main__":
    url = sys.argv[-1]
    out_folder = "/test/"
    if not url.lower().startswith("http"):
        out_folder = sys.argv[-1]
        url = sys.argv[-2]
        if not url.lower().startswith("http"):
            _usage()
            sys.exit(-1)
    main(url, out_folder)
I want to modify it so that it only downloads images with a name like 'phd210223.gif' (for example), i.e. images matching the pattern 'phd*.gif'.
I would also like to put this in a loop, so that after it fetches such images from one page it increments the page id by 1 and downloads the same from the next page: "http://www.example.com/phd.php?id=2".
How can I do this?
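For the paging part, I imagine simply calling main() in a loop, roughly like this (just a sketch of the intent; the id range is an example), but I don't know how to add the filename filter:

# rough sketch: fetch pages id=1..10 with the main() above
for page_id in range(1, 11):
    main("http://www.example.com/phd.php?id=%d" % page_id, out_folder="/test/")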
Answer 0 (score: 1)
A regular expression can help with this! re.search returns a match object when the pattern is found in the string/URL, and None otherwise.
import re

reg = re.compile(r'phd.*\.gif$')
str1 = 'path/phd12342343.gif'
str2 = 'path/dhp12424353153.gif'

print re.search(reg, str1)  # match object: the filename matches phd*.gif
print re.search(reg, str2)  # None: the filename does not match
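Hooked into the question's image loop, that check could look something like this (a sketch in the same Python 2 style as the question; it assumes the img src values are absolute URLs and uses the question's example page):

import os
import re
from BeautifulSoup import BeautifulSoup as bs
from urllib import urlretrieve
from urllib2 import urlopen

reg = re.compile(r'phd.*\.gif$')
out_folder = "/test/"

soup = bs(urlopen("http://www.example.com/phd.php?id=2"))
for image in soup.findAll("img"):
    filename = image["src"].split("/")[-1]
    if re.search(reg, filename):  # download only the phd*.gif images
        urlretrieve(image["src"], os.path.join(out_folder, filename))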
Answer 1 (score: 1)
Instead of checking the name inside the loop, you can use BeautifulSoup's built-in support for regular expressions: provide a compiled regular expression as the value of the src argument:
import re
from bs4 import BeautifulSoup as bs  # note, you should use beautifulsoup4

for image in soup.find_all("img", src=re.compile(r'phd\d+\.gif$')):
    ...
The phd\d+\.gif$ regular expression searches for phd followed by one or more digits, followed by a dot, followed by gif at the end of the string.
Note that you are using the outdated and unmaintained BeautifulSoup3; switch to beautifulsoup4:
pip install beautifulsoup4
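Putting it together with the paging requirement from the question, a minimal sketch could look like this (it assumes Python 2 as in the original script, absolute src URLs, and an example id range):

import os
import re
from urllib import urlretrieve
from urllib2 import urlopen
from bs4 import BeautifulSoup

pattern = re.compile(r'phd\d+\.gif$')
out_folder = "/test/"

for page_id in range(1, 11):  # example id range
    url = "http://www.example.com/phd.php?id=%d" % page_id
    soup = BeautifulSoup(urlopen(url), "html.parser")
    for image in soup.find_all("img", src=pattern):  # only phd*.gif images
        filename = image["src"].split("/")[-1]
        urlretrieve(image["src"], os.path.join(out_folder, filename))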
Answer 2 (score: 0)
Personally I prefer to stick with Python's default tools, so I use html.parser. What you need is something like this:
import re, urllib.request, html.parser

class LinksHTMLParser(html.parser.HTMLParser):
    def __init__(self):
        super().__init__()
        self.gifs = list()

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    gifName = re.split("/", value)[-1]
                    if re.match(r"phd.*\.gif$", gifName):  # the phd*.gif condition from the question
                        self.gifs.append(value)

parser = LinksHTMLParser()
parser.feed(urllib.request.urlopen("YOUR URL HERE").read().decode("utf-8"))
for gif in parser.gifs:
    urllib.request.urlretrieve(gif, "LOCAL PATH TO DOWNLOAD GIF TO")
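For the paging part of the question, the same parser can be driven in a loop, something like this (a sketch; the id range and output directory are examples, and it assumes the matched href values are absolute URLs):

import os
import urllib.request

out_folder = "/test/"             # example output directory
for page_id in range(1, 11):      # example id range
    url = "http://www.example.com/phd.php?id=%d" % page_id
    parser = LinksHTMLParser()
    parser.feed(urllib.request.urlopen(url).read().decode("utf-8"))
    for gif in parser.gifs:
        filename = gif.split("/")[-1]
        urllib.request.urlretrieve(gif, os.path.join(out_folder, filename))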