I'm new to Python, and my script is based on this:
github/whiteShtef/LiteScraper
It's used for scraping images and GIFs from http://ifunny.co
The problem is that the script saves the images in separate folders.
This is the code that names the folder:
foldername=self.url[7:]
foldername=foldername.split("/")[0]
extension=time.strftime("iFunny")+"-"+time.strftime("%d-%m-%Y") + "-" + time.strftime("%Hh%Mm%Ss")
It appends "iFunny" and a timestamp to the folder name.
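To make that concrete, here is roughly what those lines produce; the URL and the timestamp are invented for the example:

self.url="http://ifunny.co/page2"        #example input
foldername="ifunny.co"                   #self.url[7:] strips "http://", split("/")[0] keeps the domain
extension="iFunny-05-03-2016-14h22m10s"  #"iFunny" plus the current date and time
#the save folder (built in checkSaveDir in the full code below) is foldername+"-"+extension,
#e.g. "ifunny.co-iFunny-05-03-2016-14h22m10s"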
What I need is for it to save all downloads into one folder, "images".
I tried simply having it save to the folder "images", but the problem is that it scrapes different pages, the images get the same names, and they overwrite each other.
For example, if it scrapes page 1, it downloads the images from it (I'm pretty sure it's 10 images/GIFs per page) and names them 1, 2, 3, 4, etc...
Then it scrapes page 2, names those 1, 2, 3, 4, etc... and overwrites the old images from page 1.
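For reference, this is the line in handle_picture in the full code below that names and writes each file:

file=open(self.SAVE_DIR+"/pics/"+str(self.counter)+extension,"wb")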
Here is the full code:
import os
import time
from html.parser import HTMLParser
import urllib.request

#todo: char support for Windows
#deal with triple backslash filter
#recursive parser option

class LiteScraper(HTMLParser):
    def __init__(self):
        HTMLParser.__init__(self)
        self.lastStartTag="No-Tag"
        self.lastAttributes=[]
        self.lastImgUrl=""
        self.Data=[]
        self.acceptedTags=["div","p","h","h1","h2","h3","h4","h5","h6","ul","li","a","img"]
        self.counter=0
        self.url=""
        self.SAVE_DIR=""
        self.Headers=["User-Agent","Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2228.0 Safari/537.36"]

    def handle_starttag(self,tag,attrs):
        #print("Encountered a START tag:",tag)
        self.lastStartTag=tag
        self.lastAttributes=attrs #unnecessary, might come in handy
        if self.lastStartTag=="img":
            attrs=self.lastAttributes
            for attribute in attrs:
                if attribute[0]=="src":
                    self.lastImgUrl=attribute[1]
                    print(attribute[1])
            #Allow GIF from iFunny to download
            for attribute in attrs:
                if attribute[0]=="data-gif":
                    self.lastImgUrl=attribute[1]
                    print(attribute[1])
            #End GIF code
            self.handle_picture(self.lastImgUrl)

    def handle_endtag(self,tag):
        #print("Encountered an END tag:",tag)
        pass

    def handle_data(self,data):
        data=data.replace("\n"," ")
        data=data.replace("\t"," ")
        data=data.replace("\r"," ")
        if self.lastStartTag in self.acceptedTags:
            if not data.isspace():
                print("Encountered some data:",data)
                self.Data.append(data)
        else:
            print("Encountered filtered data.") #Debug

    def handle_picture(self,url):
        print("Bumped into a picture. Downloading it now.")
        self.counter+=1
        if url[:2]=="//":
            url="http:"+url
        extension=url.split(".")
        extension="."+extension[-1]
        try:
            req=urllib.request.Request(url)
            req.add_header(self.Headers[0],self.Headers[1])
            response=urllib.request.urlopen(req,timeout=10)
            picdata=response.read()
            file=open(self.SAVE_DIR+"/pics/"+str(self.counter)+extension,"wb")
            file.write(picdata)
            file.close()
        except Exception as e:
            print("Something went wrong, sorry.")

    def start(self,url):
        self.url=url
        self.checkSaveDir()
        try: #wrapped in exception - if there is a problem with url/server
            req=urllib.request.Request(url)
            req.add_header(self.Headers[0],self.Headers[1])
            response=urllib.request.urlopen(req,timeout=10)
            siteData=response.read().decode("utf-8")
            self.feed(siteData)
        except Exception as e:
            print(e)
        self.__init__() #resets the parser/scraper for serial parsing/scraping
        print("Done!")

    def checkSaveDir(self):
        #----windows support
        if os.name=="nt":
            container="\ "
            path=os.path.normpath(__file__)
            path=path.split(container[0])
            path=container[0].join(path[:len(path)-1])
            path=path.split(container[0])
            path="/".join(path)
            #no more windows support! :P
            #for some reason, os.path.normpath returns the path with backslashes
            #on Windows, so they had to be substituted with forward slashes.
        else:
            path=os.path.normpath(__file__)
            path=path.split("/")
            path="/".join(path[:len(path)-1])
        foldername=self.url[7:]
        foldername=foldername.split("/")[0]
        extension=time.strftime("iFunny")+"-"+time.strftime("%d-%m-%Y") + "-" + time.strftime("%Hh%Mm%Ss")
        self.SAVE_DIR=path+"/"+foldername+"-"+extension
        if not os.path.exists(self.SAVE_DIR):
            os.makedirs(self.SAVE_DIR)
        if not os.path.exists(self.SAVE_DIR+"/pics"):
            os.makedirs(self.SAVE_DIR+"/pics")
        print(self.SAVE_DIR)
I'm not sure what to do, so any help would be great!
Answer 0 (score: 0)
Once you remove the timestamp part, it looks like self.counter is the value that determines the file name. It is set to zero when the LiteScraper object is created. If you reuse the same LiteScraper object when moving on to the next page, it should keep counting up instead of starting over from zero.

So instead of creating a new LiteScraper for each page, call start() again on the same object. Like this:
scraper=LiteScraper()
scraper.start("http://ifunny.co/page1")   #the page URLs here are just placeholders
scraper.start("http://ifunny.co/page2")   #same object instead of a fresh one per page
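One catch with the code as posted: start() ends with self.__init__(), and that reset also sets self.counter back to zero, so the numbering would still restart on the next page. Here is a minimal sketch of one way around it, keeping everything else in start() the same (kept_count is just an illustrative name):

def start(self,url):
    self.url=url
    self.checkSaveDir()
    try: #wrapped in exception - if there is a problem with url/server
        req=urllib.request.Request(url)
        req.add_header(self.Headers[0],self.Headers[1])
        response=urllib.request.urlopen(req,timeout=10)
        siteData=response.read().decode("utf-8")
        self.feed(siteData)
    except Exception as e:
        print(e)
    kept_count=self.counter     #remember how far the numbering got on this page
    self.__init__()             #resets the parser/scraper for serial parsing/scraping
    self.counter=kept_count     #carry the count over so the next page continues from here
    print("Done!")

With that change (and a fixed save folder such as "images" instead of the timestamped one), page 2's downloads continue from 11, 12, ... and nothing gets overwritten.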