Python scraper file naming

Asked: 2016-11-07 00:22:57

Tags: python, web-scraping

I have a script based on LiteScraper from GitHub that scrapes memes and GIFs from http://ifunny.co

The script saves all images into a timestamped folder, e.g. "ifunny-(timestamp)"

I'm scraping from http://ifunny.co/feeds/shuffle, so each time I get a random page containing 10 images.

The problem is that I need to modify the script so it saves all the images into one given folder name.

I tried removing the code that adds the timestamp, but then each page yields up to 10 images, and when the next page is scraped, the 10 new images overwrite the old ones.

The script seems to name the images "1, 2, 3, 4" etc.

Here is the code:

import os
import time
from html.parser import HTMLParser
import urllib.request

#todo: char support for Windows
#deal with triple backslash filter
#recursive parser option


class LiteScraper(HTMLParser):
    def __init__(self):
        HTMLParser.__init__(self)
        self.lastStartTag="No-Tag"
        self.lastAttributes=[]
        self.lastImgUrl=""
        self.Data=[]
        self.acceptedTags=["div","p","h","h1","h2","h3","h4","h5","h6","ul","li","a","img"]
        self.counter=0
        self.url=""


        self.SAVE_DIR="" #/Users/stjepanbrkic/Desktop/temp
        self.Headers=["User-Agent","Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2228.0 Safari/537.36"]

    def handle_starttag(self,tag,attrs):
        #print("Encountered a START tag:",tag)
        self.lastStartTag=tag
        self.lastAttributes=attrs #unnecessary, might come in handy

        if self.lastStartTag=="img":
            attrs=self.lastAttributes

            for attribute in attrs:
                if attribute[0]=="src":
                    self.lastImgUrl=attribute[1]
                    print(attribute[1])

                    #Allow GIF from iFunny to download
                    for attribute in attrs:
                        if attribute[0]=="data-gif":
                            self.lastImgUrl=attribute[1]
                            print(attribute[1])
                            #End Gif Code

            self.handle_picture(self.lastImgUrl)

    def handle_endtag(self,tag):
        #print("Encountered a END tag:",tag)
        pass

    def handle_data(self,data):
        data=data.replace("\n"," ")
        data=data.replace("\t"," ")
        data=data.replace("\r"," ")
        if self.lastStartTag in self.acceptedTags:
            if not data.isspace():
                print("Encountered some data:",data)
                self.Data.append(data)

        else:
            print("Encountered filtered data.") #Debug

    def handle_picture(self,url):
        print("Bumped into a picture. Downloading it now.")
        self.counter+=1
        if url[:2]=="//":
            url="http:"+url

        extension=url.split(".")
        extension="."+extension[-1]

        try:
            req=urllib.request.Request(url)
            req.add_header(self.Headers[0],self.Headers[1])
            response=urllib.request.urlopen(req,timeout=10)
            picdata=response.read()
            file=open(self.SAVE_DIR+"/pics/"+str(self.counter)+extension,"wb")
            file.write(picdata)
            file.close()
        except Exception as e:
            print("Something went wrong, sorry:",e)


    def start(self,url):
        self.url=url
        self.checkSaveDir()

        try: #wrapped in exception - if there is a problem with url/server
            req=urllib.request.Request(url)
            req.add_header(self.Headers[0],self.Headers[1])
            response=urllib.request.urlopen(req,timeout=10)
            siteData=response.read().decode("utf-8")
            self.feed(siteData)
        except Exception as e:
            print(e)

        self.__init__()  #resets the parser/scraper for serial parsing/scraping
        print("Done!")

    def checkSaveDir(self):
        #----windows support
        if os.name=="nt":
            container="\ "
            path=os.path.normpath(__file__)
            path=path.split(container[0])
            path=container[0].join(path[:len(path)-1])
            path=path.split(container[0])
            path="/".join(path)
        #no more windows support! :P
        #for some reason, os.path.normpath returns the path with backslashes
        #on Windows, so they had to be substituted with forward slashes.

        else:
            path=os.path.normpath(__file__)
            path=path.split("/")
            path="/".join(path[:len(path)-1])

        foldername=self.url[7:]
        foldername=foldername.split("/")[0]

        extension=time.strftime("iFunny")+"-"+time.strftime("%d-%m-%Y") + "-" + time.strftime("%Hh%Mm%Ss")

        self.SAVE_DIR=path+"/"+foldername+"-"+extension


        if not os.path.exists(self.SAVE_DIR):
            os.makedirs(self.SAVE_DIR)

        if not os.path.exists(self.SAVE_DIR+"/pics"):
            os.makedirs(self.SAVE_DIR+"/pics")

        print(self.SAVE_DIR)

This is the script I'm running:

pastebin dot com / PNwJ9wEJ

Sorry about the pastebin link; the site wouldn't let me post my code...

I'm new to Python, so I don't know how to solve this. Is it even possible?

Desired behavior — page 1 image names: (1, 2, 3, 4, 5, 6, 7, 8, 9, 10); page 2 image names: (11, 12, 13, ...)

1 Answer:

Answer 0 (score: 0)

Every time the parser is instantiated (once per new page), counter is reset to zero. That is why the images keep getting overwritten.
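Concretely, start() ends with self.__init__(), and that call zeroes counter. A minimal sketch of one fix (using a stripped-down stand-in class, not the full LiteScraper) shows how getattr can let the counter survive that reset:

```python
class CounterDemo:
    def __init__(self):
        # keep the old count if this instance is being re-initialised;
        # on the very first __init__ the attribute doesn't exist yet, so 0
        self.counter = getattr(self, "counter", 0)

    def reset(self):
        self.__init__()   # the counter now survives the reset

demo = CounterDemo()
demo.counter += 10        # pretend one page of 10 images was downloaded
demo.reset()              # what start() does between pages
print(demo.counter)       # still 10, so page 2 starts naming at 11
```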

An alternative approach is to find out which filenames are already in use:

i = 0
while os.path.isfile('your_filename_logic_'+str(i)):
    i += 1
# Now i is the first number which hasn't been used.

But if you end up with thousands of images, this may not be as fast as you'd like.
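One way to speed that up is to scan the directory once instead of probing filenames one at a time. This is only a sketch: it assumes the question's "number.extension" naming, and next_free_index is a made-up helper name.

```python
import os
import re

def next_free_index(pics_dir):
    """Scan pics_dir once and return the first unused image number.

    Assumes files are named like the question's scraper: '<number>.<ext>'.
    Returns 1 when the directory is missing or empty.
    """
    highest = 0
    if os.path.isdir(pics_dir):
        for name in os.listdir(pics_dir):
            m = re.match(r"(\d+)\.", name)  # leading digits before the dot
            if m:
                highest = max(highest, int(m.group(1)))
    return highest + 1
```

A single os.listdir is one directory read, versus one os.path.isfile stat call per already-used number in the while loop above.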

You could store the counter in a file after LiteScraper finishes, and read it back in when the next run starts:

def startMyNewCounter(self):
    if os.path.isfile('your_filename_logic_' + 'count'):
        with open('your_filename_logic_'+'count', 'r') as f:
            self.counter = int(next(f))
    else:
        self.counter = 0

def saveMyCounter(self):
    with open('your_filename_logic_'+'count', 'w') as f:
        f.write(str(self.counter) + '\n')
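Wiring those two helpers into a run might look like the sketch below, written as standalone functions rather than methods; the filename 'your_filename_logic_count' is the answer's placeholder, kept as-is.

```python
import os

COUNTER_FILE = "your_filename_logic_count"  # the answer's placeholder name

def load_counter():
    """Read the saved count, or start from 0 on the first ever run."""
    if os.path.isfile(COUNTER_FILE):
        with open(COUNTER_FILE) as f:
            return int(next(f))
    return 0

def save_counter(value):
    """Persist the count so the next run can resume from it."""
    with open(COUNTER_FILE, "w") as f:
        f.write(str(value) + "\n")

counter = load_counter()   # before scraping a page
counter += 10              # pretend one page of 10 images was saved
save_counter(counter)      # after the page finishes
```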

Or the simplest answer: if you don't care about the images after the program closes, you can make the counter a global variable instead of a member of LiteScraper. That way every new LiteScraper picks up where the last one stopped.
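A sketch of that global-counter idea, with a minimal stand-in class (PageScraper is hypothetical, standing in for LiteScraper): the count lives at module level, so creating a fresh instance per page no longer resets it.

```python
counter = 0  # module-level, shared by every scraper instance

class PageScraper:            # stand-in for LiteScraper
    def handle_picture(self):
        global counter
        counter += 1
        return counter        # would become the image's file name

last_name = 0
for _page in range(2):        # two pages of 10 images each
    scraper = PageScraper()   # new instance per page, like the question's loop
    for _img in range(10):
        last_name = scraper.handle_picture()

print(last_name)              # 20: page 2 continued from 11, not from 1
```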