WorkerPool stops after an exception is raised

Asked: 2013-10-21 10:22:40

Tags: python multithreading urllib

I have this code:

import os
import sys
import urllib
import workerpool
from os import listdir
from os.path import isfile, join, splitext

# ensure_dir, random_id and log are helper functions defined elsewhere (not shown)

folder = "/Users/foreigner/PycharmProjects/Selenium/urls1"
files = [f for f in listdir(folder) if isfile(join(folder, f))]


class DownloadJob(workerpool.Job):
    def __init__(self, url, save_to):
        self.url = url
        self.to = save_to

    def run(self):
        urllib.urlretrieve(self.url, self.to)



for file in files:
    pool = workerpool.WorkerPool(5)
    name, ext = splitext(file)
    if ext != '.txt':
        continue
    else:
        try:
            urls = (url for url in open(join(folder, file)).readlines())
            for url in urls:
                dir_for_car = "/Users/foreigner/PycharmProjects/Selenium/urls1/media/{0}".format('_'.join(name.split()))
                ensure_dir(dir_for_car)
                if not os.path.exists(dir_for_car):
                    os.makedirs(dir_for_car)
                if not os.path.exists(dir_for_car):
                    print "error"
                    sys.exit(0)
                file_for_image = "{0}_{2}_{1}.jpg".format(name, url.strip()[-5:], random_id(10))
                job = DownloadJob(url.strip(), join(dir_for_car, file_for_image))
                pool.put(job)
                log(name, join(dir_for_car, file_for_image))

        except:
            print "something went wrong"
            import traceback
            import sys
            type_, value_, trace_ = sys.exc_info()
            print type_, value_
            print traceback.format_tb(trace_)
        finally:
            pool.shutdown()
            pool.wait()

The problem is that when urlretrieve raises an exception, it is not caught, and my code stops in the finally block. The code processes one file, hits the error, and stops. How can I continue with the remaining files and log the exception's error message?
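One likely explanation: workerpool runs each job's run() on a worker thread, so an exception raised there propagates in that thread and never reaches the try/except around pool.put() in the main thread. A common workaround is to catch and record errors inside run() itself, so the pool keeps going. Below is a minimal sketch of that pattern; it uses plain threading as a stand-in for the workerpool library, simulates the download failure instead of calling urllib.urlretrieve, and the failures list is an illustrative name, not part of any real API.

```python
import threading


class DownloadJob(object):
    """Stand-in for workerpool.Job; each job catches its own errors."""

    failures = []  # shared record of (url, error) pairs for later logging

    def __init__(self, url, save_to):
        self.url = url
        self.to = save_to

    def run(self):
        try:
            # The real code would call urllib.urlretrieve(self.url, self.to);
            # here we simulate a failure for one URL.
            if "bad" in self.url:
                raise IOError("simulated download error")
        except Exception as exc:
            # Caught inside the worker thread: record it and keep going,
            # instead of letting the exception kill the worker silently.
            DownloadJob.failures.append((self.url, str(exc)))


urls = ["http://ok/a", "http://bad/b", "http://ok/c"]
jobs = [DownloadJob(u, "/tmp/out.jpg") for u in urls]
threads = [threading.Thread(target=j.run) for j in jobs]
for t in threads:
    t.start()
for t in threads:
    t.join()

# All three jobs ran to completion; only the failing URL was recorded.
print(DownloadJob.failures)  # → [('http://bad/b', 'simulated download error')]
```

With the try/except moved inside run(), the outer loop over files is never interrupted, and the recorded failures can be written to a log after pool.wait() returns.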

0 Answers:

There are no answers yet