How do I debug a ThreadPool correctly?

Asked: 2016-02-01 21:51:21

Tags: python multithreading multiprocessing threadpool

I am trying to fetch some data from a web page. To speed the process up (the site allows me to make 1000 requests per minute), I use a ThreadPool.

Since there is a lot of data, the process is very vulnerable to connection failures and the like, so I try to log everything that could help me detect every mistake I make in the code.

The problem is that the program sometimes stops without raising any exception (it behaves as if it were still running, but nothing happens; I use PyCharm). I catch and log exceptions everywhere I can, but no exception ever shows up in any log.

I assumed that if a timeout were reached, an exception would be raised and logged.
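One likely reason no exception ever shows up (an assumption about the failure mode, not something stated in the question): `Pool.apply_async` stores any exception raised inside the worker on the returned `AsyncResult` object, and only re-raises it when `.get()` is called. If the results are never fetched, workers can fail completely silently. A minimal demonstration:

```python
from multiprocessing.pool import ThreadPool

def worker(n):
    # deliberately fail for one input to show where the exception surfaces
    if n == 3:
        raise ValueError('boom')
    return n * n

pool = ThreadPool(4)
results = [pool.apply_async(worker, (n,)) for n in range(5)]
pool.close()
pool.join()  # join() returns normally even though one worker raised

collected = []
for r in results:
    try:
        collected.append(r.get())  # the stored ValueError re-raises only here
    except ValueError as e:
        collected.append('failed: {}'.format(e))

print(collected)  # [0, 1, 4, 'failed: boom', 16]
```

If the `for` loop over the results is omitted, the `ValueError` is never seen anywhere, which matches the silent behaviour described above.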

I found where the problem occurs. Here is the code:

As the pool I use: from multiprocessing.pool import ThreadPool as Pool, and as the lock: from threading import Lock.

The download_category function is called in a loop.

    def download_category(url):
        try:  # this 'try' was missing, leaving the 'except' below orphaned (a SyntaxError)
            # some code
            #
            # ...

            log('Create pool...')
            _pool = Pool(_workers_number)

            with open('database/temp_produkty.txt') as f:
                log('Spracovavanie produktov... vytvaranie vlakien...') # I see this in log
                for url_product in f:
                    x = _pool.apply_async(process_product, args=(url_product.strip('\n'), url))
                _pool.close()
                _pool.join()

                log('Presuvanie produktov z temp export do export.csv...') # I can't see this in log
                temp_export_to_export_csv()
                set_spracovanie_kategorie(url)
        except Exception:
            logging.exception('Got exception on download_one_category: {}'.format(url))
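A sketch of how the submission loop above could be restructured so that worker failures become visible (assuming Python 3, where `apply_async` accepts an `error_callback`; `process_item` is a hypothetical stand-in for the question's `process_product`, not a tested drop-in):

```python
from multiprocessing.pool import ThreadPool
import logging

logging.basicConfig(level=logging.ERROR)

def process_item(item):
    # hypothetical stand-in for process_product from the question
    if item == 'bad-url':
        raise RuntimeError('connection failed for ' + item)
    return item.upper()

pool = ThreadPool(4)
results = []
for item in ['a-url', 'bad-url', 'c-url']:
    results.append(pool.apply_async(
        process_item, (item,),
        # Python 3 only: invoked with the exception instead of it being swallowed
        error_callback=lambda e: logging.error('worker failed: %r', e),
    ))
pool.close()
pool.join()

# fetching each result re-raises any stored exception, so none get lost
ok = []
for r in results:
    try:
        ok.append(r.get())
    except Exception:
        logging.exception('task failed')

print(ok)  # ['A-URL', 'C-URL']
```

The key change is keeping every `AsyncResult` instead of discarding the single `x`, so the stored exceptions have somewhere to surface.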

And the process_product function:

def process_product(url, cat):
    try:
        data = get_product_data(url)
    except Exception:  # a bare 'except:' would also swallow KeyboardInterrupt/SystemExit
        # note: the '{}' placeholders below are never filled in (.format() is missing)
        log('{}: {} exception while getting product data... #') # I don't see this in log
        return
    try:
        print_to_temp_export(data, cat) # I don't see this in log
    except Exception:
        log('{}: {} exception while printing to csv... #') # I don't see this in log
        raise

The log function:

def log(text):
    now = datetime.now().strftime('%d.%m.%Y %H:%M:%S')
    with _lock:  # equivalent to acquire()/release(), but releases even on exception
        mLib.printToFile('logging/log.log', '{} -> {}'.format(now, text))

I also use the logging module. In that log I can see that requests were probably sent eight times (the number of workers), but no replies were ever received.

EDIT1:

def get_product_data(url):
    data = defaultdict(lambda: '-')

    root = load_root(url)
    try:
        nazov = root.xpath('//h1[@itemprop="name"]/text()')[0]
    except:
        nazov = root.xpath('//h1/text()')[0]

    under_block = root.xpath('//h2[@id="lowest-cost"]')

    if len(under_block) < 1:
        under_block = root.xpath('//h2[contains(text(),"Naj")]')
        if len(under_block) < 1:
            return False

    data['nazov'] = nazov
    data['url'] = url

    blocks = under_block[0].xpath('./following-sibling::div[@class="shp"]/div[contains(@class,"shp")]')

    for i, block in enumerate(blocks, start=1):
        # 'eblock' in the original was a typo for 'block' (a NameError at runtime)
        data['dat{}_men'.format(i)] = block.xpath('.//a[@class="link"]/text()')[0]

    del root
    return data

LOAD ROOT:

class RedirectException(Exception):
    pass

def load_url(url):
    r = requests.get(url, allow_redirects=False)
    if r.status_code == 301:
        raise RedirectException
    if r.status_code == 404:
        if '-q-' in url:
            url = url.replace('-q-','-')
            mLib.printToFileWOEncoding('logging/neexistujuce.txt','Skusanie {} kategorie...'.format(url))
            return load_url(url) # THIS IS NOT LOOPING 
        else:
            mLib.printToFileWOEncoding('logging/neexistujuce.txt','{}'.format(url))
    html = r.text
    return html


def load_root(url):
    try:
        html = load_url(url)
    except Exception as e:
        logging.exception('load_root_exception')
        raise
    return etree.fromstring(html, etree.HTMLParser())
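Another plausible cause of the silent hang (an assumption, not confirmed by the question): requests.get is called without a timeout, and requests never times out by default, so a stalled connection blocks the worker thread forever without raising anything. Passing an explicit timeout turns a stall into a catchable requests.exceptions.Timeout:

```python
import requests

def load_url(url):
    # timeout=(connect, read) in seconds; without it a dead connection can
    # block this thread indefinitely, and no exception is ever raised
    r = requests.get(url, allow_redirects=False, timeout=(5, 30))
    return r.text
```

With a timeout in place, a hung request would finally show up in the exception logs instead of freezing a worker silently.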

0 Answers:

There are no answers yet.