Processing a list of URLs with never-ending threads, using Newspaper3k (Python 3 lib)

Time: 2018-10-10 10:26:12

Tags: python python-3.x python-multithreading python-newspaper

The script reads a list of URLs, passes the list into a queue, and then processes each URL with python-newspaper3k. I have many different URLs, and a lot of them are not very popular sites. The problem is that the processing never finishes: sometimes it gets near the end, but a few workers stay stuck on certain URLs and need to be stopped. The hang happens when python-newspaper tries to download and parse each page's HTML. The code is below.

Here I load the URLs into the queue and then download and parse each page's HTML with newspaper.

import time
from queue import Queue
from threading import Thread
from newspaper import Article

q = Queue()  # filled elsewhere with lines of the form "url\t\t\t\t\tdate"

def grab_data_from_queue():
    #while not q.empty(): # check that the queue isn't empty
    while True:
        if q.empty():
            break
        #print(q.qsize())
        try:
            urlinit = q.get(timeout=10) # get the next item from the queue
            if urlinit is None:
                print('urlinit is None')
                q.task_done()
                continue # needed: otherwise None falls through to .split() and crashes
            url = urlinit.split("\t")[0]
            url = url.strip('/')
            if ',' in url:
                print(', in url')
                q.task_done()
                continue # needed: otherwise the malformed URL is processed anyway
            datecsv = urlinit.split("\t\t\t\t\t")[1]
            url2 = url
            time_started = time.time()
            timelimit = 2
            #page = requests.get(url)
            #page.raise_for_status()

            #print("Trying: " + str(url))

            if len(url) > 30:

                if photo == 'wp':  # `photo` is set elsewhere in the script
                    article = Article(url, browser_user_agent='Mozilla/5.0 (X11; Linux x86_64; rv:10.0) Gecko/20100101 Firefox/10.0')
                else:
                    article = Article(url, browser_user_agent='Mozilla/5.0 (X11; Linux x86_64; rv:10.0) Gecko/20100101 Firefox/10.0', fetch_images=False)
                    imgUrl = ""

                #response = get(url, timeout=10)
                #article.set_html(response.content)

                article.download()  # this is the call that can hang on slow sites
                article.parse()
                print(str(q.qsize()) + " parse passed")
        except Exception as e:
            # the original snippet is truncated here; a minimal handler keeps the try block valid
            print(str(e))
            q.task_done()
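One way to keep a single slow site from stalling a worker is the approach hinted at in the commented-out lines above: fetch the HTML yourself with requests and an explicit timeout, then hand it to newspaper via set_html() so that download() never has to touch the network. Below is a minimal sketch of that idea; the helper name fetch_and_parse and the 10-second timeout are illustrative choices, not from the original script.

import requests
from newspaper import Article

def fetch_and_parse(url, timeout=10):
    # Illustrative helper, not part of the original script.
    # The network I/O happens in requests, so a hung site raises
    # requests.exceptions.Timeout instead of blocking forever.
    response = requests.get(url, timeout=timeout)
    response.raise_for_status()
    article = Article(url, fetch_images=False)
    article.set_html(response.text)  # skips article.download() entirely
    article.parse()
    return article

newspaper3k also exposes a request_timeout config option (e.g. Article(url, request_timeout=10)), but that only bounds the HTTP request; if a worker hangs inside parsing itself, a thread-level timeout cannot interrupt it.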

Then I start the threads:

for i in range(4): # aka number of threads
    try:
        t1 = Thread(target=grab_data_from_queue) # target is the above function
        t1.daemon = True # daemon threads are killed when the main thread exits
        t1.start() # start the thread
    except Exception as e:
        exc_type, exc_obj, exc_tb = sys.exc_info()
        print(str(exc_tb.tb_lineno) + ' => ' + str(e))


q.join()
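As a side note, q.empty() can race between workers: a queue that looks non-empty at the check may be drained by another thread before the get, which the timeout and try block then have to absorb. A common alternative, sketched here assuming the four workers above, is to enqueue one None sentinel per thread after all the URLs; the worker already checks for None, so only its continue would change to a break.

NUM_THREADS = 4  # illustrative constant matching range(4) above

for _ in range(NUM_THREADS):
    q.put(None)  # one sentinel per worker; each thread exits when it dequeues its None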

Is there a way to find out which URL is the problem and is taking so long to finish? And if a URL can't be fetched, is it possible to stop the daemon thread?
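To find out which URL a worker is stuck on, one option is to record each thread's current URL and start time in a shared dict and have the main thread report anything that has been in flight too long. This is only a sketch: the current_urls dict, the watchdog() helper, and the 30-second threshold are all made up for illustration.

import threading
import time

current_urls = {}  # thread name -> (url, start time)

def watchdog(limit=30):
    # Report any URL that has been in flight longer than `limit` seconds.
    # Could run in its own daemon thread alongside q.join().
    while True:
        time.sleep(5)
        now = time.time()
        for name, (url, started) in list(current_urls.items()):
            if now - started > limit:
                print('%s stuck for %ds on %s' % (name, now - started, url))

# Inside grab_data_from_queue, around the download/parse calls:
# current_urls[threading.current_thread().name] = (url, time.time())
# article.download()
# article.parse()
# current_urls.pop(threading.current_thread().name, None)

As for stopping the thread itself: Python offers no way to kill a thread from the outside, daemon or not; daemon threads only die when the whole process exits. The practical options are bounding the blocking call (as in the requests sketch above) or moving the download/parse into a subprocess that can be terminated.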

0 Answers:

There are no answers yet.