How do I fix "cannot serialize '_io.BufferedReader' object" when multiprocessing?

Date: 2019-05-31 12:52:14

Tags: python web-scraping multiprocessing

While trying to parse a large number of pages from a website, I get the following error: "Reason: 'TypeError("cannot serialize '_io.BufferedReader' object",)'". How can I fix this?

The full error message is:

File "main.py", line 29, in <module>
    records = p.map(defs.scrape,state_urls)
  File "C:\Users\Utilisateur\Anaconda3\lib\multiprocessing\pool.py", line 266, in map
    return self._map_async(func, iterable, mapstar, chunksize).get()
  File "C:\Users\Utilisateur\Anaconda3\lib\multiprocessing\pool.py", line 644, in get
    raise self._value
multiprocessing.pool.MaybeEncodingError: Error sending result: '<multiprocessing.pool.ExceptionWithTraceback object at 0x0000018DD1C3D828>'. Reason: 'TypeError("cannot serialize '_io.BufferedReader' object",)'

I've looked through answers to similar questions here, in particular this one (multiprocessing.pool.MaybeEncodingError: Error sending result: Reason: 'TypeError("cannot serialize '_io.BufferedReader' object",)'), but I don't think I'm running into the same problem, because I'm not handling files directly in the scrape function.

I tried modifying the scrape function so that it returns a string instead of a list (not sure why I thought that would help), but it didn't work.
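From reading around, I suspect the failure is not in the data scrape returns but in an exception raised inside a worker: the pool pickles the exception (that ExceptionWithTraceback object) to send it back to the parent process, and urllib's HTTPError keeps the open response, an _io.BufferedReader, which can't be pickled. Here is a minimal sketch that seems to reproduce the same pickling failure (CarriesStream is just a made-up stand-in for such an exception):

import pickle

# Made-up stand-in for an exception that keeps an open binary stream alive,
# the way urllib.error.HTTPError keeps the HTTP response (_io.BufferedReader).
class CarriesStream(Exception):
    def __init__(self, stream):
        super().__init__("request failed")
        self.stream = stream

err = CarriesStream(open(__file__, "rb"))
try:
    pickle.dumps(err)  # roughly what the pool does with a worker's exception
except TypeError as exc:
    print(exc)  # cannot serialize '_io.BufferedReader' object (wording varies by Python version)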

From the main.py file:

from urllib.request import urlopen as uReq
from bs4 import BeautifulSoup as soup 
from multiprocessing import Pool
import codecs
import defs
if __name__ == '__main__':

    filename = "some_courts_test.csv"
    # not the actual values
    courts = ["blabla", "blablabla", "blablabla","blabla"]

    client = defs.init_client()
    i = 1

    # scrapes the data from the website and puts it into a csv file
    for court in courts:
        records = []
        records_string =""
        print("creating a file for the court of : "+court)
        f = defs.init_court_file(court)
        print("generating urls for the court of "+court)        
        state_urls = defs.generate_state_urls(court)
        for url in state_urls:
            print(url)
        print("scraping creditors from : "+court)
        p = Pool(10)

        records = p.map(defs.scrape,state_urls)
        records_string = ''.join(records[1])
        p.terminate()
        p.join()
        for r in records_string:
            f.write(r)
        records = []

        f.close()
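In case it helps, the multiprocessing part of the loop boils down to something like this simplified sketch (scrape_court is not a function I actually have; it assumes the with-statement form of Pool and joins everything the workers return):

from multiprocessing import Pool

import defs  # same module as above

def scrape_court(court):
    """Simplified sketch of one iteration of the per-court loop (no CSV handling)."""
    state_urls = defs.generate_state_urls(court)
    with Pool(10) as p:  # leaving the with-block terminates the pool
        records = p.map(defs.scrape, state_urls)
    # each worker returns a list of strings; join them all into one blob
    return ''.join(''.join(r) for r in records)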

From the defs file:

def scrape(url):
    data = []
    row_string = ' '
    final_data = []
    final_string = ' '
    uClient = uReq(url)
    page_html = uClient.read()
    uClient.close()

    page_soup = soup(page_html, "html.parser")
    table = page_soup.find("table", {"class":"table table-striped"})
    table_body = table.find('tbody')
    rows = table_body.find_all('tr')
    for row in rows:
        cols = row.find_all('td')
        cols = [ele.text.replace(',',' ') for ele in cols]  # cleans it up

        for ele in cols:
            if ele:
                data.append(ele)
            data.append(',')
        data.append('\n')
    return data
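If the problem really is an unpicklable exception escaping from scrape, one workaround I'm considering (just a sketch under that assumption, not tested against the real site; scrape_safe is a new name) is to catch the network errors inside the worker and return plain, picklable strings instead:

from urllib.request import urlopen as uReq
from urllib.error import HTTPError, URLError
from bs4 import BeautifulSoup as soup

def scrape_safe(url):
    """Like scrape(), but never lets an exception object cross the process boundary."""
    try:
        uClient = uReq(url)
        page_html = uClient.read()
        uClient.close()
    except (HTTPError, URLError) as exc:
        # HTTPError holds the open response (_io.BufferedReader); convert it
        # to plain strings so the pool can pickle the result.
        return ['error', url, str(exc), '\n']

    data = []
    page_soup = soup(page_html, "html.parser")
    table = page_soup.find("table", {"class":"table table-striped"})
    if table is None:  # page layout not what I expect
        return ['error', url, 'no results table', '\n']
    for row in table.find('tbody').find_all('tr'):
        cols = [ele.text.replace(',', ' ') for ele in row.find_all('td')]
        for ele in cols:
            if ele:
                data.append(ele)
            data.append(',')
        data.append('\n')
    return data

I would then call p.map(defs.scrape_safe, state_urls) and filter out the 'error' rows before writing the CSV.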

0 Answers:

No answers yet.