TypeError: 'MapResult' object is not iterable with pathos.multiprocessing

Asked: 2019-01-09 18:36:04

Tags: python-3.x multiprocessing python-3.5 python-multiprocessing pathos

I am running a spell-correction function over a dataset I have. I use from pathos.multiprocessing import ProcessingPool as Pool to do the work. Once processing finishes, I want to actually access the results. Here is my code:

import codecs
import nltk

from textblob import TextBlob
from nltk.tokenize import sent_tokenize
from pathos.multiprocessing import ProcessingPool as Pool

class SpellCorrect():

    def load_data(self, path_1):
        with codecs.open(path_1, "r", "utf-8") as file:
            data = file.read()
        return sent_tokenize(data)

    def correct_spelling(self, data):
        data = TextBlob(data)
        return str(data.correct())

    def run_clean(self, path_1):
        pool = Pool()
        data = self.load_data(path_1)
        return pool.amap(self.correct_spelling, data)

if __name__ == "__main__":
    path_1 = "../Data/training_data/training_corpus.txt"
    SpellCorrect = SpellCorrect()
    result = SpellCorrect.run_clean(path_1)
    print(result)
    result = " ".join(temp for temp in result)
    with codecs.open("../Data/training_data/training_data_spell_corrected.txt", "a", "utf-8") as file:
        file.write(result)

If you look at the main block, when I run print(result) I get an object of type <multiprocess.pool.MapResult object at 0x1a25519f28>.

I tried to access the results with result = " ".join(temp for temp in result), but that raises TypeError: 'MapResult' object is not iterable. I also tried casting it to a list with list(result), but I get the same error. What can I do to fix this?

1 Answer:

Answer 0 (score: 0)

The multiprocess.pool.MapResult object is not iterable because it inherits from AsyncResult, which has only the following methods (a short usage sketch follows the list):

  • wait([timeout]): Wait until the result is available or until timeout seconds pass. This method always returns None.

  • ready(): Return whether the call has completed.

  • successful(): Return whether the call completed without raising an exception. Raises AssertionError if the result is not ready.

  • get([timeout]): Return the result when it arrives. If timeout is not None and the result does not arrive within timeout seconds, then TimeoutError is raised. If the remote call raised an exception, that exception will be re-raised by get().
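A minimal, self-contained sketch of that pattern with pathos (the square helper and the input list are purely illustrative): calling get() on the handle that amap returns blocks until the workers finish and then yields an ordinary list.

from pathos.multiprocessing import ProcessingPool as Pool

def square(x):
    return x * x

if __name__ == "__main__":
    pool = Pool()
    handle = pool.amap(square, [1, 2, 3, 4])  # asynchronous map: returns an async MapResult-style handle
    print(handle.ready())                     # may still be False while the workers are busy
    print(handle.get())                       # blocks until done, then prints [1, 4, 9, 16]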

You can see an example of how the get() function is used here: https://docs.python.org/2/library/multiprocessing.html#using-a-pool-of-workers

from multiprocessing import Pool, TimeoutError
import time
import os

def f(x):
    return x*x

if __name__ == '__main__':
    pool = Pool(processes=4)              # start 4 worker processes

    # prints "[0, 1, 4,..., 81]"
    print(pool.map(f, range(10)))

    # print same numbers in arbitrary order
    for i in pool.imap_unordered(f, range(10)):
        print(i)

    # evaluate "f(20)" asynchronously
    res = pool.apply_async(f, (20,))      # runs in *only* one process
    print(res.get(timeout=1))             # prints "400"

    # evaluate "os.getpid()" asynchronously
    res = pool.apply_async(os.getpid, ()) # runs in *only* one process
    print(res.get(timeout=1))             # prints the PID of that process

    # launching multiple evaluations asynchronously *may* use more processes
    multiple_results = [pool.apply_async(os.getpid, ()) for i in range(4)]
    print([res.get(timeout=1) for res in multiple_results])

    # make a single worker sleep for 10 secs
    res = pool.apply_async(time.sleep, (10,))
    try:
        print(res.get(timeout=1))
    except TimeoutError:
        print("We lacked patience and got a multiprocessing.TimeoutError")
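Applied back to the question, here is a sketch of just the main block (it assumes the SpellCorrect class exactly as posted above; the instance is renamed to spell_correct here only to avoid shadowing the class name):

if __name__ == "__main__":
    path_1 = "../Data/training_data/training_corpus.txt"
    spell_correct = SpellCorrect()
    async_result = spell_correct.run_clean(path_1)   # handle returned by pool.amap
    corrected = async_result.get()                   # blocks until every sentence is corrected
    result = " ".join(corrected)                     # corrected is now a plain list of strings
    with codecs.open("../Data/training_data/training_data_spell_corrected.txt", "a", "utf-8") as file:
        file.write(result)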