Nested generators are not triggered correctly

Date: 2017-03-25 15:55:37

Tags: python generator spacy

I am new to Python generators and want to nest them, i.e. generator A depends on the output of generator B (B yields file paths, A parses the documents), but only the first file is ever read.

Here is a minimal sample (using, e.g., the TREC8all data):

import itertools
import spacy
from bs4 import BeautifulSoup
import os
def iter_all_files(p):
    for root, dirs, files in os.walk(p):
        for file in files:
            if not file.startswith('.'):
                print('using: ' + str(os.path.join(root, file)))
                yield os.path.join(root, file)


def gen_items(path):
    path = next(path)
    text_file = open(path, 'r').read()
    soup = BeautifulSoup(text_file,'html.parser')
    for doc in soup.find_all("doc"):
        strdoc = doc.docno.string.strip()
        text_only = str(doc.find_all("text")[0])
        yield (strdoc, text_only)


nlp = spacy.load('en')  # assumed: the spaCy model load is not shown in the original snippet
file_counter = 0
g = iter_all_files("data/TREC8all/Adhoc")
gen1, gen2 = itertools.tee(gen_items(g))
ids = (id_ for (id_, text) in gen1)
texts = (text for (id_, text) in gen2)
docs = nlp.pipe(texts, batch_size=50, n_threads=4)

for id_, doc in zip(ids, docs):
    file_counter += 1
file_counter

This outputs only:

using: data/TREC8all/Adhoc/fbis/fb396002
Out[10]:
33

The following shows that there are definitely more files to parse:

g = iter_all_files("data/TREC8all/Adhoc")
file_counter = 0
item_counter = 0
for file in g:
    file_counter += 1
    # print(file)
    for item in gen_items(g):
        item_counter += 1

print(item_counter)
file_counter

which returns around 2000 files, like:

using: data/TREC8all/Adhoc/fbis/fb396002
using: data/TREC8all/Adhoc/fbis/fb396003
using: data/TREC8all/Adhoc/fbis/fb396004
using: data/TREC8all/Adhoc/fbis/fb396005
using: data/TREC8all/Adhoc/fbis/fb396006
using: data/TREC8all/Adhoc/fbis/fb396007
using: data/TREC8all/Adhoc/fbis/fb396008
using: data/TREC8all/Adhoc/fbis/fb396009
using: data/TREC8all/Adhoc/fbis/fb396010
using: data/TREC8all/Adhoc/fbis/fb396011
using: data/TREC8all/Adhoc/fbis/fb396012
using: data/TREC8all/Adhoc/fbis/fb396013

So apparently my

g = iter_all_files("data/TREC8all/Adhoc")
gen1, gen2 = itertools.tee(gen_items(g))
ids = (id_ for (id_, text) in gen1)
texts = (text for (id_, text) in gen2)
docs = nlp.pipe(texts, batch_size=50, n_threads=4)

for id_, doc in zip(ids, docs):

is not using the nested generators in the right way.

Edit

Nesting it inside an outer for loop seems to work, but it is not nice. Is there a better way to formulate it?

g = iter_all_files("data/TREC8all/Adhoc")
for file in g:
    file_counter += 1
    # print(file)
    #for item in gen_items(g):
    gen1, gen2 = itertools.tee(gen_items(g))

2 Answers:

Answer 0 (score: 1):

"but only the first file is read"

Well, you only told Python to read one file:

def gen_items(path):
    path = next(path)
    ...
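
As a quick toy illustration (not part of the original answer): next() advances a generator by exactly one item, so the body of gen_items only ever sees the first path.

    def paths():
        yield "a.txt"
        yield "b.txt"

    p = paths()
    print(next(p))  # prints 'a.txt' -- the generator has been advanced only once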

If you want to look at all of the files, you need a loop:

def gen_items(paths):
    for path in paths:
        ...
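
Filling in the body from the question's own gen_items, the fixed generator might look like this (a sketch that simply combines the loop above with the parsing code copied from the question):

    def gen_items(paths):
        # iterate over every path the outer generator yields
        for path in paths:
            text_file = open(path, 'r').read()
            soup = BeautifulSoup(text_file, 'html.parser')
            for doc in soup.find_all("doc"):
                strdoc = doc.docno.string.strip()
                text_only = str(doc.find_all("text")[0])
                yield (strdoc, text_only)

With that change, the original itertools.tee / nlp.pipe pipeline should consume every file yielded by iter_all_files instead of stopping after the first one.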

Answer 1 (score: 0):

I looked back over the code; I don't know what nlp.pipe is supposed to do, so try this:

#docs = nlp.pipe(texts, batch_size=50, n_threads=4)
for id_, doc in zip(ids, texts):
    file_counter += 1
file_counter

Look at file_counter and you will see where the error is.