I'm working on a project whose first step is to search a large text corpus for keywords and phrases. I want to identify the paragraphs/sentences in which these keywords appear. Later, I want to make those paragraphs accessible through my local Postgres DB so users can query the information. The data is stored on Azure Blob Storage, and I'm using a MinIO server to connect it to my Django application.
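For context, here is a minimal sketch of reading one blob as text through the MinIO Python client; the endpoint, credentials, and bucket/object names are placeholders, and this only stands in for whatever S3Client().get_buffer() does internally in my app:

from minio import Minio

# Placeholder endpoint and credentials -- adjust to your MinIO setup.
client = Minio(
    "localhost:9000",
    access_key="minio-access-key",
    secret_key="minio-secret-key",
    secure=False,
)

# Fetch one object and decode it to text.
response = client.get_object("filings", "docs/example.txt")
try:
    text = response.read().decode("utf-8")
finally:
    response.close()
    response.release_conn()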
At first, my shell process was killed while running the script below. After several rounds of trial-and-error refactoring and debugging, I checked the log files and found it was a memory error, though I'm honestly quite new to this topic. After rearranging the code, I now get a MemoryError directly inside the shell, at the step where the texts are streamed into spaCy's language.pipe().
Functions:
# Function that samples filing_documents
def random_Filings(amount):
    ...
    return random_list

# Function that connects to storage and saves cleaned text
def get_clean_text(random_list):
    try:
        text_contents = S3Client().get_buffer(remote_path)
    ...
    return clean_list
# Matcher callback that runs on each match of the PhraseMatcher
def on_match(matcher, doc, i, matches):
    match_id, start, end = matches[i]
    rule_id = nlp.vocab.strings[match_id]
    token = doc[start]
    sent_of_token = token.sent
    match_list.append([str(rule_id), sent_of_token.start, sent_of_token,
                       doc.user_data])

def match_text_stream(clean_texts):
    some_pattern = [nlp(text) for text in ('foo', 'bar')]
    some_other_pattern = [nlp(text) for text in ('foo bar', 'barara')]

    matcher = PhraseMatcher(nlp.vocab)
    matcher.add('SOME', on_match, *some_pattern)
    matcher.add('OTHER', on_match, *some_other_pattern)

    doc_list = []
    for doc in nlp.pipe(clean_texts, batch_size=30):
        doc_list.append(doc)

    for doc in matcher.pipe(doc_list, batch_size=30):
        pass
Steps that trigger the problem:

match_list = []
nlp = en_core_web_sm.load()

sample_list = random_Filings(30)
clean_texts = get_clean_text(sample_list)
match_text_stream(clean_texts)
print(match_list)
MemoryError
<string> in match_text_stream(clean_texts)

.../spacy/language.py in pipe(self, texts, as_tuples, n_threads, batch_size, disable, cleanup, component_cfg)
    709         original_strings_data = None
    710         nr_seen = 0
    711         for doc in docs:
    712             yield doc
    713             if cleanup:
    ...

MemoryError

.../thinc/neural/_classes/convolution.py in begin_update(self, X__bi, drop)
     31
     32     def begin_update(self, X__bi, drop=0.0):
     33         X__bo = self.ops.seq2col(X__bi, self.nW)
     34         finish_update = self._get_finish_update()
     35         return X__bo, finish_update

ops.pyx in thinc.neural.ops.NumpyOps.seq2col()
ops.pyx in thinc.neural.ops.NumpyOps.allocate()

MemoryError:
Answer (score 0):
The solution is to cut the documents into smaller pieces before processing them. Paragraph-sized units work well, or possibly sections.
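Here is a minimal sketch of that idea against the code above, assuming the filings are plain text with paragraphs separated by blank lines; the helper name chunk_paragraphs is illustrative, not from the original code:

# Generator that yields paragraph-sized chunks instead of whole filings.
def chunk_paragraphs(texts):
    for text in texts:
        for para in text.split("\n\n"):
            para = para.strip()
            if para:
                yield para

# Stream the small chunks through the pipeline lazily and run the matcher
# per doc, rather than accumulating every Doc in a list first.
for doc in nlp.pipe(chunk_paragraphs(clean_texts), batch_size=30):
    matcher(doc)  # triggers the on_match callback, which fills match_list

Feeding nlp.pipe() a generator of short units keeps only one batch of Docs in memory at a time, which avoids the allocation failure inside thinc.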