Say I have a list of tuples, top_n, containing the n most common bigrams found in a text corpus:
import nltk
from nltk import bigrams
from nltk import FreqDist
bi_grams = bigrams(text) # text is a list of strings (tokens)
fdistBigram = FreqDist(bi_grams)
n = 300
top_n = [list(t) for t in zip(*fdistBigram.most_common(n))][0]; top_n
>>> [('let', 'us'),
('us', 'know'),
('as', 'possible')
....
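(As an aside, the zip(*...) transpose in the snippet above is just one way to drop the counts from most_common; a minimal equivalent sketch, assuming the same fdistBigram, is:)

# keep only the bigram tuples from the (bigram, count) pairs returned by most_common
top_n = [bigram for bigram, count in fdistBigram.most_common(n)]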
Now I want to replace any run of words that forms a bigram in top_n with its concatenation. For example, suppose we have a new variable query, which is a list of strings:
query = ['please','let','us','know','as','soon','as','possible']
would become
['please','letus', 'usknow', 'as', 'soon', 'aspossible']
after the desired operation. More explicitly, I want to walk through query and check whether the i-th and (i+1)-th elements form a pair that is in top_n; if so, replace query[i] and query[i+1] with the single concatenated bigram (query[i], query[i+1]) -> query[i] + query[i+1].
Is there a way to do this with NLTK, or, if I need to loop over each word in query, what is the best way to do that?
Answer 0 (score: 1)
Given your code and query, this will do it, greedily replacing words with their concatenated bigram whenever a pair appears in top_n:
lookup = set(top_n)  # {('let', 'us'), ('as', 'soon')}
query = ['please', 'let', 'us', 'know', 'as', 'soon', 'as', 'possible']
answer = []
q_iter = iter(range(len(query)))
for idx in q_iter:
    answer.append(query[idx])
    if idx < (len(query) - 1) and (query[idx], query[idx+1]) in lookup:
        answer[-1] += query[idx+1]
        next(q_iter)
        # if you don't want to skip over consumed second bi-gram elements
        # and want to keep len(query) == len(answer), don't advance the
        # iterator here, which also means you don't have to create the
        # iterator in the outer scope
print(answer)
Result (for example):
>>> ['please', 'letus', 'know', 'assoon', 'as', 'possible']
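For completeness, the variant mentioned in the comment (not advancing the iterator, so every input token keeps its own output slot and len(answer) == len(query)) would look like this minimal sketch, assuming the same lookup and query:

answer = []
for idx, word in enumerate(query):
    answer.append(word)
    # concatenate on a match, but do not skip the following word
    if idx < len(query) - 1 and (query[idx], query[idx+1]) in lookup:
        answer[-1] += query[idx+1]
print(answer)
# ['please', 'letus', 'us', 'know', 'assoon', 'soon', 'as', 'possible']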
Answer 1 (score: 0)
An alternative answer:
from gensim.models import Phrases
from gensim.models.phrases import Phraser

# Phrases expects an iterable of tokenized sentences (a list of token lists),
# so text here should be a corpus of tokenized sentences rather than a flat token list
phrases = Phrases(text, min_count=1500, threshold=0.01)
bigram = Phraser(phrases)
bigram[query]
>>> ['please', 'let_us', 'know', 'as', 'soon', 'as', 'possible']
This is not exactly the desired output from the question, but it can serve as an alternative. The values of min_count and threshold will greatly affect the output. Credit to this question here.
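To get a feel for how sensitive the result is to those two parameters, here is a minimal sketch on a made-up toy corpus (the sentences and the min_count=1, threshold=0.1 values are illustrative assumptions, not from the original answer):

from gensim.models import Phrases
from gensim.models.phrases import Phraser

# toy corpus: an iterable of tokenized sentences; with so little data,
# min_count and threshold have to be very low for any phrase to be detected
sentences = [
    ['please', 'let', 'us', 'know', 'as', 'soon', 'as', 'possible'],
    ['let', 'us', 'know', 'if', 'that', 'works'],
    ['we', 'will', 'reply', 'as', 'soon', 'as', 'we', 'can'],
]
phrases = Phrases(sentences, min_count=1, threshold=0.1)
bigram = Phraser(phrases)
print(bigram[['please', 'let', 'us', 'know']])
# with these settings, frequent pairs such as ('let', 'us') are likely to be
# joined, e.g. ['please', 'let_us', 'know']; raising threshold or min_count
# makes the model more conservative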