Tokenize a string over an ngram range

Posted: 2018-10-07 14:00:40

Tags: python string split nltk tokenize

Is there a way to tokenize a string over an ngram range, like the features you get from CountVectorizer? For example, with ngram_range=(1, 2):

strings = ['this is the first sentence','this is the second sentence']

[['this', 'this is', 'is', 'is the', 'the', 'the first', 'first', 'first sentence', 'sentence'], ['this', 'this is', 'is', 'is the', 'the', 'the second', 'second', 'second sentence', 'sentence']]
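
For reference, CountVectorizer itself exposes the tokenizer it uses: build_analyzer() returns the callable that applies the configured ngram_range. A minimal sketch (note that, as far as I know, the analyzer lists all unigrams first and then all bigrams, not the interleaved order shown above):

from sklearn.feature_extraction.text import CountVectorizer

strings = ['this is the first sentence', 'this is the second sentence']

vectorizer = CountVectorizer(ngram_range=(1, 2))
analyzer = vectorizer.build_analyzer()  # the same callable the vectorizer uses to extract features

print([analyzer(s) for s in strings])
# e.g. [['this', 'is', 'the', 'first', 'sentence', 'this is', 'is the', ...], ...]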

Update: I loop over n:

from nltk.util import ngrams

sentence = 'this is the first sentence'

nrange_array = []
for n in range(1, 3):
    nrange = ngrams(sentence.split(), n)
    nrange_array.append(nrange)

for nrange in nrange_array:
    for grams in nrange:
        print(grams)

Output:

('this',)
('is',)
('the',)
('first',)
('sentence',)
('this', 'is')
('is', 'the')
('the', 'first')
('first', 'sentence')
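
(As an aside, the manual loop over n can be collapsed with nltk.util.everygrams, which yields all n-grams from min_len up to max_len in one pass. A sketch; depending on the NLTK version the tuples may come out grouped by length rather than interleaved, so by itself this still does not give the order I want:)

from nltk.util import everygrams

sentence = 'this is the first sentence'

for grams in everygrams(sentence.split(), min_len=1, max_len=2):
    print(grams)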

What I want:

('this','this is','is','is the','the','the first','first','first sentence','sentence')

1 answer:

Answer 0 (score: 0):

I hope this code helps you.

x = "this is the first sentence"
words = x.split()
result = []

for index, word in enumerate(words):
      result.append(word)

  if index is not len(words) - 1:
        result.append(" ".join([word, words[index + 1]]))

print(result) # Output: ["this", "this is", ...]
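
If the same interleaved order is needed for an arbitrary ngram range rather than just unigrams and bigrams, the idea above can be generalized by emitting, at each starting position, every n-gram that begins there. A sketch (ngram_range_tokens is just an illustrative name, not a library function):

def ngram_range_tokens(sentence, min_n=1, max_n=2):
    # At each starting position, emit every n-gram of length min_n..max_n
    # that fits, joined with spaces, so the different lengths come out
    # interleaved position by position as in the question.
    words = sentence.split()
    tokens = []
    for i in range(len(words)):
        for n in range(min_n, max_n + 1):
            if i + n <= len(words):
                tokens.append(" ".join(words[i:i + n]))
    return tokens

print(ngram_range_tokens('this is the first sentence'))
# ['this', 'this is', 'is', 'is the', 'the', 'the first', 'first', 'first sentence', 'sentence']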