I need to preprocess some text documents so that I can apply classification techniques such as fuzzy c-means (FCM) etc., as well as topic-modeling techniques such as Latent Dirichlet Allocation (LDA) etc.
To elaborate on the preprocessing: I need to remove stop words, extract nouns and keywords, and perform stemming. The code I am using for this purpose is:
#--------------------------------------------------------------------------
#Extracting nouns
#--------------------------------------------------------------------------
import nltk

documents = []
temp = ''
for i in range(0, len(a)):
    x = a[i]
    text = nltk.pos_tag(nltk.word_tokenize(x))
    for noun in text:
        if noun[1] == "NN" or noun[1] == "NNS":
            temp += noun[0] + ' '
    documents.append(temp)
    temp = ''  # reset the buffer, otherwise nouns accumulate across documents
print(documents)
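Since `nltk.pos_tag` returns a list of (token, tag) pairs, keeping nouns is just a filter on the tag. A minimal sketch over hand-written tagged output (so it runs without the NLTK tagger models; the `tagged` data is hypothetical, in the shape `pos_tag` returns):

```python
# hypothetical tagged output, in the shape nltk.pos_tag returns
tagged = [("The", "DT"), ("cats", "NNS"), ("sat", "VBD"), ("on", "IN"),
          ("the", "DT"), ("mat", "NN")]

# keep only singular and plural common nouns, joined into one document string
nouns = " ".join(tok for tok, tag in tagged if tag in ("NN", "NNS"))
print(nouns)
```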
#--------------------------------------------------------------------------
#remove unnecessary words and tags
#--------------------------------------------------------------------------
texts = [[word for word in document.lower().split() if word not in stoplist]
         for document in documents]
allTokens = sum(texts, [])
# tokens that occur exactly once in the corpus ("== 0" could never match anything)
tokensOnce = set(word for word in set(allTokens) if allTokens.count(word) == 1)
texts = [[word for word in text if word not in tokensOnce] for text in texts]
print(texts)
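Calling `allTokens.count(word)` for every distinct word rescans the whole list each time, which is quadratic on a large corpus. A `collections.Counter` gets all frequencies in one pass; a sketch, assuming `texts` is a list of token lists (the sample data here is made up):

```python
from collections import Counter

texts = [["cat", "sat", "mat"], ["cat", "ran"]]  # hypothetical tokenized docs

# one pass over the corpus to count every token
counts = Counter(token for doc in texts for token in doc)

# drop tokens that appear only once in the whole corpus
texts = [[tok for tok in doc if counts[tok] > 1] for doc in texts]
print(texts)
```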
#--------------------------------------------------------------------------
#Stemming
#--------------------------------------------------------------------------
porter = nltk.PorterStemmer()
for i in texts:
    for j in range(0, len(i)):
        i[j] = porter.stem(i[j])
print(texts)
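The in-place loop can also be written as a comprehension. A minimal sketch with a toy suffix-stripping function standing in for `porter.stem` (this is not the real Porter algorithm, just an illustration of the shape of the operation; the sample data is made up):

```python
def toy_stem(token):
    # placeholder for porter.stem: crudely strip a few common English suffixes
    for suffix in ("ing", "ed", "s"):
        if token.endswith(suffix) and len(token) > len(suffix) + 2:
            return token[: -len(suffix)]
    return token

texts = [["running", "cats"], ["jumped"]]  # hypothetical tokenized docs
texts = [[toy_stem(tok) for tok in doc] for doc in texts]
print(texts)
```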
My problem with the code above is this: is there a better module for the required functionality, or a better implementation using the same modules? Please help.