So I have a list of words in a text file. I want to lemmatize them to remove words that have the same meaning but are in different tenses, like try, tried, etc. When I do this, I keep getting an error like TypeError: unhashable type: 'list'
results = []
with open('/Users/xyz/Documents/something5.txt', 'r') as f:
    for line in f:
        results.append(line.strip().split())

lemma = WordNetLemmatizer()
lem = []
for r in results:
    lem.append(lemma.lemmatize(r))

with open("lem.txt", "w") as t:
    for item in lem:
        print>>t, item
How do I lemmatize words that are already tokens?
Answer 0 (score: 5)
The method WordNetLemmatizer.lemmatize probably expects a string, but you are passing it a list of strings. That is what raises the TypeError.
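A minimal reproduction, assuming NLTK and its WordNet data are installed:

from nltk.stem import WordNetLemmatizer

lemma = WordNetLemmatizer()
lemma.lemmatize('words')             # returns 'word' -- a single string is fine
lemma.lemmatize(['words', 'tried'])  # raises TypeError: unhashable type: 'list'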
The result of line.split() is a list of strings, and you append it to results as a whole list, so results ends up being a list of lists. You want results.extend(line.strip().split()) instead.
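For illustration, with a made-up line of tokens:

tokens = 'to try or not'.split()
a = []
a.append(tokens)    # a == [['to', 'try', 'or', 'not']] -- one list inside a list
b = []
b.extend(tokens)    # b == ['to', 'try', 'or', 'not']   -- a flat list of strings

The corrected script then becomes: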
from nltk.stem import WordNetLemmatizer  # import needed for WordNetLemmatizer

results = []
with open('/Users/xyz/Documents/something5.txt', 'r') as f:
    for line in f:
        results.extend(line.strip().split())

lemma = WordNetLemmatizer()
lem = map(lemma.lemmatize, results)

with open("lem.txt", "w") as t:
    for item in lem:
        print >> t, item
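Note that print >> t and a list-returning map are Python 2 idioms. On Python 3 the same fix would look roughly like this (a sketch):

from nltk.stem import WordNetLemmatizer

results = []
with open('/Users/xyz/Documents/something5.txt', 'r') as f:
    for line in f:
        results.extend(line.strip().split())

lemma = WordNetLemmatizer()
lem = [lemma.lemmatize(word) for word in results]  # map() returns a lazy iterator on Python 3

with open("lem.txt", "w") as t:
    for item in lem:
        print(item, file=t)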
Or, refactored without the intermediate results list:
def words(fname):
    with open(fname, 'r') as document:
        for line in document:
            for word in line.strip().split():
                yield word

lemma = WordNetLemmatizer()
lem = map(lemma.lemmatize, words('/Users/xyz/Documents/something5.txt'))
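Writing the lemmas out could then look like this (a sketch, reusing the lem.txt name from above):

with open("lem.txt", "w") as t:
    for item in lem:
        t.write(item + "\n")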
Answer 1 (score: 1)
Open the text file and read its lines into the list results1, as shown below:
fo = open(filename)
results1 = fo.readlines()
results1
['I have a list of words in a text file', ' \n I want to perform lemmatization on them to remove words which have the same meaning but are in different tenses', '']
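As a side note, the file is never closed here; a with block (a small sketch) handles that automatically:

with open(filename) as fo:
    results1 = fo.readlines()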
# Tokenize lists
results2 = [line.split() for line in results1]
# Remove empty lists
results2 = [ x for x in results2 if x != []]
# Lemmatize each word from a list using WordNetLemmatizer
from nltk.stem.wordnet import WordNetLemmatizer
lemmatizer = WordNetLemmatizer()
lemma_list_of_words = []
for i in range(0, len(results2)):
    l1 = results2[i]
    l2 = ' '.join([lemmatizer.lemmatize(word) for word in l1])
    lemma_list_of_words.append(l2)
lemma_list_of_words
['I have a list of word in a text file', 'I want to perform lemmatization on them to remove word which have the same meaning but are in different tense']
Compare lemma_list_of_words with results1 to see what the lemmatizer changed ('words' became 'word' and 'tenses' became 'tense').
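Note that WordNetLemmatizer.lemmatize treats every word as a noun unless you pass a part-of-speech tag, which is why only the plural nouns ('words', 'tenses') changed above while the verbs kept their tense. A small sketch of the pos argument:

from nltk.stem.wordnet import WordNetLemmatizer

lemmatizer = WordNetLemmatizer()
lemmatizer.lemmatize('tenses')        # 'tense' -- default pos is noun
lemmatizer.lemmatize('are')           # 'are'   -- unchanged when treated as a noun
lemmatizer.lemmatize('are', pos='v')  # 'be'    -- lemmatized as a verb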