I am trying to lemmatize text in a Dask dataframe:
wnl = WordNetLemmatizer()

def lemmatizing(sentence):
    stemSentence = ""
    for word in sentence.split():
        stem = wnl.lemmatize(word)
        stemSentence += stem
        stemSentence += " "
    stemSentence = stemSentence.strip()
    return stemSentence

df['news_content'] = df['news_content'].apply(lemmatizing).compute()
But I get the following error:
AttributeError: 'WordNetCorpusReader' object has no attribute '_LazyCorpusLoader__args'
I have already tried what was recommended here, but without any luck.
Thanks for your help.
Answer 0 (score: 1)
This happens because the wordnet corpus is loaded "lazily" and has not been evaluated yet. One way to make it work is to call WordNetLemmatizer() once before using it with the Dask dataframe, e.g.
>>> from nltk.stem import WordNetLemmatizer
>>> import dask.dataframe as dd
>>> df = dd.read_csv('something.csv')
>>> df.head()
text label
0 this is a sentence 1
1 that is a foo bar thing 0
>>> wnl = WordNetLemmatizer()
>>> wnl.lemmatize('cats') # Use it once first, to "unlazify" wordnet.
'cat'
# Now you can use it with Dask dataframe's .apply() function.
>>> lemmatize_text = lambda sent: [wnl.lemmatize(word) for word in sent.split()]
>>> df['lemmas'] = df['text'].apply(lemmatize_text)
>>> df.head()
text label lemmas
0 this is a sentence 1 [this, is, a, sentence]
1 that is a foo bar thing 0 [that, is, a, foo, bar, thing]
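What "loaded lazily" means here can be sketched in pure Python, with no NLTK or Dask required. The LazyLoader and Corpus classes below are toy stand-ins (not NLTK's actual LazyCorpusLoader), but they show the same idea: the proxy only builds the real object on its first attribute access, which is why one warm-up call such as wnl.lemmatize('cats') "unlazifies" wordnet before Dask ships it to workers.

```python
# Toy sketch of a lazy-loading proxy, similar in spirit to NLTK's
# LazyCorpusLoader (names here are hypothetical, not NLTK's API).
class LazyLoader:
    def __init__(self, factory):
        # Store via __dict__ to avoid triggering __getattr__.
        self.__dict__['_factory'] = factory
        self.__dict__['_obj'] = None

    def _load(self):
        # Build the real object on first use, then cache it.
        if self.__dict__['_obj'] is None:
            self.__dict__['_obj'] = self.__dict__['_factory']()
        return self.__dict__['_obj']

    def __getattr__(self, name):
        # Called only for attributes not found normally:
        # materialize the real object and delegate to it.
        return getattr(self._load(), name)

class Corpus:
    def lemmas(self, word):
        # Toy stand-in for a real corpus lookup.
        return word.rstrip('s')

wordnet = LazyLoader(Corpus)
# Before any call, the proxy holds no real corpus object yet;
# the first attribute access builds it, just as the warm-up
# lemmatize('cats') call does for the real wordnet module.
print(wordnet.lemmas('cats'))  # -> cat
```

Until that first access happens, the proxy is a half-initialized shell, which is why serializing it to Dask workers can surface odd attribute errors like the one above.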
Alternatively, you can try pywsd:
pip install -U pywsd
Then in your code:
>>> from pywsd.utils import lemmatize_sentence
Warming up PyWSD (takes ~10 secs)... took 9.131901025772095 secs.
>>> import dask.dataframe as dd
>>> df = dd.read_csv('something.csv')
>>> df.head()
text label
0 this is a sentence 1
1 that is a foo bar thing 0
>>> df['lemmas'] = df['text'].apply(lemmatize_sentence)
>>> df.head()
text label lemmas
0 this is a sentence 1 [this, be, a, sentence]
1 that is a foo bar thing 0 [that, be, a, foo, bar, thing]
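To try the .apply() wiring without installing pywsd, a toy stand-in with the same call shape as pywsd.utils.lemmatize_sentence can be used. Note this is an assumption-laden sketch: the real function POS-tags and lemmatizes properly, while this one only handles a few irregular verbs for the demo sentences.

```python
# Toy stand-in mimicking the call shape of pywsd.utils.lemmatize_sentence.
# The real function does POS tagging + lemmatization; this only maps a few
# irregular forms, enough to reproduce the demo output above offline.
_IRREGULAR = {'is': 'be', 'are': 'be', 'was': 'be', 'were': 'be'}

def lemmatize_sentence(sentence):
    # Lowercase, split on whitespace, and map known irregular forms.
    return [_IRREGULAR.get(w, w) for w in sentence.lower().split()]

rows = ['this is a sentence', 'that is a foo bar thing']
print([lemmatize_sentence(s) for s in rows])
# -> [['this', 'be', 'a', 'sentence'], ['that', 'be', 'a', 'foo', 'bar', 'thing']]
```

Because this stand-in holds no lazily loaded state, it also sidesteps the warm-up issue entirely, which can help confirm that the Dask plumbing itself is correct.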