I'm looking for a way to remove English stop words from a text column of a Pandas DataFrame using the NLTK corpus. Can this be done with the DataFrame `apply` method, and if so, how?
from nltk.corpus import stopwords

stop_words = set(stopwords.words('english'))
data['text'] = data['text'].apply(lambda text: " ".join(w for w in text.lower().split() if w not in stop_words))
Thanks in advance to anyone who can help.
Answer 0 (score: 0)
You can tokenize the text column (or simply split it into a list of words) and then remove the stop words with the `map` or `apply` method.
For example:
data = pd.DataFrame({'text': ['a sentence can have stop words', 'stop words are common words like if, I, you, a, etc...']})
data
text
0 a sentence can have stop words
1 stop words are common words like if, I, you, a...
from nltk.corpus import stopwords
from nltk.tokenize import RegexpTokenizer
tokenizer = RegexpTokenizer(r'\w+')
stop_words = stopwords.words('english')
def clean(x):
    # tokenize and lowercase, then keep only the non-stopword tokens
    doc = tokenizer.tokenize(x.lower())
    return [w for w in doc if w not in stop_words]
data.text.map(clean)
0 [sentence, stop, words]
1 [stop, words, common, words, like, etc]
Name: text, dtype: object