Removing stopwords from a Pandas column

Date: 2021-01-26 14:04:14

Tags: python pandas dataframe tokenize

import nltk
nltk.download('punkt')
nltk.download('stopwords')
import datetime
import numpy as np
import re
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
from nltk.stem.wordnet import WordNetLemmatizer
from nltk.stem.porter import PorterStemmer
# Load the Pandas libraries with alias 'pd' 
import pandas as pd 
# Read data from file 'filename.csv' 
# (in the same directory that your python process is based)
# Control delimiters, rows, column names with read_csv (see later) 
data = pd.read_csv("march20_21.csv") 
# Preview the first 5 lines of the loaded data 
#drop NA rows
data.dropna()
#drop all columns not needed
droppeddata = data.drop(columns=['created_at'])
#drop NA rows
alldata = droppeddata.dropna()

ukdata = alldata[alldata.place.str.contains('England')]
ukdata.drop(columns=['place'])

ukdata['text'].apply(word_tokenize)
eng_stopwords = stopwords.words('english') 

I know there are a lot of redundant variables, but I'm still trying to get it working before going back and refining it.

I'm not sure how to remove the stopwords stored in the variable from the tokenized column. Any help is appreciated, I'm new to Python! Thanks.

1 answer:

Answer 0: (score: 0)

  1. After applying a function to a column, you need to assign the result back to the column; apply is not an in-place operation.

  2. After tokenization, ukdata['text'] contains a list of words, so you can use a list comprehension inside apply to remove the stopwords.


ukdata['text'] = ukdata['text'].apply(word_tokenize)
eng_stopwords = stopwords.words('english') 
ukdata['text'] = ukdata['text'].apply(lambda words: [word for word in words if word not in eng_stopwords])
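
A side note beyond the original answer: stopwords.words('english') returns lowercase words only, so capitalized tokens such as "This" are not matched by the comparison above. If case-insensitive filtering is wanted, one common refinement (a sketch, not part of the original answer) is to lowercase each token before the membership test and to keep the stopwords in a set for faster lookups:

# Sketch: case-insensitive stopword removal, using a set for O(1) membership tests.
# Assumes ukdata['text'] already holds lists of tokens, as produced above.
eng_stopwords = set(stopwords.words('english'))
ukdata['text'] = ukdata['text'].apply(
    lambda words: [word for word in words if word.lower() not in eng_stopwords]
)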

A minimal example:
import pandas as pd
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords

eng_stopwords = stopwords.words('english') 
ukdata = pd.DataFrame({'text': ["This is a sentence."]})

ukdata['text'] = ukdata['text'].apply(word_tokenize)
ukdata['text'] = ukdata['text'].apply(lambda words: [word for word in words if word not in eng_stopwords])
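
For reference, printing the resulting column should give roughly the following (exact tokenization may vary slightly between NLTK versions):

print(ukdata['text'][0])
# Expected output (approximately): ['This', 'sentence', '.']
# 'is' and 'a' are removed; the capitalized 'This' and the punctuation token '.'
# remain, because the NLTK stopword list is lowercase and contains no punctuation.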