How to use word_tokenize in a dataframe

Time: 2015-10-13 08:44:55

Tags: python pandas nltk

I recently started using the nltk module for text analysis and I am stuck. I want to use word_tokenize on a dataframe, so that I can get all the words used in a particular row of the dataframe.

data example:
       text
1.   This is a very good site. I will recommend it to others.
2.   Can you please give me a call at 9983938428. have issues with the listings.
3.   good work! keep it up
4.   not a very helpful site in finding home decor. 

expected output:

1.   'This','is','a','very','good','site','.','I','will','recommend','it','to','others','.'
2.   'Can','you','please','give','me','a','call','at','9983938428','.','have','issues','with','the','listings'
3.   'good','work','!','keep','it','up'
4.   'not','a','very','helpful','site','in','finding','home','decor'

Basically, I want to separate out all the words and find the length of each text in the dataframe.

I know word_tokenize works on a string, but how do I apply it to the entire dataframe?

Please help!

Thanks in advance...

4 Answers:

Answer 0 (Score: 16)

You can use the apply method of the DataFrame API:


import pandas as pd
import nltk  # you may need to run nltk.download('punkt') once before using word_tokenize

df = pd.DataFrame({'sentences': ['This is a very good site. I will recommend it to others.', 'Can you please give me a call at 9983938428. have issues with the listings.', 'good work! keep it up']})
df['tokenized_sents'] = df.apply(lambda row: nltk.word_tokenize(row['sentences']), axis=1)

To find the length of each text, try using apply and a lambda function again.
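For instance, a minimal sketch of that second step (sents_length is just an illustrative column name; tokenized_sents comes from the code above):

df['sents_length'] = df.apply(lambda row: len(row['tokenized_sents']), axis=1)  # token count per row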

Answer 1 (Score: 16)

pandas.Series.apply is faster than pandas.DataFrame.apply:

import time

import pandas as pd
import nltk

df = pd.read_csv("/path/to/file.csv")

start = time.time()
df["unigrams"] = df["verbatim"].apply(nltk.word_tokenize)
print("series.apply", (time.time() - start))

start = time.time()
df["unigrams2"] = df.apply(lambda row: nltk.word_tokenize(row["verbatim"]), axis=1)
print("dataframe.apply", (time.time() - start))

On a sample 125 MB csv file:

series.apply 144.428858995

dataframe.apply 201.884778976

Edit: You might think the Dataframe df would be larger in size after series.apply(nltk.word_tokenize), which could affect the runtime of the next operation, dataframe.apply(nltk.word_tokenize).

Pandas optimizes under the hood for such a scenario; I got a similar runtime of 200s by performing dataframe.apply(nltk.word_tokenize) on its own.

Answer 2 (Score: 2)

I will show you an example. Suppose you have a dataframe named twitter_df in which you have stored sentiment and text. First, extract the text column from the dataframe, like so:

 tweetText = twitter_df['text']

Then tokenize it:

 from nltk.tokenize import word_tokenize

 tweetText = tweetText.apply(word_tokenize)
 tweetText.head()
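Since the question also asks for the length of each text, here is a minimal follow-up sketch (tweet_lengths is just an illustrative name; each entry of tweetText is now a list of tokens, so len gives the token count):

 tweet_lengths = tweetText.apply(len)  # token count per tweet
 tweet_lengths.head()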

I think this will help you.

Answer 3 (Score: 0)

You may need to add str() to convert pandas' object dtype to a string.

Remember, a faster way to count words is often just to count the spaces.

Interestingly, the tokenizer counts periods as tokens. You may want to remove those first, and perhaps remove numbers as well; the replace line below strips everything but letters and spaces, which makes the two counts equal, at least in this case.

import nltk
import pandas as pd

sentences = pd.Series([ 
    'This is a very good site. I will recommend it to others.',
    'Can you please give me a call at 9983938428. have issues with the listings.',
    'good work! keep it up',
    'not a very helpful site in finding home decor. '
])

# remove anything but letters and spaces, then collapse repeated spaces
sentences = sentences.str.replace('[^A-Za-z ]', '', regex=True).str.replace(' +', ' ', regex=True).str.strip()

splitwords = [ nltk.word_tokenize( str(sentence) ) for sentence in sentences ]
print(splitwords)
    # output: [['This', 'is', 'a', 'very', 'good', 'site', 'I', 'will', 'recommend', 'it', 'to', 'others'], ['Can', 'you', 'please', 'give', 'me', 'a', 'call', 'at', 'have', 'issues', 'with', 'the', 'listings'], ['good', 'work', 'keep', 'it', 'up'], ['not', 'a', 'very', 'helpful', 'site', 'in', 'finding', 'home', 'decor']]

wordcounts = [ len(words) for words in splitwords ]
print(wordcounts)
    # output: [12, 13, 5, 9]

wordcounts2 = [ sentence.count(' ') + 1 for sentence in sentences ]
print(wordcounts2)
    # output: [12, 13, 5, 9]

If you're not using pandas, you probably don't need str().