Counting phrase frequencies in a pandas DataFrame

Time: 2017-05-16 12:14:30

Tags: python pandas nltk

My data is stored in a pandas DataFrame -- see the reproducible example below. The real DataFrame will have more than 10k rows, with more words/phrases per row. I want to count the number of times each two-word phrase appears in the ReviewContent column. If this were a text file rather than a column of a DataFrame, I would use NLTK's Collocations module (similar to the answers here and here). My question is: how can I convert the column ReviewContent into a single corpus text?

import numpy as np
import pandas as pd

data = {'ReviewContent' : ['Great food',
'Low prices but above average food',
'Staff was the worst',
'Great location and great food',
'Really low prices',
'The daily menu is usually great',
'I waited a long time to be served, but it was worth it. Great food']}

df = pd.DataFrame(data)

Expected output (either format works):

[(('great', 'food'), 3), (('low', 'prices'), 2), ...]

[('great food', 3), ('low prices', 2)...]

3 answers:

Answer 0 (score: 2)

I suggest using join:

corpus = ' '.join(df.ReviewContent)

The result looks like this:

In [102]: corpus
Out[102]: 'Great food Low prices but above average food Staff was the worst Great location and great food Really low prices The daily menu is usually great I waited a long time to be served, but it was worth it. Great food'
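One caveat worth noting: joining the rows into a single string discards the review boundaries, so counting bigrams over the joined corpus will also count word pairs that straddle two different reviews. A minimal sketch (using the question's toy data, with lowercasing and whitespace splitting rather than a real tokenizer) illustrates this:

```python
import collections

# The reviews from the question's example DataFrame
reviews = [
    'Great food',
    'Low prices but above average food',
    'Staff was the worst',
    'Great location and great food',
    'Really low prices',
    'The daily menu is usually great',
    'I waited a long time to be served, but it was worth it. Great food',
]

# Join into a single corpus string, as this answer suggests
corpus = ' '.join(reviews)

# Count adjacent word pairs over the joined corpus
words = corpus.lower().split()
counts = collections.Counter(zip(words, words[1:]))

print(counts[('great', 'food')])   # 3
# The join also creates bigrams spanning review boundaries, e.g.
# ('food', 'staff') from the end of review 2 and the start of review 3:
print(counts[('food', 'staff')])   # 1
```

Whether such boundary-crossing pairs matter depends on the application; if they do, count per row instead (as in the next answer).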

Answer 1 (score: 2)

As a sequence/iterable, df["ReviewContent"] has exactly the same structure as the result of applying nltk.sent_tokenize() to a text file: a list of strings, each containing one sentence. So just use it the same way.

import collections
import nltk

counts = collections.Counter()
for sent in df["ReviewContent"]:
    words = nltk.word_tokenize(sent)
    counts.update(nltk.bigrams(words))

If you're not sure what to do next, there's no benefit to using a DataFrame here. For counting bigrams you don't need the collocations module, just nltk.bigrams() and a counting dictionary.
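If NLTK is not available, the same per-row counting can be sketched in pure Python, with lowercasing and whitespace splitting as a crude stand-in for nltk.word_tokenize (so punctuation handling will differ slightly):

```python
import collections

# The reviews from the question's example DataFrame
reviews = [
    'Great food',
    'Low prices but above average food',
    'Staff was the worst',
    'Great location and great food',
    'Really low prices',
    'The daily menu is usually great',
    'I waited a long time to be served, but it was worth it. Great food',
]

counts = collections.Counter()
for sent in reviews:
    words = sent.lower().split()          # crude stand-in for nltk.word_tokenize
    counts.update(zip(words, words[1:]))  # adjacent word pairs = bigrams

print(counts.most_common(2))
# [(('great', 'food'), 3), (('low', 'prices'), 2)]
```

counts.most_common(n) returns the n most frequent bigrams, which matches the question's first expected-output format.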

Answer 2 (score: 1)

With Pandas version 0.20.1+, you can create a SparseDataFrame directly from a sparse matrix:

from sklearn.feature_extraction.text import CountVectorizer

cv = CountVectorizer(ngram_range=(2,2))

r = pd.SparseDataFrame(cv.fit_transform(df.ReviewContent), 
                       columns=cv.get_feature_names(),
                       index=df.index,
                       default_fill_value=0)

Result:

In [52]: r
Out[52]:
   above average  and great  average food  be served  but above  but it  daily menu  great food  great location  \
0              0          0             0          0          0       0           0           1               0
1              1          0             1          0          1       0           0           0               0
2              0          0             0          0          0       0           0           0               0
3              0          1             0          0          0       0           0           1               1
4              0          0             0          0          0       0           0           0               0
5              0          0             0          0          0       0           1           0               0
6              0          0             0          1          0       1           0           1               0

   is usually    ...     staff was  the daily  the worst  time to  to be  usually great  waited long  was the  was worth  \
0           0    ...             0          0          0        0      0              0            0        0          0
1           0    ...             0          0          0        0      0              0            0        0          0
2           0    ...             1          0          1        0      0              0            0        1          0
3           0    ...             0          0          0        0      0              0            0        0          0
4           0    ...             0          0          0        0      0              0            0        0          0
5           1    ...             0          1          0        0      0              1            0        0          0
6           0    ...             0          0          0        1      1              0            1        0          1

   worth it
0         0
1         0
2         0
3         0
4         0
5         0
6         1

[7 rows x 29 columns]
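Note that SparseDataFrame was removed in later pandas releases, so a more version-agnostic sketch is to sum the CountVectorizer matrix columns directly, which yields the per-bigram totals the question asks for (this assumes scikit-learn is installed; get_feature_names() was renamed to get_feature_names_out() in newer releases, so the code falls back accordingly):

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer

# The reviews from the question's example DataFrame
reviews = [
    'Great food',
    'Low prices but above average food',
    'Staff was the worst',
    'Great location and great food',
    'Really low prices',
    'The daily menu is usually great',
    'I waited a long time to be served, but it was worth it. Great food',
]

cv = CountVectorizer(ngram_range=(2, 2))
X = cv.fit_transform(reviews)            # sparse document-term matrix

# Column sums = total count of each bigram across all reviews
totals = np.asarray(X.sum(axis=0)).ravel()

# Handle both old and new scikit-learn method names
names = (cv.get_feature_names_out() if hasattr(cv, 'get_feature_names_out')
         else cv.get_feature_names())

# Sort bigrams by descending count
pairs = sorted(((str(n), int(c)) for n, c in zip(names, totals)),
               key=lambda p: -p[1])
print(pairs[:2])   # [('great food', 3), ('low prices', 2)]
```

This directly produces the question's second expected-output format.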

If you just want to concatenate the strings from all rows into a single string, use the Series.str.cat() method:

text = df.ReviewContent.str.cat(sep=' ')

Result:

In [57]: print(text)
Great food Low prices but above average food Staff was the worst Great location and great food Really low prices The daily menu is usually great I waited a long time to be served, but it was worth it. Great food