Count the frequency of words in a pandas DataFrame

Asked: 2017-10-17 08:54:33

Tags: python pandas nltk

I have a table like the following:

      URN                   Firm_Name
0  104472               R.X. Yah & Co
1  104873        Big Building Society
2  109986          St James's Society
3  114058  The Kensington Society Ltd
4  113438      MMV Oil Associates Ltd

I want to count the frequency of all the words in the Firm_Name column, to get output like this:

[image: the desired output, a table listing each word and its frequency]

I have tried the following code:

import pandas as pd
import nltk

data = pd.read_csv(r"X:\Firm_Data.csv")   # raw string so the backslash is not treated as an escape
top_N = 20

word_dist = nltk.FreqDist(data['Firm_Name'])
print('All frequencies')
print('=' * 60)
rslt = pd.DataFrame(word_dist.most_common(top_N), columns=['Word', 'Frequency'])
print(rslt)
print('=' * 60)

However, this code does not produce per-word counts.
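A quick way to see why (an editorial sketch, reusing the sample data from the table above): nltk.FreqDist iterates over whatever it is given, so passing the Firm_Name Series makes each whole firm name a single token, and every name is counted exactly once.

import nltk
import pandas as pd

data = pd.DataFrame({'Firm_Name': ['R.X. Yah & Co', 'Big Building Society',
                                   "St James's Society", 'The Kensington Society Ltd',
                                   'MMV Oil Associates Ltd']})

# Each Series element is treated as one token: 5 samples, 5 outcomes, all with count 1
print(nltk.FreqDist(data['Firm_Name']))
# <FreqDist with 5 samples and 5 outcomes>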

3 Answers:

Answer 0 (score: 32)

IIUIC, use value_counts()

In [3361]: df.Firm_Name.str.split(expand=True).stack().value_counts()
Out[3361]:
Society       3
Ltd           2
James's       1
R.X.          1
Yah           1
Associates    1
St            1
Kensington    1
MMV           1
Big           1
&             1
The           1
Co            1
Oil           1
Building      1
dtype: int64
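To unpack that one-liner (a sketch; the intermediate variable names are only illustrative):

words = df.Firm_Name.str.split(expand=True)   # one word per column, rows padded with NaN
stacked = words.stack()                       # long Series of words; NaN cells are dropped
counts = stacked.value_counts()               # frequency of each distinct word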

Or, using numpy (this assumes import numpy as np):

pd.Series(np.concatenate([x.split() for x in df.Firm_Name])).value_counts()

Or,

pd.Series(' '.join(df.Firm_Name).split()).value_counts()

For the top N, say 3:

In [3379]: pd.Series(' '.join(df.Firm_Name).split()).value_counts()[:3]
Out[3379]:
Society    3
Ltd        2
James's    1
dtype: int64

Details

In [3380]: df
Out[3380]:
      URN                   Firm_Name
0  104472               R.X. Yah & Co
1  104873        Big Building Society
2  109986          St James's Society
3  114058  The Kensington Society Ltd
4  113438      MMV Oil Associates Ltd
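To present the result in the Word/Frequency DataFrame layout the question asks for, one possible wrapper (a sketch, not part of the original answer):

top_N = 20
counts = df.Firm_Name.str.split(expand=True).stack().value_counts()
rslt = counts.head(top_N).rename_axis('Word').reset_index(name='Frequency')
print(rslt)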

Answer 1 (score: 4)

You first need str.cat (optionally with lower) to join all the values into one string, then word_tokenize it, and finally apply your original solution:

top_N = 4
# lowercasing is optional; a variant without it is shown below
a = data['Firm_Name'].str.lower().str.cat(sep=' ')
words = nltk.tokenize.word_tokenize(a)
word_dist = nltk.FreqDist(words)
print(word_dist)
<FreqDist with 17 samples and 20 outcomes>

rslt = pd.DataFrame(word_dist.most_common(top_N),
                    columns=['Word', 'Frequency'])
print(rslt)
      Word  Frequency
0  society          3
1      ltd          2
2      the          1
3       co          1

If necessary, the lower step can also be removed:

top_N = 4
a = data['Firm_Name'].str.cat(sep=' ')
words = nltk.tokenize.word_tokenize(a)
word_dist = nltk.FreqDist(words)
rslt = pd.DataFrame(word_dist.most_common(top_N),
                    columns=['Word', 'Frequency'])
print(rslt)
         Word  Frequency
0     Society          3
1         Ltd          2
2         MMV          1
3  Kensington          1
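Note (an editorial addition, not part of the original answer): nltk.tokenize.word_tokenize relies on NLTK's Punkt tokenizer models, so a one-time download may be needed before the code above runs:

import nltk
nltk.download('punkt')   # downloads the Punkt models used by word_tokenize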

Answer 2 (score: 0)

You can also use this answer - Count distinct words from a Pandas Data Frame. It uses a Counter and applies it to every row.

from collections import Counter

c = Counter()
df = pd.DataFrame(
    [[104472, "R.X. Yah & Co"],
     [104873, "Big Building Society"],
     [109986, "St James's Society"],
     [114058, "The Kensington Society Ltd"],
     [113438, "MMV Oil Associates Ltd"]],
    columns=["URN", "Firm_Name"])

df.Firm_Name.str.split().apply(c.update)   # each row's words update the shared Counter
c                                          # c now holds the aggregated counts shown below

Counter({'R.X.': 1,
         'Yah': 1,
         '&': 1,
         'Co': 1,
         'Big': 1,
         'Building': 1,
         'Society': 3,
         'St': 1,
         "James's": 1,
         'The': 1,
         'Kensington': 1,
         'Ltd': 2,
         'MMV': 1,
         'Oil': 1,
         'Associates': 1})
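To turn the Counter into the Word/Frequency table the question asked for, one option (a sketch, not part of the original answer):

rslt = pd.DataFrame(c.most_common(20), columns=['Word', 'Frequency'])
print(rslt)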