Count percentage of verbs, nouns using spaCy?

Date: 2018-08-04 11:41:27

Tags: pandas nlp spacy

I want to count the percentage split of POS tags in a sentence using spaCy, similar to

Count verbs, nouns, and other parts of speech with python's NLTK

I can currently detect and count the POS tags; how do I get the percentage split?

from __future__ import print_function, unicode_literals
from collections import Counter

import en_core_web_sm

nlp = en_core_web_sm.load()
print(Counter(token.pos_ for token in nlp('The cat sat on the mat.')))

Current output:

Counter({u'NOUN': 2, u'DET': 2, u'VERB': 1, u'ADP': 1, u'PUNCT': 1})

Expected output:

NOUN: 28.5%
DET: 28.5%
VERB: 14.28%
ADP: 14.28%
PUNCT: 14.28%

How do I write this output to a pandas DataFrame?

2 answers:

Answer 0 (score: 1)

Something along these lines will give you what you want:

# c is the Counter of POS tags built in the question
sbase = sum(c.values())

for el, cnt in c.items():
    print(el, '{0:2.2f}%'.format((100.0 * cnt) / sbase))


NOUN 28.57%
DET 28.57%
VERB 14.29%
ADP 14.29%
PUNCT 14.29%

Answer 1 (score: 0)

from __future__ import print_function, unicode_literals
from collections import Counter

import en_core_web_sm

nlp = en_core_web_sm.load()
c = Counter(token.pos_ for token in nlp('The cat sat on the mat.'))
sbase = sum(c.values())
for el, cnt in c.items():
    print(el, '{0:2.2f}%'.format((100.0 * cnt) / sbase))

Output:

NOUN 28.57%
VERB 14.29%
DET 28.57%
ADP 14.29%
PUNCT 14.29%
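Neither answer covers the last part of the question, getting the percentages into a pandas DataFrame. A minimal sketch, assuming pandas is installed; the Counter is hard-coded here with the counts from the question's sentence so the snippet runs without a spaCy model:

```python
from collections import Counter

import pandas as pd

# POS counts for 'The cat sat on the mat.' (hard-coded stand-in for the
# spaCy pipeline in the question, so this runs without en_core_web_sm)
c = Counter({'NOUN': 2, 'DET': 2, 'VERB': 1, 'ADP': 1, 'PUNCT': 1})
total = sum(c.values())

# One row per POS tag: the raw count plus its percentage of all tokens
df = pd.DataFrame(
    [(pos, cnt, 100.0 * cnt / total) for pos, cnt in c.items()],
    columns=['pos', 'count', 'percent'],
)
print(df.round(2))
```

Keeping the percentage as a float column (rather than a preformatted string like '28.57%') means the frame stays usable for sorting, plotting, or further aggregation.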