Here is a sample of my pandas data:
word = pd.Series([['a', 'b', 'c', 'd'], ['b', 'c'], ['c', 'd'], ['a', 'b', 'c']])
I want to get (1) the word frequencies and (2) the corpus data.
(1) frequencies (sorted):
c : 4
b : 3
d : 2
a : 2
(2) corpus data (unsorted):
corpus = ['a b c d', 'b c', 'c d', 'a b c']
How can I get these? I need help.
I am using Python for Korean NLP; here is my code:
import numpy as np
import pandas as pd
import itertools as it
from khaiii import KhaiiiApi # Korean NLP
df = pd.read_csv('https://drive.google.com/u/0/uc?id=1IZ1NYJmbabv6Xo7WJeqRcDFl1Z5pumni&export=download', encoding = 'utf-8')
df = pd.DataFrame(df)
api = KhaiiiApi()
def parse(sentence):
    # keep only common nouns (NNG), proper nouns (NNP), verbs (VV), and adjectives (VA)
    pos = ((morph.lex, morph.tag) for word in api.analyze(sentence) for morph in word.morphs if morph.tag in ['NNG', 'VV', 'VA', 'NNP'])
    # append '다' to verb/adjective stems to recover the dictionary form
    words = [item[0] if item[1] in ('NNG', 'NNP') else f'{item[0]}다' for item in pos]
    return words
df['내용'] = df["내용"].str.replace(",", "")
split = df.내용.str.split(".")
split = split.apply(lambda x: pd.Series(x))
split = split.stack().reset_index(level=1,drop=True).to_frame('sentences')
df = df.merge(split, left_index=True, right_index=True, how='left')
df = df.drop(['내용'], axis = 1)
df['sentences'].replace('', np.nan, inplace= True)
df['sentences'].replace(' ', np.nan, inplace= True)
df.dropna(subset=['sentences'], inplace=True)
df['reconstruct'] = df['sentences'].apply(parse)
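As an aside, on pandas 0.25+ the split/stack/reset_index/merge dance above can be replaced by a single Series.explode call. A minimal sketch on made-up two-row data (the real DataFrame comes from the CSV above):

import numpy as np
import pandas as pd

df = pd.DataFrame({'내용': ['문장 하나. 문장 둘', '문장 셋']})
df['내용'] = df['내용'].str.replace(',', '', regex=False)
# split each document into sentences, then explode to one sentence per row
df['sentences'] = df['내용'].str.split('.')
df = df.explode('sentences').drop(columns=['내용'])
# drop empty/whitespace-only sentences
df['sentences'] = df['sentences'].str.strip().replace('', np.nan)
df = df.dropna(subset=['sentences']).reset_index(drop=True)
print(df['sentences'].tolist())  # ['문장 하나', '문장 둘', '문장 셋']

explode keeps the original index, so rows coming from the same document share an index value before reset_index, just like the stack-based version.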
Answer (score: 0)
You can use explode followed by value_counts (pandas 0.25+):
word.explode().value_counts()
c 4
b 3
d 2
a 2
dtype: int64
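Put together, a minimal runnable version of this step on the sample data:

import pandas as pd

word = pd.Series([['a', 'b', 'c', 'd'], ['b', 'c'], ['c', 'd'], ['a', 'b', 'c']])
# explode flattens each list into its own rows; value_counts sorts by count descending
freq = word.explode().value_counts()
print(freq)

Note that `c` actually occurs 4 times in the sample, which is why value_counts puts it first. The order among ties (`a` and `d`, both 2) is not guaranteed.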
You can get the corpus values like this:
corpus = [' '.join(v) for k, v in word.to_dict().items()]
print(corpus)
['a b c d', 'b c', 'c d', 'a b c']
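Alternatively, since each element is a list of strings, Series.str.join produces the same corpus without going through a dict:

import pandas as pd

word = pd.Series([['a', 'b', 'c', 'd'], ['b', 'c'], ['c', 'd'], ['a', 'b', 'c']])
# join each inner list with a space, then collect into a plain Python list
corpus = word.str.join(' ').tolist()
print(corpus)  # ['a b c d', 'b c', 'c d', 'a b c']
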