I have a text-file dataset with 10 million rows that looks like this:
0.96990 0.93395 0.19632 0.89671 0.95208 1.0
0.17506 0.97717 0.30871 0.63547 0.19103 0.0
I have quantized it into 10 levels with Python, so it now looks like this:
9 9 1 8 9 1
1 9 3 6 1 0
8 3 8 4 4 1
0 2 1 9 9 0
This means that the feature vector (9 9 1 8 9) belongs to class 1. I want to find the entropy of each feature (column). I wrote the following code, but it has many errors:
import pandas as pd
import os
import math
os.chdir(r'C:\\Users\\amtol\\Desktop\\M.L\\Datasets') """changing directory"""
os.getcwd() """ getting directory to make sure about directory"""
f = open ( 'data1.txt' , 'r') """Reading File"""
"""finding the probability"""
df = pd.DataFrame(pd.read_csv(f, sep='\t', header=None, names=['val1', 'val2', 'val3', 'val4', 'val5', 'val6', 'val7', 'val8']))
print(df)
df.loc[:,"val1":"val5"] = df.loc[:,"val1":"val5"].div(df.sum(axis=0),
axis=1)
print(df)
"""Calculating Entropy"""
def shannon(col):
    entropy = - sum([p * math.log(p) / math.log(2.0) for p in col])
    return entropy
sh_df = df.loc[:,'val1':'val5'].apply(shannon,axis=0)
print(sh_df)
Can you correct my code, or do you know of any function in Python that finds the entropy of each column of a dataset?
Answer (score: 1)
You can use the following script to find the entropy of a column in pandas:
import numpy as np
from scipy.stats import entropy
from math import log, e
import pandas as pd
""" Usage: pandas_entropy(df['column1']) """
def pandas_entropy(column, base=None):
    # Relative frequency of each value/level in the column
    vc = pd.Series(column).value_counts(normalize=True, sort=False)
    # Default to the natural logarithm when no base is given
    base = e if base is None else base
    # Shannon entropy: -sum(p * log_base(p))
    return -(vc * np.log(vc) / np.log(base)).sum()
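Since scipy.stats.entropy is imported above but never used, here is an equivalent variant (my addition, not part of the original answer) that delegates the log arithmetic to SciPy; scipy.stats.entropy normalizes the frequencies and defaults to the natural logarithm, matching the function above:

def scipy_entropy(column, base=None):
    # Relative frequency of each discrete level in the column
    vc = pd.Series(column).value_counts(normalize=True, sort=False)
    # scipy.stats.entropy computes -sum(p * log(p)) in the given base
    return entropy(vc, base=base)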
Just run pandas_entropy on each of your columns and it will return the entropy of each one.
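For example, assuming the quantized levels are already loaded into a DataFrame df with columns val1 through val5 (a hypothetical layout matching the question), a column-wise application could look like this:

# Entropy of every feature column at once (apply passes each column as a Series)
entropies = df.loc[:, 'val1':'val5'].apply(pandas_entropy)
print(entropies)

# Entropy of a single column, using log base 2 to get bits
print(pandas_entropy(df['val1'], base=2))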