Entropy of IP packet information

Date: 2014-12-11 20:46:02

Tags: python probability entropy

My .csv file contains full packet-header information. The first few lines look like this:

28;03/07/2000;11:27:51;00:00:01;8609;4961;8609;097.139.024.164;131.084.001.031;0;-
29;03/07/2000;11:27:51;00:00:01;29396;4962;29396;058.106.180.191;131.084.001.031;0;-
30;03/07/2000;11:27:51;00:00:01;26290;4963;26290;060.075.194.137;131.084.001.031;0;-
31;03/07/2000;11:27:51;00:00:01;28324;4964;28324;038.087.169.169;131.084.001.031;0;- 

In total there are about 33k lines (each line holds the information from a different packet header). I now need to compute the entropy of the source and destination addresses.

Using the code I wrote:

def openFile(file_name):
    srcFile = open(file_name, 'r')
    dataset = []
    for line in srcFile:
        newLine = line.split(";")
        dataset.append(newLine)
    return dataset

The return value I get looks like:

dataset = [
    ['28', '03/07/2000', '11:27:51', '00:00:01', '8609', '4961', '8609', '097.139.024.164', '131.084.001.031', '0', '-\n'], 
    ['29', '03/07/2000', '11:27:51', '00:00:01', '29396', '4962', '29396', '058.106.180.191', '131.084.001.031', '0', '-\n'], 
    ['30', '03/07/2000', '11:27:51', '00:00:01', '26290', '4963', '26290', '060.075.194.137', '131.084.001.031', '0', '-\n'],
    ['31', '03/07/2000', '11:27:51', '00:00:01', '28324', '4964', '28324', '038.087.169.169', '131.084.001.031', '0', '-']
]
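
As an aside, the standard-library `csv` module can do the same split and also drops the line terminator, so the last field comes back as `'-'` instead of `'-\n'`. A sketch (not the asker's original code):

```python
import csv

def open_file(file_name):
    """Parse a ';'-delimited packet-header file into a list of rows."""
    with open(file_name, newline='') as src_file:
        # csv.reader splits on ';' and strips the trailing newline itself
        return [row for row in csv.reader(src_file, delimiter=';')]
```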

I pass it to my entropy function:

#---- Entropy += - prob * math.log(prob, 2) ---------
def Entropy(data):
    entropy = 0
    counter = 0 # -- counter for occurrences of the same ip address
    #-- For loop to iterate through every item in outer list
    for item in range(len(data)):
        #-- For loop to iterate through inner list
        for x in data[item]:
            if x == data[item][8]: 
                counter += 1
        prob = float(counter) / len(data)
        entropy += -prob * math.log(prob, 2)
    print("\n")
    print("Entropy: {}".format(entropy))

The code runs without any errors, but it gives a bad entropy value, and I think the cause is either a wrong probability calculation (the second loop is suspect) or a bad entropy formula. Is there a way to find the probability of an IP occurring without another loop? Any code edits are welcome.

1 Answer:

Answer 0 (score: 3)

You can greatly simplify the code by using numpy and the built-in collections module:

import numpy as np
import collections

sample_ips = [
    "131.084.001.031",
    "131.084.001.031",
    "131.284.001.031",
    "131.284.001.031",
    "131.284.001.000",
]

C = collections.Counter(sample_ips)
counts = np.array(list(C.values()), dtype=float)  # list() is needed on Python 3
prob = counts / counts.sum()
shannon_entropy = (-prob * np.log2(prob)).sum()
print(shannon_entropy)
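
Tying this back to the parsed dataset: the source address is column index 7 and the destination is index 8 in the row layout shown above, so you only need to feed that column to `Counter`. A pure-Python sketch (no numpy) using two of the asker's sample rows:

```python
import math
from collections import Counter

def shannon_entropy(values):
    """Shannon entropy (base 2) of a sequence of hashable values."""
    values = list(values)
    counts = Counter(values)          # one pass, no nested loop
    total = len(values)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

dataset = [
    ['28', '03/07/2000', '11:27:51', '00:00:01', '8609', '4961', '8609',
     '097.139.024.164', '131.084.001.031', '0', '-'],
    ['29', '03/07/2000', '11:27:51', '00:00:01', '29396', '4962', '29396',
     '058.106.180.191', '131.084.001.031', '0', '-'],
]

src_entropy = shannon_entropy([row[7] for row in dataset])
# 1.0 -- two distinct source IPs, equally likely
dst_entropy = shannon_entropy([row[8] for row in dataset])
# 0.0 -- every packet has the same destination
```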