What is the derivative of Shannon entropy?

Time: 2019-10-15 22:24:25

Tags: python numpy data-science derivative entropy

I have the following simple Python function that computes the entropy of a single input X, following Shannon's information theory:

import numpy as np

def entropy(X: 'numpy array'):
  # H(X) = -sum_i p_i * log2(p_i), with p_i estimated from the value frequencies
  _, frequencies = np.unique(X, return_counts=True)
  probabilities  = frequencies/X.shape[0]
  return -np.sum(probabilities*np.log2(probabilities))

a = np.array([1., 1., 1., 3., 3., 2.])
b = np.array([1., 1., 1., 3., 3., 3.])
c = np.array([1., 1., 1., 1., 1., 1.])

print(f"entropy(a): {entropy(a)}")
print(f"entropy(b): {entropy(b)}")
print(f"entropy(c): {entropy(c)}")

The output is:

entropy(a): 1.4591479170272446
entropy(b): 1.0
entropy(c): -0.0
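
For example, for a the unique values 1., 2. and 3. occur 3, 1 and 2 times respectively, so the probabilities are [1/2, 1/6, 1/3] and

entropy(a) = -(1/2*log2(1/2) + 1/6*log2(1/6) + 1/3*log2(1/3)) ≈ 1.459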

However, I also need to compute the derivative of the entropy with respect to x:

  

d(entropy)/dx

This is not an easy task, since the main formula

  

-np.sum(probabilities * np.log2(probabilities))

takes probabilities rather than the x values themselves, so it is not clear how to differentiate with respect to x.

Does anyone have an idea how to do this?

1 answer:

Answer 0 (score: 1)

One way to solve this is to compute the derivative numerically using finite differences.

In that case, we can define a small constant to help us compute the numerical derivative. The following function takes a one-argument function f and computes its derivative at the input x:

ε = 1e-12
def derivative(f, x):
    # forward-difference approximation of f'(x)
    return (f(x + ε) - f(x)) / ε
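
As a quick sanity check (my own example, not part of the original answer), the quotient should approximately recover a known derivative:

# d(x^2)/dx at x = 3 is 6; expect a value close to 6, up to floating-point error from the tiny step
print(derivative(lambda x: x**2, 3.0))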

To make our work easier, let's define a function that computes the innermost operation of the entropy:

def inner(x):
    return x * np.log2(x)

Recall that the derivative of a sum is the sum of the derivatives, so the actual derivative computation takes place in the inner function we just defined.

The numerical derivative of the entropy is therefore:

def numerical_dentropy(X):
    _, frequencies = np.unique(X, return_counts=True)
    probabilities = frequencies / X.shape[0]
    return -np.sum([derivative(inner, p) for p in probabilities])

Can we do better? Of course we can! The key insight here is the product rule: (f g)' = fg' + gf', with f = x and g = np.log2(x). (Also note that d[log_a(x)]/dx = 1/(x ln(a)).)
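
Written out for the summand (this derivation is not spelled out in the original answer, but follows directly from the two rules above):

d[x * log2(x)]/dx = 1 * log2(x) + x * (1 / (x * ln(2)))
                  = log2(x) + 1/ln(2)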

So the analytical derivative of the entropy can be computed as:

import math
def dentropy(X):
    _, frequencies = np.unique(X, return_counts=True)
    probabilities = frequencies / X.shape[0]
    # d/dp [p * log2(p)] = log2(p) + 1/ln(2); math.log(2, math.e) == ln(2)
    return -np.sum([(1/math.log(2, math.e) + np.log2(p)) for p in probabilities])

Testing with the sample vectors, we have:

a = np.array([1., 1., 1., 3., 3., 2.])
b = np.array([1., 1., 1., 3., 3., 3.])
c = np.array([1., 1., 1., 1., 1., 1.])

print(f"numerical d[entropy(a)]: {numerical_dentropy(a)}")
print(f"numerical d[entropy(b)]: {numerical_dentropy(b)}")
print(f"numerical d[entropy(c)]: {numerical_dentropy(c)}")

print(f"analytical d[entropy(a)]: {dentropy(a)}")
print(f"analytical d[entropy(b)]: {dentropy(b)}")
print(f"analytical d[entropy(c)]: {dentropy(c)}")

Which, when executed, gives us:

numerical d[entropy(a)]: 0.8417710972707937
numerical d[entropy(b)]: -0.8854028621385623
numerical d[entropy(c)]: -1.4428232973189605
analytical d[entropy(a)]: 0.8418398787754222
analytical d[entropy(b)]: -0.8853900817779268
analytical d[entropy(c)]: -1.4426950408889634
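
The small gap between the numerical and the analytical values comes from the forward-difference approximation with such a tiny step. As a rough sketch (my own addition, not from the original answer), a central difference with a somewhat larger step size typically tracks the analytical result more closely:

def central_derivative(f, x, h=1e-6):
    # symmetric difference quotient: error O(h^2) instead of O(h)
    return (f(x + h) - f(x - h)) / (2 * h)

def central_dentropy(X):
    _, frequencies = np.unique(X, return_counts=True)
    probabilities = frequencies / X.shape[0]
    return -np.sum([central_derivative(inner, p) for p in probabilities])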

As a bonus, we can check whether this is correct with an automatic differentiation library:

import torch

a, b, c = torch.from_numpy(a), torch.from_numpy(b), torch.from_numpy(c)

def torch_entropy(X):
    _, frequencies = torch.unique(X, return_counts=True)
    frequencies = frequencies.type(torch.float32)
    probabilities = frequencies / X.shape[0]
    # track gradients with respect to the estimated probabilities
    probabilities.requires_grad_(True)
    return -(probabilities * torch.log2(probabilities)).sum(), probabilities

for v in a, b, c:
    h, p = torch_entropy(v)
    print(f'torch entropy: {h}')
    h.backward()
    # p.grad now holds dH/dp_i for each probability; their sum matches dentropy
    print(f'torch derivative: {p.grad.sum()}')

Which gives us:

torch entropy: 1.4591479301452637
torch derivative: 0.8418397903442383
torch entropy: 1.0
torch derivative: -0.885390043258667
torch entropy: -0.0
torch derivative: -1.4426950216293335