Distribution backward gradient check does not match numerical reproduction?

Time: 2019-05-14 12:19:41

Tags: python statistics pytorch autograd

I have implemented a loss based on the negative binomial distribution's probability mass function. Unfortunately, by computing my own numerical expression, I am unable to reproduce the result that PyTorch's backward method gives me.

I tried to verify PyTorch's implementation by reproducing the gradient with respect to the distribution's parameters, using the expressions from the sources cited below. The values obtained do not match. I am not sure whether the problem is in my loss-function implementation, because I believe the numerical reproduction is correct.

Sources:

source 1 - (Negative Binomial distribution PMF and likelihood gradient)

source 2 - (Negative Binomial distribution PMF and likelihood gradient)
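
For reference, written out as formulas (with $r$ = total_count, $p$ = probability, $k$ = target as my shorthand), the loss below returns the negative log of the PMF

$$P(k;\, r,\, p) \;=\; \frac{\Gamma(k + r)}{\Gamma(r)\, k!}\; p^{r}\, (1 - p)^{k},$$

and the numerical check implements the gradient expressions taken from the sources,

$$\frac{\partial \log P}{\partial r} = \psi(k + r) - \psi(r) + \log\frac{r}{r + k},
\qquad
\frac{\partial \log P}{\partial p} = \frac{k}{p} - \frac{r}{1 - p},$$

both negated to match the negative log-likelihood.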

The code snippet with the test and prints follows:

import torch
from scipy.special import digamma
import numpy as np

# distribution parameters: [[total_count], [probability]]
output_soft = torch.tensor([[10], [0.25]], requires_grad=True)

def loss(input, target):
    # negative log-likelihood of the negative binomial PMF
    #   Gamma(k + r) / (Gamma(r) * k!) * p^r * (1 - p)^k
    # with r = total_count, p = probability, k = target
    total_count = input[0]
    probability = input[1]
    # log Gamma(k + r), log Gamma(r) and log(k!) for the combinatorial term
    target_p_tc_gamma = torch.tensor([target + total_count], dtype=torch.float, requires_grad=True).lgamma()
    r_gamma = total_count.lgamma()
    target_factorial = torch.tensor([target + 1], dtype=torch.float).lgamma()
    combinatorial_term = torch.tensor([target_p_tc_gamma - r_gamma - target_factorial],
                                      dtype=torch.float,
                                      requires_grad=True).exp()
    # p^r and (1 - p)^k
    prob_term = probability.pow(total_count)
    comp_prob_term = torch.tensor([1 - probability], dtype=torch.float, requires_grad=True).pow(target)
    likelihood_target = combinatorial_term * prob_term * comp_prob_term
    return -likelihood_target.log()


target = torch.tensor([15.])
loss = loss(output_soft, target)
loss.backward()
print("Backward gradient inspection", output_soft.grad.detach().numpy())

def neg_PMF_gradient_check(input, target):
    # analytic gradients of the log-PMF, taken from the cited sources
    target = target.detach().numpy()
    total_count = input[0].detach().numpy()
    probability = input[1].detach().numpy()
    dg = digamma
    comp_prob = 1 - probability
    # gradient of the log-likelihood w.r.t. total_count
    grad_tc = dg(target + total_count) - dg(total_count) + np.log(total_count / (total_count + target))
    # gradient of the log-likelihood w.r.t. probability
    grad_prob = target / probability - total_count / comp_prob
    # negate, since the loss is the negative log-likelihood
    return [-grad_tc[0], -grad_prob[0]]

print("Numerical reproduction", neg_PMF_gradient_check(output_soft, target))

The printed results of the two approaches should be equal, but the results I get are completely different. Any guesses as to what is wrong? Thanks in advance.

0 Answers:

There are no answers.