Implementing a simple probabilistic model with negative log-likelihood loss

Time: 2019-11-24 22:23:51

Tags: python tensorflow deep-learning unsupervised-learning

First, a quick disclaimer: I posted this question on Reddit first, in the deep learning and learn machine learning subreddits, but I figured I might as well ask for your expertise here too. Without further ado:

This year I am challenging myself with the Deep Unsupervised Learning Course of Berkeley University, and although I have only just started the Week 1 warm-up, I have already run into a "technical" difficulty.

The exercise in question is "1. Warmup" in the following document: Week 1 Exercises. (My apologies, I am not familiar enough with Reddit formatting, so no image seems to be included.)

As far as I understand it, we have a variable x that can take values from 1..100, each with a specific sampling probability (defined in the sample_data() function). The task is therefore to fit a vector of parameters theta, which is passed to a softmax function and is assumed to give the likelihood that a specific element x_i is sampled. That is, theta_1 should be the parameter that "bumps up" the soft-max value corresponding to the variable x = 1, and so on.
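In equations, this is how I understand the task: the model assigns

$$p_\theta(x = i) = \mathrm{softmax}(\theta)_i = \frac{e^{\theta_i}}{\sum_{j=1}^{100} e^{\theta_j}},$$

and training should minimize the average negative log-likelihood of the sampled data,

$$\mathcal{L}(\theta) = -\frac{1}{N} \sum_{n=1}^{N} \log p_\theta(x = x_n).$$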

Using Tensorflow, I think I managed to create such a model, but when it comes to training I believe I am missing a crucial point, as the program is unable to compute the gradients with respect to the theta parameters.

I would like to know whether I have misunderstood the task, and whether there is a better way to achieve the result of the exercise.

Here is the code, where the failing part is at # Computing gradients.

import numpy as np
import tensorflow as tf
import tensorflow_probability as tfp

if __name__ == "__main__":
    # Sampling function of the x variable provided in the exercise
    def sample_data():
        count = 10000
        rand = np.random.RandomState(0)
        a = 0.3 + 0.1 * rand.randn(count)
        b = 0.8 + 0.05 * rand.randn(count)
        mask = rand.rand(count) < 0.5
        samples = np.clip(a * mask + b * (1 - mask), 0.0, 1.0)
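        # clip keeps the two-component Gaussian mixture inside [0, 1];
        # digitize then maps each sample to an integer value in 1..100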
        return np.digitize(samples, np.linspace(0.0, 1.0, 100))

    full_data = sample_data()
    train_ds = full_data[:int(.8*len( full_data))]
    val_ds = full_data[int(.8*len( full_data)):]

    # Declaring parameters theta
    w_init = tf.zeros_initializer()
    params = tf.Variable(
        initial_value=w_init(shape=(1, 100), dtype='float32'),
        trainable=True, name='params')


    softmax = tf.squeeze( tf.nn.softmax( params, axis=1))

    #Should materialize the loss of the model
    def get_neg_log_likelihood( inputs):
        return - tf.math.log( softmax)

    neg_log_likelihoods = get_neg_log_likelihood( softmax)

    dist = tfp.distributions.Categorical( probs=softmax, dtype=tf.int32)

    optimizer = tf.keras.optimizers.Adam()

    for epoch in range( 100):
        minibatch_size = 200
        n_minibatches = len( train_ds) // minibatch_size

        # Running over minibatches of the data
        for minibatch in range( n_minibatches):
            # Minibatching
            start_index = (minibatch*minibatch_size)
            end_index = (minibatch_size*minibatch + minibatch_size)

            x = train_ds[start_index:end_index]

            with tf.GradientTape() as tape:
                tape.watch( params)
                loss = tf.reduce_mean( - dist.log_prob( x))

            # Computing gradients
            grads = tape.gradient( loss, params)
            print( grads) # Result: None
            # input()
            optimizer.apply_gradients( zip( grads, params))

Thanks in advance for your time.

PS: My background is mostly in Deep Reinforcement Learning, so I am familiar with the various models used there (policies, value functions, ...), but I am trying to get a better grasp of the internals of the models themselves, namely generative probabilistic models (GANs, VAEs) and other unsupervised learning models in general (RealNVP, Normalizing Flows, ...).

1 Answer:

Answer 0 (score: 0):

Probably nobody will ever see this, but I figured I might as well give it some closure.

First, I computed the gradients by deriving their expression directly from the negative log-likelihood of the soft-max values, thereby dropping the Tensorflow framework for this case.
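Concretely, writing $p_j = \mathrm{softmax}(\theta)_j$, the derivative for an observed sample $x = i$ is the standard soft-max negative log-likelihood result,

$$\frac{\partial}{\partial \theta_j}\left(-\log p_i\right) = p_j - \mathbf{1}[j = i],$$

which is exactly what each row of the jacobian in the code below computes.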

Although the results were a bit unexpected, the program was able to fit the model to a distribution that somewhat resembles the empirical distribution of the sampled data. I guess this is due to the fact that a one-dimensional vector of theta parameters is not enough to fully model the true data distribution, as well as the limited amount of sampled data.

Updated version of the code:

import numpy as np
from matplotlib import pyplot as plt

np.random.seed( 42)

def softmax(X, theta = 1.0, axis = None):
    # Shameful copy-paste from SO
    y = np.atleast_2d(X)
    if axis is None:
        axis = next(j[0] for j in enumerate(y.shape) if j[1] > 1)
    y = y * float(theta)
    y = y - np.expand_dims(np.max(y, axis = axis), axis)
    y = np.exp(y)
    ax_sum = np.expand_dims(np.sum(y, axis = axis), axis)
    p = y / ax_sum
    if len(X.shape) == 1: p = p.flatten()

    return p

if __name__ == "__main__":
    def sample_data():
        count = 10000
        rand = np.random.RandomState(0)
        a = 0.3 + 0.1 * rand.randn(count)
        b = 0.8 + 0.05 * rand.randn(count)
        mask = rand.rand(count) < 0.5
        samples = np.clip(a * mask + b * (1 - mask), 0.0, 1.0)
        return np.digitize(samples, np.linspace(0.0, 1.0, 100))

    full_data = sample_data()
    train_ds = full_data[:int(.8*len( full_data))]
    val_ds = full_data[int(.8*len( full_data)):]

    # Declaring parameters
    params = np.zeros(100)

    # Used for loss computation
    def get_neg_log_likelihood( softmax):
        return - np.log( softmax)

    def get_loss( params, x):
        return np.mean( [get_neg_log_likelihood( softmax( params))[i-1] for i in x])

    lr = .0005

    for epoch in range( 1000):
        # Shuffling training data
        np.random.shuffle( train_ds)

        minibatch_size = 100
        n_minibatches = len( train_ds) // minibatch_size

        # Running over minibatches of the data
        for minibatch in range( n_minibatches):
            smax = softmax( params)

            # Jacobian of the neg log likelihood: row i holds the gradient
            # of -log softmax(params)_i w.r.t. each theta_j, i.e. smax[j] - 1[i == j]
            jacobian = [[ smax[j] - 1 if i == j else
                smax[j] for j in range(100)] for i in range(100)]

            # Minibatching
            start_index = (minibatch*minibatch_size)
            end_index = (minibatch_size*minibatch + minibatch_size)

            x = train_ds[start_index:end_index]

            # Stack the gradient row for each sample (x values are 1..100, hence the -1)
            # and sum them over the minibatch
            grad_matrix = np.vstack( [jacobian[i-1] for i in x])
            grads = np.sum( grad_matrix, axis=0)

            params -= lr * grads

        print( "Epoch %d -- Train loss: %.4f , Val loss: %.4f" %(epoch, get_loss( params, train_ds), get_loss( params, val_ds)))

        # Plotting every ~100 epochs
        if epoch % 100 == 0:
            counters = { i+1: 0 for i in range(100)}
            for x in full_data:
                counters[x]+= 1

            histogram = np.array( [ counters[i+1] / len( full_data) for i in range( 100)])
            fsmax = softmax( params)

            fig, ax = plt.subplots()
            ax.set_title('Dist. Comp. after %d epochs of training (from scratch)' % epoch)
            x = np.arange( 1,101)
            width = 0.35
            rects1 = ax.bar(x - width/2, fsmax, width, label='Model')
            rects2 = ax.bar(x + width/2, histogram, width, label='Empirical')
            ax.set_ylabel('Likelihood')
            ax.set_xlabel("Variable x's values")
            ax.legend()

            def autolabel(rects):
                # Note: currently a no-op, the bar heights are read but never annotated
                for rect in rects:
                    height = rect.get_height()

            autolabel(rects1)
            autolabel(rects2)

            fig.tight_layout()
            plt.savefig( 'plots/results_after_%d_epochs.png' % epoch)

For completeness, here is a picture of the final modeled distribution: Modeled vs Empirical Distribution
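In hindsight, the reason the original Tensorflow version returned None gradients is most likely that the softmax and the Categorical distribution were built before the GradientTape started recording, so the tape never saw a path from params to the loss. Below is a minimal sketch (untested against the exercise) of how the forward pass could be rebuilt inside the tape, shifting the 1..100 samples to 0-based category indices:

import tensorflow as tf
import tensorflow_probability as tfp

params = tf.Variable(tf.zeros((1, 100)), name='params')
optimizer = tf.keras.optimizers.Adam()

def train_step(x):
    # x: integer samples in 1..100, as produced by sample_data()
    with tf.GradientTape() as tape:
        # Build the whole forward pass inside the tape so it gets recorded
        probs = tf.squeeze(tf.nn.softmax(params, axis=1))
        dist = tfp.distributions.Categorical(probs=probs, dtype=tf.int32)
        loss = tf.reduce_mean(-dist.log_prob(x - 1))  # shift to categories 0..99
    grads = tape.gradient(loss, [params])  # no longer None
    optimizer.apply_gradients(zip(grads, [params]))
    return loss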