TensorFlow Probability returns unstable predictions

Date: 2019-03-19 15:34:48

Tags: tensorflow tensorflow-probability

I am using a TensorFlow Probability model. The results are of course probabilistic, and the derivative of the error does not go to zero (otherwise the model would be deterministic). The predictions are unstable because, for example, the derivative of the loss in convex optimization ranges between 1.2 and 0.2.

This interval produces different predictions every time the model is trained. Sometimes I get a very good fit (red = actual, blue lines = predicted +2 and -2 standard deviations):

Good fit

Sometimes I don't, with the same hyperparameters:

Bad fit

And sometimes the fit is mirrored:

Mirrored

This is a problem for business purposes, since the predictions are expected to show stable output.

The code is as follows:

import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
import tensorflow_probability as tfp
np.random.seed(42)
dataframe = pd.read_csv('Apple_Data_300.csv').ix[0:800,:]
dataframe.head()

plt.plot(range(0,dataframe.shape[0]),dataframe.iloc[:,1])

x1=np.array(dataframe.iloc[:,1]+np.random.randn(dataframe.shape[0])).astype(np.float32).reshape(-1,1)

y=np.array(dataframe.iloc[:,1]).T.astype(np.float32).reshape(-1,1)

tfd = tfp.distributions

model = tf.keras.Sequential([
  tf.keras.layers.Dense(1,kernel_initializer='glorot_uniform'),
  tfp.layers.DistributionLambda(lambda t: tfd.Normal(loc=t, scale=1)),
  tfp.layers.DistributionLambda(lambda t: tfd.Normal(loc=t, scale=1)),
  tfp.layers.DistributionLambda(lambda t: tfd.Normal(loc=t, scale=1))
])
negloglik = lambda x, rv_x: -rv_x.log_prob(x)

model.compile(optimizer=tf.keras.optimizers.Adam(lr=0.0001), loss=negloglik)

model.fit(x1,y, epochs=500, verbose=True)

yhat = model(x1)
mean = yhat.mean()

init = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init)
    mm = sess.run(mean)    
    mean = yhat.mean()
    stddev = yhat.stddev()
    mean_plus_2_std = sess.run(mean + 2. * stddev)
    mean_minus_2_std = sess.run(mean - 2. * stddev)


plt.figure(figsize=(8,6))
plt.plot(y,color='red',linewidth=1)
#plt.plot(mm)
plt.plot(mean_minus_2_std,color='blue',linewidth=1)
plt.plot(mean_plus_2_std,color='blue',linewidth=1)
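As an aside (my addition, not part of the original post): the two blue curves are the predictive mean shifted by two predictive standard deviations, which under a Normal brackets roughly 95% of the probability mass. A minimal NumPy sketch of that band computation, with toy values standing in for yhat.mean() and yhat.stddev():

```python
import numpy as np

# Toy predictive means/stddevs standing in for yhat.mean() / yhat.stddev()
mean = np.array([1.0, 2.0, 3.0])
stddev = np.array([0.5, 0.5, 1.0])

# Under a Normal, mean +/- 2*stddev covers ~95.4% of the mass
lower = mean - 2.0 * stddev
upper = mean + 2.0 * stddev
print(lower)
print(upper)
```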

The loss:

Epoch 498/500
801/801 [==============================] - 0s 32us/sample - loss: 2.4169
Epoch 499/500
801/801 [==============================] - 0s 30us/sample - loss: 2.4078
Epoch 500/500
801/801 [==============================] - 0s 31us/sample - loss: 2.3944
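For reference (my addition): negloglik is the negative log-density of the target under the predicted Normal. With scale=1 it has the closed form 0.5*log(2π) + 0.5*(x − loc)², so even a perfect prediction cannot drive the loss below about 0.919, and a plateau near 2.4 as in the log above corresponds to a typical residual of roughly 1.7:

```python
import numpy as np

def normal_neglogpdf(x, loc, scale=1.0):
    # Same quantity as -rv_x.log_prob(x) for a Normal distribution
    return 0.5 * np.log(2.0 * np.pi * scale**2) + 0.5 * ((x - loc) / scale) ** 2

floor = normal_neglogpdf(0.0, 0.0)        # loss floor with scale=1: ~0.919
residual = np.sqrt(2.0 * (2.4 - floor))   # residual implied by a 2.4 plateau
print(floor, residual)
```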

Is it possible to control the prediction output of a probabilistic model? The loss stops falling at 1.42 even after lowering the learning rate and increasing the training time. What am I missing here?
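One possible contributor to the run-to-run variation (my observation, not from the original post): np.random.seed(42) only seeds NumPy's generator, while the Keras weight initializers draw from TensorFlow's own RNG, which in TF1 is seeded separately with tf.set_random_seed. The NumPy side of the principle, sketched with a hypothetical noisy_feature helper:

```python
import numpy as np

def noisy_feature(seed):
    # A dedicated RandomState makes each draw reproducible for a given seed
    rng = np.random.RandomState(seed)
    base = np.arange(5, dtype=np.float32)
    return base + rng.randn(5).astype(np.float32)

a = noisy_feature(42)
b = noisy_feature(42)  # same seed -> identical noise
c = noisy_feature(7)   # different seed -> different noise
print(np.array_equal(a, b), np.array_equal(a, c))
```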

Working code after applying the answer:

init = tf.global_variables_initializer()

with tf.Session() as sess:

    model = tf.keras.Sequential([
      tf.keras.layers.Dense(1,kernel_initializer='glorot_uniform'),
      tfp.layers.DistributionLambda(lambda t: tfd.Normal(loc=t, scale=1))
    ])
    negloglik = lambda x, rv_x: -rv_x.log_prob(x)

    model.compile(optimizer=tf.keras.optimizers.Adam(lr=0.0001), loss=negloglik)

    model.fit(x1,y, epochs=500, verbose=True, batch_size=16)

    yhat = model(x1)
    mean = yhat.mean()

    sess.run(init)
    mm = sess.run(mean)    
    mean = yhat.mean()
    stddev = yhat.stddev()
    mean_plus_3_std = sess.run(mean + 3. * stddev)
    mean_minus_3_std = sess.run(mean - 3. * stddev)

1 Answer:

Answer 0 (score: 2):

Are you running tf.global_variables_initializer too late?

I found this in an answer to Understanding tf.global_variables_initializer:

Variable initializers must be run explicitly before other ops in your model can be run. The easiest way to do that is to add an op that runs all the variable initializers, and run that op before using the model.