TensorFlow: making a variable trainable

Date: 2018-01-22 09:47:40

Tags: python tensorflow

I want to optimize a variable (the parameters of a negative binomial distribution), pre-initialized with a moment estimator (MME):

import tensorflow as tf

sample_data = tf.placeholder(tf.float32)
(r, p) = fit_mme(sample_data)  # pre-calculation

# The important part: make `r` trainable
r = tf.Variable(r, dtype=tf.float32, name="r")



#####################################################
# to clarify: from here training of `r`

mu = tf.reduce_mean(sample_data, axis=0, name="mu")
p = mu / (r + mu)
p = tf.identity(p, "p")

distribution = tf.contrib.distributions.NegativeBinomial(
    total_count=r, probs=p, name="nb-dist")
probs = distribution.log_prob(sample_data)
# minimize negative log probability
loss = -tf.reduce_sum(probs, name="loss")

optimizer = tf.train.AdamOptimizer(learning_rate=0.05)
train_op = optimizer.minimize(loss, global_step=tf.train.get_global_step())

errors = []
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())

    for i in range(10000):
        (probs_res, loss_res, _) = \
            sess.run((probs, loss, train_op), feed_dict={sample_data: x})
        errors.append(loss_res)
        print(i)

    r_estim = sess.run(distribution.total_count)

fit_mme:

import math

def fit_mme(sample_data, replace_values=None, name=None):
    """
    Calculates the Maximum-of-Momentum Estimator of `NB(r, p)` for given sample data along axis 0.

    :param sample_data: matrix containing samples for each distribution on axis 0\n
        E.g. `(N, M)` matrix with `M` distributions containing `N` observed values each
    :param replace_values: Matrix of size `shape(sample_data)[1:]`
    :param name: A name for the operation (optional).
    :return: estimated values of `r` and `p`
    """
    with tf.name_scope(name, "MME"):
        mean = tf.reduce_mean(sample_data, axis=0, name="mean")
        variance = tf.reduce_mean(tf.square(sample_data - mean),
                                  axis=0,
                                  name="variance")
        if replace_values is None:
            replace_values = tf.fill(tf.shape(variance), math.nan, name="NaN_constant")

        r_by_mean = tf.where(tf.less(mean, variance),
                             mean / (variance - mean),
                             replace_values)
        r = r_by_mean * mean
        r = tf.identity(r, "r")

        p = 1 / (r_by_mean + 1)
        p = tf.identity(p, "p")

        return r, p

However, I get the following error:

ValueError: initial_value must have a shape specified: Tensor("MME/r:0", dtype=float32)

Is there a better/cleaner solution for making `r` trainable?
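
For context, the ValueError comes from the static-shape check in `tf.Variable`: in TF 1.x, `initial_value` must have a fully defined static shape unless `validate_shape=False` is passed. A minimal sketch that reproduces the problem (assuming TF 1.x; the names here are illustrative, not the original graph):

import tensorflow as tf

x = tf.placeholder(tf.float32)            # no shape given, so the static shape is unknown
t = tf.reduce_mean(x, axis=0)             # shape still unknown at graph-construction time

# v = tf.Variable(t)                      # raises: initial_value must have a shape specified
v = tf.Variable(t, validate_shape=False)  # accepted: the static-shape check is skipped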

2 answers:

Answer 0 (score: 0)

Check the shape of `r`. It should be explicit.

I suspect you have `*` / `None` / `?` for the first dimension, which allows you to use batches of different sizes.

Fix the shape of `r`, or use `tf.reshape` or something similar to collapse the first dimension.
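
A minimal sketch of this suggestion (assuming TF 1.x; the `[None, 2]` placeholder shape, the Poisson sample, and the stand-in computation of the initial `r` are illustrative, not the original `fit_mme`): once the non-batch dimensions are fixed, the estimate has a known static shape and `tf.Variable` accepts it.

import numpy as np
import tensorflow as tf

sample_data = tf.placeholder(tf.float32, shape=[None, 2])  # explicit non-batch dimension
mean = tf.reduce_mean(sample_data, axis=0)                 # static shape (2,)
r_init = mean / 2.0                                        # stand-in for the MME estimate of r
r = tf.Variable(r_init, dtype=tf.float32, name="r")        # shape is known, no ValueError

with tf.Session() as sess:
    x = np.random.poisson(5.0, size=(100, 2)).astype(np.float32)
    # the initial value depends on the placeholder, so the initializer has to be fed
    sess.run(tf.global_variables_initializer(), feed_dict={sample_data: x})
    print(sess.run(r))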

Answer 1 (score: 0)

A number of things are not clear to me.

Why do you "break" the graph here:

sample_data = tf.placeholder(tf.float32)
r, p = fit_mme(sample_data) # pre-calculation

# The important part: make `r` trainable
r_var = tf.Variable(r, dtype=tf.float32, name="r")

`r` is already a Tensor that can be updated, so you don't need to feed its value into a variable, especially if you don't want to do any numpy processing on it in between.

But if you really want to do that, you can do it like this:

x = tf.placeholder(tf.float32, shape=[None, 2])

r, p = fit_mme(x) # pre-calculation
r_var = tf.Variable(tf.zeros(tf.shape(r)), dtype=tf.float32, validate_shape=False, name="r_var")

r_assign_op = tf.assign(r_var, r)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    #print (sess.run(r_var, feed_dict={x: [[3.4, 4.6], [5.4, 6.8]]}))
    print ("r_var before assign: %s" % sess.run(r_var))

    r_var_t, _ = sess.run([r_var, r_assign_op], feed_dict={x: [[3.4, 4.6], [5.4, 6.8]]})
    print ("r_var after assign: %s" % r_var_t)