Retraining a TensorFlow neural network on different data

Posted: 2018-12-18 15:19:00

Tags: python tensorflow keras neural-network deep-learning

For example, suppose I have a list of inputs to a neural network:

list_of_inputs = [inputs1, inputs2, inputs3, ... ,inputsN]

and a corresponding list of labels:

list_of_labels = [label1, label2, label3, ..., labelN]

I want to feed each (input, label) pair into the network, train on it and record the loss, then train the same network on the next (input, label) pair and record that loss, and so on for every pair.

Note: I do not want to re-initialize the weights each time a new (input, label) pair comes in; I want to keep the weights trained on the previous pair. The network looks like this (you can see I am also printing the loss). How can I do this?

import numpy as np
import tensorflow as tf
import tensorflow_probability as tfp

with tf.name_scope("nn"):
    model = tf.keras.Sequential([
        tfp.layers.DenseFlipout(64, activation=tf.nn.relu),
        tfp.layers.DenseFlipout(64, activation=tf.nn.softmax),
        tfp.layers.DenseFlipout(np.squeeze(labels).shape[0])
    ])

logits = model(inputs)
loss = tf.reduce_mean(tf.square(labels - logits))
train_op_bnn = tf.train.AdamOptimizer().minimize(loss)


init_op = tf.group(tf.global_variables_initializer(),
                   tf.local_variables_initializer())

with tf.Session() as sess:
    sess.run(init_op)
    for i in range(100):   
        sess.run(train_op_bnn)
        print(sess.run(loss))

Edit:

The problem is that when I try to restructure the network using a build_graph function, like this:

init_op = tf.group(tf.global_variables_initializer(),tf.local_variables_initializer())

with tf.Session() as sess:
    sess.run(init_op)

    inputs,labels = MEMORY[0]

    logits, model_losses = build_graph(inputs)
    loss = tf.reduce_mean(tf.square(labels - logits))
    train_op_bnn = tf.train.AdamOptimizer().minimize(loss)

    sess.run(train_op_bnn)
    print(sess.run(loss))   

I get this error:

FailedPreconditionError                   Traceback (most recent call last)
<ipython-input-95-5ca77fa0606a> in <module>()
     36     train_op_bnn = tf.train.AdamOptimizer().minimize(loss)
     37 
---> 38     sess.run(train_op_bnn)
     39     print(sess.run(loss))
     40 

1 Answer:

Answer 0 (score: 1)

logits, model_losses = build_graph(inputs)
loss = tf.reduce_mean(tf.square(labels - logits))
train_op_bnn = tf.train.AdamOptimizer().minimize(loss)

should come before

with tf.Session() as sess:

and also before your init_op definition. Otherwise the variables created by build_graph (and by the Adam optimizer) do not exist yet when init_op runs, so they are never initialized, which is what raises the FailedPreconditionError.
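The underlying principle (build the model and its update rule once, initialize the weights once, then keep training the same weights on successive (inputs, labels) pairs) can be sketched framework-agnostically with a tiny NumPy gradient-descent loop. This is only an illustration with a toy linear model and made-up data, not the DenseFlipout network or the real MEMORY list from the question:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Build the graph" once: a tiny linear model y = x @ W.
# W is initialized ONCE and never reset between pairs.
W = rng.normal(scale=0.1, size=(3, 1))

def train_step(x, y, W, lr=0.1):
    """One gradient-descent step on mean squared error; returns (loss, new W)."""
    pred = x @ W
    err = pred - y
    loss = float(np.mean(err ** 2))
    grad = 2.0 * x.T @ err / len(x)
    return loss, W - lr * grad

# Stand-in for the question's MEMORY list of (inputs, labels) pairs (toy data).
MEMORY = [(rng.normal(size=(8, 3)), rng.normal(size=(8, 1))) for _ in range(3)]

all_losses = []
for inputs, labels in MEMORY:      # loop over pairs; weights carry over
    for _ in range(100):           # 100 steps per pair, as in the question
        loss, W = train_step(inputs, labels, W)
    all_losses.append(loss)        # record the final loss for this pair

print(all_losses)
```

In the TF1 code from the question, the same ordering means: call build_graph, define loss, train_op_bnn, and init_op all before entering the Session, then inside the Session run init_op once and run train_op_bnn for each pair (e.g. by feeding the pairs through placeholders).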