I have an interesting problem that I'd like to solve.
I have a DCNN with dropout, which is normally only active during training. What I'd like to do is apply dropout during the test phase as well.
Doing this once per batch is easy, but now I want to run the same minibatch N times, where the set of dropped neurons changes on each run.
The goal is to sample the network with dropout in order to estimate its uncertainty. Here is the pseudocode I'm thinking of:
import tensorflow as tf

def test():
    # Get shuffled batch
    features, labels, nBatchesPerEpoch = get_feature_and_labels(batchSize=10)
    isTraining = tf.placeholder(tf.bool, shape=(), name='isTraining')
    # Keep probability for dropout; this has to be a float, not a bool
    keepProb = tf.placeholder(tf.float32, shape=(), name='keepProb')
    # Create network
    neuralNet = DCNN(features, labels, keepProb, isTraining)
    # Output from the neural net
    logits = neuralNet.logits
    with tf.Session() as sess:
        # Loop through the whole epoch
        for _ in range(nBatchesPerEpoch):
            # Here do some fancy stuff so that the same minibatch is reused,
            # but the dropped neurons change on every run
            logits_sum = 0.0
            for _ in range(20):  # Draw 20 dropout samples from each minibatch
                logits_sum += sess.run(logits, feed_dict={isTraining: True, keepProb: 0.5})
            # Then work with the averaged logits
            DoSomeStuff(logits_sum / 20)
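The "fancy stuff" part is the bit I'm unsure about. One approach I'm considering (a minimal sketch, assuming `features` is a feedable tensor coming out of the input pipeline, as in the pseudocode above) is to evaluate the input pipeline once per minibatch and then feed the resulting array back in, so that only the dropout mask changes between the 20 runs:

    with tf.Session() as sess:
        for _ in range(nBatchesPerEpoch):
            # Pull one concrete minibatch out of the input pipeline
            batch_features = sess.run(features)
            logits_sum = 0.0
            for _ in range(20):
                # Same inputs on every run; only the random dropout mask differs
                logits_sum += sess.run(logits,
                                       feed_dict={features: batch_features,
                                                  isTraining: True,
                                                  keepProb: 0.5})
            DoSomeStuff(logits_sum / 20)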
I hope I've managed to make myself clear :)
I think one direction that might be of interest is using tf.identity?
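On the tf.identity idea: I'm not sure it's even needed if the dropout op reads its keep probability from a placeholder, since tf.nn.dropout draws a fresh random mask on every sess.run call. A minimal sketch of the wiring I mean (the layer sizes and names here are made up, not my actual network):

    import tensorflow as tf

    x = tf.placeholder(tf.float32, shape=(None, 128))
    keepProb = tf.placeholder(tf.float32, shape=(), name='keepProb')
    hidden = tf.layers.dense(x, 64, activation=tf.nn.relu)
    # tf.nn.dropout samples a new random mask on every run, so repeated
    # sess.run calls with keepProb < 1.0 yield the MC dropout samples
    dropped = tf.nn.dropout(hidden, keep_prob=keepProb)
    logits = tf.layers.dense(dropped, 10)
    # Training: feed keepProb=0.5; deterministic test: keepProb=1.0;
    # MC sampling at test time: keepProb=0.5 and run the same batch 20 times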