I made some changes to the TensorFlow MNIST tutorial.
The original code (fully_connected_feed.py, lines 194-202):
checkpoint_file = os.path.join(FLAGS.log_dir, 'model.ckpt')
saver.save(sess, checkpoint_file, global_step=global_step)
#Evaluate against the training set.
print('Training Data Eval:')
do_eval(sess,
eval_correct,
images_placeholder,
labels_placeholder,
data_sets.train)
I just added one more evaluation before the existing one:
checkpoint_file = os.path.join(FLAGS.log_dir, 'model.ckpt')
saver.save(sess, checkpoint_file, global_step=global_step)
print('Something strange:')
do_eval(sess, eval_correct, images_placeholder, labels_placeholder,
data_sets.train)
#Evaluate against the training set.
print('Training Data Eval:')
do_eval(sess,
eval_correct,
images_placeholder,
labels_placeholder,
data_sets.train)
The results of these two evaluations are very close, but not identical (the numbers vary from run to run). How is that possible? UPD: added a link to the TensorFlow GitHub: https://github.com/tensorflow/tensorflow/tree/master/tensorflow/examples/tutorials/mnist
Answer (score 3):
The do_eval() function actually does have a side effect, because data_sets.train is a stateful DataSet object. It holds a current _index_in_epoch member, which advances every time DataSet.next_batch() is called (i.e., inside fill_feed_dict()).
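For context, the relevant part of the tutorial looks roughly like the sketch below (paraphrased from fully_connected_feed.py, so treat the details as approximate rather than verbatim): do_eval() calls fill_feed_dict() once per batch, and fill_feed_dict() is where data_set.next_batch() mutates the shared DataSet state.

# Rough paraphrase of the tutorial's evaluation path (not verbatim):
def fill_feed_dict(data_set, images_pl, labels_pl):
    # next_batch() advances data_set._index_in_epoch as a side effect.
    images_feed, labels_feed = data_set.next_batch(FLAGS.batch_size,
                                                   FLAGS.fake_data)
    return {images_pl: images_feed, labels_pl: labels_feed}

def do_eval(sess, eval_correct, images_pl, labels_pl, data_set):
    true_count = 0
    steps_per_epoch = data_set.num_examples // FLAGS.batch_size
    num_examples = steps_per_epoch * FLAGS.batch_size
    for step in range(steps_per_epoch):
        feed_dict = fill_feed_dict(data_set, images_pl, labels_pl)
        true_count += sess.run(eval_correct, feed_dict=feed_dict)
    print('Num examples: %d  Num correct: %d  Precision @ 1: %0.04f' %
          (num_examples, true_count, float(true_count) / num_examples))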
On its own, that fact would not be enough to give non-deterministic results, but two further details of DataSet.next_batch() lead to the non-determinism:
1. Each time a new epoch starts, the examples are randomly shuffled.
2. When the dataset reaches the end of an epoch, it is reset to the beginning and the last num_examples % batch_size examples are dropped. Because of the random shuffling, a different random subset of examples is dropped on each pass, which gives the non-deterministic results (a toy reproduction of this follows below).
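Here is a minimal, self-contained sketch of that mechanism. The ToyDataSet class below is hypothetical (it is not the real tensorflow.examples.tutorials.mnist DataSet); it only mimics the two behaviours listed above: reshuffle when a new epoch starts, and silently skip the leftover num_examples % batch_size examples.

import numpy as np

class ToyDataSet(object):
    """Hypothetical stand-in that mimics the stateful next_batch() behaviour."""

    def __init__(self, labels):
        self._labels = np.array(labels)
        self._num_examples = len(self._labels)
        self._index_in_epoch = 0

    def next_batch(self, batch_size):
        if self._index_in_epoch + batch_size > self._num_examples:
            # New epoch: reshuffle and restart.  The tail of the previous
            # epoch (num_examples % batch_size items) is simply skipped.
            np.random.shuffle(self._labels)
            self._index_in_epoch = 0
        start = self._index_in_epoch
        self._index_in_epoch += batch_size
        return self._labels[start:self._index_in_epoch]

def evaluate(data_set, batch_size=4):
    # Like do_eval(): num_examples // batch_size batches per evaluation.
    seen = []
    for _ in range(data_set._num_examples // batch_size):
        seen.extend(data_set.next_batch(batch_size))
    return sorted(seen)

ds = ToyDataSet(range(10))   # 10 examples, batch_size 4: 2 are dropped per epoch
print(evaluate(ds))          # two consecutive evaluations generally cover
print(evaluate(ds))          # different subsets of the 10 examples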
Given the way the code is structured (with the DataSet shared between training and testing), it is tricky to make the code deterministic.
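One possible workaround, sketched under the assumption that the DataSet object exposes its full images and labels arrays (the tutorial's input_data version does): bypass next_batch() during evaluation and feed fixed, in-order slices instead, so evaluation no longer advances the shared epoch state. This is only a sketch, not the tutorial's own code; do_eval_deterministic is a hypothetical helper name.

def do_eval_deterministic(sess, eval_correct, images_pl, labels_pl,
                          data_set, batch_size):
    # Evaluate on fixed slices so the shared DataSet state stays untouched.
    images, labels = data_set.images, data_set.labels  # assumed full arrays
    true_count = 0
    num_batches = len(images) // batch_size
    for i in range(num_batches):
        lo, hi = i * batch_size, (i + 1) * batch_size
        true_count += sess.run(eval_correct,
                               feed_dict={images_pl: images[lo:hi],
                                          labels_pl: labels[lo:hi]})
    num_examples = num_batches * batch_size
    print('Num examples: %d  Num correct: %d  Precision @ 1: %0.04f' %
          (num_examples, true_count, float(true_count) / num_examples))

Note that this still ignores the final num_examples % batch_size examples, but it ignores the same ones on every call, so repeated evaluations agree.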
The DataSet class is only sparsely documented, and this behaviour is surprising, so I would consider filing a GitHub issue about the problem.