Replicating results in TensorFlow estimators

Asked: 2019-04-17 01:11:02

Tags: tensorflow tensorflow-estimator

I am building a classifier using TensorFlow estimators.

But I get different results for the same experiment when I run it multiple times.

These experiments share all the same parameters, from the batches of data to the architecture of the network.

I set the random seed in the config of the estimator as well.

Is there any other random seed that I am not aware of?
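For context, in TF 1.x the `tf_random_seed` passed to `RunConfig` sets the graph-level seed, but Python's `random` module, NumPy, and per-op seeds (e.g. in dropout or shuffle ops) are independent streams that it does not touch. A stdlib-only sketch of that principle (the generator names here are illustrative, not TensorFlow APIs):

```python
import random

# Seeding one generator reproduces its stream exactly, but leaves
# any other generator unaffected -- analogous to tf_random_seed
# covering only TensorFlow's graph-level RNG.
graph_rng = random.Random(42)   # stands in for the seeded graph-level RNG
other_rng = random.Random()     # unseeded: like NumPy or an op left unseeded

run1 = [graph_rng.random() for _ in range(3)]

graph_rng.seed(42)              # re-seed -> the stream repeats exactly
run2 = [graph_rng.random() for _ in range(3)]

assert run1 == run2             # reproducible: same seed, same sequence
other_rng.random()              # this stream is still nondeterministic
```

So even with `tf_random_seed` fixed, any randomness drawn from an unseeded source can make runs diverge.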

I have checked the data side of the experiment: I make sure the same batches are sent, in the same sequence, in both runs.
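One way to verify the claim above is to derive the batch order from a dedicated, seeded generator, so two runs provably produce identical batches. A minimal sketch with a hypothetical helper (`make_batches` is not a TensorFlow API):

```python
import random

def make_batches(examples, batch_size, seed):
    # A dedicated Random instance keeps the shuffle reproducible and
    # isolated from any other randomness in the program.
    rng = random.Random(seed)
    order = list(examples)
    rng.shuffle(order)
    return [order[i:i + batch_size] for i in range(0, len(order), batch_size)]

a = make_batches(range(10), 3, seed=7)
b = make_batches(range(10), 3, seed=7)
assert a == b   # identical seed -> identical batch sequence
```

If the batches are confirmed identical like this, the remaining divergence has to come from the model side (weight initialization, dropout, or other op-level randomness).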

The training curves look close to each other, but the validation curves are way off, as you can see in the image below. These are the loss curves for two identical experiments, run on the same dataset with the same batches created at each step.

[Image: training and validation loss curves for the two identical runs]

This is how I set the random seed of the TensorFlow estimator:

config = tf.estimator.RunConfig(save_summary_steps=t.save_summary_steps,
                                log_step_count_steps=t.log_step_count_steps,
                                save_checkpoints_steps=t.save_checkpoints_steps,
                                keep_checkpoint_max=t.keep_checkpoint_max,
                                tf_random_seed=t.random_number)

0 Answers:

No answers yet.