Why are my results still not reproducible?

Date: 2019-10-17 13:14:00

Tags: tensorflow keras conv-neural-network google-colaboratory reproducible-research

I want to obtain reproducible results for a CNN. I am using Keras and Google Colab with a GPU.

In addition to inserting the code snippets recommended for reproducibility, I also added seeds to the layers.

###### This is the first code snippet to run #####

!pip install -U -q PyDrive
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from google.colab import auth
from oauth2client.client import GoogleCredentials

# Authenticate and create the PyDrive client.
# This only needs to be done once per notebook.
auth.authenticate_user()
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
drive = GoogleDrive(gauth)
###### This is the second code snippet to run #####

from __future__ import print_function  
import numpy as np 

import tensorflow as tf
print(tf.test.gpu_device_name())

import random as rn
import os
os.environ['PYTHONHASHSEED'] = '0'  # seed Python's hash randomization
np.random.seed(1)   # seed NumPy's RNG
rn.seed(1)          # seed Python's built-in RNG
session_conf = tf.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)  # single-threaded ops

###### This is the third code snippet to run #####

from keras import backend as K

tf.set_random_seed(1) 
sess = tf.Session(graph=tf.get_default_graph(), config=session_conf)  
K.set_session(sess)   
###### This is the fourth code snippet to run #####

from keras import initializers
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Dense, Dropout, Activation, Flatten, BatchNormalization
from keras.optimizers import Adam
from keras.callbacks import EarlyStopping
from sklearn.model_selection import train_test_split

def model_cnn():
  model = Sequential()
  model.add(Conv2D(32, kernel_size=(3,3), kernel_initializer=initializers.glorot_uniform(seed=1), input_shape=(28,28,1)))
  model.add(BatchNormalization())
  model.add(Activation('relu'))

  model.add(Conv2D(32, kernel_size=(3,3), kernel_initializer=initializers.glorot_uniform(seed=2)))
  model.add(BatchNormalization())
  model.add(Activation('relu'))
  model.add(MaxPooling2D(pool_size=(2,2)))
  model.add(Dropout(0.25, seed=1))  

  model.add(Flatten())

  model.add(Dense(512, kernel_initializer=initializers.glorot_uniform(seed=2)))
  model.add(BatchNormalization())
  model.add(Activation('relu'))
  model.add(Dropout(0.5, seed=1))
  model.add(Dense(10, kernel_initializer=initializers.glorot_uniform(seed=2)))
  model.add(Activation('softmax'))

  model.compile(loss="categorical_crossentropy", optimizer=Adam(lr=0.001), metrics=['accuracy'])
  return model


def split_data(X,y):
  X_train_val, X_val, y_train_val, y_val = train_test_split(X, y, random_state=42, test_size=1/5, stratify=y) 
  return(X_train_val, X_val, y_train_val, y_val) 


def train_model_with_EarlyStopping(model, X, y):
  # make train and validation data
  X_tr, X_val, y_tr, y_val = split_data(X,y)

  es = EarlyStopping(monitor='val_loss', patience=20, mode='min', restore_best_weights=True)

  history = model.fit(X_tr, y_tr,
                      batch_size=64,
                      epochs=200, 
                      verbose=1,
                      validation_data=(X_val,y_val),
                      callbacks=[es])    

  return history
###### This is the fifth code snippet to run #####

train_model_with_EarlyStopping(model_cnn(), X, y)

Every time I run the code above, I get different results. Is the cause in the code, or is it simply not possible to obtain reproducible results in Google Colab with GPU support?


The complete code (it contains unnecessary parts, e.g. unused libraries):

!pip install -U -q PyDrive
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from google.colab import auth
from oauth2client.client import GoogleCredentials
auth.authenticate_user()
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
drive = GoogleDrive(gauth)
from __future__ import print_function # NEW
import numpy as np 

import tensorflow as tf
import random as rn 
import os 
os.environ['PYTHONHASHSEED'] = '0'
np.random.seed(1)   
rn.seed(1)   
session_conf = tf.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1) 
from keras import backend as K

tf.set_random_seed(1)  
sess = tf.Session(graph=tf.get_default_graph(), config=session_conf)   
K.set_session(sess)  

import os
local_root_path = os.path.expanduser("~/data/data")
print(local_root_path)
try:
  os.makedirs(local_root_path, exist_ok=True)  
except: pass

def ListFolder(google_drive_id, destination):
  file_list = drive.ListFile({'q': "'%s' in parents and trashed=false" % google_drive_id}).GetList()
  counter = 0
  for f in file_list:
    # If it is a folder, create the directory and download the files inside it
    if f['mimeType']=='application/vnd.google-apps.folder': 
      folder_path = os.path.join(destination, f['title'])
      os.makedirs(folder_path, exist_ok=True)
      print('creating directory {}'.format(folder_path))
      ListFolder(f['id'], folder_path)
    else:
      fname = os.path.join(destination, f['title'])
      f_ = drive.CreateFile({'id': f['id']})
      f_.GetContentFile(fname)
      counter += 1
  print('{} files were downloaded to {}'.format(counter, destination))
ListFolder("1DyM_D2ZJ5UHIXmXq4uHzKqXSkLTH-lSo", local_root_path)

import glob
import h5py
from time import time
from keras import initializers 
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential, model_from_json
from keras.layers import Dense, Dropout, Activation, Flatten, BatchNormalization, merge
from keras.layers import Convolution2D, MaxPooling2D, AveragePooling2D
from keras.optimizers import SGD, Adam, RMSprop, Adagrad, Adadelta, Adamax, Nadam
from keras.utils import np_utils
from keras.callbacks import LearningRateScheduler, ModelCheckpoint, TensorBoard, ReduceLROnPlateau
from keras.regularizers import l2
from keras.layers.advanced_activations import LeakyReLU, ELU
from keras import backend as K
import numpy as np
import pickle as pkl
from matplotlib import pyplot as plt
%matplotlib inline
import gzip
import numpy as np
from keras.utils import to_categorical
from keras.models import Sequential
from keras.layers import Dense, Conv2D, Flatten
from keras.datasets import fashion_mnist
from numpy import mean, std
import matplotlib.pyplot as plt
from sklearn.model_selection import KFold, StratifiedKFold
from keras.datasets import fashion_mnist
from keras.utils import to_categorical
from keras.layers import Conv2D, MaxPooling2D, Dense, Flatten
from keras.optimizers import SGD, Adam
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.metrics import accuracy_score, confusion_matrix, classification_report
from sklearn.metrics import precision_score
from sklearn.metrics import recall_score
from sklearn.metrics import f1_score
from sklearn.metrics import auc, average_precision_score, f1_score

import time
from keras.callbacks import EarlyStopping
from keras.callbacks import ModelCheckpoint
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, classification_report
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV
from google.colab import files
from PIL import Image 



def model_cnn():
  model = Sequential()
  model.add(Conv2D(32, kernel_size=(3,3), kernel_initializer=initializers.glorot_uniform(seed=1), input_shape=(28,28,1)))
  model.add(BatchNormalization())
  model.add(Activation('relu'))
  model.add(Conv2D(32, kernel_size=(3,3), kernel_initializer=initializers.glorot_uniform(seed=2)))
  model.add(BatchNormalization())
  model.add(Activation('relu'))
  model.add(MaxPooling2D(pool_size=(2,2)))
  model.add(Dropout(0.25, seed=1))  
  model.add(Flatten())
  model.add(Dense(512, kernel_initializer=initializers.glorot_uniform(seed=2)))
  model.add(BatchNormalization())
  model.add(Activation('relu'))
  model.add(Dropout(0.5, seed=1))
  model.add(Dense(10, kernel_initializer=initializers.glorot_uniform(seed=2)))
  model.add(Activation('softmax'))
  model.compile(loss="categorical_crossentropy", optimizer=Adam(lr=0.001), metrics=['accuracy'])
  return model

def train_model_with_EarlyStopping(model, X, y):
  X_tr, X_val, y_tr, y_val = split_data(X, y)  # split_data() is defined in the snippets above
  es = EarlyStopping(monitor='val_loss', patience=20, mode='min', restore_best_weights=True)      
  history = model.fit(X_tr, y_tr,
                      batch_size=64,
                      epochs=200, 
                      verbose=1,
                      validation_data=(X_val,y_val),
                      callbacks=[es])    
  evaluate_model(model, history, X_tr, y_tr)  # evaluate_model() is a helper not shown in this post
  return history 





1 answer:

Answer 0: (score: 3)

The problem is not limited to Colab; it is reproducible locally as well. Such behavior, however, may be inevitable.

The code at the bottom is a minimally reproducible version of your code, with fit parameters tweaked for faster testing. What I observed is that the maximum difference in loss is only 0.0144% across 5 runs of 468 iterations each. That is pretty good. With batch_size=64, 60000 samples, and 20 epochs, you will run 18750 iterations, which will amplify that figure substantially.

In any case, GPU parallelism is the most likely culprit driving the randomness: the small differences do accumulate over time and yield a substantial difference (demo below). If 1e-8 seems small, try adding random noise of magnitude clipped at 1e-8 to half of your weights, and watch the model's "philosophy of life" change (a sketch of this experiment follows).
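
A minimal sketch of that noise experiment, assuming the question's model_cnn() and the Keras backend K are in scope (the choice of every second weight tensor is illustrative):

import numpy as np

model = model_cnn()
for w in model.trainable_weights[::2]:              # perturb half of the weight tensors
    vals = K.get_value(w)
    noise = np.random.uniform(-1e-8, 1e-8, vals.shape)
    K.set_value(w, vals + noise)                    # weights now differ by at most 1e-8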

The role of the seeds becomes dramatically pronounced if you do not use them: try it, and all your metrics will spread rampantly within the first 10 iterations. Also, loss is better suited for measuring runtime differences, since accuracy is far more sensitive to numeric precision errors: the difference between 60% accuracy and 70% accuracy on a 10-sample batch is a single prediction that differs by 0.000001 w.r.t. 0.5, yet the loss will barely budge (a worked example follows).
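
A worked check of that claim, assuming a binary decision threshold at 0.5 (the values are illustrative):

import numpy as np

p1, p2 = 0.5 - 1e-6, 0.5 + 1e-6      # the same prediction, nudged by ~1e-6
print(abs(np.log(p2) - np.log(p1)))  # cross-entropy change: ~4e-06
# ... but on a 10-sample batch, that nudge flips one prediction across the
# threshold, moving accuracy by a full 10% (e.g. 60% -> 70%)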

Lastly, note that your choice of hyperparameters will have a far greater impact on model performance than randomness ever will; no matter how many seeds you throw at a model, they will not turn it into SOTA. -- I recommend this fine clip.


Your code is fine. You have taken all practical steps to ensure reproducibility, with one exception: PYTHONHASHSEED must be set before your Python kernel starts (a sketch follows).
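
A minimal sketch of one way to satisfy that requirement: launch the training in a fresh interpreter with the variable already set (train.py is a hypothetical script holding the training code):

import os
import subprocess

# equivalent shell one-liner: PYTHONHASHSEED=0 python train.py
env = dict(os.environ, PYTHONHASHSEED='0')       # set before the child Python starts
subprocess.run(['python', 'train.py'], env=env)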


What can you do to reduce randomness?

  1. Repeat runs and average the results. This is understandably expensive, but note that even a perfectly reproducible run is not perfectly informative, since model variance with respect to the train and validation sets is likely to be much greater than the noise-induced randomness (see the sketch after this list).

  2. K-fold cross-validation: can mitigate both data and noise variance significantly.

  3. Larger validation set: extracted features can only differ so much due to noise; the larger the validation set, the less small perturbations in the weights should show up in the metrics.
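
A minimal sketch combining points 1 and 2, assuming the question's model_cnn(), X, and one-hot y are in scope (n_runs, n_splits, and the short epoch count are illustrative):

import numpy as np
from sklearn.model_selection import StratifiedKFold

def repeated_kfold_accuracy(build_fn, X, y, n_runs=3, n_splits=5):
    scores = []
    for run in range(n_runs):
        skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=run)
        for tr_idx, val_idx in skf.split(X, y.argmax(axis=1)):  # labels back from one-hot
            model = build_fn()                                  # fresh weights per fold
            model.fit(X[tr_idx], y[tr_idx], batch_size=64, epochs=5, verbose=0)
            scores.append(model.evaluate(X[val_idx], y[val_idx], verbose=0)[1])
    return np.mean(scores), np.std(scores)  # report mean +/- std instead of a single run

acc_mean, acc_std = repeated_kfold_accuracy(model_cnn, X, y)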


GPU parallelism: amplifying float error

print(2. * 11. / 9.)  # 2.4444444444444446
print(2. / 9. * 11.)  # 2.444444444444444

The order of operations matters, and by exploiting multithreading, GPU parallelism gives no guarantee whatsoever that operations are executed in the same order. At first glance the difference may look innocent, but give it enough iterations ...

one = 1
for _ in range(int(1e8)):
    one *= (2. / 9. * 11.) / (2. * 11. / 9.)
print(one)     # 0.9999999777955395
print(1 - one) # 1.8167285897874308e-08

... and the "one" is now about 1e-08 away from its original self, which happens to be a typical magnitude for small weight values. If 100 million iterations seem like a stretch, consider that the operation completed in about half a minute, whereas your model can train for over an hour, and the former runs entirely on CPU.


最小可重复的实验

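A minimal sketch of such an experiment, assuming the question's model_cnn(), split_data(), X, and y are in scope (the seed-reset helper and the single short epoch are assumptions to keep runs fast):

import numpy as np
import random as rn
import tensorflow as tf

def reset_seeds():
    np.random.seed(1); rn.seed(1); tf.set_random_seed(1)  # mirror the question's seeding

def run_once():
    reset_seeds()
    model = model_cnn()                                   # model from the question
    X_tr, X_val, y_tr, y_val = split_data(X, y)
    h = model.fit(X_tr, y_tr, batch_size=64, epochs=1,
                  validation_data=(X_val, y_val), verbose=0)
    return h.history['loss'][-1]

losses = [run_once() for _ in range(5)]                   # 5 runs
print(max(losses) - min(losses))                          # max run-to-run loss difference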

Differences across runs:

On the order of 1e-08 per weight; across 5 runs of 468 iterations each, this accumulated to the 0.0144% maximum loss difference reported above.