Replicating results with Keras

Asked: 2019-04-26 04:22:05

Tags: tensorflow machine-learning keras deep-learning replication

I am trying to replicate the results of my experiments using TensorFlow and Keras (with the TF backend). When using TF, I set the random seeds for numpy and the tensorflow graph at the top of the script. I am not using dropout layers or any other method that could introduce randomness (that I can think of).

Running this model, regardless of its network size, always produces the same results.

TF experiment 1:

('Epoch ', 99, ' completed out of ', 100, ' loss: ', 289.8982433080673, 'accuracy: ', 0.6875)

TF experiment 2:

('Epoch ', 99, ' completed out of ', 100, ' loss: ', 289.8982433080673, 'accuracy: ', 0.6875)

When I try to replicate these results with an identically configured Keras model, I fail. On top of that, each individual run yields different performance.

My TF code, which does replicate results, is given below. Script reference: https://www.youtube.com/watch?v=BhpvH5DuVu8&list=PLQVvvaa0QuDfKTOs3Keq_kaG2P55YRn5v&index=46

from numpy.random import seed                                                                                                                           
seed(1)                                                                                                                                                 
from tensorflow import set_random_seed                                                                                                                  
set_random_seed(2)                                                                                                                                      

## import system modules                                                                                                                                
#
import os                                                                                                                                               
import sys                                                                                                                                              

## import ML modules                                                                                                                                    
#
import tensorflow as tf                                                                                                                                 
import numpy as np                                                                                                                                      
from keras.utils import to_categorical                                                                                                                  
from sklearn import preprocessing                                                                                                                       

logs_path = '../logs/'                                                                                                                                                                                                    

## Default constants                                                                                                                                    
#
NO_OF_CLASSES = 2                                                                                                                                       
BATCH_SIZE = 32                                                                                                                                         
FEAT_DIM = 26                                                                                                                                           
N_nodes_hl1 = 300                                                                                                                                       
N_nodes_hl2 = 30                                                                                                                                        
N_nodes_hl3 = 30                                                                                                                                        

## define the network architecture                                                                                                                      
#                                                                                                                                                       
## This model is a simple multilayer perceptron network with 3 hidden layers.                                                                           
## Input to the layer has the dimensions equal to feature dimensions.                                                                                   
## We build the complete graph in this method, taking the input placeholder as an argument and
## returning the output op of the network
#
def neural_network_model(data):                                                                                                                         

    ## defining dictionaries specifying the specification of each layer.                                                                                
    #
    hidden_1_layer = {'weights': tf.Variable(tf.random_normal([FEAT_DIM, N_nodes_hl1]), name='w1'),\                                                    
                      'biases': tf.Variable(tf.random_normal([N_nodes_hl1]), name='b1')}                                                                

    hidden_2_layer = {'weights': tf.Variable(tf.random_normal([N_nodes_hl1, N_nodes_hl2]), name='w2'), \                                                
                      'biases': tf.Variable(tf.random_normal([N_nodes_hl2]), name='b2')}                                                                

    hidden_3_layer = {'weights': tf.Variable(tf.random_normal([N_nodes_hl2, N_nodes_hl3]), name='w3'),\                                                 
                      'biases': tf.Variable(tf.random_normal([N_nodes_hl3]), name='b3')}                                                                

    output_layer = {'weights': tf.Variable(tf.random_normal([N_nodes_hl3, NO_OF_CLASSES]), name='w4'), \                                                
                      'biases': tf.Variable(tf.random_normal([NO_OF_CLASSES]), name='b4')}                                                              

    l1 = tf.add(tf.matmul(data, hidden_1_layer['weights']), hidden_1_layer['biases'])                                                                   
    l1 = tf.nn.relu(l1)                                                                                 

    l2 = tf.add(tf.matmul(l1, hidden_2_layer['weights']), hidden_2_layer['biases'])                                                                     
    l2 = tf.nn.relu(l2)                                                                                

    l3 = tf.add(tf.matmul(l2, hidden_3_layer['weights']), hidden_3_layer['biases'])                                                                     
    l3 = tf.nn.relu(l3)                                                                                  

    output = tf.add(tf.matmul(l3, output_layer['weights']), output_layer['biases'], name="last_layer")                                                  

    ## return the final layer's output gracefully                                                                                                       
    #                                                                                                                                                   
    return output    

## end of method                                                                                                                                        
#                                                                                                                                                       

## This method trains a neural network along with collecting statistics related to                                                                      
## the graphs.                                                                                                                                          
#
def train_neural_network(xtrain, ytrain, odir):                                                                                                         

    learning_rate = 0.0008                                                                                                                              
    epoch_iter = 100                                                                                                                                    

    ## input/ output  placeholders where data would be plugged in...                                                                                    
    #
    x = tf.placeholder('float', [None, FEAT_DIM], name="input")                                                                                         
    y_ = tf.placeholder('float', name="output")                                                                                                         

    ## define the network                                                                                                                               
    #
    logits = neural_network_model(x)                                                                                                                    
    prediction = tf.nn.softmax(logits, name="op_to_restore") ## softmax normalizes the output results

    loss = tf.reduce_mean( tf.nn.softmax_cross_entropy_with_logits(logits = logits, labels = y_) )                                                      

    ## Major OP for the training procedure. The "train" op defined here tries to minimize loss                                                          
    #
    with tf.name_scope('ADAM'):                                                                                                                         
        # Adam optimizer
        optimizer = tf.train.AdamOptimizer(learning_rate)                                                                                               
        train = optimizer.minimize(loss)                                                                                                                                                                                                       

    with tf.name_scope('Accuracy'):                                                                                                                     
        ## Accuracy calculation by comparing the predicted and detected labels                                                                          
        #
        acc = tf.equal(tf.argmax(logits, 1), tf.argmax(y_, 1))                                                                                          
        acc = tf.reduce_mean(tf.cast(acc, tf.float32))                                                                                                  

    ## summary and display variables                                                                                                                    
    #                                                                                                                                                   
    loss_sum = tf.summary.scalar("loss", loss)                                                                                                          
    acc_sum = tf.summary.scalar("accuracy", acc)                                                                                                        

    ## Merge all summaries into a single variable. These summaries will be displayed using Tensorboard
    #                                                                                                                                                   
    merged_summary_op = tf.summary.merge([loss_sum, acc_sum])                                                                                

    ## create a session for the graph (graph initialization)                                                                                            
    #                                                                                                                                                   
    with tf.Session() as sess:                                                                                                                          

        ## initialize all the variables. Note that before this point, all the variables were empty buckets !!                                           
        #
        sess.run(tf.global_variables_initializer())                                                                                                     

        ## initialize the summary writer (For tensorboard)                                                                                              
        #
        summary_writer = tf.summary.FileWriter(logs_path, graph=tf.get_default_graph())                                                                 

        ## iterate over epochs (complete forward-backward for the entire training set)                                                                  
        #                                                                                                                                               
        for epoch in range(epoch_iter):                                                                                                                 

            ## initialize some variables to keep track of progress during training                                                                      
            #                                                                                                                                           
            epoch_loss = 0                                                                                                                              
            epoch_accuracy = 0                                                                                                                          

            ## minibatch training. Splitting the input data into smaller chunks is better
            #                                                                                                                                           
            for i in range( int(len(xtrain)/ BATCH_SIZE) ):                                                                                             
                epoch_x = xtrain[ i * BATCH_SIZE : i * BATCH_SIZE + BATCH_SIZE]                                                                         
                epoch_y = ytrain[ i * BATCH_SIZE : i * BATCH_SIZE + BATCH_SIZE]                                                                         

                ## run the session and collect the intermediate stats. The feed_dict kwarg takes the input/output placeholder names as
                ## keys and the features/labels as values
                #                                                                                                                                       
                _, ac, ls, summary = sess.run([train, acc, loss, merged_summary_op], feed_dict = {x: epoch_x, y_: epoch_y})                             

                ## write the summary to the logs to visualize it later
                #                                                                                                                                       
                summary_writer.add_summary(summary, epoch * int(len(xtrain)/BATCH_SIZE)+i)                                                              

                ## update stats                                                                                                                         
                #                                                                                                                                       
                epoch_loss += ls                                                                                                                        
                epoch_accuracy += ac                                                                                                                    

            print ("Epoch ", epoch, " completed out of ", epoch_iter, " loss: ", epoch_loss, "accuracy: ", ac) ## note: 'ac' is the last minibatch's accuracy, not the epoch average
        ## saver module to save tf graph variables.. etc....                                                                                                   
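
For completeness, the snippet above never shows how train_neural_network is invoked. A minimal, hypothetical driver (the sample count and dummy data here are my own placeholders, not from my actual experiment) would look like:

## hypothetical driver: dummy data with the shapes the script expects
#
if __name__ == '__main__':
    xtrain = np.random.rand(1000, FEAT_DIM).astype(np.float32)
    ytrain = to_categorical(np.random.randint(NO_OF_CLASSES, size=1000), NO_OF_CLASSES)
    train_neural_network(xtrain, ytrain, logs_path)  ## odir is only used by the (elided) saver code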

My Keras script for replicating the results is given below:

from numpy.random import seed                                                                                                                           
seed(1)                                                                                                                                                 
from tensorflow import set_random_seed                                                                                                                  
set_random_seed(2)                                                                                                                                      


## import system modules                                                                                                                                
#                                                                                                                                                       
import os                                                                                                                                               
import sys                                                                                                                                              

## import ML and datatype modules                                                                                                                       
#                                                                                                                                                       
import numpy as np                                                                                                                                                                                                                                                                         
import keras                                                                                                                                            
from keras.models import Sequential                                                                                                                     
from keras.layers import Dense, Dropout                                                                                                                 
from keras.callbacks import ModelCheckpoint                                                                                                             
from keras.optimizers import SGD                                                                                                                        
from keras.utils import to_categorical                                                                                                                  
from sklearn import preprocessing                                                                                                                       

## Default constants                                                                                                                                    
#                                                                                                                                                       
NO_OF_CLASSES = 2                                                                                                                                       
BATCH_SIZE = 32                                                                                                                                         
FEAT_DIM = 26                                                                                                                                           
N_nodes_hl1 = 300                                                                                                                                       
N_nodes_hl2 = 30                                                                                                                                        
N_nodes_hl3 = 30                                                                                                                                        


## This method defines the NN architecture as well as performs training and saves the model info                                                        
#                                                                                                                                                       
def train_neural_network(xtrain, ytrain, odir):                                                                                                         
    learning_rate = 0.009                                                                                                                               

    ## Define the network (MLP)                                                                                                                         
    #                                                                                                                                                   
    model = Sequential()                                                                                                                                
    model.add(Dense(N_nodes_hl1, input_dim=FEAT_DIM, activation="relu"))                                                                                
    model.add(Dense(N_nodes_hl2, activation="relu"))                                                                                                    
    model.add(Dense(N_nodes_hl3, activation="relu"))                                                                                                    
    model.add(Dense(NO_OF_CLASSES, activation="softmax"))                                                                                               

    ## optimizer                                                                                                                                        
    #                                                                                                                                                   
    sgd = SGD(lr=learning_rate)                                                                                                                         
    model.compile(loss="categorical_crossentropy", optimizer=sgd, metrics=['accuracy'])                                                                 
    model.summary()  ## summary() prints the architecture itself
    ## train the model                                                                                                                                  
    model.fit(x=xtrain, y=ytrain, epochs=100)
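
Note that fit's defaults differ slightly from my hand-written TF loop: Keras shuffles the training data before every epoch by default, whereas my TF loop feeds minibatches in a fixed order (fit's default batch size of 32 happens to match BATCH_SIZE). A variant of the call pinned to the TF loop's batching would be:

    ## mimic the TF loop: explicit batch size, fixed minibatch order
    model.fit(x=xtrain, y=ytrain, epochs=100, batch_size=BATCH_SIZE, shuffle=False)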

Keras experiment 1: loss: 0.5964 - acc: 0.6725

Keras experiment 2: loss: 0.5974 - acc: 0.6712

The only difference between the two scripts is the optimizer, and I don't think that introduces any randomness during training. Also, I believe that identical NN architectures should produce identical results, at float64 precision on a CPU (and float32 on a GPU, due to hardware capabilities).
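
For reference, the fuller preamble suggested by the Keras FAQ on reproducible results (which my scripts above do not apply) also pins PYTHONHASHSEED and Python's built-in random module, and restricts TensorFlow to a single thread. A sketch of that recipe, with arbitrary seed values:

import os
os.environ['PYTHONHASHSEED'] = '0'  ## per the FAQ, hash randomization is only fully disabled if this is set before Python starts

import random
import numpy as np
import tensorflow as tf

np.random.seed(1)
random.seed(2)
tf.set_random_seed(3)

## multi-threaded op scheduling is a known source of non-determinism,
## so force single-threaded execution
session_conf = tf.ConfigProto(intra_op_parallelism_threads=1,
                              inter_op_parallelism_threads=1)

from keras import backend as K
sess = tf.Session(graph=tf.get_default_graph(), config=session_conf)
K.set_session(sess)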

What am I missing in my Keras script? Also, if I have misunderstood something anywhere in this question, please correct me.

In addition, any other references on how to replicate NN results (besides the ones below) would be highly appreciated.

https://machinelearningmastery.com/reproducible-results-neural-networks-keras/

How to get stable results with TensorFlow, setting random seed

Getting reproducible results using tensorflow-gpu

0 Answers:

No answers yet.