Testing the current dataset on a trained model in TensorFlow

Asked: 2019-11-16 11:37:47

Tags: tensorflow keras

Here is the code:

import tensorflow as tf  # TF 1.x graph-mode API is assumed throughout

def eachLayer(inputX,numberOfHiddenInputs,name,activation=tf.nn.relu):
    with tf.variable_scope(name):
        init = tf.random_normal(shape=(int(inputX.get_shape()[1]),numberOfHiddenInputs))        
        weights = tf.Variable(init,dtype="float32",name="weights")
        biases = tf.Variable(tf.zeros([numberOfHiddenInputs]),dtype='float32',name="biases")
        output=tf.matmul(inputX,weights) + biases
        if activation:
            return activation(output)
        else:
            return output

This code block defines each layer (eachLayer) of the neural network. The whole DNN is constructed with the following code:

def DNN(X=X): # have defined X as placeholder beforehand
    with tf.variable_scope("dnn"):
        first_layer = eachLayer(X,hidden_,name="firstLayer")
        second_layer = eachLayer(first_layer,hidden_,name="secondLayer")
        third_layer = eachLayer(second_layer,hidden_,name="thirdLayer")
        output = eachLayer(third_layer,outputSize,name="output",activation=None)
        return output
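Each call to eachLayer above is just a matrix multiply, a bias add, and an optional activation. A minimal plain-NumPy sketch of that computation (hypothetical shapes, no TensorFlow) is:

```python
import numpy as np

# Sketch of what eachLayer computes, in plain NumPy (hypothetical sizes).
rng = np.random.default_rng(0)
inputX = rng.standard_normal((2, 3)).astype("float32")   # batch of 2, 3 features
weights = rng.standard_normal((3, 5)).astype("float32")  # 3 inputs -> 5 hidden units
biases = np.zeros(5, dtype="float32")

output = inputX @ weights + biases    # tf.matmul(inputX, weights) + biases
activated = np.maximum(output, 0.0)   # tf.nn.relu

print(activated.shape)  # (2, 5)
```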

The optimizer is given by:

opt = tf.compat.v1.train.AdamOptimizer(learning_rate=0.001)
mse= tf.reduce_mean(tf.keras.losses.MSE(Y,DNN()))
min_loss=opt.minimize(loss=mse)
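The loss being minimised here is the ordinary mean squared error: `tf.keras.losses.MSE` averages over the last axis and `reduce_mean` then averages over the batch. A plain-NumPy sketch with hypothetical values:

```python
import numpy as np

# Plain-NumPy sketch of the MSE loss above (hypothetical targets and predictions).
Y = np.array([[1.0], [2.0], [3.0]])
pred = np.array([[1.5], [2.0], [2.0]])

mse = np.mean((Y - pred) ** 2)  # mean over outputs, then over the batch
print(mse)  # 0.4166666...
```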

This is the part that minimises the loss inside a session:

with tf.Session() as sess:
    global_init = tf.global_variables_initializer()
    sess.run(global_init)
    for _ in range(epoch):
        k=0
        for eachBatch in range(noOfBatch):
            batch_xs,batch_ys = x_train[k:k+batchSize], y_train[k:k+batchSize]
            nothing= sess.run(min_loss,feed_dict={X:batch_xs,Y:batch_ys})
            theMinLossVal= mse.eval(feed_dict={X:batch_xs,Y:batch_ys})
            k=k+batchSize         
        print("THE MIN LOSS IS ==> {}".format(theMinLossVal))
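The inner loop above slices the training data with a running index k that advances by batchSize each step and resets at the start of every epoch. A minimal pure-Python sketch of that slicing (hypothetical sizes, no TensorFlow):

```python
# Minimal sketch of the batch slicing used in the training loop (hypothetical sizes).
x_train = list(range(10))  # stands in for the real training inputs
batchSize = 4
noOfBatch = (len(x_train) + batchSize - 1) // batchSize  # 3 batches here

k = 0
batches = []
for _ in range(noOfBatch):
    batches.append(x_train[k:k + batchSize])  # Python slicing clips at the end
    k += batchSize

print(batches)  # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```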

So my question is: once this code has finished and I have to test the result, how do I do it? This is what I tried, and it failed:

outputCustom = DNN()
with tf.Session() as sess:
    y=sess.run(outputCustom,feed_dict={X:x_train[0]})
    print(y)

But this does not work, because calling DNN() invokes eachLayer again and re-initialises the variables. So how can I use the already-trained weights and biases, and how do I get the desired result?
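A minimal sketch of one common fix, assuming TF 1.x-style graph mode via `tf.compat.v1`: build the graph once and keep the output tensor (rather than calling DNN() a second time), train and save the variables with `tf.compat.v1.train.Saver`, then restore them in a later session instead of re-initialising. The `tf.layers.dense` network below is a hypothetical stand-in for the DNN(X) defined in the question, and the shapes and training data are made up for illustration. Note also that `x_train[0]` is 1-D; the placeholder expects a 2-D batch, so feed `x_train[0:1]`.

```python
import os
import tempfile
import numpy as np
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

# Build the graph ONCE and keep references to its tensors.
X = tf.placeholder(tf.float32, shape=(None, 4), name="X")  # hypothetical input size
Y = tf.placeholder(tf.float32, shape=(None, 1), name="Y")

with tf.variable_scope("dnn"):
    out = tf.layers.dense(X, 1, name="output")  # stands in for DNN(X)

mse = tf.reduce_mean(tf.square(Y - out))
train_op = tf.train.AdamOptimizer(learning_rate=0.001).minimize(mse)
saver = tf.train.Saver()

# Hypothetical training data.
x_train = np.random.rand(32, 4).astype("float32")
y_train = x_train.sum(axis=1, keepdims=True).astype("float32")

ckpt = os.path.join(tempfile.mkdtemp(), "model.ckpt")

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(50):
        sess.run(train_op, feed_dict={X: x_train, Y: y_train})
    saver.save(sess, ckpt)  # persist the trained variables

# Later: restore the trained weights instead of re-initialising them.
with tf.Session() as sess:
    saver.restore(sess, ckpt)
    pred = sess.run(out, feed_dict={X: x_train[0:1]})  # 2-D slice, not x_train[0]
    print(pred.shape)  # (1, 1)
```

The key points are that the output tensor is created exactly once (so no duplicate variables are made), and that restoring from the checkpoint replaces the call to the variable initialiser in the test session.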

0 Answers:

No answers yet