Restoring a model with TensorFlow: 'NoneType' object is not iterable

Asked: 2017-09-18 16:33:23

Tags: python-2.7 tensorflow neural-network

I am restoring a model with TensorFlow. However, I get this error

    return [dim.value for dim in self._dims]
TypeError: 'NoneType' object is not iterable

when I define the optimizer:

train = optimizer.minimize(lossBatch)

I tested with randomly generated weights, and it works fine.

def init_weights(shape):
    return tf.Variable(tf.random_uniform(shape, -0.01, 0.01, seed=0))

So I concluded that the problem is related to restoring the weights.
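For what it's worth, the traceback line itself can be reproduced without TensorFlow: `self._dims` is `None` whenever a tensor's static shape is fully unknown, and iterating it raises exactly this error. A minimal pure-Python sketch (the `dims` variable here is a hypothetical stand-in for TensorFlow's internal `TensorShape._dims`, not real TensorFlow code):

```python
# Hypothetical stand-in for TensorShape._dims: TensorFlow stores None
# there when a tensor's static shape is fully unknown.
dims = None

try:
    # Mirrors the traceback line: return [dim.value for dim in self._dims]
    [dim for dim in dims]
except TypeError as err:
    print(err)  # -> 'NoneType' object is not iterable
```

So the error suggests that at least one of the restored tensors reaches the optimizer with a fully unknown static shape.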

To restore the weights I am doing this:

with tf.Session() as sess:

    new_saver = tf.train.import_meta_graph('my-model-88500.meta')
    new_saver.restore(sess, 'my-model-88500')
    w_h1 = tf.get_default_graph().get_tensor_by_name("w_h1:0")
    b_h1 = tf.get_default_graph().get_tensor_by_name("b_h1:0")
    w_h2 = tf.get_default_graph().get_tensor_by_name("w_h2:0")
    b_h2 = tf.get_default_graph().get_tensor_by_name("b_h2:0")
    w_h3 = tf.get_default_graph().get_tensor_by_name("w_h3:0")
    b_h3 = tf.get_default_graph().get_tensor_by_name("b_h3:0")
    w_o = tf.get_default_graph().get_tensor_by_name("w_o:0")
    b_o = tf.get_default_graph().get_tensor_by_name("b_o:0")

    w_h1 = tf.reshape(w_h1, [numberInputs, numberHiddenUnits1], 'w_h1')
    b_h1 = tf.reshape(b_h1, [numberHiddenUnits1], 'b_h1')
    w_h2 = tf.reshape(w_h2, [numberHiddenUnits1, numberHiddenUnits2], 'w_h2')
    b_h2 = tf.reshape(b_h2, [numberHiddenUnits2], 'b_h2')
    w_h3 = tf.reshape(w_h3, [numberHiddenUnits2, numberHiddenUnits3], 'w_h3')
    b_h3 = tf.reshape(b_h3, [numberHiddenUnits3], 'b_h3')
    w_o = tf.reshape(w_o, [numberHiddenUnits3, numberOutputs], 'w_o')
    b_o = tf.reshape(b_o, [numberOutputs], 'b_o')

    init = tf.initialize_all_variables()
    sess.run(init)

Then I redefine the network:

    numberEpochs = 1500000
    batchSize = 25000
    learningRate = 0.000001

    numberOutputs = np.shape(theTrainOutput)[1]
    numberTrainSamples = np.shape(theTrainInput)[0]
    numberInputs = np.shape(theTrainInput)[1]

    xTrain = tf.placeholder("float", [numberTrainSamples, numberInputs])
    yTrain = tf.placeholder("float", [numberTrainSamples, numberOutputs])
    yTrainModel = model(xTrain, w_h1, b_h1, w_h2, b_h2, w_h3, b_h3, w_o, b_o)

    xBatch = tf.placeholder("float", [batchSize, numberInputs])
    yBatch = tf.placeholder("float", [batchSize, numberOutputs])
    yBatchModel = model(xBatch, w_h1, b_h1, w_h2, b_h2, w_h3, b_h3, w_o, b_o)

    lossBatch = tf.reduce_mean(tf.abs(yBatch - yBatchModel))
    optimizer = tf.train.AdamOptimizer(learningRate)
    train = optimizer.minimize(lossBatch)

I get the error at the last line above! Note that the whole network is redefined before this point in order to reuse the restored weights.

It is worth mentioning that I am able to get the shape of one of the weights, i.e.

w_h1.get_shape()
TensorShape([Dimension(13), Dimension(50)])

On the other hand,

w_h1.dtype
tf.float32

Moreover, I can also print the weight:

print sess.run(w_h1)  
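Note that being able to run and print a tensor's values is independent of whether its *static* shape is known. A small sketch with a hypothetical `FakeShape` class (a mimic for illustration, not TensorFlow's real `TensorShape`) showing how one weight can report a full shape while another fails exactly as in the traceback:

```python
class FakeShape:
    """Hypothetical mimic of TensorFlow's TensorShape (illustration only)."""

    def __init__(self, dims):
        self._dims = dims  # None means the static shape is fully unknown

    def as_list(self):
        # Same pattern as the traceback: iterate self._dims
        return [dim for dim in self._dims]


print(FakeShape([13, 50]).as_list())  # known shape -> [13, 50]

try:
    FakeShape(None).as_list()         # unknown shape
except TypeError as err:
    print(err)                        # -> 'NoneType' object is not iterable
```

So checking `w_h1.get_shape()` on one tensor does not rule out another restored tensor having an unknown shape when `optimizer.minimize` inspects them.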

0 Answers