Loss output is None

Time: 2017-01-07 06:33:52

Tags: tensorflow

I have to fine-tune VGG. It has five convolutional layers followed by three fully connected layers, and the output of the last fully connected layer is the input to the loss function. Here is my code:

import numpy as np
import tensorflow as tf
from scipy.misc import imread, imresize  # imread/imresize used below come from scipy.misc


class vgg16:
    def __init__(self, imgs1, imgs2, weights=None, sess=None):

        self.imgs1 = imgs1
        self.imgs2 = imgs2

        with tf.variable_scope("siamese") as scope:
            self.o1 = self.convlayers(imgs1)
            self.fc_layers()
            self.loss()

            if weights is not None and sess is not None:
                self.load_weights(weights, sess)

            scope.reuse_variables()
            self.o2 = self.convlayers(imgs2)
            self.fc_layers()
            self.loss()

            if weights is not None and sess is not None:
                self.load_weights(weights, sess)
        #create loss function


    def convlayers(self,imgs):
        ....

        # conv1_2
        with tf.name_scope('conv1_2') as scope:
            ......
        # pool1


        ....

        # pool5
        self.pool5 = tf.nn.max_pool(self.conv5_3,
                               ksize=[1, 2, 2, 1],
                               strides=[1, 2, 2, 1],
                               padding='SAME',
                               name='pool5')

    def fc_layers(self):
        # fc1
        with tf.name_scope('fc1') as scope:
            ....
        # fc2
        with tf.name_scope('fc2') as scope:
            ...

        # fc3
        with tf.name_scope('fc3') as scope:
            fc3w = tf.Variable(tf.truncated_normal([4096, 1000],
                                                   dtype=tf.float32,
                                                   stddev=1e-1), name='weights')
            fc3b = tf.Variable(tf.constant(1.0, shape=[1000], dtype=tf.float32),
                               trainable=True, name='biases')
            self.fc3l = tf.nn.bias_add(tf.matmul(self.fc2, fc3w), fc3b)

    def load_weights(self, weight_file, sess):
        weights = np.load(weight_file)
        keys = sorted(weights.keys())
        for i, k in enumerate(keys):
            print(i, k, np.shape(weights[k]))
            sess.run(self.parameters[i].assign(weights[k]))
    def loss(self):
        loss = tf.nn.l2_loss(self.fc3l)
        self.train_step = tf.train.GradientDescentOptimizer(0.5).minimize(loss)



if __name__ == '__main__':
    sess = tf.Session()
    imgs1 = tf.placeholder(tf.float32, [None, 224, 224, 3])  # whatever size the image is, it gets resized to 224x224; we need RGB
    imgs2 = tf.placeholder(tf.float32, [None, 224, 224, 3])
    vgg = vgg16(imgs1,imgs2, 'vgg16_weights.npz', sess)


    img1 = imread('laska.png', mode='RGB')
    img1 = imresize(img1, (224, 224))
    img2 = imread('laska2.jpg', mode='RGB')
    img2 = imresize(img2,(224, 224))

    prob = sess.run(vgg.train_step, feed_dict={vgg.imgs1: [img1],vgg.imgs2: [img2]})
    print('loss is:')
    print(prob)

The problem is that the output of prob is None. Please tell me what I am doing wrong.

PS: I am following a Siamese architecture. The inputs to the two branches are different images.

1 answer:

Answer 0 (score: 3):

The op self.train_step does not return anything; it only computes the gradients and updates the variables. See here.
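For illustration, here is a minimal standalone sketch (a toy variable and loss, not the question's model) showing that fetching a minimize() op by itself yields None, while fetching the loss tensor alongside it returns its value:

import tensorflow as tf

x = tf.Variable(3.0)
toy_loss = tf.nn.l2_loss(x)  # toy loss, stands in for the real one
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(toy_loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(train_step))              # prints None: the op only updates variables
    print(sess.run([train_step, toy_loss]))  # prints [None, <loss value>]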

What you need to do is save a reference to the loss tensor in your vgg16 class, like this:

self.loss=tf.nn.l2_loss(self.fc3l)

Then run the train_step and loss ops in a single sess.run:

_, loss_value = sess.run([vgg.train_step, vgg.loss], feed_dict=...)
print('loss is:')
print(loss_value)
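
Putting both pieces together inside the vgg16 class, a minimal sketch of the revised loss() method and the training call could look like the following. The attribute is named self.loss_op here (a hypothetical rename of the answer's self.loss, so that the second self.loss() call in __init__ still finds the method rather than a tensor); the 0.5 learning rate and the feed_dict come from the question:

    def loss(self):
        # Keep a reference to the loss tensor so it can be fetched with train_step.
        self.loss_op = tf.nn.l2_loss(self.fc3l)
        self.train_step = tf.train.GradientDescentOptimizer(0.5).minimize(self.loss_op)

and then, in the main block:

_, loss_value = sess.run([vgg.train_step, vgg.loss_op],
                         feed_dict={vgg.imgs1: [img1], vgg.imgs2: [img2]})
print('loss is:')
print(loss_value)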