TensorFlow: ConvNet returns the same output for all examples

Date: 2017-09-11 14:15:18

Tags: tensorflow deep-learning conv-neural-network

I have a set of 200 images: 100 identical squares and 100 identical circles. The images are 44x41 pixels and grayscale. I am trying to build a simple classifier in order to learn TensorFlow.

Problem: no matter what the input image is, the prediction vector always has the same values.

Here is the code for my neural network:

import tensorflow as tf
import random as r
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.patches as patches
from PIL import Image
%matplotlib inline  

#create pictures

for i in range(100):
    fig1 = plt.figure(frameon = False, figsize=(1,1), dpi=32)
    ax1 = fig1.add_subplot(111, aspect='equal')
    posx = 0.25
    posy = 0.25
    ax1.add_patch(
        patches.Rectangle(
            (posx,posy),   # (x,y)
            0.5,          # width
            0.5,          # height
        )
    )
    ax1.axis('off')

    fig1.savefig('rect' + str(i) + '.png', bbox_inches='tight')

for i in range(100):
    fig1 = plt.figure(frameon = False, figsize=(1,1), dpi=32)
    ax1 = fig1.add_subplot(111, aspect='equal')
    posx = 0.5
    posy = 0.5
    ax1.add_patch(
        patches.Circle(
            (posx,posy),   # (x,y)
            0.3,
        )
    )
    ax1.axis('off')

    fig1.savefig('circ' + str(i) + '.png', bbox_inches='tight')

# create vectors    

train_features = np.zeros((200,44,41,1))
train_labels = np.zeros((200,2))

for i in range(100):
    #get rect
    im = Image.open("rect" + str(i) + ".png")
    im = im.convert(mode = "L")
    xxx =list(im.getdata())
    imdata = np.reshape(xxx, (44,41,1))
    train_features[i] = imdata
    train_labels[i] = np.array([0,1])
    #get circle
    im = Image.open("circ" + str(i) + ".png")
    im = im.convert(mode = "L")
    xxx = list(im.getdata())
    imdata = np.reshape(xxx, (44,41,1))
    train_features[i+100] = imdata
    train_labels[i+100] = np.array([1,0])

tf.reset_default_graph()

features = tf.placeholder(tf.float32,shape=[None,44,41, 1])
labels = tf.placeholder(tf.float32,shape=[None,2])

weights = tf.Variable(tf.truncated_normal([3,3, 1, 16], stddev=0.1)) 
biases = tf.Variable(tf.zeros(16))

weights2 = tf.Variable(tf.truncated_normal([3,3, 16, 64], stddev=0.1)) 
biases2 = tf.Variable(tf.zeros(64))

conv_layer = tf.nn.conv2d(features, weights, strides=[1, 1, 1, 1], padding='SAME')
conv_layer_b = tf.nn.bias_add(conv_layer, biases)
conv_layer_relu = tf.nn.relu(conv_layer_b)
conv_layer_pool = tf.nn.max_pool(conv_layer_relu, ksize=[1, 2, 2, 1], strides=[1, 1, 1, 1], padding='SAME')

conv_layer2 = tf.nn.conv2d(conv_layer_pool, weights2, strides=[1, 1, 1, 1], padding='SAME')
conv_layer2_b = tf.nn.bias_add(conv_layer2, biases2)
conv_layer2_relu = tf.nn.relu(conv_layer2_b)
conv_layer2_pool = tf.nn.max_pool(conv_layer2_relu, ksize=[1, 2, 2, 1], strides=[1, 1, 1, 1], padding='SAME')

#fully connected layer
weights_fc = tf.Variable(tf.truncated_normal([44*41*64, 256], stddev=0.1))
biases_fc =  tf.Variable(tf.zeros([256]))
fc = tf.reshape(conv_layer2_pool, [-1, weights_fc.get_shape().as_list()[0]])
fc_logit = tf.add(tf.matmul(fc, weights_fc), biases_fc)
fc_relu = tf.nn.relu(fc_logit)
#fc_drop = tf.nn.dropout(fc_relu, 0.75)

# final layer

weights_out = tf.Variable(tf.truncated_normal([256, 2], stddev=0.1))
biases_out = tf.Variable(tf.zeros([2]))

out = tf.add(tf.matmul(fc_relu, weights_out), biases_out)   

cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=out, labels=labels))
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01).minimize(cost)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(100):
        sess.run(optimizer, feed_dict={
                features: train_features[:],
                labels: train_labels[:]})
    for i in range(200):
        outx = sess.run(out, feed_dict={
                features: [train_features[i]],
                labels: [train_labels[i]]})
        print(outx)
        print(train_labels[i])
        print('---')

1 Answer:

Answer 0 (score: 0)

Try not to give two different tensors the same name. For example, your conv_layer is first equal to tf.nn.conv2d(features, weights, strides=[1, 1, 1, 1], padding='SAME'), then it is overwritten with tf.nn.bias_add(conv_layer, biases), then reassigned again to a tensor of another shape, and so on...

Use naming like this, for example:

conv_layer = tf.nn.conv2d(features, weights, strides=[1, 1, 1, 1], padding='SAME')
conv_layer_b = tf.nn.bias_add(conv_layer, biases)
conv_layer_relu = tf.nn.relu(conv_layer_b)
conv_layer_pool = tf.nn.max_pool(conv_layer_relu, ksize=[1, 2, 2, 1], strides=[1, 1, 1, 1], padding='SAME')

The algorithm learns one image at a time. If your machine can handle it, try feeding it all the images at once: sess.run(optimizer, feed_dict={features: train_features[:], labels: train_labels[:]}). If not, use 100 images from each of the two classes per step. Are the images shuffled, or do 100 circles come first and then 100 squares? That could be the mistake: in the last loop you would then update the weights 100 times in a row with squares only.
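For illustration, here is a minimal sketch of shuffling both classes together and feeding the whole (or batched) set, reusing only the arrays and placeholders from the question; the batch_size value is just an example:

# Shuffle features and labels with the same permutation so that
# squares and circles are mixed within every training step.
perm = np.random.permutation(len(train_features))
train_features = train_features[perm]
train_labels = train_labels[perm]

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    batch_size = 50  # example value
    for epoch in range(100):
        for start in range(0, len(train_features), batch_size):
            sess.run(optimizer, feed_dict={
                features: train_features[start:start + batch_size],
                labels: train_labels[start:start + batch_size]})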

Could I see the full program, the part that prints the prediction vector? As a first step I would remove the dropout and let it overfit. Then, maybe, use a smaller fc layer (512 or 256) and a smaller learning rate (0.01); I also prefer tf.get_variable('w1', shape=[3,3,1,16]) over tf.Variable(...), and I would initialize the biases with the value 0.1.
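A minimal sketch of that suggested variable setup in TF 1.x; the names 'w1' and 'b1' are illustrative, and cost refers to the loss already defined in the question:

# First conv layer variables created via tf.get_variable, with a
# truncated-normal initializer for the weights and 0.1 for the biases.
weights = tf.get_variable('w1', shape=[3, 3, 1, 16],
                          initializer=tf.truncated_normal_initializer(stddev=0.1))
biases = tf.get_variable('b1', shape=[16],
                         initializer=tf.constant_initializer(0.1))

# Smaller learning rate, as suggested above.
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01).minimize(cost)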