Deep learning model not learning: LSTM and conv3d

Date: 2019-04-12 07:31:40

Tags: python tensorflow machine-learning deep-learning lstm

I am new to deep learning. I am trying to train a model with conv3d and an LSTM to recognize 2 actions (walking and jogging). I downloaded a dataset online and started training. My code is below; I have tried several runs with different parameters, but the model seems to learn nothing. Validation accuracy hovers around 0.5. To confirm that the model had learned nothing, I even fed my training data back into the "trained" model, and that accuracy also came out around 0.5.

Basically, I start by reading each video into frames. I stack the frames together and feed them into the model I designed. The input shape is (-1, 10, 60, 80, 1). The model may not be very good, but I would think that even a bad model should not sit at about 0.5, let alone below 0.5 (which is what happens to me).
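For reference, a minimal sketch of that stacking step, with random arrays standing in for the decoded grayscale frames (toy sizes; the axis reordering done by np.moveaxis in the code below is glossed over here):

import numpy as np

# Toy stand-ins for decoded grayscale frames: 8 clips of 10 frames each
num_clips, timesteps, height, width = 8, 10, 60, 80
videoframes = np.random.rand(num_clips, timesteps, height, width)

# A trailing channel axis turns this into the (-1, 10, 60, 80, 1) input shape
print(np.expand_dims(videoframes, axis=-1).shape)   # (8, 10, 60, 80, 1)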

There are 2 classes in this task. I assign [0, 1] to jogging and [1, 0] to walking. I use softmax_cross_entropy_with_logits as my loss function and the Adam optimizer to minimize the loss. I have tried anywhere from 4 to 20 epochs. The whole program runs without any errors, but the model simply does not learn.
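As a small sketch of that label convention (the raw `actions` list below is hypothetical):

import numpy as np

actions = ['jogging', 'walking', 'jogging']   # hypothetical raw labels
videolabels = np.array([[0, 1] if a == 'jogging' else [1, 0] for a in actions])
print(videolabels)   # one-hot rows: [0, 1] = jogging, [1, 0] = walking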

Please advise. Thanks.

import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split

train_x, valid_x, train_y, valid_y = train_test_split(videoframes, videolabels, test_size=0.2, random_state=4)
# Reorder the spatial axes and append a channel axis so each sample is (10, 60, 80, 1)
train_x = np.moveaxis(train_x, 3, 2)
valid_x = np.moveaxis(valid_x, 3, 2)
train_x = np.expand_dims(train_x, 4)
valid_x = np.expand_dims(valid_x, 4)





timesteps = 10    # frames per clip
num_hidden = 64   # hidden layer num of features
num_classes = 2

weights = {
    # [filter_depth, filter_height, filter_width, in_channels, out_channels]
    'w1': tf.Variable(tf.truncated_normal([10, 5, 5, 1, 32])),
    'w2': tf.Variable(tf.truncated_normal([1, 5, 5, 32, 64])),
    'w3': tf.Variable(tf.truncated_normal([1, 5, 5, 64, 128])),
    'wd1': tf.Variable(tf.truncated_normal([15*20*128, 2048])),    # unused by conv_net below
#     'wd1': tf.Variable(tf.truncated_normal([2048, 1024])),
    'out': tf.Variable(tf.random_normal([num_hidden, num_classes])),
    'out2': tf.Variable(tf.truncated_normal([10, num_classes])),   # unused
    'out3': tf.Variable(tf.truncated_normal([512, num_classes])),  # unused
    'out4': tf.Variable(tf.truncated_normal([128, num_classes]))   # unused
}


biases = {
    'b1': tf.Variable(tf.truncated_normal([32])),
    'b2': tf.Variable(tf.truncated_normal([64])),
    'b3': tf.Variable(tf.truncated_normal([128])),
    'bd1': tf.Variable(tf.truncated_normal([2048])),   # unused by conv_net below
    'out': tf.Variable(tf.random_normal([num_classes])),
    'out2': tf.Variable(tf.truncated_normal([num_classes])),   # unused
    'out3': tf.Variable(tf.truncated_normal([num_classes])),   # unused
    'out4': tf.Variable(tf.truncated_normal([num_classes]))    # unused
}
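
# Aside: tf.truncated_normal defaults to stddev=1.0, a fairly large initial
# scale for conv filters. A smaller scale is a common alternative; the
# stddev=0.1 below is an illustrative choice, not from the original code.
def small_init(shape, stddev=0.1):
    return tf.Variable(tf.truncated_normal(shape, stddev=stddev))
# e.g. 'w1': small_init([10, 5, 5, 1, 32])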






X = tf.placeholder(tf.float32, [None, 10, 60, 80, 1])   # [batch, in_depth, in_height, in_width, in_channels]
Y = tf.placeholder(tf.float32, [None, 2])
keep_prob = tf.placeholder(tf.float32)   # dropout keep probability
learning_rate = 1e-3


def conv3d(x, W, b, strides=1):
    # SAME padding preserves depth/height/width; the stride applies to
    # height and width only (depth stride is fixed at 1)
    x = tf.nn.conv3d(x, W, strides=[1, 1, strides, strides, 1], padding='SAME')
    x = tf.nn.bias_add(x, b)
    return tf.nn.relu(x)


def maxpool3d(x, k=2):
    # Pool over height and width only; the depth (time) axis is untouched
    return tf.nn.max_pool3d(x, ksize=[1, 1, k, k, 1], strides=[1, 1, k, k, 1],
                            padding='SAME')


# Create model
def conv_net(x, weights, biases, prob):

    conv1 = conv3d(x, weights['w1'], biases['b1'])
    conv1 = maxpool3d(conv1, k=2)    # 60x80 -> 30x40

    conv2 = conv3d(conv1, weights['w2'], biases['b2'])
    conv2 = maxpool3d(conv2, k=2)    # 30x40 -> 15x20

    conv3 = conv3d(conv2, weights['w3'], biases['b3'])
#     conv3 = maxpool3d(conv3, k=2)

    # Flatten each of the 10 frames into a 15*20*128 feature vector
    lstmInput = tf.reshape(conv3, [-1, timesteps, 15*20*128])

    # Unstack along the time axis into a list of `timesteps` tensors
    x = tf.unstack(lstmInput, timesteps, 1)

    # Define an LSTM cell with TensorFlow
    lstm_cell = tf.nn.rnn_cell.BasicLSTMCell(num_hidden, forget_bias=1.0)

    # Get the LSTM cell output
    outputs, states = tf.nn.static_rnn(lstm_cell, x, dtype=tf.float32)

    hidden1 = tf.matmul(outputs[-1], weights['out']) + biases['out']
    # Note: this dropout acts on the logits themselves
    hidden1 = tf.nn.dropout(hidden1, prob)

    return hidden1
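
# Quick arithmetic check for the 15*20*128 reshape above: SAME-padded conv3d
# keeps height/width, and each maxpool3d(k=2) halves them (depth stays 10)
h, w = 60, 80
h, w = h // 2, w // 2   # after the pool following conv1 -> 30, 40
h, w = h // 2, w // 2   # after the pool following conv2 -> 15, 20
print(h * w * 128)      # 38400 == 15*20*128, the per-frame LSTM feature size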





# Construct model
logits = conv_net(X, weights, biases, keep_prob)

prediction = tf.nn.softmax(logits)

# Define loss and optimizer
# (in TF 1.x, softmax_cross_entropy_with_logits is deprecated in favor of
# softmax_cross_entropy_with_logits_v2; both work for one-hot labels here)
loss_op = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
    logits=logits, labels=Y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
train_op = optimizer.minimize(loss_op)


# Evaluate model
correct_pred = tf.equal(tf.argmax(prediction, 1), tf.argmax(Y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
saver = tf.train.Saver()

# Initialize the variables (i.e. assign their default value)
init = tf.global_variables_initializer()




batch_size = 4   # was 5; 4 is working
# timestep = 120
training_step = int(train_x.shape[0] / batch_size)   # batches per epoch
epochs = 20
display_step = training_step   # not used below
# Start training
with tf.Session() as sess:

    # Run the initializer
    sess.run(init)
    accumulateloss = []

    for epoch in range(epochs):
        epoch_error = 0.0
        for step in range(training_step):
            # One batch per step; the original slice train_x[step:(step+1)*batch_size]
            # produced overlapping, variable-sized batches
            batch_x = train_x[step*batch_size:(step+1)*batch_size]
            batch_y = train_y[step*batch_size:(step+1)*batch_size]

            epoch_error += sess.run([loss_op, train_op],
                                    {X: batch_x, Y: batch_y, keep_prob: 0.5})[0]
        epoch_error /= training_step
        accumulateloss.append(epoch_error)

        print("Epoch %d, train error: %.8f" % (epoch, epoch_error))
        save_path = saver.save(sess, "/tmp/model.ckpt")

    plt.scatter(range(epochs), accumulateloss)

    print("Optimization Finished!")

    # Average per-batch accuracy over the validation set; the original loop
    # mixed per-sample indexing with batch-sized slices
    valid_steps = int(valid_y.shape[0] / batch_size)
    acc1 = 0.0
    for rounds in range(valid_steps):
        acc1 += sess.run(accuracy, feed_dict={
            X: valid_x[rounds*batch_size:(rounds+1)*batch_size],
            Y: valid_y[rounds*batch_size:(rounds+1)*batch_size],
            keep_prob: 1.0})
    print("validation accuracy : %.10f" % (acc1 / valid_steps))

The results are as follows:

Epoch 0, train error: 2.91496966 
Epoch 1, train error: 2.21184034 
Epoch 2, train error: 1.57414323 
Epoch 3, train error: 1.18109844 
Epoch 4, train error: 0.85242794 
Epoch 5, train error: 0.74477599 
Epoch 6, train error: 0.72522819 
Epoch 7, train error: 0.72902458 
Epoch 8, train error: 0.72817225 
Epoch 9, train error: 0.72278470 
Epoch 10, train error: 0.72984007 
Epoch 11, train error: 0.72846894 
Epoch 12, train error: 0.73262202 
Epoch 13, train error: 0.71890939 
Epoch 14, train error: 0.71445327 
Epoch 15, train error: 0.72622124 
Epoch 16, train error: 0.71814767 
Epoch 17, train error: 0.72212768 
Epoch 18, train error: 0.72110982 
Epoch 19, train error: 0.71905692 
Optimization Finished!
validation accuracy : 0.5126860671

Graph plotted
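
A minimal diagnostic sketch for a result like this, assuming the session and placeholders above are still live: accuracy stuck near 0.5 on a balanced two-class problem often means the model predicts a single class for every input, which is easy to check:

preds = sess.run(tf.argmax(prediction, 1),
                 feed_dict={X: valid_x, keep_prob: 1.0})
print(np.bincount(preds, minlength=2))   # one dominant count => collapsed predictions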

1 answer:

Answer 0 (score: 0):

Thank you. I removed some code that I thought was redundant.
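
For what it's worth, the entries that conv_net never reads ('wd1'/'bd1' and 'out2' through 'out4') are the obvious redundancy; a trimmed version of the two dicts might look like this (a guess at what was removed, not confirmed by the answer):

weights = {
    'w1': tf.Variable(tf.truncated_normal([10, 5, 5, 1, 32])),
    'w2': tf.Variable(tf.truncated_normal([1, 5, 5, 32, 64])),
    'w3': tf.Variable(tf.truncated_normal([1, 5, 5, 64, 128])),
    'out': tf.Variable(tf.random_normal([num_hidden, num_classes]))
}
biases = {
    'b1': tf.Variable(tf.truncated_normal([32])),
    'b2': tf.Variable(tf.truncated_normal([64])),
    'b3': tf.Variable(tf.truncated_normal([128])),
    'out': tf.Variable(tf.random_normal([num_classes]))
}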