TensorFlow one-to-one sequence training

Date: 2016-09-14 14:32:02

Tags: tensorflow recurrent-neural-network lstm

I am using TensorFlow.

I want to do sound analysis with a bidirectional RNN, but all the examples I have seen are sequence-to-one. I want a one-to-one network (one output per time step).

The sequence-to-one part:

def BiRNN(x, weights, biases):
    #...some code...
    # only the last time step's output is projected to the prediction
    return tf.matmul(outputs[-1], weights['out']) + biases['out']

pred = BiRNN(x, weights, biases)
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(pred, y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
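
(For context, a minimal sketch of the variable shapes the snippet above assumes; n_hidden and n_classes are hypothetical names, and the factor of 2 comes from concatenating the forward and backward RNN outputs.)

# Hypothetical weight/bias shapes assumed by BiRNN above
weights = {'out': tf.Variable(tf.random_normal([2 * n_hidden, n_classes]))}
biases = {'out': tf.Variable(tf.random_normal([n_classes]))}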

But I want it one-to-one (an output for every time step), like this:

def BiRNN(x, weights, biases):
    #...some code...
    # project the output of every time step, not just the last one
    for re in range(n_steps):
        outputs[re] = tf.matmul(outputs[re], weights['out']) + biases['out']
    outputs = tf.pack(outputs)
    return outputs

pred = BiRNN(x, weights, biases)

cost = tf.reduce_mean(tf.pow(pred - y, 2))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
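
(For comparison, a minimal sketch, assuming the same TF 0.x API as above and that outputs is a length-n_steps list of [batch_size, 2*n_hidden] tensors, that projects every time step with a single matmul instead of the Python loop; BiRNN_all_steps, n_hidden and n_classes are hypothetical names.)

def BiRNN_all_steps(x, weights, biases):
    #...same RNN code as above...
    # stack the per-step outputs into one tensor: [n_steps, batch_size, 2*n_hidden]
    stacked = tf.pack(outputs)
    # merge the time and batch axes so a single matmul projects every step
    flat = tf.reshape(stacked, [-1, 2 * n_hidden])
    projected = tf.matmul(flat, weights['out']) + biases['out']
    # restore the per-step layout: [n_steps, batch_size, n_classes]
    return tf.reshape(projected, [n_steps, -1, n_classes])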

But I am not sure this is the right way to do it.

Can anyone point me in the right direction?

1 Answer:

Answer 0 (score: 0):

This does the trick. msn is a random number between 0 and n_steps.

def BiRNN(x, y, weights, biases, msn):
    #...some code...
    outputs = tf.pack(outputs)
    print(outputs)
    # flatten to [n_steps, 6] so a single time step can be sliced out
    outputs = tf.reshape(outputs, [n_steps, 6])

    print(tf.slice(outputs, [msn, 0], [1, 6]))
    # return the prediction error at the randomly chosen step msn, plus the full outputs
    return [tf.slice(outputs, [msn, 0], [1, 6]) - tf.slice(y, [msn, 0], [1, 6]), outputs]


pred = BiRNN(x, y, weights, biases, msn)

# Define loss and optimizer
cost = tf.reduce_mean(tf.pow(pred[0], 2))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
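
(For completeness, a minimal sketch of how msn and a training loop might look; np, next_batch, training_iters and display_step are hypothetical names not in the original answer. Because msn is a plain Python integer here, the sliced time step is fixed once the graph is built; it would need to be a placeholder to change on every iteration.)

import numpy as np

# msn would be drawn before the BiRNN(...) call above; as a plain Python int it is
# a random time-step index in [0, n_steps) fixed at graph-construction time
msn = np.random.randint(0, n_steps)

init = tf.initialize_all_variables()   # TF 0.x-era initializer
with tf.Session() as sess:
    sess.run(init)
    for step in range(training_iters):
        batch_x, batch_y = next_batch()   # hypothetical data source
        sess.run(optimizer, feed_dict={x: batch_x, y: batch_y})
        if step % display_step == 0:
            print(sess.run(cost, feed_dict={x: batch_x, y: batch_y}))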