I have a 3-D tensor of shape [batch, None, dim], where the second dimension, i.e. the timesteps, is unknown. I use dynamic_rnn to process such input, as in the following snippet:
import numpy as np
import tensorflow as tf

batch = 2
dim = 3
hidden = 4

lengths = tf.placeholder(dtype=tf.int32, shape=[batch])
inputs = tf.placeholder(dtype=tf.float32, shape=[batch, None, dim])
cell = tf.nn.rnn_cell.GRUCell(hidden)
cell_state = cell.zero_state(batch, tf.float32)
output, _ = tf.nn.dynamic_rnn(cell, inputs, lengths, initial_state=cell_state)

Actually, running this snippet with some actual numbers, I get some reasonable results:
inputs_ = np.asarray([[[0, 0, 0], [1, 1, 1], [2, 2, 2], [3, 3, 3]],
                      [[6, 6, 6], [7, 7, 7], [8, 8, 8], [9, 9, 9]]],
                     dtype=np.int32)
lengths_ = np.asarray([3, 1], dtype=np.int32)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    output_ = sess.run(output, {inputs: inputs_, lengths: lengths_})
    print(output_)

And the output is:
[[[ 0.          0.          0.          0.        ]
  [ 0.02188676 -0.01294564  0.05340237 -0.47148666]
  [ 0.0343586  -0.02243731  0.0870839  -0.89869428]
  [ 0.          0.          0.          0.        ]]

 [[ 0.00284752 -0.00315077  0.00108094 -0.99883419]
  [ 0.          0.          0.          0.        ]
  [ 0.          0.          0.          0.        ]
  [ 0.          0.          0.          0.        ]]]

Is there a way to get a 3-D tensor containing the last relevant output of the dynamic RNN? Thanks!
Answer 0 (score: 13)
This is what gather_nd is for!
def extract_axis_1(data, ind):
    """
    Get specified elements along the first axis of tensor.

    :param data: Tensorflow tensor that will be subsetted.
    :param ind: Indices to take (one for each element along axis 0 of data).
    :return: Subsetted tensor.
    """
    batch_range = tf.range(tf.shape(data)[0])
    indices = tf.stack([batch_range, ind], axis=1)
    res = tf.gather_nd(data, indices)
    return res
In your case:

output = extract_axis_1(output, lengths - 1)

Now output is a tensor of dimension [batch_size, num_cells].
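As a quick sanity check, here is a sketch of how this wires into the snippet from the question (it reuses the placeholders, the feed values, and the extract_axis_1 helper above; the tf.expand_dims step is only an illustration of how to get a 3-D result back):

# For lengths_ = [3, 1], ind = lengths - 1 = [2, 0], so gather_nd pulls
# output[0, 2, :] and output[1, 0, :], the last relevant step of each sequence.
last = extract_axis_1(output, lengths - 1)   # shape [batch, hidden]
last_3d = tf.expand_dims(last, axis=1)       # shape [batch, 1, hidden]

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(last_3d, {inputs: inputs_, lengths: lengths_}))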
Answer 1 (score: 8)
From the following two sources,
http://www.wildml.com/2016/08/rnns-in-tensorflow-a-practical-guide-and-undocumented-features/
outputs, last_states = tf.nn.dynamic_rnn(
    cell=cell,
    dtype=tf.float64,
    sequence_length=X_lengths,
    inputs=X)
or https://github.com/ageron/handson-ml/blob/master/14_recurrent_neural_networks.ipynb,

it is clear that last_states can be extracted directly from the SECOND output of the dynamic_rnn call. It gives you the last_states across all layers (in an LSTM it is compiled from an LSTMStateTuple), whereas outputs contains all the states of the last layer.
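As a minimal sketch of this idea (assuming a single-layer LSTMCell and placeholders shaped like the ones in the question), the last relevant hidden state can be read directly off the returned state:

cell = tf.nn.rnn_cell.LSTMCell(hidden, state_is_tuple=True)
outputs, last_states = tf.nn.dynamic_rnn(cell, inputs, sequence_length=lengths,
                                         dtype=tf.float32)
# last_states is an LSTMStateTuple(c=..., h=...); its h member already holds
# the output of the last relevant step for every sequence in the batch.
last_relevant = last_states.h   # shape [batch, hidden]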
Answer 2 (score: 4)
OK, so it looks like there actually is an easier solution. As @Shao Tang and @Rahul mentioned, the preferred way to do this is to access the final cell state. Here is why: when tf.nn.dynamic_rnn returns the final state, it is actually returning the final hidden weights that you are interested in. To prove this, I just tweaked your setup and got the following results. GRUCell call (rnn_cell_impl.py):
def call(self, inputs, state):
    """Gated recurrent unit (GRU) with nunits cells."""
    if self._gate_linear is None:
        bias_ones = self._bias_initializer
        if self._bias_initializer is None:
            bias_ones = init_ops.constant_initializer(1.0, dtype=inputs.dtype)
        with vs.variable_scope("gates"):  # Reset gate and update gate.
            self._gate_linear = _Linear(
                [inputs, state],
                2 * self._num_units,
                True,
                bias_initializer=bias_ones,
                kernel_initializer=self._kernel_initializer)

    value = math_ops.sigmoid(self._gate_linear([inputs, state]))
    r, u = array_ops.split(value=value, num_or_size_splits=2, axis=1)

    r_state = r * state
    if self._candidate_linear is None:
        with vs.variable_scope("candidate"):
            self._candidate_linear = _Linear(
                [inputs, r_state],
                self._num_units,
                True,
                bias_initializer=self._bias_initializer,
                kernel_initializer=self._kernel_initializer)
    c = self._activation(self._candidate_linear([inputs, r_state]))
    new_h = u * state + (1 - u) * c
    return new_h, new_h
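Note the last line of call: it returns new_h twice, once as the step output and once as the new state. This is why, for a GRU, the final state returned by tf.nn.dynamic_rnn is exactly the last relevant hidden output.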
Solution:
import numpy as np
import tensorflow as tf
batch = 2
dim = 3
hidden = 4
lengths = tf.placeholder(dtype=tf.int32, shape=[batch])
inputs = tf.placeholder(dtype=tf.float32, shape=[batch, None, dim])
cell = tf.nn.rnn_cell.GRUCell(hidden)
cell_state = cell.zero_state(batch, tf.float32)
output, state = tf.nn.dynamic_rnn(cell, inputs, lengths, initial_state=cell_state)
inputs_ = np.asarray([[[0, 0, 0], [1, 1, 1], [2, 2, 2], [3, 3, 3]],
                      [[6, 6, 6], [7, 7, 7], [8, 8, 8], [9, 9, 9]]],
                     dtype=np.int32)
lengths_ = np.asarray([3, 1], dtype=np.int32)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    output_, state_ = sess.run([output, state], {inputs: inputs_, lengths: lengths_})
    print(output_)
    print(state_)
Output:
[[[ 0.          0.          0.          0.        ]
  [-0.24305521 -0.15512943  0.06614969  0.16873555]
  [-0.62767833 -0.30741733  0.14819752  0.44313088]
  [ 0.          0.          0.          0.        ]]

 [[-0.99152333 -0.1006391   0.28767768  0.76360202]
  [ 0.          0.          0.          0.        ]
  [ 0.          0.          0.          0.        ]
  [ 0.          0.          0.          0.        ]]]

[[-0.62767833 -0.30741733  0.14819752  0.44313088]
 [-0.99152333 -0.1006391   0.28767768  0.76360202]]
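Notice that the two rows of state_ are exactly output_[0, 2] and output_[1, 0], i.e. the rows at positions lengths - 1: the final state is the last relevant output.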
For other readers who are using the LSTMCell (another popular option), things work a little differently. The LSTMCell maintains its state in a different way: the cell state is either a tuple or a concatenated version of the actual cell state and the hidden state. So, to access the final hidden weights, you can set state_is_tuple to True during cell initialization, and the final state will be a tuple: (final cell state, final hidden weights). In that case,

_, (_, h) = tf.nn.dynamic_rnn(cell, inputs, lengths, initial_state=cell_state)

will give you the final weights.
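If the cell were instead created with state_is_tuple=False (a sketch; in that mode the returned state is c and h concatenated along the column axis), the final hidden weights would be the second half of the state:

# state has shape [batch, 2 * hidden]: columns [0, hidden) hold c,
# columns [hidden, 2 * hidden) hold h.
_, state = tf.nn.dynamic_rnn(cell, inputs, lengths, initial_state=cell_state)
h = state[:, hidden:]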
References:
c_state and m_state in Tensorflow LSTM
https://github.com/tensorflow/tensorflow/blob/438604fc885208ee05f9eef2d0f2c630e1360a83/tensorflow/python/ops/rnn_cell_impl.py#L308
https://github.com/tensorflow/tensorflow/blob/438604fc885208ee05f9eef2d0f2c630e1360a83/tensorflow/python/ops/rnn_cell_impl.py#L415
Answer 3 (score: 2)
Actually, the solution was not that hard. I implemented the following code:
slices = []
for index, l in enumerate(tf.unstack(lengths)):
    slice = tf.slice(rnn_out, begin=[index, l - 1, 0], size=[1, 1, 3])
    slices.append(slice)
last = tf.concat(slices, 0)
So, the full snippet is the following:
import numpy as np
import tensorflow as tf
batch = 2
dim = 3
hidden = 4
lengths = tf.placeholder(dtype=tf.int32, shape=[batch])
inputs = tf.placeholder(dtype=tf.float32, shape=[batch, None, dim])
cell = tf.nn.rnn_cell.GRUCell(hidden)
cell_state = cell.zero_state(batch, tf.float32)
output, _ = tf.nn.dynamic_rnn(cell, inputs, lengths, initial_state=cell_state)
inputs_ = np.asarray([[[0, 0, 0], [1, 1, 1], [2, 2, 2], [3, 3, 3]],
                      [[6, 6, 6], [7, 7, 7], [8, 8, 8], [9, 9, 9]]],
                     dtype=np.int32)
lengths_ = np.asarray([3, 1], dtype=np.int32)
slices = []
for index, l in enumerate(tf.unstack(lengths)):
    slice = tf.slice(output, begin=[index, l - 1, 0], size=[1, 1, 3])
    slices.append(slice)
last = tf.concat(slices, 0)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    outputs = sess.run([output, last], {inputs: inputs_, lengths: lengths_})
    print('RNN output:')
    print(outputs[0])
    print()
    print('last relevant output:')
    print(outputs[1])
Output:
RNN output:
[[[ 0.          0.          0.          0.        ]
  [-0.06667092 -0.09284072  0.01098599 -0.03676109]
  [-0.09101103 -0.19828682  0.03546784 -0.08721405]
  [ 0.          0.          0.          0.        ]]

 [[-0.00025157 -0.05704876  0.05527233 -0.03741353]
  [ 0.          0.          0.          0.        ]
  [ 0.          0.          0.          0.        ]
  [ 0.          0.          0.          0.        ]]]

last relevant output:
[[[-0.09101103 -0.19828682  0.03546784]]

 [[-0.00025157 -0.05704876  0.05527233]]]