I am trying to name the initial state in my RNN model so that I can call it from my .pb file. Does anyone know how to solve this?
The steps I have taken:
(1) Train the model
(2) Save the model
(3) Freeze the model -> .pb (a rough sketch of this step follows the list)
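For reference, the freezing step itself is roughly this (a minimal sketch; the checkpoint and output paths are examples, and the output node list must match the names you gave your graph):

import tensorflow as tf

with tf.Session() as sess:
    # restore graph structure and weights from the checkpoint
    saver = tf.train.import_meta_graph('model.ckpt.meta')
    saver.restore(sess, 'model.ckpt')
    # bake variables into constants, keeping only the subgraph the outputs need
    frozen = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph.as_graph_def(), ['final_state'])
    with tf.gfile.GFile('model.pb', 'wb') as f:
        f.write(frozen.SerializeToString())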
I am now trying to name my state nodes so that I can use them during inference:
(1) I have named my input and output nodes
(2) I have named my final_state node
(3) I am unable to name my initial_state node
The code that names the initial_state node:
initial_state = tf.identity(cell.zero_state(batch_size, tf.int32), name="initial_state")
The error I get when I try to save the model:
TypeError: 'Tensor' object is not iterable.
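My current guess at the cause: for an LSTMCell, cell.zero_state returns an LSTMStateTuple of (c, h) rather than a single tensor, and wrapping the whole tuple in one tf.identity collapses it into a single tensor that the cell later fails to unpack back into c and h. A sketch of naming each component separately instead (the names are just examples, and tf.float32 is used because the state dtype has to match the inputs):

zero_state = cell.zero_state(batch_size, tf.float32)
initial_c = tf.identity(zero_state.c, name="initial_state_c")
initial_h = tf.identity(zero_state.h, name="initial_state_h")
# rebuild the tuple so dynamic_rnn still sees the structure it expects
initial_state = tf.nn.rnn_cell.LSTMStateTuple(initial_c, initial_h)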
RNN model code:
def build_graph(
        cell_type=None,
        state_size=state_size,
        num_classes=num_classes,
        batch_size=batch_size,
        num_steps=num_steps,
        build_with_dropout=False,
        learning_rate=learning_rate):
    # clean up any residual TensorFlow objects
    reset_graph()
    # data placeholders
    x = tf.placeholder(tf.int32, [batch_size, num_steps], name='x')
    y = tf.placeholder(tf.int32, [batch_size, num_steps], name='y')
    # dropout placeholder
    keep_prob = tf.placeholder(tf.float32, name='keep_prob')
    # embeddings are more efficient than one-hot encoding:
    # create a lookup table for the inputs that can run in parallel
    embeddings = tf.get_variable('embedding_matrix', [num_classes, state_size])
    rnn_inputs = tf.nn.embedding_lookup(embeddings, x)
    # pick cell type
    if cell_type == 'GRU':
        cell = tf.nn.rnn_cell.GRUCell(state_size)
    elif cell_type == 'LSTM':
        cell = tf.nn.rnn_cell.LSTMCell(state_size, state_is_tuple=True)
    elif cell_type == 'LN_LSTM':
        cell = LayerNormalizedLSTMCell(state_size)
    else:
        cell = tf.nn.rnn_cell.BasicRNNCell(state_size)
    # add dropout
    if build_with_dropout:
        cell = tf.nn.rnn_cell.DropoutWrapper(cell, input_keep_prob=keep_prob)
    # initialize state
    init_state = tf.identity(cell.zero_state(batch_size, tf.int32), name="init_state")
    # dynamic_rnn
    rnn_outputs, states = tf.nn.dynamic_rnn(cell, rnn_inputs, initial_state=init_state)
    # with tf.control_dependencies([init_state.assign(states)]):
    #     rnn_outputs = tf.identity(rnn_outputs)
    # This is there simply to give the node a name, i.e. it takes a node and adds another
    # node to it (which doesn't do much, just an identity transform), but crucially, that
    # node can have a name, which is saved in the frozen graph and can later be accessed
    # from C++.
    final_state = tf.identity(states, name="final_state")
    with tf.variable_scope('softmax'):
        W = tf.get_variable('W', [state_size, num_classes])
        b = tf.get_variable('b', [num_classes], initializer=tf.constant_initializer(0.0))
    # reshape to get the last output
    rnn_outputs = tf.reshape(rnn_outputs, [-1, state_size])
    y_reshaped = tf.reshape(y, [-1])
    logits = tf.matmul(rnn_outputs, W) + b
    predictions = tf.nn.softmax(logits)
    # cross-entropy loss averaged over all timesteps
    total_loss = tf.reduce_mean(
        tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped))
    # pick optimizer
    train_step = tf.train.AdamOptimizer(learning_rate).minimize(total_loss)
    return dict(
        x=x,
        y=y,
        keep_prob=keep_prob,
        init_state=init_state,
        final_state=final_state,
        total_loss=total_loss,
        train_step=train_step,
        preds=predictions,
        saver=tf.train.Saver()
    )
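For completeness, the returned dict is used roughly like this during training and saving (hypothetical sketch; the feeds and checkpoint path are examples):

g = build_graph(cell_type='LSTM')
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # ... training loop: run g['train_step'] with feeds for g['x'], g['y'], g['keep_prob'] ...
    g['saver'].save(sess, 'model.ckpt')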
Answer 0 (score: 0)
Take a look at this reference code: https://github.com/BrotherJing/RNN_tabletennis
# fetch the per-layer c and h tensors stored in collections, then
# zip them back into (c, h) pairs to feed as the initial state
initial_state_c = tf.get_collection("initial_state_c")
initial_state_h = tf.get_collection("initial_state_h")
initial_state = []
for i in range(len(initial_state_c)):
    initial_state.append((initial_state_c[i], initial_state_h[i]))
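For this to work, the graph has to register those tensors under the same collection keys when it is built, something like the following sketch (assuming a MultiRNNCell of LSTM layers, which is why the retrieval above loops over layers):

init_state = cell.zero_state(batch_size, tf.float32)
for layer_state in init_state:  # one LSTMStateTuple per layer
    tf.add_to_collection("initial_state_c", layer_state.c)
    tf.add_to_collection("initial_state_h", layer_state.h)

One caveat: collections are stored in the MetaGraphDef (the .meta file read by tf.train.import_meta_graph), not in a frozen GraphDef, so for the frozen .pb route the tensors still need explicit names (e.g. via tf.identity) and are fetched with graph.get_tensor_by_name instead.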