How to make an input tensor trainable

Asked: 2018-11-09 15:35:29

Tags: python-3.x tensorflow keras

Below is code that tries to use the input image as the training variable during optimization. It starts from a Keras model and converts it to a TensorFlow model. The TensorFlow model takes a tensor as input, and the code then tries to optimize a cost function with that input tensor as the trainable variable.

The error is:

NotImplementedError: ('Trying to update a Tensor ', ...)

The cause is that the input tensor is not a variable. The question is: how can the input image be made trainable, or how can the tensor be converted into a tf.Variable? Thanks for any help:

import tensorflow as tf
from keras.models import Sequential, load_model, Model
from keras import backend as K
from keras.layers.core import Dense, Dropout, Activation
import os
from tensorflow.python.framework import graph_util
from tensorflow.python.framework import graph_io

n_classes = 10

model = Sequential()
model.add(Dense(10, input_shape=(784,)))
model.add(Activation('relu'))                            
model.add(Dense(n_classes, name='logits'))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy', metrics=['accuracy'],  optimizer='adam')

# Write the graph in binary .pb file
outdir = "model4_tf"
os.makedirs(outdir, exist_ok=True)


prefix = "simple_nn" 
name = 'output_graph.pb'
# Alias the outputs in the model - this sometimes makes them easier to access in TF
pred = []
pred_node_names = []
for i, o in enumerate(model.outputs):
    pred_node_names.append(prefix+'_'+str(i))
    pred.append(tf.identity(o, name=pred_node_names[i]))
print('Output nodes names are: ', pred_node_names)


sess = K.get_session()


constant_graph = graph_util.convert_variables_to_constants(
    sess, sess.graph.as_graph_def(), pred_node_names)
graph_io.write_graph(constant_graph, outdir, name, as_text=False)


tf.reset_default_graph()

def load_graph(model_name):
    #graph = tf.Graph()
    graph = tf.get_default_graph()
    graph_def = tf.GraphDef()
    with open(model_name, "rb") as f:
        graph_def.ParseFromString(f.read())
    with graph.as_default():
        tf.import_graph_def(graph_def)
    return graph

my_graph = load_graph(model_name=os.path.join(outdir, name))



input_op = my_graph.get_operation_by_name("import/dense_1_input")
output_op = my_graph.get_operation_by_name("import/simple_nn_0")
logit_op = my_graph.get_operation_by_name("import/logits/BiasAdd")


x_hat = input_op.outputs[0] # input tensor
labels = output_op.outputs[0] # label tensor
logits = logit_op.outputs[0] # logits tensor

learning_rate = tf.placeholder(tf.float32, ())

loss = tf.nn.softmax_cross_entropy_with_logits_v2(logits=logits, labels=[labels])
optim_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(
    loss, var_list=[x_hat])

0 Answers:

No answers yet