I tried to strip the initialization op out of the graph, as a (presumably) simple example of how to use this wrapper, but I could not get it to work.
I am not converting a graph from the command line; I am actually trying to write a library that allows specific edits to be made in Python while the model is being set up.
import tensorflow as tf

graph = tf.Graph()
with graph.as_default():
    with tf.variable_scope('signal_in'):
        signal_in = tf.placeholder(tf.float32, shape=(10, 40, 2, 1))
    with tf.variable_scope('dascope1'):
        conv_linear = tf.keras.layers.Conv2D(
            8, (8, 2), padding='valid', name='conv_linear', use_bias=True,
            kernel_initializer=tf.initializers.lecun_normal(seed=137),
            bias_initializer=tf.initializers.lecun_normal(seed=137))(signal_in)
    with tf.variable_scope('softmax'):
        logits = tf.contrib.layers.fully_connected(
            conv_linear, 2, activation_fn=None,
            normalizer_fn=None, normalizer_params=None,
            weights_initializer=tf.initializers.lecun_normal(seed=731),
            weights_regularizer=None,
            biases_initializer=tf.initializers.lecun_normal(seed=777),
            biases_regularizer=None, reuse=None,
            variables_collections=None, outputs_collections=None,
            trainable=True, scope='logit')
        softmax = tf.nn.softmax(logits, axis=0)
    with tf.variable_scope('loss'):
        l_vec = tf.placeholder(tf.float32, shape=(10, 2))
        loss = tf.keras.losses.CategoricalCrossentropy(
            from_logits=False, label_smoothing=0)(l_vec, softmax)
        minimize_op = tf.train.AdamOptimizer(learning_rate=0.05).minimize(loss)
        tf.global_variables_initializer()
Then:
graphdef = tf.tools.graph_transforms.TransformGraph(
    graph.as_graph_def(), [], [], ['remove_nodes(op=loss/init)'])

with tf.Graph().as_default() as g:
    tf.import_graph_def(graphdef, name='')
    for op in g.get_operations():
        if op.name.split('/')[-1] == 'init':
            print('True')
This prints True, i.e. the init op is still in the transformed graph.
So how is this wrapper meant to be used? Note that the init op doesn't have any regular inputs or outputs, only control-dependency arrows as inputs.
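I suspect that `remove_nodes` matches op *types* (e.g. `Identity`), not node names, which would explain why `remove_nodes(op=loss/init)` removes nothing. What I ultimately want boils down to something like this pure-Python sketch, which drops one node by name from a GraphDef-like structure (plain dicts stand in for `NodeDef` protos here so the idea is self-contained; `strip_node_by_name` is just an illustrative name I made up):

```python
def strip_node_by_name(nodes, target):
    """Remove the node called `target`, and drop any references to it
    (including control-dependency inputs, which are spelled '^target')
    from the inputs of the remaining nodes."""
    kept = []
    for node in nodes:
        if node["name"] == target:
            continue  # skip the node being stripped
        kept.append(dict(node, input=[i for i in node["input"]
                                      if i.lstrip("^") != target]))
    return kept

nodes = [
    {"name": "loss/init", "input": []},
    {"name": "train_step", "input": ["^loss/init", "loss/loss"]},
]
print(strip_node_by_name(nodes, "loss/init"))
# → [{'name': 'train_step', 'input': ['loss/loss']}]
```

With real TensorFlow objects this would iterate over `graph_def.node` instead of a list of dicts, but I would prefer to use the transform wrapper properly if it can do this.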