Feeding the output of one model into another model

Date: 2018-04-14 22:10:31

Tags: tensorflow

I want to feed the output of one model (f) into another model (c). The following code works:

features_ = sess.run(f.features, feed_dict={x:x_, y:y_, dropout:1.0, training:False})

sess.run(c.optimize, feed_dict={x:x_, y:y_, features:features_, dropout:1.0, training:False})
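(For reference, the feed_dicts above imply placeholder definitions along these lines; this is a sketch not shown in the question, with the shape of x taken from the error message below and num_features assumed:)

    x = tf.placeholder(tf.float32, [None, 28, 28, 1])             # image input consumed by f
    y = tf.placeholder(tf.int64, [None])                          # sparse class labels
    features = tf.placeholder(tf.float32, [None, num_features])   # f's output, fed into c
    dropout = tf.placeholder(tf.float32)                          # dropout keep probability
    training = tf.placeholder(tf.bool)                            # batch-norm mode switch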

c only needs features_ and y_; it does not need x_. However, if I try to remove x_ as an input, i.e.

feed_dict={y:y_, features:features_}

I get the following error:

    InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'Placeholder' with dtype float and shape [?,28,28,1]
        [[Node: Placeholder = Placeholder[dtype=DT_FLOAT, shape=[?,28,28,1], _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]

Is there any reason for this? features_ is a numpy ndarray, so it doesn't seem to be a tensor type or anything like that.

Here is the code for f:

class ConvModelSmall(object):
    def __init__(self, x, y, settings, num_chan, num_features, lr, reg, dropout, training, scope):
        """ init the model with hyper-parameters etc """
        self.x = x
        self.y = y
        self.dropout = dropout
        self.training = training

        initializer = tf.contrib.layers.xavier_initializer(uniform=False)
        # get_parameters is a user-defined helper; `dims` is assumed to be defined elsewhere
        self.weights = get_parameters(scope=scope, initializer=initializer, dims=dims)
        self.biases = get_parameters(scope=scope, initializer=initializer, dims=dims)

        self.features = self.feature_model()
        self.acc = settings.acc(self.features, self.y)
        self.loss = settings.loss(self.features, self.y) + reg * reg_loss_fn(self.weights)
        update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
        with tf.control_dependencies(update_ops):
            self.optimize = tf.train.AdagradOptimizer(lr).minimize(self.loss)

    def feature_model(self):
        conv1 = conv2d('conv1', self.x, self.weights['wc1'], self.biases['bc1'], 2, self.training, self.dropout)
        conv2 = conv2d('conv2', conv1, self.weights['wc2'], self.biases['bc2'], 2, self.training, self.dropout)
        conv3 = conv2d('conv3', conv2, self.weights['wc3'], self.biases['bc3'], 2, self.training, self.dropout)

        dense1_reshape = tf.reshape(conv3, [-1, self.weights['wd1'].get_shape().as_list()[0]])
        dense1 = fc_batch_relu(dense1_reshape, self.weights['wd1'], self.biases['bd1'], self.training, self.dropout)
        dense2 = fc_batch_relu(dense1, self.weights['wd2'], self.biases['bd2'], self.training, self.dropout)

        out = tf.matmul(dense2, self.weights['wout']) + self.biases['bout']
        return out

Here is the code for c:

class LinearClassifier(object):
    def __init__(self, features, y, training, num_features, num_classes, lr, reg, scope=""):
        self.features = features
        self.y = y
        self.num_features = num_features
        self.num_classes = num_classes

        initializer = tf.contrib.layers.xavier_initializer(uniform=False)
        self.W = get_scope_variable(scope=scope, var="W", shape=[num_features, num_classes], initializer=initializer)
        self.b = get_scope_variable(scope=scope, var="b", shape=[num_classes], initializer=initializer)

        scores = tf.matmul(tf.layers.batch_normalization(self.features, training=training), self.W) + self.b
        self.loss = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(labels=self.y, logits=scores)) + reg * tf.nn.l2_loss(self.W)
        update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
        with tf.control_dependencies(update_ops):
            self.optimize = tf.train.GradientDescentOptimizer(lr).minimize(self.loss)

1 answer:

Answer 0 (score: 0):

The devil is probably in these lines:

    update_ops = tf.get_collection( tf.GraphKeys.UPDATE_OPS )
    with tf.control_dependencies(update_ops):
        self.optimize = tf.train.GradientDescentOptimizer( lr ).minimize( self.loss )

By the time you define c, f has already been defined, so update_ops = tf.get_collection( tf.GraphKeys.UPDATE_OPS ) collects the update ops of everything in the current graph. That includes the ops belonging to f, which depend on x.

Then with tf.control_dependencies(update_ops): means "the following may only execute after all update_ops have executed", including the ops that consume x. But no value is fed for x, and the error occurs.
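A minimal sketch of this failure mode (hypothetical names, TF 1.x): batch normalization on x registers update ops in UPDATE_OPS, and control_dependencies then drags them into a train op whose loss never touches x:

    import numpy as np
    import tensorflow as tf  # TF 1.x API

    x = tf.placeholder(tf.float32, [None, 4], name="x")
    y = tf.placeholder(tf.float32, [None, 4], name="y")

    # Batch norm on x adds moving-average update ops to UPDATE_OPS.
    _ = tf.layers.batch_normalization(x, training=True)

    # This loss does not involve x at all...
    w = tf.Variable(tf.ones([4]))
    loss = tf.reduce_mean(tf.square(y - w))

    # ...but control_dependencies pulls in the batch-norm updates, which do.
    update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
    with tf.control_dependencies(update_ops):
        train = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        # Raises InvalidArgumentError: must feed a value for placeholder 'x'
        sess.run(train, feed_dict={y: np.zeros((2, 4), np.float32)})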

To solve the problem, you can either split the two networks into two different tf.Graphs, or, more simply, filter the update ops by scope when you collect them, using the scope argument of tf.get_collection(). For that to work, you should add a scope to your network classes ConvModelSmall and LinearClassifier.
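For example, a sketch of the scope-filtered version inside LinearClassifier (assuming its batch-norm layer is created under a tf.variable_scope(scope), so its update ops carry that name prefix):

    # Collect only the update ops created under this classifier's own scope;
    # f's batch-norm updates (which depend on x) are then left out.
    update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS, scope=scope)
    with tf.control_dependencies(update_ops):
        self.optimize = tf.train.GradientDescentOptimizer(lr).minimize(self.loss)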