I am trying to convert the following classification graph into a regression, so that it returns a single value instead of 3:
baseFeatureSize = 5
keep_prob = tf.placeholder(tf.float32)
x = tf.placeholder(tf.float32, shape=[None, 64])
x_image = tf.reshape(x, [-1, 8, 8, 1])
W_conv1 = weight_variable([5, 5, 1, baseFeatureSize])
b_conv1 = bias_variable([baseFeatureSize])
h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)
W_conv2 = weight_variable([8, 8, baseFeatureSize, baseFeatureSize * 2])
b_conv2 = bias_variable([baseFeatureSize * 2])
h_conv2 = tf.nn.relu(conv2d(h_conv1, W_conv2) + b_conv2)
W_fc1 = weight_variable([8 * 8 * baseFeatureSize * 2, baseFeatureSize * 4])
b_fc1 = bias_variable([baseFeatureSize * 4])
h_pool2_flat = tf.reshape(h_conv2, [-1, 8 * 8 * baseFeatureSize * 2])
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)
W_fc3 = weight_variable([baseFeatureSize * 4, 3])
b_fc3 = bias_variable([3])
y_policy = tf.placeholder(tf.float32, shape=[None, 3])
y_policy_conv = tf.matmul(h_fc1, W_fc3) + b_fc3
cross_entropy_policy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y_policy, logits=y_policy_conv))
train_step_policy = tf.train.AdamOptimizer(learning_rate = 0.01).minimize(cross_entropy_policy)
To turn it into a regression, I changed the fully connected part (W_fc3, b_fc3, the output, the cross entropy, and train_step) so that the tensor shapes have size 1 instead of 3, as shown below; the rest of the graph is unchanged:
W_fc3 = weight_variable([baseFeatureSize * 4, 1])
b_fc3 = bias_variable([1])
y_policy = tf.placeholder(tf.float32, shape=[None, 1])
y_policy_conv = tf.nn.softmax(tf.matmul(h_fc1, W_fc3) + b_fc3)
cross_entropy_policy = tf.reduce_mean(-tf.reduce_sum(y_policy * tf.log(y_policy_conv), reduction_indices=1))
train_step_policy = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy_policy)
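For comparison, a conventional single-output regression head would drop the softmax entirely and minimize mean squared error rather than cross entropy. A minimal, self-contained sketch of such a head (the `weight_variable`/`bias_variable` helpers are assumed to be the usual truncated-normal/constant initializers, and `h_fc1` is stubbed out as a placeholder; TF1-style graph mode via `tf.compat.v1`):

```python
import numpy as np
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

# Assumed definitions of the helpers used in the question.
def weight_variable(shape):
    return tf.Variable(tf.truncated_normal(shape, stddev=0.1))

def bias_variable(shape):
    return tf.Variable(tf.constant(0.1, shape=shape))

baseFeatureSize = 5

# Stand-in for the fully connected layer from the original graph.
h_fc1 = tf.placeholder(tf.float32, shape=[None, baseFeatureSize * 4])

W_fc3 = weight_variable([baseFeatureSize * 4, 1])
b_fc3 = bias_variable([1])

y_policy = tf.placeholder(tf.float32, shape=[None, 1])
# Linear output: no softmax, since there is nothing to normalize over.
y_policy_conv = tf.matmul(h_fc1, W_fc3) + b_fc3

# Mean squared error is the usual regression loss.
mse_policy = tf.reduce_mean(tf.square(y_policy_conv - y_policy))
train_step_policy = tf.train.AdamOptimizer(learning_rate=0.001).minimize(mse_policy)
```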
But it keeps throwing the following error:

InvalidArgumentError (see above for traceback): Assign requires shapes of both tensors to match. lhs shape= [1] rhs shape= [3]

I can't see a 3 anywhere in the new graph. What could be wrong?
Answer (score: 0)

For future reference, see Achilles' own comment:

"Sorry, I just realized what went wrong. The Supervisor's logdir was pointing to a previous version of the model (the classification version). I pointed it to an empty logdir and it worked. Thanks for looking into this." – Achilles
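A side note on the regression variant itself: even with the checkpoint issue fixed, applying softmax to a single logit always yields exactly 1.0 (there is only one "class" to normalize over), so `y_policy_conv` would be constant and the `y * log(y_conv)` loss would be zero regardless of the weights. A quick NumPy check, independent of TensorFlow:

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax along the last axis.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Shape [batch, 1], i.e. a single logit per example, as in the
# regression graph above. Whatever the logit, softmax returns 1.0.
logits = np.array([[-3.7], [0.0], [12.5]])
out = softmax(logits)
print(out.ravel())  # [1. 1. 1.]
```

This is why a regression head normally uses a linear output with a squared-error loss instead of a softmax with cross entropy.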