I am very new to Keras and deep learning, and I want to print the output of certain layers (named output[x]).
Below is part of the architecture. Note that I have not provided reproducible code.
The goal is to verify the val_loss function, which uses categorical cross-entropy. The (simplified) formula is:
L = −y⋅log(ŷ)
L is given when I run the architecture, since the model outputs L. y
is my ground truth and ŷ
is my estimate.
Goal: print the values of output1, output2, output3, output4 and output5, so that I know what my ŷ
is. With these variables I can then verify the formula being used.
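As a minimal sketch of the verification itself (hypothetical values, not taken from the question's model): once the predicted probabilities ŷ are available, the per-sample categorical cross-entropy can be checked directly with NumPy.

```python
import numpy as np

# Hypothetical one-hot ground truth y and softmax estimate y_hat for 3 classes.
y = np.array([0.0, 1.0, 0.0])
y_hat = np.array([0.2, 0.7, 0.1])

# Categorical cross-entropy: L = -sum_over_classes(y * log(y_hat)).
# For a one-hot y this reduces to -log of the probability of the true class.
L = -np.sum(y * np.log(y_hat))
print(L)  # -log(0.7), roughly 0.3567
```

With the printed output values in hand, the same computation per output head should match the per-output loss that Keras reports.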
from keras.layers import Input, Conv2D, MaxPooling2D, Flatten, Dense, Dropout
from keras.models import Model
from keras import optimizers
from keras.callbacks import ReduceLROnPlateau, EarlyStopping

layer_1 = Conv2D(filters[0], kernel_size[0], activation='relu', strides=strides)(inputs)
layer_11 = MaxPooling2D(pool_size=pool_size, strides=maxp_strides, padding='valid')(layer_1)
layer_2 = Conv2D(filters[1], kernel_size[1], activation='relu', strides=strides, padding='same')(layer_11)
layer_3 = Conv2D(filters[2], kernel_size[2], strides=strides, activation='relu', padding='same')(layer_2)
layer_4 = Conv2D(filters[3], kernel_size[3], strides=strides, activation='relu', padding='same')(layer_3)
layer_5 = Conv2D(filters[4], kernel_size[4], strides=strides, activation='relu', padding='same')(layer_4)
layer_6 = Flatten()(layer_5)
layer_7 = Dense(units[0], activation='relu')(layer_6)
layer_7 = Dropout(dropout)(layer_7)
layer_8 = Dense(units[1], activation='relu')(layer_7)
layer_8 = Dropout(dropout)(layer_8)
output1 = Dense(output_class, activation='softmax')(layer_8)
output2 = Dense(output_class, activation='softmax')(layer_8)
output3 = Dense(output_class, activation='softmax')(layer_8)
output4 = Dense(output_class, activation='softmax')(layer_8)
output5 = Dense(output_class, activation='softmax')(layer_8)
rms = optimizers.RMSprop(lr=lr, rho=rho, epsilon=epsilon, decay=decay)
reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=factor, patience=patience, min_lr=min_lr)
early_stopping = EarlyStopping(monitor='val_loss', patience=ES_patience)
model = Model(inputs=inputs, outputs=[output1, output2, output3, output4, output5])
model.compile(optimizer=rms, loss='categorical_crossentropy', metrics=['categorical_crossentropy'])
history = model.fit(X_train,
                    [self.y_train[:, 0, :], self.y_train[:, 1, :], self.y_train[:, 2, :],
                     self.y_train[:, 3, :], self.y_train[:, 4, :]],
                    batch_size=batch_size, epochs=epoch, validation_split=val_split,
                    verbose=verbose, callbacks=[reduce_lr])
Here is what I have tried:
print("outputs: {} - output1 {} - output2 {} - output3 {} - output4 {} - output5 {}".format(model.output,output1,output2,output3,output4,output5))
# outputs: [<tf.Tensor 'dense_38/Softmax:0' shape=(?, 3) dtype=float32>, <tf.Tensor 'dense_39/Softmax:0' shape=(?, 3) dtype=float32>, <tf.Tensor 'dense_40/Softmax:0' shape=(?, 3) dtype=float32>, <tf.Tensor 'dense_41/Softmax:0' shape=(?, 3) dtype=float32>, <tf.Tensor 'dense_42/Softmax:0' shape=(?, 3) dtype=float32>] - output1 Tensor("dense_38/Softmax:0", shape=(?, 3), dtype=float32) - output2 Tensor("dense_39/Softmax:0", shape=(?, 3), dtype=float32) - output3 Tensor("dense_40/Softmax:0", shape=(?, 3), dtype=float32) - output4 Tensor("dense_41/Softmax:0", shape=(?, 3), dtype=float32) - output5 Tensor("dense_42/Softmax:0", shape=(?, 3), dtype=float32)
and
print(model.layers[-1].output)
# Tensor("dense_42/Softmax:0", shape=(?, 3), dtype=float32)
Answer (score: 0):
The problem is that when you define the graph this way, it does not yet contain any values. output1
is just a symbolic tensor (a placeholder). If you want to visualize/display/plot anything in the graph (or even inspect the computation graph), I suggest you look at TensorBoard.
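To actually obtain the values of the output heads, the graph has to be evaluated with real input data. A minimal sketch with a stand-in architecture (a toy model assumed here, not the question's; one input, two softmax heads instead of five) shows that `model.predict` returns one array per output head:

```python
import numpy as np
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

# Toy stand-in for the question's multi-output model: one input, two softmax heads.
inputs = Input(shape=(4,))
hidden = Dense(8, activation='relu')(inputs)
out1 = Dense(3, activation='softmax')(hidden)
out2 = Dense(3, activation='softmax')(hidden)
model = Model(inputs=inputs, outputs=[out1, out2])

# Feeding real data through the graph produces concrete values:
# predict() returns a list with one (batch, classes) array per output head.
X = np.random.rand(5, 4).astype('float32')
preds = model.predict(X)
print([p.shape for p in preds])  # [(5, 3), (5, 3)]
```

Applied to the question's model, `model.predict(X_val)` would yield five arrays, which are exactly the ŷ values needed to verify the cross-entropy formula by hand.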