I wrote a TensorFlow program and now I want to inspect what I did with TensorBoard. Here is the interesting part of the code:
$count = 0;
foreach ($card_id as $id) {
    $card = ScholarCard::find($id);
    $gpa = $card->scholar_GPA;
    if ($card->scholar_grade_level == 12) {
        if ($gpa >= 95 && $gpa <= 100) {
            $card_gpa = 1.0;
        } elseif ($gpa >= 75 && $gpa < 95) {
            // One 0.1 step per GPA point: 94 => 1.1, 85 => 2.0, 75 => 3.0
            // (equivalent to the original elseif ladder, which assumed integer GPAs)
            $card_gpa = round(1.0 + (95 - $gpa) * 0.1, 1);
        } else {
            $card_gpa = 5.0;
        }
    } else {
        $card_gpa = $gpa;
    }
    // Note: whereBetween() expects a column name as its first argument, not a value.
    $scholar_id2s = ScholarCard::where('scholar_card_id', '=', $card->scholar_card_id)
        ->whereBetween($card_gpa, [$scholarship->scholarship_gpa_to, $scholarship->scholarship_gpa_from])
        ->get();
    $count++;
}
When I run TensorBoard I can see the graph, but the Scalars tab stays empty. Why?

Update: here is the input placeholder declaration:
def train_neural_network(x):
    prediction = neuronal_network_model(x)
    tf.summary.scalar('Prediction', prediction)
    cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=prediction, labels=y))
    tf.summary.scalar('cost', cost)
    # default step size of optimizer is 0.001
    optimizer = tf.train.AdamOptimizer(learning_rate=1e-3).minimize(cost)
    # an epoch is one feed-forward pass + backpropagation (adjusting the weights and the biases)
    number_of_epochs = 200
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for epoch in range(number_of_epochs):
            shuffle_data()
            epoch_loss = 0
            for j in range(len(train_data) - 1):
                if np.shape(train_labels[j]) == (batch_size, n_classes):
                    epoch_x = train_data[j]
                    epoch_y = train_labels[j]
                    _, c = sess.run([optimizer, cost], feed_dict={x: epoch_x, y: np.reshape(epoch_y, (batch_size, n_classes))})
                    epoch_loss += c
            correct = tf.equal(tf.argmax(prediction, 1), tf.argmax(y, 1))
            accuracy = tf.reduce_mean(tf.cast(correct, 'float'))
            tf.summary.scalar('Prediction', prediction)
            # print('Epoch', epoch, 'complete out of ', number_of_epochs, 'loss', epoch_loss)
            loss_vector.append(epoch_loss)
            for i in range(len(test_data) - 1):
                if np.shape(test_labels[i]) == (batch_size, n_classes):
                    print('Accuracy', accuracy.eval({x: test_data[i], y: test_labels[i]}))
                    accuracy_vector.append(accuracy.eval({x: test_data[i], y: test_labels[i]}))
        merged = tf.summary.merge_all()
        train_writer = tf.summary.FileWriter('Tensorboard/DNN', sess.graph)
Answer 0 (score: 0)
You need to evaluate the merged summary the same way you evaluate the other tensors whose data you want to dump:
_, c, smry = sess.run([optimizer, cost, merged], feed_dict={x: epoch_x, y: np.reshape(epoch_y,(batch_size,n_classes))})
train_writer.add_summary(smry, j)
where j is your training index. Obviously, this has to happen inside the training loop. You may want to write a summary only for every n-th value of j, to lighten summary writing and visualization.
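The every-n-th-step gating suggested above can be sketched independently of TensorFlow; the helper name and the summary_every parameter below are illustrative, not part of the original code:

```python
def summary_steps(total_steps, summary_every):
    """Return the training indices j at which add_summary() would be called."""
    return [j for j in range(total_steps) if j % summary_every == 0]

# With 10 batches per epoch and a summary every 3rd step:
print(summary_steps(10, 3))  # [0, 3, 6, 9]
```

Inside the training loop this becomes a simple guard, e.g. `if j % summary_every == 0: train_writer.add_summary(smry, j)`.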
More details here.