I am trying to build a simple network of 2 input neurons (+1 bias) feeding into 1 output neuron to teach it the AND function. It is based on the mnist-classification example, so it may be overly complex for the task, but for me it's about the general structure of such nets, so please don't say "you could just do this in numpy" or the like; it's about TensorFlow NNs for me. So here is the code:
import tensorflow as tf
import numpy as np

tf.logging.set_verbosity(tf.logging.INFO)

def model_fn(features, labels, mode):
    input_layer = tf.reshape(features["x"], [-1, 2])
    output_layer = tf.layers.dense(inputs=input_layer, units=1, activation=tf.nn.relu, name="output_layer")

    if mode == tf.estimator.ModeKeys.PREDICT:
        return tf.estimator.EstimatorSpec(mode=mode, predictions=output_layer)

    loss = tf.losses.mean_squared_error(labels=labels, predictions=output_layer)

    if mode == tf.estimator.ModeKeys.TRAIN:
        optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.001)
        train_op = optimizer.minimize(loss=loss, global_step=tf.train.get_global_step())
        return tf.estimator.EstimatorSpec(mode=mode, loss=loss, train_op=train_op)

    eval_metrics_ops = {"accuracy": tf.metrics.accuracy(labels=labels, predictions=output_layer)}
    return tf.estimator.EstimatorSpec(mode=mode, predictions=output_layer, loss=loss)

def main(unused_arg):
    train_data = np.asarray(np.reshape([[0,0],[0,1],[1,0],[1,1]], [4,2]))
    train_labels = np.asarray(np.reshape([0,0,0,1], [4,1]))
    eval_data = train_data
    eval_labels = train_labels

    classifier = tf.estimator.Estimator(model_fn=model_fn, model_dir="/tmp/NN_AND")

    tensors_to_log = {"The output:": "output_layer"}
    logging_hook = tf.train.LoggingTensorHook(tensors=tensors_to_log, every_n_iter=10)

    train_input_fn = tf.estimator.inputs.numpy_input_fn(x={"x": train_data}, y=train_labels, batch_size=10, num_epochs=None, shuffle=True)
    classifier.train(input_fn=train_input_fn, steps=2000, hooks=[logging_hook])

    eval_input_fn = tf.estimator.inputs.numpy_input_fn(x={"x": eval_data}, y=eval_labels, batch_size=1, shuffle=False)
    eval_results = classifier.evaluate(input_fn=eval_input_fn)
    print(eval_results)

if __name__ == "__main__":
    tf.app.run()
Answer 0 (score: 2):
I made a few minor modifications to your code so that it learns the and function:

1) Change train_data to a float32 representation (the dense layer's float weights and the gradient ops need floating-point inputs):
train_data = np.asarray(np.reshape([[0,0],[0,1],[1,0],[1,1]], [4,2]), dtype=np.float32)
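A quick illustrative check (my own addition, not part of the original answer): numpy infers an integer dtype for the raw nested list, and the explicit dtype argument fixes that:

import numpy as np

raw = np.reshape([[0,0],[0,1],[1,0],[1,1]], [4,2])
print(raw.dtype)                                 # int64 on most platforms
train_data = np.asarray(raw, dtype=np.float32)
print(train_data.dtype)                          # float32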
2) Remove the relu activation from the output layer. In general, relus are not recommended on an output layer: the neuron can "die", and then all its gradients equal zero, which in turn makes any further learning impossible.
output_layer = tf.layers.dense(inputs=input_layer, units=1, activation=None, name="output_layer")
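To see the dead-relu effect concretely, here is a small sketch (my own illustration with made-up weights, not your net's values): once the pre-activation is negative, the relu output and every gradient flowing through it are zero:

import tensorflow as tf

x = tf.constant([[1.0, 1.0]])
w = tf.constant([[-1.0], [-1.0]])  # weights that drive the pre-activation negative
pre = tf.matmul(x, w)              # -2.0
out = tf.nn.relu(pre)              # 0.0
grad = tf.gradients(out, w)[0]     # all zeros: no learning signal reaches w
with tf.Session() as sess:
    print(sess.run([out, grad]))   # [array([[0.]]), array([[0.], [0.]])]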
3) In eval_metrics_ops, make sure to round the result so that you actually measure accuracy:
eval_metrics_ops = {"accuracy": tf.metrics.accuracy(labels=labels, predictions=tf.round(output_layer))}
4) Don't forget to pass the eval_metrics_ops dict you defined to the EstimatorSpec (note the parameter itself is called eval_metric_ops):
return tf.estimator.EstimatorSpec(mode=mode, predictions=output_layer, loss=loss, eval_metric_ops=eval_metrics_ops)
Also, to log the output of the last layer you should use:
tensors_to_log = {"The output:": "output_layer/BiasAdd:0"}
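Putting the four changes together, here is a minimal sketch of the modified model_fn and training data; the rest of your main (Estimator setup, input functions, train/evaluate calls) is assumed to stay exactly as in the question:

import numpy as np
import tensorflow as tf

def model_fn(features, labels, mode):
    input_layer = tf.reshape(features["x"], [-1, 2])
    # 2) no relu on the output layer
    output_layer = tf.layers.dense(inputs=input_layer, units=1, activation=None, name="output_layer")

    if mode == tf.estimator.ModeKeys.PREDICT:
        return tf.estimator.EstimatorSpec(mode=mode, predictions=output_layer)

    loss = tf.losses.mean_squared_error(labels=labels, predictions=output_layer)

    if mode == tf.estimator.ModeKeys.TRAIN:
        optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.001)
        train_op = optimizer.minimize(loss=loss, global_step=tf.train.get_global_step())
        return tf.estimator.EstimatorSpec(mode=mode, loss=loss, train_op=train_op)

    # 3) round predictions before measuring accuracy
    eval_metrics_ops = {"accuracy": tf.metrics.accuracy(labels=labels, predictions=tf.round(output_layer))}
    # 4) actually hand the metrics to the EstimatorSpec
    return tf.estimator.EstimatorSpec(mode=mode, predictions=output_layer, loss=loss, eval_metric_ops=eval_metrics_ops)

# 1) float32 inputs
train_data = np.asarray(np.reshape([[0,0],[0,1],[1,0],[1,1]], [4,2]), dtype=np.float32)
# and log the actual output tensor of the last layer
tensors_to_log = {"The output:": "output_layer/BiasAdd:0"}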