I'm trying to create a simple input function whose feature data is the numbers 1-10, with the label being 0 when x < 5, 5 when x = 5, and 10 when x > 5.
Example:
# data
nmbrs = [10., 1., 2., 3., 4., 5., 6., 7., 8., 9.]
labels = [10., 0., 0., 0., 0., 5., 10., 10., 10., 10.]

# input function
input_fn = tf.estimator.inputs.numpy_input_fn(
    x={'numbers': np.array(nmbrs)}, y=np.array(labels),
    batch_size=batch_size, num_epochs=None, shuffle=True)
The problem I'm running into is that the nmbrs and labels arrays don't seem to be in the right shape. I tried turning them into 2-D arrays, but that didn't work either. I'm sure I'm overlooking something very simple here...
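For what it's worth, a likely culprit (my guess from the shapes involved, not confirmed in the question): tf.layers.dense expects an input of rank at least 2, i.e. shape (batch_size, num_features), but feeding numpy_input_fn a flat array produces rank-1 batches. Reshaping the features into a column vector gives each example an explicit feature dimension; a minimal sketch, assuming TF 1.x:

import numpy as np
import tensorflow as tf

nmbrs = [10., 1., 2., 3., 4., 5., 6., 7., 8., 9.]
labels = [10., 0., 0., 0., 0., 5., 10., 10., 10., 10.]

input_fn = tf.estimator.inputs.numpy_input_fn(
    x={'numbers': np.array(nmbrs, dtype=np.float32).reshape(-1, 1)},  # shape (10, 1) instead of (10,)
    y=np.array(labels, dtype=np.float32),
    batch_size=4,  # illustrative value; batch_size is not shown in the question
    num_epochs=None, shuffle=True)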
EDIT: the model and neural-network functions:
def neural_net(x_dict):
    # TF Estimator input is a dict, in case of multiple inputs
    x = x_dict['numbers']
    # Hidden fully connected layer with 128 neurons
    layer_1 = tf.layers.dense(x, n_hidden_1)
    # Hidden fully connected layer with 128 neurons
    layer_2 = tf.layers.dense(layer_1, n_hidden_2)
    # Output fully connected layer with a neuron for each class
    out_layer = tf.layers.dense(layer_2, num_classes)
    return out_layer
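Note that n_hidden_1, n_hidden_2, num_classes, learning_rate, and batch_size are used but never defined in the snippets; presumably something like the following is in scope (the values are illustrative guesses based on the "128 neurons" comments, not taken from the question):

# assumed hyperparameters; the 128s follow the "128 neurons" comments above
n_hidden_1 = 128
n_hidden_2 = 128
num_classes = 3      # one class per distinct label value: 0, 5, 10
learning_rate = 0.1  # arbitrary choice for illustration
batch_size = 4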
# Define the model function (following TF Estimator Template)
def model_fn(features, labels, mode):
    # Build the neural network
    logits = neural_net(features)

    # Predictions
    pred_classes = tf.argmax(logits, axis=1)
    pred_probas = tf.nn.softmax(logits)

    # If prediction mode, early return
    if mode == tf.estimator.ModeKeys.PREDICT:
        return tf.estimator.EstimatorSpec(mode, predictions=pred_classes)

    # Define loss and optimizer
    loss_op = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(
        logits=logits, labels=tf.cast(labels, dtype=tf.int32)))
    optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
    train_op = optimizer.minimize(loss_op, global_step=tf.train.get_global_step())

    # Evaluate the accuracy of the model
    acc_op = tf.metrics.accuracy(labels=labels, predictions=pred_classes)

    # Return an EstimatorSpec describing training and evaluation
    return tf.estimator.EstimatorSpec(
        mode=mode,
        predictions=pred_classes,
        loss=loss_op,
        train_op=train_op,
        eval_metric_ops={'accuracy': acc_op})
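A second thing that looks off (again my assumption, not stated in the question): tf.nn.sparse_softmax_cross_entropy_with_logits expects labels to be class indices in [0, num_classes), so with num_classes = 3 the raw values 0/5/10 would need remapping to 0/1/2. With that remapping and the rank-2 features from above, wiring up and training the estimator would look roughly like:

# hypothetical remapping of label values {0., 5., 10.} to class ids {0, 1, 2}
label_map = {0.: 0, 5.: 1, 10.: 2}
class_ids = np.array([label_map[l] for l in labels], dtype=np.int32)

train_input_fn = tf.estimator.inputs.numpy_input_fn(
    x={'numbers': np.array(nmbrs, dtype=np.float32).reshape(-1, 1)},
    y=class_ids, batch_size=batch_size, num_epochs=None, shuffle=True)

model = tf.estimator.Estimator(model_fn)  # model_fn as defined above
model.train(train_input_fn, steps=1000)   # step count is arbitrary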