The following code runs perfectly:
import numpy as np
from keras.models import Sequential
from keras.layers import Dense, Dropout
xData = np.array([[5, 3, 7], [1, 2, 6], [8, 7, 6]], dtype=np.float32)
yTrainData = np.array([[1], [0], [1]], dtype=np.float32)
model = Sequential()
model.add(Dense(64, input_dim=3, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
model.fit(xData, yTrainData, epochs=10, batch_size=128, verbose=2)
xTestData = np.array([[2, 8, 1], [3, 1, 9]], dtype=np.float32)
resultAry = model.predict(xTestData)
print("Cal result: %s" % resultAry)
I can't manage to write the same thing in TensorFlow. What I wrote is this:
import tensorflow as tf
import numpy as np
xData = np.array([[5, 3, 7], [1, 2, 6], [8, 7, 6]], dtype=np.float32)
yTrainData = np.array([[1], [0], [1]], dtype=np.float32)
x = tf.placeholder(tf.float32)
yTrain = tf.placeholder(tf.float32)
w = tf.Variable(tf.ones([64]), dtype=tf.float32)
b = tf.Variable(tf.zeros([1]), dtype=tf.float32)
y = tf.nn.relu(w * x + b)
w1 = tf.Variable(tf.ones([3]), dtype=tf.float32)
b1 = tf.Variable(0, dtype=tf.float32)
y1 = tf.reduce_mean(tf.nn.sigmoid(w1 * y + b1))
loss = tf.abs(y1 - tf.reduce_mean(yTrain))
optimizer = tf.train.AdadeltaOptimizer(0.1)
train = optimizer.minimize(loss)
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)
for i in range(10):
    for j in range(3):
        result = sess.run([loss, y1, yTrain, x, w, b, train], feed_dict={x: xData[j], yTrain: yTrainData[j]})
        if i % 10 == 0:
            print("i: %d, j: %d, loss: %10.10f, y1: %f, yTrain: %s, x: %s" % (i, j, float(result[0]), float(result[1]), yTrainData[j], xData[j]))
result = sess.run([y1, loss], feed_dict={x: [1, 6, 0], yTrain: 0})
print(result)
But when I run it, I get the following error:
Traceback (most recent call last):
File "C:\Python36\lib\site-packages\tensorflow\python\client\session.py", line 1327, in _do_call
return fn(*args)
File "C:\Python36\lib\site-packages\tensorflow\python\client\session.py", line 1306, in _run_fn
status, run_metadata)
File "C:\Python36\lib\contextlib.py", line 88, in __exit__
next(self.gen)
File "C:\Python36\lib\site-packages\tensorflow\python\framework\errors_impl.py", line 466, in raise_exception_on_not_ok_status
pywrap_tensorflow.TF_GetCode(status))
tensorflow.python.framework.errors_impl.InvalidArgumentError: Incompatible shapes: [64] vs. [3]
[[Node: mul = Mul[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"](Variable/read, _arg_Placeholder_0_0)]]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "testidc.py", line 36, in <module>
result = sess.run([loss, y1, yTrain, x, w, b, train], feed_dict={x: xData[j], yTrain: yTrainData[j]})
File "C:\Python36\lib\site-packages\tensorflow\python\client\session.py", line 895, in run
run_metadata_ptr)
File "C:\Python36\lib\site-packages\tensorflow\python\client\session.py", line 1124, in _run
feed_dict_tensor, options, run_metadata)
File "C:\Python36\lib\site-packages\tensorflow\python\client\session.py", line 1321, in _do_run
options, run_metadata)
File "C:\Python36\lib\site-packages\tensorflow\python\client\session.py", line 1340, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Incompatible shapes: [64] vs. [3]
[[Node: mul = Mul[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"](Variable/read, _arg_Placeholder_0_0)]]
Caused by op 'mul', defined at:
File "testidc.py", line 15, in <module>
y = tf.nn.relu(w * x + b)
File "C:\Python36\lib\site-packages\tensorflow\python\ops\variables.py", line 705, in _run_op
return getattr(ops.Tensor, operator)(a._AsTensor(), *args)
File "C:\Python36\lib\site-packages\tensorflow\python\ops\math_ops.py", line 865, in binary_op_wrapper
return func(x, y, name=name)
File "C:\Python36\lib\site-packages\tensorflow\python\ops\math_ops.py", line 1088, in _mul_dispatch
return gen_math_ops._mul(x, y, name=name)
File "C:\Python36\lib\site-packages\tensorflow\python\ops\gen_math_ops.py", line 1449, in _mul
result = _op_def_lib.apply_op("Mul", x=x, y=y, name=name)
File "C:\Python36\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 767, in apply_op
op_def=op_def)
File "C:\Python36\lib\site-packages\tensorflow\python\framework\ops.py", line 2630, in create_op
original_op=self._default_original_op, op_def=op_def)
File "C:\Python36\lib\site-packages\tensorflow\python\framework\ops.py", line 1204, in __init__
self._traceback = self._graph._extract_stack() # pylint: disable=protected-access
InvalidArgumentError (see above for traceback): Incompatible shapes: [64] vs. [3]
[[Node: mul = Mul[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"](Variable/read, _arg_Placeholder_0_0)]]
The main reason is that in my TensorFlow code the shape of w has to match the shape of x, whereas in Keras the hidden Dense layer can have more nodes than the input (64 in the example).
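To illustrate, here is roughly what I suspect the shapes would have to look like in plain TensorFlow, using tf.matmul instead of element-wise multiplication (this is only my guess, not code I have verified):
# My guess at the shapes: a 3-input, 64-unit hidden layer, then a single
# sigmoid output, wired with matrix multiplication rather than element-wise *.
x = tf.placeholder(tf.float32, shape=[None, 3])       # batch of 3-feature rows
w = tf.Variable(tf.ones([3, 64]))                      # input_dim x hidden_units
b = tf.Variable(tf.zeros([64]))
y = tf.nn.relu(tf.matmul(x, w) + b)                    # shape [None, 64]
w1 = tf.Variable(tf.ones([64, 1]))                     # hidden_units x 1 output
b1 = tf.Variable(tf.zeros([1]))
y1 = tf.nn.sigmoid(tf.matmul(y, w1) + b1)              # shape [None, 1]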
I need help with the equivalent TensorFlow code, not the Keras code. Thanks.
Answer 0 (score: 1)
Here is an example using the tf.estimator.Estimator framework:
import tensorflow as tf
import numpy as np
# The model
def model(features):
    dense = tf.layers.dense(inputs=features['x'], units=64, activation=tf.nn.relu)
    dropout = tf.layers.dropout(dense, 0.2)
    logits = tf.layers.dense(inputs=dropout, units=1, activation=tf.nn.sigmoid)
    return logits

# Stuff needed to use the tf.estimator.Estimator framework
def model_fn(features, labels, mode):
    logits = model(features)

    predictions = {
        'classes': tf.argmax(input=logits, axis=1),
        'probabilities': tf.nn.softmax(logits)
    }

    if mode == tf.estimator.ModeKeys.PREDICT:
        return tf.estimator.EstimatorSpec(mode=mode, predictions=predictions)

    loss = tf.losses.softmax_cross_entropy(onehot_labels=labels, logits=logits)

    # Configure the training op
    if mode == tf.estimator.ModeKeys.TRAIN:
        optimizer = tf.train.RMSPropOptimizer(learning_rate=1e-4)
        train_op = optimizer.minimize(loss, tf.train.get_or_create_global_step())
    else:
        train_op = None

    accuracy = tf.metrics.accuracy(
        tf.argmax(labels, axis=1), predictions['classes'])
    metrics = {'accuracy': accuracy}

    # Create a tensor named train_accuracy for logging purposes
    tf.identity(accuracy[1], name='train_accuracy')
    tf.summary.scalar('train_accuracy', accuracy[1])

    return tf.estimator.EstimatorSpec(
        mode=mode,
        predictions=predictions,
        loss=loss,
        train_op=train_op,
        eval_metric_ops=metrics)

# Setting up input for the model
def input_fn(mode, batch_size):
    # function that processes your input and returns two tensors "samples" and "labels"
    # that the estimator will use to fetch input batches.
    # See https://www.tensorflow.org/get_started/input_fn for how to write this function.
    return samples, labels

# Using the model
def main():
    # Create the Estimator
    classifier = tf.estimator.Estimator(
        model_fn=model_fn, model_dir='some_dir')

    # Train the model
    # NOTE: I use this to make it compatible with your example, but you should
    # definitely set up your own input_fn above
    train_input_fn = tf.estimator.inputs.numpy_input_fn(
        x={"x": np.array([[5, 3, 7], [1, 2, 6], [8, 7, 6]], dtype=np.float32)},
        y=np.array([[1], [0], [1]]),
        num_epochs=10,
        batch_size=128,
        shuffle=False)
    classifier.train(
        input_fn=train_input_fn,
        steps=20000,  # change as needed
    )

    # Predict on new data
    predict_input_fn = tf.estimator.inputs.numpy_input_fn(
        x={"x": np.array([[5, 3, 7], [1, 2, 6], [8, 7, 6]], dtype=np.float32)},
        num_epochs=1,
        batch_size=1,
        shuffle=False)
    predictions_iterator = classifier.predict(
        input_fn=predict_input_fn)
    print('Predictions results:')
    for pred in predictions_iterator:
        print(pred)

if __name__ == '__main__':
    main()  # run training and prediction when executed as a script
There is a lot going on here, so I'll try to explain the blocks one by one.
The model is defined as a composition of tf.layers inside a separate model function. This is done to keep the actual model_fn (which the Estimator framework requires) independent of the model architecture.
The function takes a features parameter, which is the output of a call to input_fn (see below). In this example, since we are using tf.estimator.inputs.numpy_input_fn, features is a dictionary containing the item x: input_tensor, and we use that input tensor as the input to our model graph.
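Purely as an illustration (the "x" key matches the numpy_input_fn call in main below), features arrives looking roughly like this:
# Illustrative stand-in for what numpy_input_fn produces: a dict of tensors
# keyed by the name chosen in the input function, with a batch dimension,
# e.g. shape (batch_size, 3) for the question's data.
features = {'x': tf.placeholder(tf.float32, shape=(None, 3))}
logits = model(features)   # model() reads features['x'] as its input tensor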
model_fn
This function is required by the framework to generate the specification of the Estimator, and what it builds depends on the mode being used. Typically, an estimator used for prediction has fewer operations than one used for training (you don't have the loss, optimizer, etc.). This function takes care of adding everything the model graph needs for the three possible modes of operation (prediction, evaluation, training).
Breaking it down into logical parts, we have the prediction spec, the loss and training op, and the evaluation metrics, all of which appear in the model_fn above.
input_fn
This is where we provide the input for our estimator. See Building Input Functions with tf.estimator for a guide on how a custom input_fn should look. For this example, we'll use the numpy_input_fn function from the framework.
Note that normally a single input_fn handles all modes of operation according to a mode parameter. Since we are using numpy_input_fn here, we need two separate instances, one for training and one for prediction, to feed the data as needed.
main
Here we actually train and use the estimator. First, we get an Estimator instance with the model_fn we specified, then we call train() and wait for training to finish.
Once that's done, calling predict() returns an iterable that you can use to get the prediction results for every sample in the dataset you're predicting on.
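Each element yielded by that iterable is a dict with the keys defined in the predictions dictionary inside model_fn, so you could, for example, collect just the probabilities:
# Illustrative: keep only the 'probabilities' entry of each prediction dict.
probabilities = [p['probabilities'] for p in classifier.predict(input_fn=predict_input_fn)]
print(probabilities)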
Answer 1 (score: 1)
This is a few months old now, but it's worth noting that there is absolutely no reason not to use Keras with TensorFlow. It's even part of the tensorflow library now!
So if you want full control over your tensors but still want to use Keras' layers, you can easily do that by using Keras as-is.
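A minimal sketch of that kind of mixing, assuming Keras layers called directly on a TensorFlow placeholder (the shapes here mirror the question's data):
import tensorflow as tf
from keras import backend as K
from keras.layers import Dense, Dropout

sess = tf.Session()
K.set_session(sess)  # tell Keras to use this TensorFlow session

# A plain TensorFlow placeholder; Keras layers are called on it directly.
x = tf.placeholder(tf.float32, shape=(None, 3))
h = Dense(64, activation='relu')(x)        # same hidden layer as the Keras model
h = Dropout(0.2)(h)                        # respects K.learning_phase() at run time
preds = Dense(1, activation='sigmoid')(h)  # preds is an ordinary TensorFlow tensor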
For more on this, the creator of Keras wrote a pretty cool post about it.