I built a very simple classification model for the IRIS dataset in TensorFlow version '1.10.0'. The code runs fine in a Jupyter notebook. I am now trying to deploy and serve it with TensorFlow Serving via Docker. Although the Docker container starts, I cannot get a sensible result back. TensorFlow Serving is fairly new to me. The command I used and the error output are below:
curl -d '{"instances": [1.0, 2.0, 5.0,4.2]}' -X POST http://localhost:8501/v1/models/irismodel:predict
{ "error": "You must feed a value for placeholder tensor \'y\' with dtype int32\n\t [[{{node y}} = Placeholder[_output_shapes=[<unknown>], dtype=DT_INT32, shape=<unknown>, _device=\"/job:localhost/replica:0/task:0/device:CPU:0\"]()]]" }
The complete code used to train and save the model is below. Note that I am using the object obtained from saved_model.simple_save. Since no model version folder is created, I simply create a folder named "1" and move the exported contents into it.
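For reference, the directory layout after moving everything into the "1" folder, and the docker command I use to start the container, look roughly like this (the docker invocation is reconstructed from memory, so treat it as a sketch rather than the exact command I ran):

/home/modelpath/imodel/
    1/
        saved_model.pb
        variables/

docker run -p 8501:8501 \
    --mount type=bind,source=/home/modelpath/imodel,target=/models/irismodel \
    -e MODEL_NAME=irismodel -t tensorflow/serving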
from numpy import genfromtxt
my_data = genfromtxt('/my/path/iris.csv', delimiter=',', skip_header=1)
my_data[149, :]   # inspect the last row
# array([5.9, 3. , 5.1, 1.8, 2. ])
import tensorflow as tf
import numpy as np
n_inputs = 4  # four iris features
n_hidden1 = 3
n_hidden2 = 2
n_outputs = 3
def reset_graph(seed=42):
    tf.reset_default_graph()
    tf.set_random_seed(seed)
    np.random.seed(seed)
reset_graph()
# placeholders: X holds a batch of the four iris features, y the integer class labels
X = tf.placeholder(tf.float32, shape=(None, n_inputs), name="X")
y = tf.placeholder(tf.int32, shape=(None), name="y")
hidden1 = tf.layers.dense(X, n_hidden1, name="hidden1",activation=tf.nn.relu)
hidden2 = tf.layers.dense(hidden1, n_hidden2, name="hidden2",activation=tf.nn.relu)
logits = tf.layers.dense(hidden2, n_outputs, name="outputs")
xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits)
loss = tf.reduce_mean(xentropy, name="loss")
learning_rate = 0.01
optimizer = tf.train.GradientDescentOptimizer(learning_rate)
training_op = optimizer.minimize(loss)
correct = tf.nn.in_top_k(logits, y, 1)
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))
init = tf.global_variables_initializer()
saver = tf.train.Saver()
arr = np.arange(150)
np.random.shuffle(arr)
my_data = my_data.reshape((150,5))
my_data = my_data[arr]
X_train = my_data[0:120,0:4]
X_test = my_data[120:150,0:4]
y_train = my_data[0:120,4].astype("int32")
y_test = my_data[120:150,4].astype("int32")
cursor = 0
# simple sequential batching over the (already shuffled) training data
def next_batch(X_train, y_train, batch_size):
    global cursor
    indices = np.arange(cursor, cursor + batch_size)
    cursor = cursor + batch_size
    return X_train[indices], y_train[indices]
from tensorflow import saved_model
n_epochs = 50
batch_size = 20
with tf.Session() as sess:
    init.run()
    for epoch in range(n_epochs):
        cursor = 0
        for iteration in range(X_train.shape[0] // batch_size):
            X_batch, y_batch = next_batch(X_train, y_train, batch_size)
            sess.run(training_op, feed_dict={X: X_batch, y: y_batch.astype("int32")})
        acc_train = accuracy.eval(feed_dict={X: X_batch, y: y_batch.astype("int32")})
        acc_test = accuracy.eval(feed_dict={X: X_test, y: y_test.astype("int32")})
        print(epoch, "Train accuracy:", acc_train, "Test accuracy:", acc_test)
    save_path = saver.save(sess, "./my_model_final.ckpt")
    # export the trained graph for TensorFlow Serving
    saved_model.simple_save(sess,
                            "/home/modelpath/imodel",
                            inputs={"X": X},
                            outputs={"y": y})
I get a similar error even when I submit the request through Postman. I think I may have made a mistake somewhere with the data types, but I am not sure. Another possible source of the error is the way I am sending the API request. Any pointers would be helpful. Thanks.
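One thing I have not verified yet is what the exported signature actually expects. I believe (though I am not certain of the exact flags) that inspecting it with the saved_model_cli tool would look something like this:

saved_model_cli show --dir /home/modelpath/imodel/1 --all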
Answer 0 (Score: 2)
I have reproduced your error and was able to fix the problem. Replace the y in the last line of your code with tf.dtypes.cast(np.argmax(logits), dtype = "int32", name = 'y_pred'):
with tf.Session() as sess:
    init.run()
    for epoch in range(n_epochs):
        cursor = 0
        for iteration in range(X_train.shape[0] // batch_size):
            X_batch, y_batch = next_batch(X_train, y_train, batch_size)
            sess.run(training_op, feed_dict={X: X_batch, y: y_batch.astype("int32")})
        acc_train = accuracy.eval(feed_dict={X: X_batch, y: y_batch.astype("int32")})
        acc_test = accuracy.eval(feed_dict={X: X_test, y: y_test.astype("int32")})
        print(epoch, "Train accuracy:", acc_train, "Test accuracy:", acc_test)
    Predicted_Output = tf.dtypes.cast(np.argmax(logits), dtype="int32", name='y_pred')
    save_path = saver.save(sess, "./my_model_final.ckpt")
    saved_model.simple_save(sess, "IRIS_Data_Export", inputs={"X": X}, outputs={"y": Predicted_Output})
The output then looks like this:
{
"outputs": 0
}
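One more thing worth double-checking (I have not tested this myself, so take it as a sketch): since X has shape (None, 4), the REST API's row format expects each instance to be a list of four features, so a single prediction request would look like

curl -d '{"instances": [[1.0, 2.0, 5.0, 4.2]]}' -X POST http://localhost:8501/v1/models/irismodel:predict

rather than a flat list of four numbers.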