How can I make predictions without loading the model every time? (TensorFlow)

Asked: 2018-11-06 16:16:09

Tags: python tensorflow neural-network conv-neural-network tensorflow-serving

Hi, I'm building a convolutional network with tf.estimators and I want to make predictions with my trained model, but every time I feed it an image the model gets loaded and then shut down again. How can I keep it loaded and keep sending it images for prediction? I have read about TensorFlow Serving, but I only want to run this locally on my machine.

Here is my code:

# Convolutional model function .......................................

def cnn_model_fn(features, labels, mode):

    """Model"""

    input_layer = tf.reshape(features["image"], [-1, 224, 224, 3])

    # Convolutional Layer #1
    conv1 = tf.layers.conv2d(
      inputs=input_layer,
      filters=64,
      kernel_size=[7, 7],
      padding="same",
      activation=tf.nn.relu)

    # Pooling Layer #1
    pool1 = tf.layers.max_pooling2d(inputs=conv1, pool_size=[2, 2], strides=2)

    # Convolutional Layer #2
    conv2 = tf.layers.conv2d(
      inputs=pool1,
      filters=128,
      kernel_size=[5, 5],
      padding="same",
      activation=tf.nn.relu)

    # Pooling Layer #2
    pool2 = tf.layers.max_pooling2d(inputs=conv2, pool_size=[2, 2], strides=2)

    # Convolutional Layer #3
    conv3 = tf.layers.conv2d(
      inputs=pool2,
      filters=192,
      kernel_size=[5, 5],
      padding="same",
      activation=tf.nn.relu)

    # Pooling Layer #3
    pool3 = tf.layers.max_pooling2d(inputs=conv3, pool_size=[2, 2], strides=2)

    # Convolutional Layer #4
    conv4 = tf.layers.conv2d(
      inputs=pool3,
      filters=192,
      kernel_size=[3, 3],
      padding="same",
      activation=tf.nn.relu)

    # Pooling Layer #4
    pool4 = tf.layers.max_pooling2d(inputs=conv4, pool_size=[2, 2], strides=2)

    # Convolutional Layer #5

    conv5 = tf.layers.conv2d(
      inputs=pool4,
      filters=128,
      kernel_size=[3, 3],
      padding="same",
      activation=tf.nn.relu)

    # Pooling Layer #5

    pool5 = tf.layers.max_pooling2d(inputs=conv5, pool_size=[2, 2], strides=2)

    # Flatten tensor into a batch of vectors
    pool5_flat = tf.reshape(pool5, [-1, 7 * 7 * 128])

    # Dense Layer
    dense = tf.layers.dense(inputs=pool5_flat, units=2048, activation=tf.nn.relu)

    # Add dropout operation; 0.6 probability that element will be kept
    dropout = tf.layers.dropout(inputs=dense, rate=0.4, training=mode == tf.estimator.ModeKeys.TRAIN)

    dense_1 = tf.layers.dense(inputs=dropout, units=2048, activation=tf.nn.relu)

    dropout_1 = tf.layers.dropout(inputs=dense_1, rate=0.4, training=mode == tf.estimator.ModeKeys.TRAIN)

    #Final Layer with 2 outputs
    logits = tf.layers.dense(inputs=dropout_1, units=2)

    # Predictions
    output = tf.nn.softmax(logits)
    predicted_class = tf.argmax(input=output, axis=1)
    predictions = {'Probabilities': output, 'Prediction': predicted_class}

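    # Only the PREDICT branch is needed for inference; the TRAIN and EVAL specs are omitted in this snippet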
    if mode == tf.estimator.ModeKeys.PREDICT:
        return tf.estimator.EstimatorSpec(mode=mode,predictions=predictions)

def read_img():
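    # data_path is assumed to be a module-level variable holding the path of the image to classify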

    filenames = tf.constant([data_path])

    dataset = tf.data.Dataset.from_tensor_slices(filenames)

    def _parse_function(filename):
        image_string = tf.read_file(filename)
        image_decoded = tf.image.decode_jpeg(image_string)
        image_resized = tf.image.resize_images(image_decoded, [224, 224])
        return {'image':image_resized}

    dataset = dataset.map(_parse_function)

    return dataset

def main(params):

    det = tf.estimator.Estimator(model_fn=cnn_model_fn, model_dir='/Users/David/Desktop/David/Tesis')

    pred_results=det.predict(input_fn=read_img)

    print(next(pred_results))
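
For context, this script is presumably driven by something along the following lines. The question does not show how data_path is set or how main is invoked, so the path and the tf.app.run call here are assumptions:

if __name__ == '__main__':
    data_path = '/path/to/image.jpg'  # hypothetical path; read_img() reads this module-level variable
    tf.app.run(main)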

EDIT: Every time I run a prediction I get the same thing: the parameters are restored from the checkpoint on each call.

EDIT 2:

I have code like this now:

def first():
  global data_path, iterator, preds
  # sess and _parse_function are assumed to be defined at module level
  # Feed the image path through a placeholder so the graph is built and restored only once
  data_path = tf.placeholder(dtype=tf.string, shape=[None])
  dataset = tf.data.Dataset.from_tensor_slices(data_path)
  dataset = dataset.map(_parse_function)

  # A one-shot iterator cannot depend on a placeholder, so use an initializable iterator
  iterator = dataset.make_initializable_iterator()
  features = iterator.get_next()

  preds = cnn_model_fn(features, None, tf.estimator.ModeKeys.PREDICT).predictions
  tf.train.Saver().restore(sess, tf.train.latest_checkpoint('/Users/David/Desktop'))


def second(path):
  # Re-initialize the iterator with the new path; the restored weights stay loaded
  sess.run(iterator.initializer, feed_dict={data_path: [path]})
  try:
    while True:
      print(sess.run(preds))
  except tf.errors.OutOfRangeError:
    print('Done')
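
For reference, a hypothetical calling sequence for the two functions above (the session object and the image paths are illustrative assumptions):

sess = tf.Session()
first()                               # builds the graph and restores the weights once
second('/path/to/first_image.jpg')    # hypothetical paths
second('/path/to/second_image.jpg')   # the checkpoint is not reloaded between calls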

1 Answer:

Answer (score: 1):

One option is to switch entirely to TF eager execution and simply rewrite your main program, something like this:

tf.enable_eager_execution()

def main(params):
  for data in read_img():
    print(det.model_fn(data, None, tf.estimator.ModeKeys.PREDICT).predictions)

Another option is to use a session:

with tf.Session() as sess:
  data = read_img().make_one_shot_iterator().get_next()
  preds = det.model_fn(data, None, tf.estimator.ModeKeys.PREDICT).predictions
  tf.train.Saver().restore(sess, tf.train.latest_checkpoint(det.model_dir))
  while True:
    sess.run(preds)

Both of these are pseudocode, so the API names and so on may be off.