TensorFlow Serving client written in Java does not give correct results

Time: 2019-06-07 07:32:26

Tags: java python tensorflow tensorflow-serving

Sorry for the long question, but please help.

I have written a TensorFlow Serving client in Java that sends requests to a TensorFlow server hosted on another machine. The communication goes over gRPC and works fine, i.e. the server responds to requests. However, the response that comes back is wrong. The model's job is to detect people (with and without helmets) in the photo sent by the client (the model itself is fine).

So the problem is probably caused by some mistake in how I format the image (maybe in the dimensions, etc.). But I have been trying for days and still cannot pin down the small details.

On top of that, I also wrote a client in Python for this, and surprisingly it works: the server's response is correct. But I need to do this in Java. In short, I send the same image to the same server with the Java and Python clients and get two different results.

Here is the code for both clients:

Python-

#PYTHON_CLIENT

from __future__ import print_function
from grpc.beta import implementations
import tensorflow as tf
import glob
import json
from object_detection.utils import visualization_utils as vis_util
from object_detection.utils import plot_util
from object_detection.utils import label_map_util
import object_detection.utils.ops as utils_ops
from PIL import Image
from google.protobuf import json_format as _json_format
import numpy as np
from tensorflow_serving.apis import predict_pb2
from tensorflow_serving.apis import prediction_service_pb2
from object_detection.protos import string_int_label_map_pb2
import cv2



tf.app.flags.DEFINE_string('server', '<someIPaddress>:9000', 'PredictionService host:port')
tf.app.flags.DEFINE_string('image', './', 'path to image in JPEG format')
FLAGS = tf.app.flags.FLAGS


def out(result):
  detection_boxes=[]
  detection_scores =[]
  detection_classes =[]
  db=[]
  dc=[]
  ds=[]

  db.append(result.outputs['detection_boxes'].tensor_shape.dim[0].size)
  db.append(result.outputs['detection_boxes'].tensor_shape.dim[1].size)
  db.append(result.outputs['detection_boxes'].tensor_shape.dim[2].size)

  detection_boxes = np.asarray((result.outputs['detection_boxes'].float_val))
  detection_boxes =  detection_boxes.reshape([db[0],db[1],db[2]])
  print(detection_boxes)

  detection_classes = np.asarray((result.outputs['detection_classes'].float_val))

  dc.append(result.outputs['detection_classes'].tensor_shape.dim[0].size)
  dc.append(result.outputs['detection_classes'].tensor_shape.dim[1].size)
  detection_classes = detection_classes.reshape([dc[0],dc[1]])
  print(detection_classes)

  detection_scores = np.asarray((result.outputs['detection_scores'].float_val))

  ds.append(result.outputs['detection_scores'].tensor_shape.dim[0].size)
  ds.append(result.outputs['detection_scores'].tensor_shape.dim[1].size)
  detection_scores = detection_scores.reshape([dc[0],dc[1]])
  print(detection_scores)

  return detection_classes,detection_scores,detection_boxes




def main(_):

  host, port = FLAGS.server.split(':')
  channel = implementations.insecure_channel(host, int(port))
  stub = prediction_service_pb2.beta_create_PredictionService_stub(channel)

  # Create prediction request object  
  request = predict_pb2.PredictRequest()
  request.model_spec.name = 'deeplab'
  request.model_spec.signature_name = 'predict_images'

  image_data = []

  for image in glob.glob(FLAGS.image+'cde.jpg'):
    # with open(image, 'rb') as f:
    image = cv2.imread(image)    
    image = image.astype('f')
    # image = np.expand_dims(image,0)
    image_data.append(image)
    # print(cv2.imread(image))


  image_data2 = np.asarray(image_data)  
  # image_data = np.expand_dims(image_data,4)
  request.inputs['inputs'].CopyFrom(tf.contrib.util.make_tensor_proto(image_data2, dtype=tf.uint8 ,shape=None))  
  result = stub.Predict(request, 10.0)  # 10 secs timeout
  # Parse the response tensors: detection classes (m), scores (n) and boxes (p)
  m, n, p = out(result)
  print(result.outputs)
  category_index = label_map_util.create_category_index_from_labelmap('/home/<somePathHere>/labels.pbtxt', use_display_name=True)

  # Visualization of the results of a detection

  vis_util.visualize_boxes_and_labels_on_image_array(
            image_data,
            p,
            m,
            n,
            category_index,
            min_score_thresh=.5,
            # instance_masks=output_dict.get('detection_masks'),
            use_normalized_coordinates=True,
            line_thickness=8,
            )

if __name__ == '__main__':
  tf.app.run()

Java-

//JAVA_CLIENT

public static void main(String[] args) {
    String host = "<someIPaddress>";
    int port = 9000;
    String modelName = "deeplab";
    long modelVersion = 1;

    // Run predict client to send request
    PredictClientt_One client = new PredictClientt_One(host, port);

    try {
        client.do_predict(modelName, modelVersion);
    } catch (Exception e) {
        System.out.println(e);
    } finally {
        try {
            client.shutdown();
        } catch (Exception e) {
            System.out.println(e);
        }
    }
}

public void shutdown() throws InterruptedException {
    channel.shutdown().awaitTermination(5, TimeUnit.SECONDS);
}


public void do_predict(String modelName, long modelVersion) {

    // Generate image file to array
    int[][][][] featuresTensorData = new int[1][1080][1920][3];

    String[] imageFilenames = new String[]{"./cde.jpg"};

    for (int i = 0; i < imageFilenames.length; i++) {

        // Convert image file to multi-dimension array
        File imageFile = new File(imageFilenames[i]);
        try {
            BufferedImage preImage = ImageIO.read(imageFile);
            BufferedImage image = new BufferedImage(preImage.getWidth(), preImage.getHeight(), BufferedImage.TYPE_INT_ARGB); //convert to argb
            image.getGraphics().drawImage(preImage, 0, 0, null);

            logger.info("Start to convert the image: " + imageFile.getPath());

            int imageWidth = 1920;
            int imageHeight = 1080;

            for (int row = 0; row < imageHeight; row++) {
                for (int column = 0; column < imageWidth; column++) {
                    Color col = new Color (image.getRGB(column, row));

                    // int red = (pixel >> 16) & 0xff;
                    // int green = (pixel >> 8) & 0xff;
                    // int blue = (pixel) & 0xff;

                    //tried all combination of red, green and blue in [0], [1] and [2]
                    featuresTensorData[i][row][column][0] = col.getBlue(); //blue;
                    featuresTensorData[i][row][column][1] = col.getGreen(); //green
                    featuresTensorData[i][row][column][2] = col.getRed(); //red;
                }
            }
        } catch (IOException e) {
            logger.log(Level.WARNING, e.getMessage());
            System.exit(1);
        }
    }

    // Generate features TensorProto
    TensorProto.Builder featuresTensorBuilder = TensorProto.newBuilder();

    for (int i = 0; i < featuresTensorData.length; ++i) {
        for (int j = 0; j < featuresTensorData[i].length; ++j) {
            for (int k = 0; k < featuresTensorData[i][j].length; ++k) {
                for (int l = 0; l < featuresTensorData[i][j][k].length; ++l) {
                    featuresTensorBuilder.addFloatVal(featuresTensorData[i][j][k][l]);
                }
            }
        }
    }

    TensorShapeProto.Dim featuresDim1 = TensorShapeProto.Dim.newBuilder().setSize(1).build();
    TensorShapeProto.Dim featuresDim2 = TensorShapeProto.Dim.newBuilder().setSize(1080).build();
    TensorShapeProto.Dim featuresDim3 = TensorShapeProto.Dim.newBuilder().setSize(1920).build();
    TensorShapeProto.Dim featuresDim4 = TensorShapeProto.Dim.newBuilder().setSize(3).build();
    TensorShapeProto featuresShape = TensorShapeProto.newBuilder().addDim(featuresDim1).addDim(featuresDim2).addDim(featuresDim3).addDim(featuresDim4).build();
    featuresTensorBuilder.setDtype(org.tensorflow.framework.DataType.DT_UINT8).setTensorShape(featuresShape);
    TensorProto featuresTensorProto = featuresTensorBuilder.build();

    // Generate gRPC request
    com.google.protobuf.Int64Value version = com.google.protobuf.Int64Value.newBuilder().setValue(modelVersion).build();
    Model.ModelSpec modelSpec = Model.ModelSpec.newBuilder().setName(modelName).setVersion(version).build();
    Predict.PredictRequest request = Predict.PredictRequest.newBuilder().setModelSpec(modelSpec).putInputs("inputs", featuresTensorProto).build();

    // Request gRPC server
    Predict.PredictResponse response;
    try {
        response = blockingStub.predict(request);
        java.util.Map<java.lang.String, org.tensorflow.framework.TensorProto> outputs = response.getOutputsMap();
        for (java.util.Map.Entry<java.lang.String, org.tensorflow.framework.TensorProto> entry : outputs.entrySet()) {
            System.out.println("Key: " + entry.getKey() + ",\nValue: " + entry.getValue());
        }
    } catch (StatusRuntimeException e) {
        logger.log(Level.WARNING, "RPC failed: {0}", e.getStatus());
        return;
    }
}

The server's response comes back as a hash map (or dictionary) with four key-value pairs:

{
   'detection_scores':    <some value>,
   'detection_boxes':    <some value>,
   'detection_classes':    <some value>,
   'num_detections':    <some value>
}
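
On the Java side, each tensor can be pulled out of that map with the generated protobuf accessors. A minimal sketch (using the `response` object from the Java client above; `getFloatValList()` reads the repeated `float_val` field, the same field the Python client reads):

// Sketch: read the detection scores back out of the PredictResponse
TensorProto scoresProto = response.getOutputsMap().get("detection_scores");
java.util.List<Float> scores = scoresProto.getFloatValList();
for (int i = 0; i < scores.size(); i++) {
    System.out.println("detection_scores[" + i + "] = " + scores.get(i));
}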

The 'detection_scores' values from Python look like: 0.9..., 0.8..., 0.7..., 0.1..., 0.04... (so 3 people are detected).

The 'detection_scores' values from Java, however, start at 0.005... (for the same photo). On top of that, all of the bounding boxes sit at the far left edge of the photo, whereas the Python boxes sit on the people's faces.

Please help. And thank you for reading this patiently!

1 Answer:

Answer 0 (score: 0):

I am answering my own question, since I just found the solution.

The thing I needed to fix was changing addFloatVal() to addIntVal()

Here:

    TensorProto.Builder featuresTensorBuilder = TensorProto.newBuilder();

    for (int i = 0; i < featuresTensorData.length; ++i) {
        for (int j = 0; j < featuresTensorData[i].length; ++j) {
            for (int k = 0; k < featuresTensorData[i][j].length; ++k) {
                for (int l = 0; l < featuresTensorData[i][j][k].length; ++l) {
                    featuresTensorBuilder.addFloatVal(featuresTensorData[i][j][k][l]); //In this line
                }
            }
        }
    }
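
For completeness, the corrected loop is the same code with only the inner call swapped out; since the tensor dtype is DT_UINT8, the pixel values belong in the repeated int_val field rather than float_val:

    // Corrected version: uint8 pixel values go into int_val, not float_val
    for (int i = 0; i < featuresTensorData.length; ++i) {
        for (int j = 0; j < featuresTensorData[i].length; ++j) {
            for (int k = 0; k < featuresTensorData[i][j].length; ++k) {
                for (int l = 0; l < featuresTensorData[i][j][k].length; ++l) {
                    featuresTensorBuilder.addIntVal(featuresTensorData[i][j][k][l]);
                }
            }
        }
    }

(As a side note, for a 1920x1080x3 image this element-by-element loop is quite slow; packing the raw bytes into the tensor_content field is a common alternative, but the int_val change above is all that was needed to get correct results.)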

Such a small issue, and I spent two whole days throwing everything I had at it! Sad.