Google CloudML serving_input_receiver_fn() b64 decoding error

Time: 2019-02-18 03:50:52

Tags: image tensorflow base64 tensorflow-serving google-cloud-ml

I am sending a base64-encoded image via an AJAX POST to a model hosted on Google CloudML. I get an error message telling me that input_fn() could not decode the image and convert it to a JPEG.

The error:

Prediction failed: Error during model execution: 
AbortionError(code=StatusCode.INVALID_ARGUMENT,  
details="Expected image (JPEG, PNG, or GIF), got 
unknown format starting with 'u\253Z\212f\240{\370
\351z\006\332\261\356\270\377' [[{{node map/while
/DecodeJpeg}} = DecodeJpeg[_output_shapes=
[[?,?,3]], acceptable_fraction=1, channels=3, 
dct_method="", fancy_upscaling=true, ratio=1, 
try_recover_truncated=false, 
_device="/job:localhost/replica:0 /task:0
/device:CPU:0"](map/while/TensorArrayReadV3)]]") 

Here is how I put together the serving_input_receiver_fn():

  1. The first step, I think, is to take the incoming b64-encoded string and decode it. That is done with:

    image = tensorflow.io.decode_base64(image_str_tensor)

  2. The next step, I believe, is to open the bytes, but this is where I don't know how to handle the decoded b64 string with TensorFlow code and need help.

In a Python Flask app this can be done with:

    import io
    import base64
    from PIL import Image

    decoded = base64.b64decode(image_str)  # image_str is the incoming b64 string
    image = Image.open(io.BytesIO(decoded))
  3. Then pass the bytes to tf.image.decode_jpeg to be decoded????

image = tensorflow.image.decode_jpeg(image_str_tensor, channels=CHANNELS)
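One pitfall worth ruling out here: tf.io.decode_base64 decodes *web-safe* base64 (the alphabet with '-' and '_'), while a browser's btoa() or Python's base64.b64encode produce *standard* base64 ('+' and '/'), so the decoded bytes can come out garbled before decode_jpeg ever runs. A minimal stdlib sketch of the difference (no TensorFlow needed; the payload is an arbitrary stand-in for JPEG bytes):

```python
import base64

raw = bytes(range(256))              # arbitrary binary payload standing in for JPEG bytes
std = base64.b64encode(raw)          # standard alphabet: contains b'+' and b'/'
web = base64.urlsafe_b64encode(raw)  # web-safe alphabet: b'-' and b'_' instead

print(std != web)                           # True: the two encodings differ
print(base64.urlsafe_b64decode(web) == raw) # True: round-trip only works with the matching variant
```

If the client sends standard base64, either convert it to the web-safe variant before the graph sees it or skip decoding in the graph entirely, as the answer below does.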

The full serving_input_receiver_fn() code:

def serving_input_receiver_fn():
    def prepare_image(image_str_tensor):
        image = tensorflow.io.decode_base64(image_str_tensor)
        image = tensorflow.image.decode_jpeg(image, channels=CHANNELS)
        image = tensorflow.expand_dims(image, 0)
        image = tensorflow.image.resize_bilinear(image, [HEIGHT, WIDTH], align_corners=False)
        image = tensorflow.squeeze(image, axis=[0])
        image = tensorflow.cast(image, dtype=tensorflow.uint8)
        return image

How do I decode the b64 string back to a JPEG, and then turn the JPEG into a tensor?

1 Answer:

Answer 0 (score: 0)

Here is an example of handling a b64 image. Note that there is no base64 decoding in the graph: when the input key ends in _bytes (as image_bytes does here) and the value is sent as {"b64": ...}, the CloudML prediction service base64-decodes the payload itself, so the serving function receives raw JPEG bytes.

HEIGHT = 224
WIDTH = 224
CHANNELS = 3
IMAGE_SHAPE = (HEIGHT, WIDTH)
version = 'v1'

def serving_input_receiver_fn():
    def prepare_image(image_str_tensor):
        image = tf.image.decode_jpeg(image_str_tensor, channels=CHANNELS)
        return image_preprocessing(image)

    input_ph = tf.placeholder(tf.string, shape=[None])
    images_tensor = tf.map_fn(
        prepare_image, input_ph, back_prop=False, dtype=tf.uint8)
    images_tensor = tf.image.convert_image_dtype(images_tensor, dtype=tf.float32)

    return tf.estimator.export.ServingInputReceiver(
        {'input': images_tensor},
        {'image_bytes': input_ph})

export_path = os.path.join('/tmp/models/json_b64', version)
if os.path.exists(export_path):  # clean up old exports with this version
    shutil.rmtree(export_path)
estimator.export_savedmodel(
    export_path,
    serving_input_receiver_fn=serving_input_receiver_fn)
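On the client side, the request body then needs to match the receiver above. A minimal sketch of building that JSON body, assuming the standard CloudML online-prediction format ({"instances": [...]}) and the image_bytes key defined by the ServingInputReceiver; the jpeg_bytes placeholder stands in for the real file the AJAX call would read:

```python
import base64
import json

# Placeholder bytes standing in for a real JPEG file's contents.
jpeg_bytes = b'\xff\xd8\xff\xe0' + b'\x00' * 16

# A key ending in '_bytes' with a nested {'b64': ...} value tells the CloudML
# prediction service to base64-decode the payload before feeding the graph.
body = {
    'instances': [
        {'image_bytes': {'b64': base64.b64encode(jpeg_bytes).decode('ascii')}}
    ]
}

request_json = json.dumps(body)  # send this as the POST body
```

The same structure applies whether the POST comes from Python, curl, or the browser-side AJAX call in the question; only the base64 string changes.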