Google ML Engine: Prediction failed: Error during model execution

Time: 2018-03-31 13:31:00

Tags: python tensorflow google-cloud-platform

I have successfully run predictions using the gcloud command line. Now I am trying to run the prediction from a Python script, but I am getting an error.


Prediction failed: Error during model execution: AbortionError(code=StatusCode.INVALID_ARGUMENT, details="assertion failed: [Unable to decode bytes as JPEG, PNG, GIF, or BMP] [[Node: map/while/decode_image/cond_jpeg/cond_png/cond_gif/Assert_1/Assert = Assert[T=[DT_STRING], summarize=3, _device="/job:localhost/replica:0/task:0/device:CPU:0"](map/while/decode_image/cond_jpeg/cond_png/cond_gif/is_bmp, map/while/decode_image/cond_jpeg/cond_png/cond_gif/Assert_1/Assert/data_0)]]")
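(Editor's note: the assertion in this error is raised by TensorFlow's decode_image op when the bytes it receives are not a recognizable image. Before debugging the request format, it can help to confirm the file itself is a valid JPEG/PNG/GIF/BMP. A quick sanity-check sketch, not part of the original post; `sniff_image_format` is an illustrative helper, and the filename is a placeholder:)

```python
def sniff_image_format(data: bytes) -> str:
    """Detect the image format from the leading magic bytes."""
    if data.startswith(b"\xff\xd8\xff"):
        return "jpeg"
    if data.startswith(b"\x89PNG\r\n\x1a\n"):
        return "png"
    if data.startswith(b"GIF87a") or data.startswith(b"GIF89a"):
        return "gif"
    if data.startswith(b"BM"):
        return "bmp"
    return "unknown"

# Example: check a file before sending it to the model
# with open("1.jpg", "rb") as f:
#     print(sniff_image_format(f.read()))
```

If this reports "unknown", the model was never going to decode the bytes, regardless of how the request body is structured.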

from oauth2client.client import GoogleCredentials
from googleapiclient import discovery
from googleapiclient import errors

PROJECTID = 'ai-assignment-185606'
projectID = 'projects/{}'.format(PROJECTID)
modelName = 'food_model'
modelID = '{}/models/{}/versions/{}'.format(projectID, modelName, 'v3')

scopes = ['https://www.googleapis.com/auth/cloud-platform']
credentials = GoogleCredentials.get_application_default()
ml = discovery.build('ml', 'v1', credentials=credentials)

import base64
import json

name = "7_5790100434_e2c3dbfdba.jpg"
with open("images/" + name, "rb") as image_file:
    encoded_string = base64.b64encode(image_file.read()).decode('utf-8')
    row = json.dumps({'inputs': {'b64': encoded_string}})

request_body = {"instances": row}

request = ml.projects().predict(name=modelID, body=request_body)
try:
    response = request.execute()
except errors.HttpError as err:
    print(err._get_reason())
else:
    if 'error' in response:
        raise RuntimeError(response['error'])
    print(response)

This answer suggests the versions must be the same. I checked the versions: 1.4 and 1.4.1.

2 answers:

Answer 0 (score: 3)

According to https://cloud.google.com/ml-engine/docs/v1/predict-request, `instances` should be a list of data items. Each item can be a value, a JSON object, or a simple/nested list:

{
  "instances": [
    <value>|<simple/nested list>|<object>,
    ...
  ]
}

Instead, your `row` is a text string representing JSON (i.e., the recipient would have to call json.loads(row) to get the JSON back). Try this:

instances = []
with open("images/"+name, "rb") as image_file:
    encoded_string = base64.b64encode(image_file.read()).decode('utf-8')
    instances.append({'b64': encoded_string})

request_body = {"instances": instances}

Answer 1 (score: 0)

According to the documentation here, it looks like the format should be:
{"instances": [{"b64": "X5ad6u"}, {"b64": "IA9j4nx"}]}
But with that I got the following error:
RuntimeError: Prediction failed: unknown error.

I had to add image_bytes to make it work, as described in this post. Here is what it looks like:
{"instances": [{"image_bytes": {"b64": encoded_string}, "key": "0"}]}

Code snippet below:

@app.route('/predict', methods=['POST'])
def predict():
    if 'images' in request.files:
        file = request.files['images']
        image_path = save_image(file)
        # Convert image to base64
        with open(image_path, "rb") as f:
            encoded_string = base64.b64encode(f.read()).decode('utf-8')

        service = discovery.build('ml', 'v1', credentials=credentials)
        name = 'projects/{}/models/{}'.format('my-project-name', 'my-model-name')
        name += '/versions/{}'.format('v1')

        response = service.projects().predict(
            name=name,
            body= {"instances": [{"image_bytes": {"b64": encoded_string}, "key": "0"}]}

        ).execute()

        if 'error' in response:
            raise RuntimeError(response['error'])

        print(response['predictions'])
        return jsonify({'result': response['predictions']})
    else:
        return jsonify({'result': 'Since Image tag is empty, cannot predict'})
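(Editor's note: the body construction in the snippet above can be factored into a small helper. This is a sketch; `build_predict_body` is a name introduced here, not from the post, and it assumes an exported model that takes `image_bytes`/`key` inputs:)

```python
import base64

def build_predict_body(images):
    """Build an ML Engine predict body from raw image bytes.

    images: iterable of bytes objects, one per image. Each image is
    base64-encoded into the {"image_bytes": {"b64": ...}, "key": ...}
    shape that the answer above arrived at.
    """
    instances = []
    for i, raw in enumerate(images):
        instances.append({
            "image_bytes": {"b64": base64.b64encode(raw).decode("utf-8")},
            "key": str(i),
        })
    return {"instances": instances}
```

The returned dict can be passed directly as `body=` to `service.projects().predict(...)`.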