Getting InvalidParameterException when using compareFaces with aws-sdk-js

Date: 2018-05-08 15:56:22

Tags: node.js aws-sdk-js amazon-rekognition

When using the compareFaces feature of the aws-sdk with Node.js, we occasionally see this error:

InvalidParameterException: Request has Invalid Parameters
 at Request.extractError (/app/node_modules/aws-sdk/lib/protocol/json.js:48:27)
 at Request.callListeners (/app/node_modules/aws-sdk/lib/sequential_executor.js:105:20)
 at Request.emit (/app/node_modules/aws-sdk/lib/sequential_executor.js:77:10)
 at Request.emit (/app/node_modules/aws-sdk/lib/request.js:683:14)
 at Request.transition (/app/node_modules/aws-sdk/lib/request.js:22:10)
 at AcceptorStateMachine.runTo (/app/node_modules/aws-sdk/lib/state_machine.js:14:12)
 at /app/node_modules/aws-sdk/lib/state_machine.js:26:10
 at Request.<anonymous> (/app/node_modules/aws-sdk/lib/request.js:38:9)
 at Request.<anonymous> (/app/node_modules/aws-sdk/lib/request.js:685:12)
 at Request.callListeners (/app/node_modules/aws-sdk/lib/sequential_executor.js:115:18)
 at Request.emit (/app/node_modules/aws-sdk/lib/sequential_executor.js:77:10)
 at Request.emit (/app/node_modules/aws-sdk/lib/request.js:683:14)
 at Request.transition (/app/node_modules/aws-sdk/lib/request.js:22:10)
 at AcceptorStateMachine.runTo (/app/node_modules/aws-sdk/lib/state_machine.js:14:12)
 at /app/node_modules/aws-sdk/lib/state_machine.js:26:10
 at Request.<anonymous> (/app/node_modules/aws-sdk/lib/request.js:38:9)
   message: 'Request has Invalid Parameters',
   code: 'InvalidParameterException',
   time: 2018-05-08T15:27:28.188Z,
   requestId: 'XXXXX',
   statusCode: 400,
   retryable: false,
   retryDelay: 52.72405778418885 }

The images are captured each time with an iPhone camera, saved as JPEGs, and contain faces. The images are not corrupted and have been checked with jpeginfo. They are then converted to binary and sent to Rekognition through the SDK. We ran the same images through the Python library Boto and successfully received comparison results.
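For reference, the Python path that works is roughly the following. This is only a minimal sketch of a Boto3 compare_faces call; the file names, region, and similarity threshold here are assumptions, not our exact script:

import boto3

rekognition = boto3.client('rekognition', region_name='eu-west-1')

# Read the source and target JPEGs as raw bytes.
with open('source.jpeg', 'rb') as f:
    source_bytes = f.read()
with open('target.jpeg', 'rb') as f:
    target_bytes = f.read()

response = rekognition.compare_faces(
    SourceImage={'Bytes': source_bytes},
    TargetImage={'Bytes': target_bytes},
    SimilarityThreshold=80
)

for match in response['FaceMatches']:
    print(match['Similarity'], match['Face']['Confidence'])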

Are there any further diagnostic steps we can take on the Node side to help debug this, or any insight into the cause of the error?

Update:

Image dimensions: source 1189×750, target 360×480

1 Answer:

Answer 0 (score: 0)

One thing you can do is, instead of calling the Rekognition API directly from your JavaScript code, upload your image to S3 and make that upload the trigger for a Lambda function written in Python. The Lambda function contains the comparison code and stores the response in a DynamoDB table; you then fetch the data from DynamoDB and use it as needed.
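A minimal sketch of what such a Lambda handler could look like, assuming an S3 PUT trigger, a collection named family_collection, and a results table named comparison_results (all of these names are placeholders):

import json
import boto3

rekognition = boto3.client('rekognition', region_name='eu-west-1')
dynamodb = boto3.client('dynamodb', region_name='eu-west-1')

def lambda_handler(event, context):
    # The S3 upload that triggered this invocation.
    record = event['Records'][0]['s3']
    bucket = record['bucket']['name']
    key = record['object']['key']

    # Search the largest face in the uploaded image against the collection;
    # the image is referenced directly from S3 instead of being sent as bytes.
    response = rekognition.search_faces_by_image(
        CollectionId='family_collection',
        Image={'S3Object': {'Bucket': bucket, 'Name': key}}
    )

    # Store the matches so the client can read them back from DynamoDB later.
    dynamodb.put_item(
        TableName='comparison_results',
        Item={
            'ImageKey': {'S': key},
            'Response': {'S': json.dumps(response['FaceMatches'])}
        }
    )
    return {'statusCode': 200}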

It may look like a long process, but trust me, I use it as well; it is quite simple, and it gives us the advantage that the processing happens on the backend, away from the end user.

Here is a sample of the comparison code:

import boto3
import io
from PIL import Image

rekognition = boto3.client('rekognition', region_name='eu-west-1')
dynamodb = boto3.client('dynamodb', region_name='eu-west-1')

# Load the photo and re-encode it as JPEG bytes for the API call.
image = Image.open("group1.jpeg")
stream = io.BytesIO()
image.save(stream, format="JPEG")
image_binary = stream.getvalue()

# Search the largest face in the photo against the indexed collection.
response = rekognition.search_faces_by_image(
    CollectionId='family_collection',
    Image={'Bytes': image_binary}
)

for match in response['FaceMatches']:
    print(match['Face']['FaceId'], match['Face']['Confidence'])

    # Look up who this FaceId belongs to.
    face = dynamodb.get_item(
        TableName='family_collection',
        Key={'RekognitionId': {'S': match['Face']['FaceId']}}
    )

    if 'Item' in face:
        print(face['Item']['FullName']['S'])
    else:
        print('no match found in person lookup')
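The lookup above assumes a DynamoDB table keyed by RekognitionId that was filled in when the faces were first indexed into the collection. A sketch of what that indexing step might look like (bucket, object key, and the person's name are placeholders):

import boto3

rekognition = boto3.client('rekognition', region_name='eu-west-1')
dynamodb = boto3.client('dynamodb', region_name='eu-west-1')

# Index a known person's photo into the collection and remember who it is.
index_response = rekognition.index_faces(
    CollectionId='family_collection',
    Image={'S3Object': {'Bucket': 'family-photos', 'Name': 'alice.jpeg'}},
    ExternalImageId='alice'
)

for record in index_response['FaceRecords']:
    dynamodb.put_item(
        TableName='family_collection',
        Item={
            'RekognitionId': {'S': record['Face']['FaceId']},
            'FullName': {'S': 'Alice Example'}
        }
    )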