I have two images, each containing 40+ faces. I want to use the AWS Rekognition service to detect which faces are duplicated across the two images.
My original approach was to use Rekognition's IndexFaces function to store all the faces from one image in one collection and the faces from the other image in a second collection, and then compare them by their FaceId. I assumed IndexFaces would give me a fingerprint for each face, but it turns out the FaceId is just a random identifier, not a fingerprint of the face.
I found this answer, How to compare faces in a Collection to faces in a Stored Video using AWS Rekognition?, but it compares all the faces in a collection against the faces appearing in a video, so I would have to convert one of my images into a 1-second video containing only that image as a frame... which I think defeats the purpose of ease of use.
There must be a way to compare two Rekognition collections so I can check for duplicated faces, but I can't find it.
Answer 0 (score: 1)
There are two ways to approach this:
Option 1: Use ExternalImageID
This is similar to your approach.
The important part is that, when a face is added to a collection, you can provide an ExternalImageID. Later, when that face is matched against an image, Amazon Rekognition will return the ExternalImageID for the face.
For example, you could store a person's name, or a unique identifier, in the ExternalImageID.
So, your process could look like this (a sketch follows the list):
- Call DetectFaces() on image 1, which returns a list of FaceDetails and a bounding box for each face
- Call IndexFaces() for each of those faces, providing an ExternalImageID each time (it could simply be an incrementing number)
- Call IndexFaces() on image 2 and match its faces against the collection; every match comes back with the ExternalImageID it was indexed under
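Here is a minimal boto3 sketch of the ExternalImageID round trip (not part of the original answer); the collection and bucket names reuse the ones from the prototype further below, while image1.jpg is a placeholder file name:

import boto3

rekognition = boto3.client("rekognition")

# Index every face found in image 1, tagging each with an ExternalImageId
# (here the file name, but an incrementing number would work just as well).
indexed = rekognition.index_faces(
    CollectionId="MainCollection",
    Image={"S3Object": {"Bucket": "TestBucket", "Name": "image1.jpg"}},
    ExternalImageId="image1.jpg",
    MaxFaces=100,
)

# Searching the collection later returns the ExternalImageId with every match,
# so a matched face can always be traced back to the image it came from.
first_face_id = indexed["FaceRecords"][0]["Face"]["FaceId"]
result = rekognition.search_faces(
    CollectionId="MainCollection",
    FaceId=first_face_id,
    FaceMatchThreshold=90,
)
for match in result["FaceMatches"]:
    print(match["Face"]["ExternalImageId"], match["Similarity"])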
Option 2: Use CompareFaces()
From the documentation, CompareFaces "compares a face in the source input image with each of the 100 largest faces detected in the target input image."
That is, it takes one input face (the largest face in the source image) and compares it against all the faces in the target image. So you would follow a similar process to the above:
- Call DetectFaces() on image 1, which returns a list of FaceDetails and a bounding box for each face
- Call CompareFaces() for each of those faces, comparing it against image 2 (see the sketch after this list)
See: Comparing Faces in Images - Amazon Rekognition
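A minimal boto3 sketch of a single CompareFaces() call (not part of the original answer); image1.jpg and image2.jpg are placeholder file names, and in the per-face workflow above the source would be a crop of one detected face rather than the whole image:

import boto3

rekognition = boto3.client("rekognition")

response = rekognition.compare_faces(
    SourceImage={"S3Object": {"Bucket": "TestBucket", "Name": "image1.jpg"}},
    TargetImage={"S3Object": {"Bucket": "TestBucket", "Name": "image2.jpg"}},
    SimilarityThreshold=80,
)

# FaceMatches lists every face in the target image that matched the largest
# face found in the source image, with a bounding box and a similarity score.
for match in response["FaceMatches"]:
    box = match["Face"]["BoundingBox"]
    print(f"match at {box} with similarity {match['Similarity']:.1f}%")

# Faces in the target image that did not match are reported separately.
print(len(response["UnmatchedFaces"]), "faces in the target image did not match")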
So, the second method is easier if you are only comparing two images. The first method is better if you are storing faces of individual people that you want to match again in later calls.
Answer 1 (score: 0)
Thanks to @John Rotenstein I was able to put together a quick prototype using the AWS CLI:
Assuming the AWS CLI is installed on the system with all the necessary permissions, and that all the images are stored in an S3 bucket named "TestBucket", do the following:
> aws rekognition create-collection --collection-id "MainCollection"
Then index the first source image with IndexFaces:
> aws rekognition index-faces --image '{"S3Object":{"Bucket":"TestBucket","Name":"cristian.jpg"}}' --collection-id "MainCollection" --max-faces 100 --quality-filter "AUTO" --detection-attributes "ALL" --external-image-id "cristian.jpg"
The resulting FaceId was 'a54ef57e-7003-4721-b7e1-703d9f039da9'. Then index the second image, the one containing 40+ faces, the same way:
> aws rekognition index-faces --image '{"S3Object":{"Bucket":"TestBucket","Name":"ImageContaining40plusfaces.jpg"}}' --collection-id "MainCollection" --max-faces 100 --quality-filter "AUTO" --detection-attributes "ALL" --external-image-id "ImageContaining40plusfaces.jpg"
This produced 40+ entries like the following one (only one is shown for brevity):
{
"FaceRecords": [
{
"FaceDetail": {
"Confidence": 99.99859619140625,
"Eyeglasses": {
"Confidence": 54.99907684326172,
"Value": false
},
"Sunglasses": {
"Confidence": 54.99971389770508,
"Value": false
},
"Gender": {
"Confidence": 54.747318267822266,
"Value": "Male"
},
"Landmarks": [
{
"Y": 0.311367392539978,
"X": 0.1916557103395462,
"Type": "eyeLeft"
},
{
"Y": 0.3120582699775696,
"X": 0.20143891870975494,
"Type": "eyeRight"
},
{
"Y": 0.3355730175971985,
"X": 0.19253292679786682,
"Type": "mouthLeft"
},
{
"Y": 0.3361922800540924,
"X": 0.2005564421415329,
"Type": "mouthRight"
},
{
"Y": 0.32276451587677,
"X": 0.19691102206707,
"Type": "nose"
},
{
"Y": 0.30642834305763245,
"X": 0.1876278519630432,
"Type": "leftEyeBrowLeft"
},
{
"Y": 0.3037400245666504,
"X": 0.19379760324954987,
"Type": "leftEyeBrowRight"
},
{
"Y": 0.3029193580150604,
"X": 0.19078010320663452,
"Type": "leftEyeBrowUp"
},
{
"Y": 0.3041592836380005,
"X": 0.1995924860239029,
"Type": "rightEyeBrowLeft"
},
{
"Y": 0.3074571192264557,
"X": 0.20519918203353882,
"Type": "rightEyeBrowRight"
},
{
"Y": 0.30346789956092834,
"X": 0.2024637758731842,
"Type": "rightEyeBrowUp"
},
{
"Y": 0.3115418553352356,
"X": 0.1898096352815628,
"Type": "leftEyeLeft"
},
{
"Y": 0.3118479251861572,
"X": 0.1935078650712967,
"Type": "leftEyeRight"
},
{
"Y": 0.31028062105178833,
"X": 0.19159308075904846,
"Type": "leftEyeUp"
},
{
"Y": 0.31250447034835815,
"X": 0.19164365530014038,
"Type": "leftEyeDown"
},
{
"Y": 0.31221893429756165,
"X": 0.19937492907047272,
"Type": "rightEyeLeft"
},
{
"Y": 0.3123391270637512,
"X": 0.20295380055904388,
"Type": "rightEyeRight"
},
{
"Y": 0.31087613105773926,
"X": 0.2013435810804367,
"Type": "rightEyeUp"
},
{
"Y": 0.31308478116989136,
"X": 0.20125225186347961,
"Type": "rightEyeDown"
},
{
"Y": 0.3264555335044861,
"X": 0.19483911991119385,
"Type": "noseLeft"
},
{
"Y": 0.3265785574913025,
"X": 0.19839303195476532,
"Type": "noseRight"
},
{
"Y": 0.3319154679775238,
"X": 0.196599081158638,
"Type": "mouthUp"
},
{
"Y": 0.3392537832260132,
"X": 0.19649912416934967,
"Type": "mouthDown"
},
{
"Y": 0.311367392539978,
"X": 0.1916557103395462,
"Type": "leftPupil"
},
{
"Y": 0.3120582699775696,
"X": 0.20143891870975494,
"Type": "rightPupil"
},
{
"Y": 0.31476160883903503,
"X": 0.18458032608032227,
"Type": "upperJawlineLeft"
},
{
"Y": 0.3398161828517914,
"X": 0.18679481744766235,
"Type": "midJawlineLeft"
},
{
"Y": 0.35216856002807617,
"X": 0.19623762369155884,
"Type": "chinBottom"
},
{
"Y": 0.34082692861557007,
"X": 0.2045571506023407,
"Type": "midJawlineRight"
},
{
"Y": 0.3160339295864105,
"X": 0.20668834447860718,
"Type": "upperJawlineRight"
}
],
"Pose": {
"Yaw": 4.778820514678955,
"Roll": 1.7387386560440063,
"Pitch": 11.82911205291748
},
"Emotions": [
{
"Confidence": 47.9405403137207,
"Type": "CALM"
},
{
"Confidence": 45.432857513427734,
"Type": "ANGRY"
},
{
"Confidence": 45.953487396240234,
"Type": "HAPPY"
},
{
"Confidence": 45.215728759765625,
"Type": "SURPRISED"
},
{
"Confidence": 50.013206481933594,
"Type": "SAD"
},
{
"Confidence": 45.30225372314453,
"Type": "CONFUSED"
},
{
"Confidence": 45.14192199707031,
"Type": "DISGUSTED"
}
],
"AgeRange": {
"High": 43,
"Low": 26
},
"EyesOpen": {
"Confidence": 54.95812225341797,
"Value": true
},
"BoundingBox": {
"Width": 0.02271346002817154,
"Top": 0.28692546486854553,
"Left": 0.1841897815465927,
"Height": 0.06893482059240341
},
"Smile": {
"Confidence": 53.493797302246094,
"Value": false
},
"MouthOpen": {
"Confidence": 53.51670837402344,
"Value": false
},
"Quality": {
"Sharpness": 53.330047607421875,
"Brightness": 81.31917572021484
},
"Mustache": {
"Confidence": 54.971839904785156,
"Value": false
},
"Beard": {
"Confidence": 54.136474609375,
"Value": false
}
},
"Face": {
"BoundingBox": {
"Width": 0.02271346002817154,
"Top": 0.28692546486854553,
"Left": 0.1841897815465927,
"Height": 0.06893482059240341
},
"FaceId": "570eb8a6-72b8-4381-a1a2-9112aa2b348e",
"ExternalImageId": "ImageContaining40plusfaces.jpg",
"Confidence": 99.99859619140625,
"ImageId": "7f09400e-2de8-3d11-af05-223f13f9ef76"
}
}
]
}
Then search the collection by FaceId (SearchFaces), using the FaceId obtained from the first image:
> aws rekognition search-faces --face-id "a54ef57e-7003-4721-b7e1-703d9f039da9" --collection-id "MainCollection"
And voilà! The face was detected in the second source image, just as needed:
{
"SearchedFaceId": "a54ef57e-7003-4721-b7e1-703d9f039da9",
"FaceModelVersion": "4.0",
"FaceMatches": [
{
"Face": {
"BoundingBox": {
"Width": 0.022825799882411957,
"Top": 0.31017398834228516,
"Left": 0.4018920063972473,
"Height": 0.06067270040512085
},
"FaceId": "bfd58e70-2bcf-403a-87da-6137c28ccbdd",
"ExternalImageId": "ImageContaining40plusfaces.jpg",
"Confidence": 100.0,
"ImageId": "7f09400e-2de8-3d11-af05-223f13f9ef76"
},
"Similarity": 92.36637115478516
}
]
}
So now I just have to do the same for all the other faces detected in source image nº1 and compare them, with the same set of commands, against the ones detected in source image nº2! A sketch of how this loop could be automated follows.
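Here is a sketch (not part of the original post) of how that loop might be automated with boto3 instead of running the CLI by hand; it reuses MainCollection, TestBucket and ImageContaining40plusfaces.jpg from the prototype above, while SourceImage1.jpg and the 90% threshold are assumptions:

import boto3

rekognition = boto3.client("rekognition")
COLLECTION = "MainCollection"
BUCKET = "TestBucket"

# Index both source images into the same collection, tagging every face
# with the name of the image it came from.
faces_img1 = rekognition.index_faces(
    CollectionId=COLLECTION,
    Image={"S3Object": {"Bucket": BUCKET, "Name": "SourceImage1.jpg"}},
    ExternalImageId="SourceImage1.jpg",
    MaxFaces=100,
)["FaceRecords"]

rekognition.index_faces(
    CollectionId=COLLECTION,
    Image={"S3Object": {"Bucket": BUCKET, "Name": "ImageContaining40plusfaces.jpg"}},
    ExternalImageId="ImageContaining40plusfaces.jpg",
    MaxFaces=100,
)

# For every face of image 1, search the collection and keep only the matches
# that were indexed from image 2: those are the faces present in both images.
for record in faces_img1:
    face_id = record["Face"]["FaceId"]
    matches = rekognition.search_faces(
        CollectionId=COLLECTION, FaceId=face_id, FaceMatchThreshold=90
    )["FaceMatches"]
    for match in matches:
        if match["Face"]["ExternalImageId"] == "ImageContaining40plusfaces.jpg":
            print(f"{face_id} also appears in the second image "
                  f"(similarity {match['Similarity']:.1f}%)")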