Converting a Vision boundingBox from VNFaceObservation to a rect to draw on an image

Date: 2017-06-23 14:45:56

Tags: ios swift computer-vision face-detection vision-api

I'm trying to use the boundingBox from the new Vision API to detect faces in a picture, and then draw a red rectangle around each detected face.

But I'm having trouble converting the VNFaceObservation's boundingBox into a CGRect to draw on the image. It seems my only problem is the y origin:

let request = VNDetectFaceRectanglesRequest { request, error in
    var final_image = UIImage(ciImage: image)
    if let results = request.results as? [VNFaceObservation] {
        for face_obs in results {
            UIGraphicsBeginImageContextWithOptions(final_image.size, false, 1.0)
            final_image.draw(in: CGRect(x: 0, y: 0, width: final_image.size.width, height: final_image.size.height))
            var rect = face_obs.boundingBox
            // RESULT 2 is when I uncomment this line to "flip" the y
            //rect.origin.y = 1 - rect.origin.y
            let conv_rect = CGRect(x: rect.origin.x * final_image.size.width,
                                   y: rect.origin.y * final_image.size.height,
                                   width: rect.width * final_image.size.width,
                                   height: rect.height * final_image.size.height)
            let c = UIGraphicsGetCurrentContext()!
            c.setStrokeColor(UIColor.red.cgColor)
            c.setLineWidth(0.01 * final_image.size.width)
            c.stroke(conv_rect)
            let result = UIGraphicsGetImageFromCurrentImageContext()
            UIGraphicsEndImageContext()
            final_image = result!
        }
    }
    DispatchQueue.main.async {
        self.image_view.image = final_image
    }
}

let handler = VNImageRequestHandler(ciImage: image)
DispatchQueue.global(qos: .userInteractive).async {
    do {
        try handler.perform([request])
    } catch {
        print(error)
    }
}

Here is my conversion code:

let rect=face_obs.boundingBox
let x=rect.origin.x*final_image.size.width
let w=rect.width*final_image.size.width
let h=rect.height*final_image.size.height
let y=final_image.size.height*(1-rect.origin.y)-h
let conv_rect=CGRect(x: x, y: y, width: w, height: h)

The results so far are below.

Result 1 (without flipping y)

Result 2 (flipping y)

Solution

I found a solution for the rect myself:

let rect = face_obs.boundingBox
let x = rect.origin.x * final_image.size.width
let w = rect.width * final_image.size.width
let h = rect.height * final_image.size.height
let y = final_image.size.height * (1 - rect.origin.y) - h
let conv_rect = CGRect(x: x, y: y, width: w, height: h)
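The fix boils down to flipping the y axis, since Vision measures the normalized box from the image's lower-left corner while UIKit drawing measures from the upper-left. As a self-contained sketch (the helper name and parameters are my own, not from the post):

```swift
import Foundation

// Sketch of the y-flip (function name is mine, not from the post):
// convert Vision's normalized bounding box, whose origin sits at the
// image's lower-left corner, into a top-left-origin pixel rect.
func imageRect(forNormalized rect: CGRect, imageSize: CGSize) -> CGRect {
    let w = rect.width * imageSize.width
    let h = rect.height * imageSize.height
    let x = rect.origin.x * imageSize.width
    // (1 - origin.y) scales to the box's *bottom* edge measured from the
    // top of the image; subtracting h then yields the top edge.
    let y = (1 - rect.origin.y) * imageSize.height - h
    return CGRect(x: x, y: y, width: w, height: h)
}
```

For example, a normalized box of (0.25, 0.25, 0.5, 0.5) in a 100×200 image maps to a pixel rect of (25, 50, 50, 100).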

However, I think @wei-jay's answer is better, as it is more elegant.

4 Answers:

Answer 0 (score: 6)

You have to transform and scale according to the image. Example:

func drawVisionRequestResults(_ results: [VNFaceObservation]) {
    print("face count = \(results.count) ")
    previewView.removeMask()

    let transform = CGAffineTransform(scaleX: 1, y: -1).translatedBy(x: 0, y: -self.view.frame.height)

    let translate = CGAffineTransform.identity.scaledBy(x: self.view.frame.width, y: self.view.frame.height)

    for face in results {
        // The coordinates are normalized to the dimensions of the processed image, with the origin at the image's lower-left corner.
        let facebounds = face.boundingBox.applying(translate).applying(transform)
        previewView.drawLayer(in: facebounds)
    }
}

Answer 1 (score: 4)

I tried several ways, and this worked best for me:

dispatch_async(dispatch_get_main_queue(), ^{
    VNDetectedObjectObservation * newObservation = request.results.firstObject;
    if (newObservation) {
        self.lastObservation = newObservation;
        CGRect transformedRect = newObservation.boundingBox;
        CGRect convertedRect = [self.previewLayer rectForMetadataOutputRectOfInterest:transformedRect];
        self.highlightView.frame = convertedRect;
    }
});

Answer 2 (score: 4)

There are built-in methods that do this for you. To convert from the normalized format, use:

func VNImageRectForNormalizedRect(_ normalizedRect: CGRect, _ imageWidth: Int, _ imageHeight: Int) -> CGRect

And vice versa:

func VNNormalizedRectForImageRect(_ imageRect: CGRect, _ imageWidth: Int, _ imageHeight: Int) -> CGRect

There are similar methods for points:

func VNNormalizedFaceBoundingBoxPointForLandmarkPoint(_ faceLandmarkPoint: vector_float2, _ faceBoundingBox: CGRect, _ imageWidth: Int, _ imageHeight: Int) -> CGPoint
func VNImagePointForNormalizedPoint(_ normalizedPoint: CGPoint, _ imageWidth: Int, _ imageHeight: Int) -> CGPoint
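Note that these helpers only scale between normalized and pixel coordinates; to my understanding they do not flip the y axis, so the result keeps Vision's lower-left origin and still needs flipping before UIKit drawing. A sketch of the equivalent arithmetic (the function is my own reimplementation for illustration, not Apple's code):

```swift
import Foundation

// Illustrative equivalent of VNImageRectForNormalizedRect's arithmetic
// (my own sketch, not Apple's implementation): scale a normalized rect
// by the image dimensions. No y-flip happens here.
func imageRectForNormalizedRect(_ normalizedRect: CGRect, _ imageWidth: Int, _ imageHeight: Int) -> CGRect {
    return CGRect(x: normalizedRect.origin.x * CGFloat(imageWidth),
                  y: normalizedRect.origin.y * CGFloat(imageHeight),
                  width: normalizedRect.width * CGFloat(imageWidth),
                  height: normalizedRect.height * CGFloat(imageHeight))
}
```

For example, the normalized rect (0.5, 0.25, 0.25, 0.5) in a 400×800 image scales to (200, 200, 100, 400).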

Answer 3 (score: 2)

var rect = CGRect()
rect.size.height = viewSize.height * boundingBox.width
rect.size.width = viewSize.width * boundingBox.height
rect.origin.x = boundingBox.origin.y * viewSize.width
rect.origin.y = boundingBox.origin.x * viewSize.height
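This last answer swaps the x and y axes in addition to scaling, which is the mapping you need when the image buffer is rotated 90° relative to the view (e.g. a portrait UI over a landscape camera buffer). A self-contained sketch, with the wrapper name and test values my own:

```swift
import Foundation

// Hypothetical wrapper (name is mine) around the axis-swapping mapping
// above: the normalized box's width scales the view's height and vice
// versa, as needed when the buffer is rotated 90° relative to the view.
func rotatedViewRect(for boundingBox: CGRect, in viewSize: CGSize) -> CGRect {
    var rect = CGRect.zero
    rect.size.height = viewSize.height * boundingBox.width
    rect.size.width = viewSize.width * boundingBox.height
    rect.origin.x = boundingBox.origin.y * viewSize.width
    rect.origin.y = boundingBox.origin.x * viewSize.height
    return rect
}
```

For example, a normalized box of (0.25, 0.5, 0.25, 0.125) on a 400×800 view maps to (200, 200, 50, 200).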