Trying to capture the x:y: position with VNFaceObservation

Date: 2018-01-18 20:54:33

Tags: ios swift face-recognition

I am using the code below to capture and draw the features of a face. However, the captured x:y: position is inconsistent between images. I realize I am not capturing it correctly, so I would appreciate some guidance. I want to use the x:y: position to add a subview containing another image. Much appreciated. John.

    var noseCrestPoint = CGPoint()
    var noseCrestPointX = CGFloat()
    var noseCrestPointY = CGFloat()

    context?.saveGState()
    context?.setStrokeColor(UIColor.yellow.cgColor)
    if let landmark = face.landmarks?.noseCrest {
        for i in 0..<landmark.pointCount { // last point is 0,0
            let point = landmark.normalizedPoints[i]
            if i == 0 {
                context?.move(to: CGPoint(x: x + CGFloat(point.x) * w, y: y + CGFloat(point.y) * h))
                noseCrestPoint = CGPoint(x: x + CGFloat(point.x) * w, y: y + CGFloat(point.y) * h)
                noseCrestPointX = noseCrestPoint.x
                noseCrestPointY = noseCrestPoint.y
            } else {
                context?.addLine(to: CGPoint(x: x + CGFloat(point.x) * w, y: y + CGFloat(point.y) * h))
            }
        }
    }
    context?.setLineWidth(3.0)
    context?.drawPath(using: .stroke)
    context?.restoreGState() // balances the saveGState() above (the original called saveGState() again here)
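One likely source of the inconsistency is doing the normalized-to-image conversion by hand: Vision's landmark points are normalized to the face bounding box and use a lower-left origin, while UIKit uses a top-left origin. As a sketch (not the poster's code), `VNFaceLandmarkRegion2D.pointsInImage(imageSize:)` performs the conversion to image coordinates in one step, leaving only the y-flip for UIKit:

```swift
import UIKit
import Vision

// Sketch: convert the noseCrest landmark points to UIKit image
// coordinates. Assumes `face: VNFaceObservation` and `image: UIImage`
// as in the question; the function name is illustrative.
func noseCrestPoints(for face: VNFaceObservation, in image: UIImage) -> [CGPoint] {
    guard let noseCrest = face.landmarks?.noseCrest else { return [] }
    // pointsInImage(imageSize:) maps normalized landmark points into
    // image coordinates, but with Vision's lower-left origin.
    return noseCrest.pointsInImage(imageSize: image.size).map { point in
        // Flip the y-axis for UIKit's top-left origin.
        CGPoint(x: point.x, y: image.size.height - point.y)
    }
}
```

The first returned point would then serve the same role as `noseCrestPoint` above.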

1 Answer:

Answer 0 (score: 0)

OK, I found a solution. The first code block draws the face rect; the second places my other image inside the bounding box, near the eyes.

    // draw the face rect
    let w = face.boundingBox.size.width * image.size.width
    let h = face.boundingBox.size.height * image.size.height
    let x = face.boundingBox.origin.x * image.size.width
    let y = face.boundingBox.origin.y * image.size.height
    let faceRect = CGRect(x: x, y: y, width: w, height: h)
    context?.saveGState()
    context?.setStrokeColor(UIColor.red.cgColor)
    context?.setLineWidth(3.0)
    context?.addRect(faceRect)
    context?.drawPath(using: .stroke)
    context?.restoreGState()



    let eyeImageView = UIImageView()
    let eyeImage = UIImage(named: "eyes2.png")
    // eyeImageView.frame = CGRect(x: x, y: y, width: w/2, height: h/4)
    eyeImageView.image = eyeImage
    eyeImageView.image?.draw(in: CGRect(x: x + (w/4), y: y, width: w/1.5, height: h/3))
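Since the original goal was to add the image as a subview rather than draw it into the context, a minimal sketch of that variant, assuming `x`, `y`, `w`, `h` (the face rect from the answer's first block) have already been scaled from image coordinates into the parent view's coordinate space, and `parentImageView` is a hypothetical view displaying the photo:

```swift
import UIKit

// Sketch: overlay an image view on the face rect instead of drawing
// into the CGContext. The frame arithmetic mirrors the draw(in:) call
// above; the names `parentImageView` and the scaled x/y/w/h are
// assumptions, not from the original post.
let eyeOverlay = UIImageView(image: UIImage(named: "eyes2.png"))
eyeOverlay.frame = CGRect(x: x + w / 4, y: y, width: w / 1.5, height: h / 3)
parentImageView.addSubview(eyeOverlay)
```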

This solved my problem. Hope someone finds it helpful. Regards, JZ