How to position a UIView on top of a UIImage across all iPhone devices and simulators

Asked: 2018-12-26 23:17:57

Tags: swift xcode view uiimageview pixel

I am trying to place a view on top of an image. I have an Xcode app with a UIImageView containing an iPhone image, and a UIView that positions itself over the image to represent the iPhone's screen. This worked in an earlier version because the image was a fixed size, but now I need to scale the image to fill the screen on newer devices such as the iPhone X. I believe Aspect Fit is part of the answer. I need the overlay view to position itself correctly on all iPhone devices and simulators. I have code that seems to work on devices, but the same code behaves differently in the simulator. I need it to work in both, because I don't own every device to test on.

How do I position a rectangle, via storyboard or code, over an image that the system is scaling? I need a general solution that works on all devices and simulators.

Rough code samples of what I have been trying are included below.

  • The WeedplatesViewController.swift file contains device code that seems to position correctly on devices; I then copied similar code and tweaked it for the simulator, where it does not position correctly. There is a UIImage extension that renders the view to an image, and code that searches that image for a black rectangle, using pixel-comparison code found here on Stack Overflow.

The Weedpaper view controller in the storyboard has a Weedpaper title, "for Apple iPhones" text, the iPhone image, the UIView I want positioned correctly over the iPhone image, a "number of Weedpapers installed" label, and a row of autosizing buttons along the bottom.

  1. First tried positioning the rectangle with storyboard constraints, which proved difficult; I could get it working in the storyboard, but not on the device or simulator.
  2. Tried hardcoding the positions; what seemed to work on a device did not work in the simulator, and vice versa. That takes a long time to test and is clearly not the right approach.
  3. Next, modified the png file to put a black (0,0,0) rectangle into the iPhone image, then rendered the view to a UIImage in code and searched for the black pixels marking the frame. Once the pixels are found, those coordinates should let me position the view. But I get different screenshot bitmaps from the device and the simulator.
  4. Also tried AVMakeRect(aspectRatio: CGSize(width: w, height: h), insideRect: self.uiImageView.bounds), to no avail (see the sketch just after this list).
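
For reference, a minimal sketch of how AVMakeRect is normally used to compute where an aspect-fit image actually lands inside its image view; the displayedImageRect helper is an illustrative name, and it assumes the image view's content mode is Aspect Fit and that it runs after layout:

import AVFoundation
import UIKit

// Returns the rect the image occupies inside an aspect-fit UIImageView.
// Call after layout (e.g. in viewDidLayoutSubviews) so bounds are final.
func displayedImageRect(in imageView: UIImageView) -> CGRect? {
    guard let image = imageView.image else { return nil }
    // Scales image.size to fit imageView.bounds while preserving aspect ratio
    return AVMakeRect(aspectRatio: image.size, insideRect: imageView.bounds)
}

If the black rectangle's position is known as a fraction of the source image, it can be mapped through this rect directly instead of scanning pixels.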

I need the UIView positioned over an image that the system is scaling, consistently across devices and simulators.

override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)

        if (UIDevice.modelName.contains("Simulator"))
        {
            deviceName = ProcessInfo.init().environment["SIMULATOR_DEVICE_NAME"] ?? "NoN"
            bSimulator = true
        }
        else
        {
            deviceName = UIDevice.modelName
            bSimulator = false
        }

        print("deviceName:", deviceName)
        print("simulator:", bSimulator)

        var frame:CGRect!

        if bSimulator{
            frame = self.getImageRectForSimulators()
        }
        else
        {
            frame = self.getImageRectForDevices()
        }

        self.uiViewPhotos.frame = frame
        self.uiViewPhotos.isHidden = false
    }

    func getImageRectForDevices() -> CGRect
    {

        // Write the view to an image so we can get the positioning rectangle
        // Positioning Rectangle is a black rectangle in the image
        // it has the only true black pixels in the image

        let imgView:UIImageView = self.uiImageView
        let img:UIImage = self.uiImageView.asImage()


        // Write to the photo album for testing
        //UIImageWriteToSavedPhotosAlbum(img, nil, nil, nil)

        let pixelData = img.cgImage!.dataProvider!.data
        let data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)

        let maxX = img.size.width
        let maxY = img.size.height
        let halfwayX = maxX / 2
        let halfwayY = maxY / 2

        let screenScale = UIScreen.main.scale
        let scaleFactorX = img.scale
        let scaleFactorY = img.scale * screenScale
        let screenFactor = UIScreen.main.bounds.width/UIScreen.main.bounds.height
        let imgViewFactor = self.uiImageView.frame.width/self.uiImageView.frame.height

        var pnt:CGPoint = CGPoint(x: -1, y: -1)

        var pixelInfo: Int = -1
        var r:CGFloat!, g:CGFloat!
        var b:CGFloat!, a:CGFloat!
        var uiColor:UIColor!

        var v1:CGFloat!, v2:CGFloat!
        var v3:CGFloat!, v4:CGFloat!

        var newRect:CGRect!

        // Find this color in the image to locate the black pixel frame
        // use that to size the view accordingly
        // Seems to change among devices, so use pure black

        let uiColor_phoneFramePixelColor = UIColor(red:0.0, green:0.0, blue:0.0, alpha:1.0)

        // Device code
        for i in stride(from: halfwayX*scaleFactorX, to: 0, by: -1)
        {
            pnt.x = i
            pnt.y = halfwayY*scaleFactorY

            pixelInfo = ((Int(img.size.width) * Int(pnt.y)) + Int(pnt.x)) * 4
            r = CGFloat(data[pixelInfo])/CGFloat(255.0)
            g = CGFloat(data[pixelInfo+1])/CGFloat(255.0)
            b = CGFloat(data[pixelInfo+2])/CGFloat(255.0)
            a = CGFloat(data[pixelInfo+3])/CGFloat(255.0)
            uiColor = UIColor(red: r, green: g, blue: b, alpha: a)
            print("searching for i black pixel at i, y:", i, pnt.y, 255.0*r, 255.0*g, 255.0*b, a)
            if (uiColor == uiColor_phoneFramePixelColor)
            {
                v1 = i
                print("found i pixel at:", i)
                break
            }
        }
        print(" ")

        // find top y pixel
        // Device code
        for j in stride(from: halfwayY*scaleFactorY, to: 0, by: -1)
        {
            pnt.x = halfwayX*scaleFactorX
            pnt.y = j

            pixelInfo = ((Int(img.size.width) * Int(pnt.y)) + Int(pnt.x)) * 4
            r = CGFloat(data[pixelInfo])/CGFloat(255.0)
            g = CGFloat(data[pixelInfo+1])/CGFloat(255.0)
            b = CGFloat(data[pixelInfo+2])/CGFloat(255.0)
            a = CGFloat(data[pixelInfo+3])/CGFloat(255.0)
            uiColor = UIColor(red: r, green: g, blue: b, alpha: a)
            print("searching for j black pixel at j, x:", j, pnt.x, 255.0*r, 255.0*g, 255.0*b, a)
            if (uiColor == uiColor_phoneFramePixelColor)
            {
                v2 = j
                print("found j pixel at:", j)
                break
            }
        }
        print(" ")

        // Find right x pixel
        // Device code
        for k in stride(from: halfwayX*scaleFactorX, to: maxX*scaleFactorX, by: 1)
        {
            pnt.x = k
            pnt.y = halfwayY

            pixelInfo = ((Int(img.size.width) * Int(pnt.y)) + Int(pnt.x)) * 4

            r = CGFloat(data[pixelInfo])/CGFloat(255.0)
            g = CGFloat(data[pixelInfo+1])/CGFloat(255.0)
            b = CGFloat(data[pixelInfo+2])/CGFloat(255.0)
            a = CGFloat(data[pixelInfo+3])/CGFloat(255.0)
            uiColor = UIColor(red: r, green: g, blue: b, alpha: a)
            print("searching for k black pixel at k, y:", k, pnt.y, 255.0*r, 255.0*g, 255.0*b, a)
            if (uiColor == uiColor_phoneFramePixelColor)
            {
                v3 = k
                print("found bottom k pixel at:", k)
                break
            }
        }
        print(" ")

        // Find bottom y pixel
        // Device code
        for l in stride(from: halfwayY*scaleFactorY, to: maxY*scaleFactorY, by: 1)
        {
            pnt.x = halfwayX
            pnt.y = l

            pixelInfo = ((Int(img.size.width) * Int(pnt.y)) + Int(pnt.x)) * 4

            r = CGFloat(data[pixelInfo])/CGFloat(255.0)
            g = CGFloat(data[pixelInfo+1])/CGFloat(255.0)
            b = CGFloat(data[pixelInfo+2])/CGFloat(255.0)
            a = CGFloat(data[pixelInfo+3])/CGFloat(255.0)

            uiColor = UIColor(red: r, green: g, blue: b, alpha: a)
            print("searching for l black pixel at l, x:", l, pnt.x, 255.0*r, 255.0*g, 255.0*b, a)

            if (uiColor == uiColor_phoneFramePixelColor)
            {
                v4 = l
                print("found bottom l pixel at:", l)
                break
            }
        }
        print(" ")

        // this is the Black Rectangle from the bitmap of the original image
        let w = (v3 - v1)
        let h = (v4 - v2)
        newRect = CGRect(x: v1/scaleFactorX, y: v2/scaleFactorY, width: w/scaleFactorX, height: h/scaleFactorY)

        print("calculated rectangle:", newRect)

        return newRect
    }

extension UIView {
    func asImage()-> UIImage
    {
        // Get an image of the view. Apple discourages using UIGraphicsBeginImageContext
        // starting with iOS 10: it is sRGB-only and 32-bit-only.
        // Use UIGraphicsImageRenderer instead.

        if #available(iOS 10.0, *) {
            let renderer = UIGraphicsImageRenderer(bounds: bounds)
            let renderFormat = UIGraphicsImageRendererFormat.default()
            renderFormat.opaque = false
            let renderedImage = renderer.image {
                rendererContext in
                layer.render(in: rendererContext.cgContext)
            }
            return renderedImage
        }
        else{
            UIGraphicsBeginImageContext(self.frame.size)
            self.layer.render(in: UIGraphicsGetCurrentContext()!)
            let image = UIGraphicsGetImageFromCurrentImageContext()
            UIGraphicsEndImageContext()
            return UIImage(cgImage: image!.cgImage!)
        }
     }
}
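
As a side note on the simulator check in viewDidAppear above: since Swift 4.1 there is a compile-time condition that avoids string matching on the model name. A minimal sketch, keeping the same bSimulator and deviceName variables (UIDevice.modelName is the custom extension used above):

#if targetEnvironment(simulator)
// Compiled only into simulator builds
bSimulator = true
deviceName = ProcessInfo.processInfo.environment["SIMULATOR_DEVICE_NAME"] ?? "NoN"
#else
bSimulator = false
deviceName = UIDevice.modelName
#endif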

2 Answers:

Answer 0 (score: 0)

For the UIImage, make sure it is pinned to the edges (constant 0). How the image scales depends on its size and dimensions; see what works best. Scale To Fill may work.

For the UIView, you may have to experiment with various NSLayoutConstraints that are activated and deactivated for different screen sizes. NSLayoutConstraint has a class method, activate(), that activates multiple constraints at once, which should allow Auto Layout to update its entire layout simultaneously. For example:

NSLayoutConstraint.activate([
    vw.leadingAnchor.constraint(equalTo: view.safeAreaLayoutGuide.leadingAnchor, constant: 20),
    vw.trailingAnchor.constraint(equalTo: view.safeAreaLayoutGuide.trailingAnchor, constant: -20),
    vw.heightAnchor.constraint(equalToConstant: 100),
    vw.centerYAnchor.constraint(equalTo: view.safeAreaLayoutGuide.centerYAnchor)
])

Keep in mind that these constraints can also be deactivated:

NSLayoutConstraint.deactivate([
    vw.leadingAnchor.constraint(equalTo: view.safeAreaLayoutGuide.leadingAnchor, constant: 20),
    vw.trailingAnchor.constraint(equalTo: view.safeAreaLayoutGuide.trailingAnchor, constant: -20),
    vw.heightAnchor.constraint(equalToConstant: 100),
    vw.centerYAnchor.constraint(equalTo: view.safeAreaLayoutGuide.centerYAnchor)
])

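One way to apply this, sketched under the assumption that you keep two pre-built constraint arrays and swap them when the size class changes; compactConstraints and regularConstraints are illustrative names, not from this answer:

var compactConstraints: [NSLayoutConstraint] = []
var regularConstraints: [NSLayoutConstraint] = []

override func traitCollectionDidChange(_ previousTraitCollection: UITraitCollection?) {
    super.traitCollectionDidChange(previousTraitCollection)
    if traitCollection.horizontalSizeClass == .compact {
        // Swap to the compact layout in one Auto Layout pass
        NSLayoutConstraint.deactivate(regularConstraints)
        NSLayoutConstraint.activate(compactConstraints)
    } else {
        NSLayoutConstraint.deactivate(compactConstraints)
        NSLayoutConstraint.activate(regularConstraints)
    }
}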

Answer 1 (score: 0)

Finally got it working, and it appears to work in the simulators and on the devices I have. The pixel-comparison code I was using did not work correctly, and the link below helped me figure it out: Why do I get the wrong color of a pixel with following code?. This is what works for me. The iPhone image (not shown) is a picture of an iPhone with a black rectangle where you want the screen positioned, and the image view is set to Aspect Fit.
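
The core fix from that link is to index the pixel buffer using the bitmap's bytesPerRow rather than width * 4, because rows can be padded; a minimal sketch of the corrected lookup (the rgba helper is an illustrative name, not from the original post):

// Reads the RGBA bytes at pixel (x, y) of a CGImage.
// Assumes an 8-bit-per-component RGBA layout, as produced by UIGraphicsImageRenderer.
func rgba(at x: Int, _ y: Int, in cgImage: CGImage) -> (r: UInt8, g: UInt8, b: UInt8, a: UInt8)? {
    guard let data = cgImage.dataProvider?.data,
          let ptr = CFDataGetBytePtr(data) else { return nil }
    // bytesPerRow includes any row padding; width * 4 may not
    let offset = y * cgImage.bytesPerRow + x * 4
    return (ptr[offset], ptr[offset + 1], ptr[offset + 2], ptr[offset + 3])
}

The full routine below applies the same bytesPerRow indexing.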

private func getiPhoneScreenRect() -> (rect: CGRect, bOk: Bool){

// Write the view to an image so we can get the positioning rectangle
// Positioning Rectangle is a black rectangle in the image
// it has the only true black pixels in the image
// but the pixels may not be true black when we look at them so loosen
// equality criteria

let imgView:UIImageView = self.uiImageView
let img:UIImage = self.uiImageView.screenShotViaRenderImage()

let pixelData = img.cgImage!.dataProvider!.data
let data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)
let bytesPerPixel = img.cgImage!.bitsPerPixel / 8   // 4 for 8-bit RGBA
let bytesPerRow = img.cgImage!.bytesPerRow

let maxX = img.size.width
let maxY = img.size.height
let halfwayX = maxX / 2
let halfwayY = maxY / 2

let imgScale = img.scale

var pnt:CGPoint = CGPoint(x: -1, y: -1)

var pixelInfo: Int = -1
var r:CGFloat!, g:CGFloat!
var b:CGFloat!, a:CGFloat!

var v1:CGFloat = 0.0, v2:CGFloat = 0.0
var v3:CGFloat = 0.0, v4:CGFloat = 0.0

var newRect = CGRect.zero

// Find the black border in the image by finding the black pixel frame
// use that to size the view accordingly
// Seems to change among devices so don't check for pure black

// From center towards left edge find black pixel

for i in stride(from: halfwayX*imgScale, to: 0, by: -1)
{
    pnt.x = i
    pnt.y = halfwayY*imgScale

    pixelInfo = Int(pnt.y) * bytesPerRow + Int(pnt.x) * 4
    r = CGFloat(data[pixelInfo])/CGFloat(255.0)
    g = CGFloat(data[pixelInfo+1])/CGFloat(255.0)
    b = CGFloat(data[pixelInfo+2])/CGFloat(255.0)
    a = CGFloat(data[pixelInfo+3])/CGFloat(255.0)

    // No true black in image so get close
    if (r*255.0 <= 3.0 && g*255.0 <= 3.0)
    {
        v1 = i
        break
    }
}

// From center towards top find top y pixel

for j in stride(from: halfwayY*imgScale, to: 0, by: -1)
{
    pnt.x = halfwayX*imgScale
    pnt.y = j

    pixelInfo = Int(pnt.y) * bytesPerRow + Int(pnt.x) * 4
    r = CGFloat(data[pixelInfo])/CGFloat(255.0)
    g = CGFloat(data[pixelInfo+1])/CGFloat(255.0)
    b = CGFloat(data[pixelInfo+2])/CGFloat(255.0)
    a = CGFloat(data[pixelInfo+3])/CGFloat(255.0)

    if (r*255.0 <= 3.0 && g*255.0 <= 3.0)
    {
        v2 = j
        break
    }
}

// From center towards right edge find right x pixel

for k in stride(from:halfwayX*imgScale, to: maxX*imgScale-1, by: 1)
{
    pnt.x = k
    pnt.y = halfwayY*imgScale

    pixelInfo = Int(pnt.y) * bytesPerRow + Int(pnt.x) * 4
    r = CGFloat(data[pixelInfo])/CGFloat(255.0)
    g = CGFloat(data[pixelInfo+1])/CGFloat(255.0)
    b = CGFloat(data[pixelInfo+2])/CGFloat(255.0)
    a = CGFloat(data[pixelInfo+3])/CGFloat(255.0)

    if (r*255.0 <= 3.0 && g*255.0 <= 3.0)
    {
        v3 = k
        break
    }
}

// From center towards bottom edge find bottom y pixel

for l in stride(from: halfwayY*imgScale, to: (maxY*imgScale)-1, by: 1)
{
    pnt.x = halfwayX*imgScale
    pnt.y = l

    pixelInfo = Int(pnt.y) * bytesPerRow + Int(pnt.x) * 4
    r = CGFloat(data[pixelInfo])/CGFloat(255.0)
    g = CGFloat(data[pixelInfo+1])/CGFloat(255.0)
    b = CGFloat(data[pixelInfo+2])/CGFloat(255.0)
    a = CGFloat(data[pixelInfo+3])/CGFloat(255.0)

    if (r*255.0 <= 3.0 && g*255.0 <= 3.0)
    {
        v4 = l
        break
    }
}

// If we did not find the rectangle, return bOk false

if (v1 <= 0.0 || v2 <= 0.0 || v3 <= 0.0 || v4 <= 0.0)
    || v3 >= (maxX*imgScale)-1 || v4 >= (maxY*imgScale)-1
{
    return (newRect, false)
}

let w = (v3 - v1)
let h = (v4 - v2)

// this is the Black Rectangle from the bitmap of the screenshot of the view
// inset the frame a couple of pixels so the black border is excluded

newRect = CGRect(x: (v1+2)/imgScale, y: (v2+2)/imgScale, width: (w-2)/imgScale, height: (h-2)/imgScale)

return (newRect, true)

}

The screenshot is taken like this:

extension UIView {
func screenShotViaRenderImage()-> UIImage
{
    // Get an image of the view. Apple discourages using UIGraphicsBeginImageContext
    // starting with iOS 10: it is sRGB-only and 32-bit-only.
    // Use UIGraphicsImageRenderer instead.

    if #available(iOS 10.0, *) {
        let rendererFormat = UIGraphicsImageRendererFormat.default()
        rendererFormat.opaque = false
        let renderer = UIGraphicsImageRenderer(bounds: bounds, format: rendererFormat)
        let screenShot = renderer.image {
            rendererContext in
            layer.render(in: rendererContext.cgContext)
        }
        return screenShot
    }
    else{
        UIGraphicsBeginImageContextWithOptions(bounds.size, false, 0.0)
        self.layer.render(in: UIGraphicsGetCurrentContext()!)
        let screenShot = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return UIImage(cgImage: screenShot!.cgImage!)
    }
}

}

Then it is called like this:

override func viewWillLayoutSubviews() {
    super.viewWillLayoutSubviews()

    // Different images result depending on whether we call this from viewDidAppear or viewWillAppear
    // This sizes the view to the black rectangle in the phone image
    // the image loses detail when rendered, so the black frame may not be totally black

    let iPhoneScreenFrame = self.getiPhoneScreenRect()
    if (iPhoneScreenFrame.bOk)
    {
        self.someScreen.frame = iPhoneScreenFrame.rect
    }
}
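
One caveat: viewWillLayoutSubviews can run several times per appearance, so if the pixel scan is costly it may be worth guarding; a minimal sketch, where didPositionScreen is an assumed property, not from the answer:

private var didPositionScreen = false   // hypothetical guard flag

override func viewWillLayoutSubviews() {
    super.viewWillLayoutSubviews()
    guard !didPositionScreen else { return }

    let iPhoneScreenFrame = self.getiPhoneScreenRect()
    if iPhoneScreenFrame.bOk {
        self.someScreen.frame = iPhoneScreenFrame.rect
        didPositionScreen = true
    }
}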