Image/text overlay on video in Swift

Asked: 2015-09-08 06:07:30

Tags: ios swift avfoundation

I am working on a watermark effect for a video in Swift, using an image overlay. I have been following an existing answer, but somehow I have not been able to get it to work.

Here is my image/text overlay code:

    [the original Swift snippet was lost when this page was captured; Answer 4 below contains an updated Swift 4 version of it]

With this code I do not get the overlay, and I cannot tell what I am doing wrong.

Questions:

  • Is anything missing from this code, or is something in it wrong?
  • Does this approach work only for recorded videos, or for any video, including ones picked from the gallery?

5 Answers:

Answer 0 (score: 7)

The code provided by @El Captain would work. It is only missing:

    assetExport.videoComposition = layercomposition

You can add this line right after instantiating the AVAssetExportSession.

NOTE: the code as originally provided only exports the video track, not the audio track. If you need the audio track as well, you can add something similar after configuring the compositionvideoTrack.
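The "something similar" for the audio track could look like the sketch below. It uses the same Swift 2-era AVFoundation API as the rest of this thread, and it assumes `vidAsset` and `composition` are the asker's source asset and mutable composition:

```swift
// Sketch: pull the audio track (if any) out of the source asset and
// add it to the same AVMutableComposition that holds the video track.
let audioTracks = vidAsset.tracksWithMediaType(AVMediaTypeAudio)
if let audioTrack = audioTracks.first {
    let compositionAudioTrack = composition.addMutableTrackWithMediaType(
        AVMediaTypeAudio,
        preferredTrackID: CMPersistentTrackID(kCMPersistentTrackID_Invalid))
    do {
        // Cover the whole duration of the source asset
        try compositionAudioTrack.insertTimeRange(
            CMTimeRangeMake(kCMTimeZero, vidAsset.duration),
            ofTrack: audioTrack,
            atTime: kCMTimeZero)
    } catch {
        print(error)
    }
}
```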

Answer 1 (score: 0)

From what I can see in your code, you never add parentlayer to the screen.

You create a CALayer() and add videolayer, imglayer, and titleLayer as sublayers of that new layer, but you never add the parent layer itself to the screen:

    yourView.layer.addSublayer(parentlayer)

Hope this helps.

Answer 2 (score: 0)

@Rey Hernandez this helped me a lot! In case anyone wants further clarification on how to add an audio asset to the video, here is the code to combine them:

    // Video track from the source asset
    let vtrack = vidAsset.tracksWithMediaType(AVMediaTypeVideo)
    let videoTrack: AVAssetTrack = vtrack[0]
    let vid_timerange = CMTimeRangeMake(kCMTimeZero, vidAsset.duration)

    // Audio track from the source asset
    let atrack = vidAsset.tracksWithMediaType(AVMediaTypeAudio)
    let audioTrack: AVAssetTrack = atrack[0]
    let audio_timerange = CMTimeRangeMake(kCMTimeZero, vidAsset.duration)

    do {
        let compositionvideoTrack: AVMutableCompositionTrack = composition.addMutableTrackWithMediaType(AVMediaTypeVideo, preferredTrackID: CMPersistentTrackID())
        try compositionvideoTrack.insertTimeRange(vid_timerange, ofTrack: videoTrack, atTime: kCMTimeZero)

        // Preserve the video orientation
        compositionvideoTrack.preferredTransform = videoTrack.preferredTransform

        let compositionAudioTrack: AVMutableCompositionTrack = composition.addMutableTrackWithMediaType(AVMediaTypeAudio, preferredTrackID: CMPersistentTrackID())
        try compositionAudioTrack.insertTimeRange(audio_timerange, ofTrack: audioTrack, atTime: kCMTimeZero)
    } catch {
        print(error)
    }

Answer 3 (score: 0)

As an addition, here is a function that creates a CATextLayer from a supplied UITextView by copying the text view's rotation, scale, and font. Build one layer per UITextView in your array and add them all to the container layer you pass to the AVVideoCompositionCoreAnimationTool. It uses a CATextLayer subclass, CACenteredTextLayer, to keep the text vertically centered:

private static func createTextLayer(totalSize: CGSize,
                                    textView: UITextView) -> CATextLayer {
    let textLayer: CACenteredTextLayer = CACenteredTextLayer()
    textLayer.backgroundColor = UIColor.clear.cgColor
    textLayer.foregroundColor = textView.textColor?.cgColor
    textLayer.masksToBounds = false
    textLayer.isWrapped = true

    let scale: CGFloat = UIScreen.main.scale

    // Upscale the font by the screen scale so the rendered text stays sharp
    if let font: UIFont = textView.font {
        let upscaledFont: UIFont = font.withSize(font.pointSize * scale)
        let attributedString = NSAttributedString(
            string: textView.text,
            attributes: [NSAttributedString.Key.font: upscaledFont,
                         NSAttributedString.Key.foregroundColor: textView.textColor ?? UIColor.white])
        textLayer.string = attributedString
    }

    // Map the UITextView alignment onto the CATextLayer alignment modes
    let alignment: CATextLayerAlignmentMode
    switch textView.textAlignment {
    case NSTextAlignment.left:
        alignment = CATextLayerAlignmentMode.left
    case NSTextAlignment.center:
        alignment = CATextLayerAlignmentMode.center
    default:
        alignment = CATextLayerAlignmentMode.right
    }
    textLayer.alignmentMode = alignment

    let originalFrame: CGRect = textView.frame

    // Also take the screen scale into consideration for the frame
    let targetSize: CGSize = CGSize(width: originalFrame.width * scale,
                                    height: originalFrame.height * scale)

    // CALayer positioning is inverted on the Y-axis, so compensate for that
    let origin: CGPoint = CGPoint(x: originalFrame.origin.x * scale,
                                  y: (totalSize.height - (originalFrame.origin.y * scale)) - targetSize.height)

    textLayer.frame = CGRect(x: origin.x,
                             y: origin.y,
                             width: targetSize.width,
                             height: targetSize.height)

    // Rotate and scale around the layer's center
    textLayer.anchorPoint = CGPoint(x: 0.5,
                                    y: 0.5)

    // xScale, yScale and radiansFor3DTransform are helper extensions on
    // CGAffineTransform (not shown in this answer); scale x/y only, leaving z at 1
    var newTransform: CATransform3D = CATransform3DMakeScale(textView.transform.xScale,
                                                             textView.transform.yScale,
                                                             1)

    // Apply the text view's rotation around the z-axis
    newTransform = CATransform3DRotate(newTransform,
                                       textView.transform.radiansFor3DTransform,
                                       0,
                                       0,
                                       1)
    textLayer.transform = newTransform

    return textLayer
}
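The CACenteredTextLayer subclass itself did not survive in this copy of the answer. A minimal sketch of a vertically centering CATextLayer subclass, written for the attributed string set above, could look like this (a reconstruction, not the answerer's original code):

```swift
import UIKit

// Hypothetical reconstruction: vertically centers its string by shifting
// the drawing context before letting CATextLayer draw as usual.
final class CACenteredTextLayer: CATextLayer {
    override func draw(in ctx: CGContext) {
        guard let attributed = string as? NSAttributedString else {
            super.draw(in: ctx)
            return
        }
        // Measure the wrapped text, then offset the context so the
        // text block sits in the vertical middle of the layer bounds
        let textRect = attributed.boundingRect(
            with: CGSize(width: bounds.width, height: .greatestFiniteMagnitude),
            options: [.usesLineFragmentOrigin, .usesFontLeading],
            context: nil)
        let yOffset = (bounds.height - textRect.height) / 2
        ctx.saveGState()
        ctx.translateBy(x: 0, y: yOffset)
        super.draw(in: ctx)
        ctx.restoreGState()
    }
}
```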

Answer 4 (score: 0)

Here is an updated version that runs in Swift 4:

import UIKit
import AVFoundation
import AVKit
import Photos

class ViewController: UIViewController {

var myurl: URL?

override func viewDidLoad() {
    super.viewDidLoad()
    // Do any additional setup after loading the view, typically from a nib.

}

@IBAction func saveVideoTapper(_ sender: Any) {

    let path = Bundle.main.path(forResource: "sample_video", ofType:"mp4")
    let fileURL = NSURL(fileURLWithPath: path!)

    let composition = AVMutableComposition()
    let vidAsset = AVURLAsset(url: fileURL as URL, options: nil)

    // get video track
    let vtrack =  vidAsset.tracks(withMediaType: AVMediaType.video)
    let videoTrack: AVAssetTrack = vtrack[0]
    let vid_timerange = CMTimeRangeMake(start: CMTime.zero, duration: vidAsset.duration)

    let tr: CMTimeRange = CMTimeRange(start: CMTime.zero, duration: CMTime(seconds: 10.0, preferredTimescale: 600))
    composition.insertEmptyTimeRange(tr)

    let trackID:CMPersistentTrackID = CMPersistentTrackID(kCMPersistentTrackID_Invalid)

    if let compositionvideoTrack: AVMutableCompositionTrack = composition.addMutableTrack(withMediaType: AVMediaType.video, preferredTrackID: trackID) {

        do {
            try compositionvideoTrack.insertTimeRange(vid_timerange, of: videoTrack, at: CMTime.zero)
        } catch {
            print(error)
        }

        compositionvideoTrack.preferredTransform = videoTrack.preferredTransform

    } else {
        print("unable to add video track")
        return
    }


    // Watermark Effect
    let size = videoTrack.naturalSize

    let imglogo = UIImage(named: "image.png")
    let imglayer = CALayer()
    imglayer.contents = imglogo?.cgImage
    imglayer.frame = CGRect(x: 5, y: 5, width: 100, height: 100)
    imglayer.opacity = 0.6

    // create text Layer
    let titleLayer = CATextLayer()
    titleLayer.backgroundColor = UIColor.white.cgColor
    titleLayer.string = "Dummy text"
    titleLayer.font = UIFont(name: "Helvetica", size: 28)
    titleLayer.shadowOpacity = 0.5
    titleLayer.alignmentMode = CATextLayerAlignmentMode.center
    titleLayer.frame = CGRect(x: 0, y: 50, width: size.width, height: size.height / 6)


    let videolayer = CALayer()
    videolayer.frame = CGRect(x: 0, y: 0, width: size.width, height: size.height)

    let parentlayer = CALayer()
    parentlayer.frame = CGRect(x: 0, y: 0, width: size.width, height: size.height)
    parentlayer.addSublayer(videolayer)
    parentlayer.addSublayer(imglayer)
    parentlayer.addSublayer(titleLayer)

    let layercomposition = AVMutableVideoComposition()
    layercomposition.frameDuration = CMTimeMake(value: 1, timescale: 30)
    layercomposition.renderSize = size
    layercomposition.animationTool = AVVideoCompositionCoreAnimationTool(postProcessingAsVideoLayer: videolayer, in: parentlayer)

    // instruction for watermark
    let instruction = AVMutableVideoCompositionInstruction()
    instruction.timeRange = CMTimeRangeMake(start: CMTime.zero, duration: composition.duration)
    let videotrack = composition.tracks(withMediaType: AVMediaType.video)[0] as AVAssetTrack
    let layerinstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: videotrack)
    instruction.layerInstructions = [layerinstruction]
    layercomposition.instructions = [instruction]

    //  create new file to receive data
    let dirPaths = NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true)
    let docsDir = dirPaths[0] as NSString
    let movieFilePath = docsDir.appendingPathComponent("result.mov")
    let movieDestinationUrl = NSURL(fileURLWithPath: movieFilePath)

    // use AVAssetExportSession to export video
    let assetExport = AVAssetExportSession(asset: composition, presetName:AVAssetExportPresetHighestQuality)
    assetExport?.outputFileType = AVFileType.mov
    assetExport?.videoComposition = layercomposition

    // Check exist and remove old file
    FileManager.default.removeItemIfExisted(movieDestinationUrl as URL)

    assetExport?.outputURL = movieDestinationUrl as URL
    assetExport?.exportAsynchronously(completionHandler: {
        switch assetExport!.status {
        case AVAssetExportSession.Status.failed:
            print("failed")
            print(assetExport?.error ?? "unknown error")
        case AVAssetExportSession.Status.cancelled:
            print("cancelled")
            print(assetExport?.error ?? "unknown error")
        default:
            print("Movie complete")

            self.myurl = movieDestinationUrl as URL

            PHPhotoLibrary.shared().performChanges({
                PHAssetChangeRequest.creationRequestForAssetFromVideo(atFileURL: movieDestinationUrl as URL)
            }) { saved, error in
                if saved {
                    print("Saved")
                }
            }

            self.playVideo()

        }
    })

}


func playVideo() {
    let player = AVPlayer(url: myurl!)
    let playerLayer = AVPlayerLayer(player: player)
    playerLayer.frame = self.view.bounds
    self.view.layer.addSublayer(playerLayer)
    player.play()
    print("playing...")
}



}


extension FileManager {
    func removeItemIfExisted(_ url: URL) {
        if fileExists(atPath: url.path) {
            do {
                try removeItem(atPath: url.path)
            } catch {
                print("Failed to delete file")
            }
        }
    }
}