I'm trying to set up custom video compression in Swift, based on an SO post that does it in Obj-C (How can I reduce the file size of a video created with UIImagePickerController?). However, I'm having trouble converting the syntax; in particular, the error above is flagged on the dictionary. The compression function is as follows:
func convertVideoToLowQuailty(withInputURL inputURL: URL, outputURL: URL) {
    //setup video writer
    var videoAsset = AVURLAsset(url: inputURL, options: nil)
    var videoTrack = videoAsset.tracks(withMediaType: AVMediaTypeVideo)[0]
    var videoSize = videoTrack.naturalSize
    var videoWriterCompressionSettings = [
        AVVideoAverageBitRateKey : Int(1250000)
    ]
    var videoWriterSettings : NSDictionary = [
        DictionaryLiteral : (Key: AVVideoCodecKey, Object: AVVideoCodecH264),
        AVVideoCompressionPropertiesKey : videoWriterCompressionSettings,
        AVVideoWidthKey : Int(videoSize.width),
        AVVideoHeightKey : Int(videoSize.height)
    ]
    var videoWriterInput = AVAssetWriterInput(mediaType: AVMediaTypeVideo, outputSettings: videoWriterSettings as! [String : Any?])
    videoWriterInput.expectsMediaDataInRealTime = true
    videoWriterInput.transform = videoTrack.preferredTransform
    var videoWriter = try! AVAssetWriter(outputURL: outputURL, fileType: AVFileTypeMPEG4)
    videoWriter.add(videoWriterInput)
    //setup video reader
    var videoReaderSettings = [ (kCVPixelBufferPixelFormatTypeKey as String) : Int(kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange) ]
    var videoReaderOutput = AVAssetReaderTrackOutput(track: videoTrack, outputSettings: videoReaderSettings)
    var videoReader = try! AVAssetReader(asset: videoAsset)
    videoReader.add(videoReaderOutput)
    //setup audio writer
    var audioWriterInput = AVAssetWriterInput(mediaType: AVMediaTypeAudio, outputSettings: nil)
    audioWriterInput.expectsMediaDataInRealTime = false
    videoWriter.add(audioWriterInput)
    //setup audio reader
    var audioTrack = videoAsset.tracks(withMediaType: AVMediaTypeAudio)[0]
    var audioReaderOutput = AVAssetReaderTrackOutput(track: audioTrack, outputSettings: nil)
    var audioReader = try! AVAssetReader(asset: videoAsset)
    audioReader.add(audioReaderOutput)
    videoWriter.startWriting()
    //start writing from video reader
    videoReader.startReading()
    videoWriter.startSession(atSourceTime: kCMTimeZero)
    var processingQueue = DispatchQueue(label: "processingQueue1")
    videoWriterInput.requestMediaDataWhenReady(on: processingQueue, using: {() -> Void in
        while videoWriterInput.isReadyForMoreMediaData {
            var sampleBuffer: CMSampleBuffer
            if videoReader.status == .reading && (sampleBuffer == videoReaderOutput.copyNextSampleBuffer()!) {
                videoWriterInput.append(sampleBuffer)
            }
            else {
                videoWriterInput.markAsFinished()
                if videoReader.status == .completed {
                    //start writing from audio reader
                    audioReader.startReading()
                    videoWriter.startSession(atSourceTime: kCMTimeZero)
                    var processingQueue = DispatchQueue(label: "processingQueue2")
                    audioWriterInput.requestMediaDataWhenReady(on: processingQueue, using: {() -> Void in
                        while audioWriterInput.isReadyForMoreMediaData {
                            var sampleBuffer: CMSampleBuffer
                            if audioReader.status == .reading && (sampleBuffer == (audioReaderOutput.copyNextSampleBuffer()!)) {
                                audioWriterInput.append(sampleBuffer)
                            }
                            else {
                                audioWriterInput.markAsFinished()
                                if audioReader.status == .completed {
                                    videoWriter.finishWriting(completionHandler: {() -> Void in
                                        self.sendMovieFile(at: outputURL)
                                    })
                                }
                            }
                        }
                    })
                }
            }
        }
    })
}
Answer 0 (score: 0)
I don't see why you need this line:

DictionaryLiteral : (Key: AVVideoCodecKey, Object: AVVideoCodecH264),

Looking at the linked thread, you can write it like this:
var videoWriterCompressionSettings: [String: AnyObject] = [
    AVVideoAverageBitRateKey : 1250000 as NSNumber
]
var videoWriterSettings : [String: AnyObject] = [
    AVVideoCodecKey: AVVideoCodecH264 as NSString,
    AVVideoCompressionPropertiesKey : videoWriterCompressionSettings as NSDictionary,
    AVVideoWidthKey : videoSize.width as NSNumber,
    AVVideoHeightKey : videoSize.height as NSNumber
]
var videoWriterInput = AVAssetWriterInput(mediaType: AVMediaTypeVideo, outputSettings: videoWriterSettings)
(Some people prefer [String: Any] over [String: AnyObject], saying it is more Swifty in Swift 3. With [String: Any] you can leave out some of the casts, but you may also mistakenly include something whose error only shows up at runtime.)
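For comparison, here is a rough sketch (mine, not from the answer) of the same settings written with [String: Any], reusing the bit rate and the videoSize value from the question. The explicit bridging casts disappear, but a wrongly typed value would only be caught at runtime:

// Writer settings as [String: Any] (sketch). Int and String values bridge
// to NSNumber/NSString automatically, so no explicit casts are needed,
// but the compiler can no longer catch a value of the wrong type.
let videoWriterCompressionSettings: [String: Any] = [
    AVVideoAverageBitRateKey: 1250000
]
let videoWriterSettings: [String: Any] = [
    AVVideoCodecKey: AVVideoCodecH264,
    AVVideoCompressionPropertiesKey: videoWriterCompressionSettings,
    AVVideoWidthKey: Int(videoSize.width),
    AVVideoHeightKey: Int(videoSize.height)
]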
Another very bad part of the code is as! [String : Any?]. You need to pass a [String: Any]? to AVAssetWriterInput.init(mediaType:outputSettings:), not a [String : Any?].
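To illustrate that point, a minimal sketch assuming videoWriterSettings is the [String: Any] dictionary from above: the outputSettings parameter is already an optional dictionary, so it can be passed directly (or as nil for pass-through) with no force cast.

// outputSettings is declared as [String : Any]?, so a plain [String: Any]
// dictionary (or nil to pass samples through unmodified) works directly.
let videoWriterInput = AVAssetWriterInput(mediaType: AVMediaTypeVideo,
                                          outputSettings: videoWriterSettings)
videoWriterInput.expectsMediaDataInRealTime = true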
(There may be some other bad parts as well, which I haven't checked...)