I have created the merge method below (join, combine — I'm not sure which is the right word; I want to make one audio file out of two or more, not playing them one after another but playing every file at the same time). As input I have several audio files in .wav format, and I want a single .wav file as output.
My problem is that it does not work and I don't know why. The error I get is:
Error Domain=AVFoundationErrorDomain Code=-11838 "Operation Stopped" UserInfo={NSLocalizedDescription=Operation Stopped, NSLocalizedFailureReason=The operation is not supported for this media.}
When I change the preset and the output type to .m4a it works, but I need .wav. Since my input files are already .wav, shouldn't exporting to .wav work as well? Thanks for your help.
Answer 0 (score: 1)
If you want to mix or overlay one audio file on top of another, you can use the code below, but it can only produce an .m4a file, not a .wav file. I used .mp3 files as the input files.
For example, in Swift 3:
// Requires `import AVFoundation`. `fileDestinationUrl` and `player` are assumed to be
// properties of the containing class, which also conforms to AVAudioPlayerDelegate.
func playmerge(audio1: NSURL, audio2: NSURL)
{
    let composition = AVMutableComposition()
    // One composition track per input, so both sources start at time zero and overlap.
    let compositionAudioTrack1: AVMutableCompositionTrack = composition.addMutableTrack(withMediaType: AVMediaTypeAudio, preferredTrackID: CMPersistentTrackID())
    let compositionAudioTrack2: AVMutableCompositionTrack = composition.addMutableTrack(withMediaType: AVMediaTypeAudio, preferredTrackID: CMPersistentTrackID())

    let documentDirectoryURL = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask).first! as NSURL
    self.fileDestinationUrl = documentDirectoryURL.appendingPathComponent("resultmerge.m4a")! as URL

    let filemanager = FileManager.default
    // Delete any previous output file; the export fails if the destination already exists.
    if filemanager.fileExists(atPath: self.fileDestinationUrl.path)
    {
        do
        {
            try filemanager.removeItem(at: self.fileDestinationUrl)
            print("The old merged file has been removed.")
        }
        catch let error as NSError
        {
            NSLog("Error: \(error)")
        }
    }
    // Load both source files and grab their first audio track.
    let avAsset1 = AVURLAsset(url: audio1 as URL, options: nil)
    let avAsset2 = AVURLAsset(url: audio2 as URL, options: nil)
    let tracks1 = avAsset1.tracks(withMediaType: AVMediaTypeAudio)
    let tracks2 = avAsset2.tracks(withMediaType: AVMediaTypeAudio)
    let assetTrack1: AVAssetTrack = tracks1[0]
    let assetTrack2: AVAssetTrack = tracks2[0]

    // Insert both tracks at kCMTimeZero so they play simultaneously rather than back to back.
    let timeRange1 = CMTimeRangeMake(kCMTimeZero, assetTrack1.timeRange.duration)
    let timeRange2 = CMTimeRangeMake(kCMTimeZero, assetTrack2.timeRange.duration)
    do
    {
        try compositionAudioTrack1.insertTimeRange(timeRange1, of: assetTrack1, at: kCMTimeZero)
        try compositionAudioTrack2.insertTimeRange(timeRange2, of: assetTrack2, at: kCMTimeZero)
    }
    catch
    {
        print(error)
    }
    // Export the mixed composition. AVAssetExportSession only offers compressed presets
    // such as Apple M4A; it cannot write a .wav file.
    let assetExport = AVAssetExportSession(asset: composition, presetName: AVAssetExportPresetAppleM4A)
    assetExport?.outputFileType = AVFileTypeAppleM4A
    assetExport?.outputURL = fileDestinationUrl
    assetExport?.exportAsynchronously(completionHandler:
    {
        switch assetExport!.status
        {
        case AVAssetExportSessionStatus.failed:
            print("failed \(assetExport?.error)")
        case AVAssetExportSessionStatus.cancelled:
            print("cancelled \(assetExport?.error)")
        case AVAssetExportSessionStatus.unknown:
            print("unknown \(assetExport?.error)")
        case AVAssetExportSessionStatus.waiting:
            print("waiting \(assetExport?.error)")
        case AVAssetExportSessionStatus.exporting:
            print("exporting \(assetExport?.error)")
        default:
            print("complete")
        }

        // Play the merged file once the export has finished.
        do
        {
            self.player = try AVAudioPlayer(contentsOf: self.fileDestinationUrl)
            self.player?.numberOfLoops = 0
            self.player?.prepareToPlay()
            self.player?.volume = 1.0
            self.player?.play()
            self.player?.delegate = self
        }
        catch let error as NSError
        {
            print(error)
        }
    })
}
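For reference, a hypothetical call site could look like the snippet below; the bundled file names are placeholders and are not part of the original answer.

if let path1 = Bundle.main.path(forResource: "first", ofType: "mp3"),
   let path2 = Bundle.main.path(forResource: "second", ofType: "mp3")
{
    // Hypothetical usage: merge the two bundled files and play the result.
    playmerge(audio1: NSURL(fileURLWithPath: path1), audio2: NSURL(fileURLWithPath: path2))
}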
Answer 1 (score: 0)
See this question; it appears this has been an outstanding bug since iOS 7. Unfortunately, the DTS advice to file a bug report still seems to apply.
You could try exporting with AVAssetWriter instead, along the lines of the code in: Converting CAF to WAV.
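To illustrate the AVAssetWriter route, here is a minimal sketch and not a definitive implementation: the helper name exportWAV, the 16-bit/44.1 kHz PCM settings, and the completion-handler shape are my assumptions, not taken from the linked answer. It reads the mixed composition with AVAssetReader and writes Linear PCM into a .wav container, which is what AVAssetExportSession cannot do.

import AVFoundation

// Sketch (assumed helper, Swift 3 style): export any AVAsset, e.g. the AVMutableComposition
// built above, as a 16-bit Linear PCM .wav file.
func exportWAV(from asset: AVAsset, to outputURL: URL, completion: @escaping (Bool) -> Void)
{
    do
    {
        let reader = try AVAssetReader(asset: asset)
        let writer = try AVAssetWriter(outputURL: outputURL, fileType: AVFileTypeWAVE)

        // Mix every audio track of the asset down to a single uncompressed PCM stream.
        let readerOutput = AVAssetReaderAudioMixOutput(audioTracks: asset.tracks(withMediaType: AVMediaTypeAudio), audioSettings: nil)
        reader.add(readerOutput)

        // 16-bit little-endian Linear PCM is the usual payload of a .wav file (assumed settings).
        let pcmSettings: [String: Any] = [
            AVFormatIDKey: kAudioFormatLinearPCM,
            AVSampleRateKey: 44100,
            AVNumberOfChannelsKey: 2,
            AVLinearPCMBitDepthKey: 16,
            AVLinearPCMIsFloatKey: false,
            AVLinearPCMIsBigEndianKey: false,
            AVLinearPCMIsNonInterleaved: false
        ]
        let writerInput = AVAssetWriterInput(mediaType: AVMediaTypeAudio, outputSettings: pcmSettings)
        writer.add(writerInput)

        reader.startReading()
        writer.startWriting()
        writer.startSession(atSourceTime: kCMTimeZero)

        // Pull sample buffers from the reader and hand them to the writer until the asset is drained.
        let queue = DispatchQueue(label: "wav.export.queue")
        writerInput.requestMediaDataWhenReady(on: queue)
        {
            while writerInput.isReadyForMoreMediaData
            {
                if let buffer = readerOutput.copyNextSampleBuffer()
                {
                    if !writerInput.append(buffer)
                    {
                        print("Failed to append buffer: \(String(describing: writer.error))")
                    }
                }
                else
                {
                    writerInput.markAsFinished()
                    writer.finishWriting(completionHandler: {
                        completion(writer.status == .completed)
                    })
                    break
                }
            }
        }
    }
    catch
    {
        print("WAV export failed: \(error)")
        completion(false)
    }
}

You would call this with the composition built in the answer above and a destination URL ending in .wav, instead of creating an AVAssetExportSession.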