[Edit: I was able to find a workaround — see below.]
I'm trying to stream several remote MP4 clips from S3 and play them in sequence as one continuous video (so that scrubbing works both within and across clips), without any stuttering and without explicitly downloading them to the device first. However, I'm finding that the clips buffer very slowly (even on a fast connection), and I haven't been able to find an adequate way to fix that.
I've been trying to use AVPlayer, since AVPlayer with an AVMutableComposition plays the supplied video tracks as one continuous track (unlike AVQueuePlayer, which I gather plays each video separately and therefore doesn't support continuous scrubbing across clips).
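(For contrast, the AVQueuePlayer route would look something like this sketch — each clip stays a separate AVPlayerItem, which is why seeking across clip boundaries isn't supported; clipAssets here stands for the array of AVAssets built below:)

NSMutableArray *items = [NSMutableArray array];
for (AVAsset *asset in clipAssets) {
    [items addObject:[AVPlayerItem playerItemWithAsset:asset]];
}
AVQueuePlayer *queuePlayer = [AVQueuePlayer queuePlayerWithItems:items];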
When I load one of the assets directly into an AVPlayerItem and play it (with no AVMutableComposition), it buffers quickly. But with the AVMutableComposition, the video starts stuttering badly on the second clip (my test case has 6 clips of roughly 6 seconds each), while the audio keeps playing fine. After it has played through once, rewinding to the beginning plays back perfectly smoothly, so I assume the problem lies in the buffering.
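For reference, the fast-buffering single-clip case is just this (a minimal sketch, assuming url points at one of the S3 clips):

AVURLAsset *asset = [AVURLAsset URLAssetWithURL:url options:nil];
AVPlayerItem *item = [AVPlayerItem playerItemWithAsset:asset];
AVPlayer *player = [AVPlayer playerWithPlayerItem:item];
[player play];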
My current attempt at fixing this feels convoluted, given that this seems like a fairly basic use case for AVPlayer — I'm hoping there's a simpler solution that fixes all of this. Somehow I doubt the buffering player I use below is really necessary, but I've run out of ideas.
Here's the main code for setting up the AVMutableComposition:
// Build an AVAsset for each of the source URIs
- (void)prepareAssetsForSources:(NSArray *)sources
{
    NSMutableArray *assets = [[NSMutableArray alloc] init];  // the assets to be used in the AVMutableComposition
    NSMutableArray *offsets = [[NSMutableArray alloc] init]; // for tracking buffering progress
    CMTime currentOffset = kCMTimeZero;
    for (NSDictionary *source in sources) {
        bool isNetwork = [RCTConvert BOOL:[source objectForKey:@"isNetwork"]];
        bool isAsset = [RCTConvert BOOL:[source objectForKey:@"isAsset"]];
        NSString *uri = [source objectForKey:@"uri"];
        NSString *type = [source objectForKey:@"type"];
        NSURL *url = isNetwork ?
            [NSURL URLWithString:uri] :
            [[NSURL alloc] initFileURLWithPath:[[NSBundle mainBundle] pathForResource:uri ofType:type]];
        AVURLAsset *asset = [AVURLAsset URLAssetWithURL:url options:nil];
        // Note: reading asset.duration blocks until the (possibly remote) metadata has loaded
        currentOffset = CMTimeAdd(currentOffset, asset.duration);
        [assets addObject:asset];
        [offsets addObject:[NSNumber numberWithFloat:CMTimeGetSeconds(currentOffset)]]; // cumulative end time of each clip
    }
    _clipAssets = assets;
    _clipEndOffsets = offsets;
}
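One caveat with the method above: reading asset.duration synchronously stalls the calling thread until each remote asset's metadata has arrived. A sketch of loading the keys asynchronously instead, via AVFoundation's loadValuesAsynchronouslyForKeys: (the surrounding offset bookkeeping is omitted):

AVURLAsset *asset = [AVURLAsset URLAssetWithURL:url options:nil];
[asset loadValuesAsynchronouslyForKeys:@[@"duration", @"tracks"] completionHandler:^{
    NSError *error = nil;
    if ([asset statusOfValueForKey:@"duration" error:&error] == AVKeyValueStatusLoaded) {
        CMTime duration = asset.duration; // safe to read without blocking now
        // ... accumulate currentOffset and build the composition here
    }
}];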
// Called with _clipAssets
- (AVPlayerItem *)playerItemForAssets:(NSMutableArray *)assets
{
    AVMutableComposition *composition = [AVMutableComposition composition];
    for (AVAsset *asset in assets) {
        CMTimeRange editRange = CMTimeRangeMake(CMTimeMake(0, 600), asset.duration);
        NSError *editError;
        [composition insertTimeRange:editRange
                             ofAsset:asset
                              atTime:composition.duration
                               error:&editError];
    }
    AVPlayerItem *playerItem = [AVPlayerItem playerItemWithAsset:composition];
    return playerItem; // this is used to initialize the main player
}
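For completeness, the returned item drives the main player roughly like this (a sketch — _player and _timeObserver are assumed ivars, and updateBufferingProgress is the 250 ms callback shown below):

AVPlayerItem *playerItem = [self playerItemForAssets:_clipAssets];
_player = [AVPlayer playerWithPlayerItem:playerItem];
__weak typeof(self) weakSelf = self;
_timeObserver = [_player addPeriodicTimeObserverForInterval:CMTimeMake(1, 4) // every 250 ms
                                                      queue:dispatch_get_main_queue()
                                                 usingBlock:^(CMTime time) {
    [weakSelf updateBufferingProgress];
}];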
My initial thought was: since it buffers quickly with a vanilla AVPlayerItem, why not maintain a separate buffering player that loads each asset in turn (without an AVMutableComposition) to buffer the assets for the main player?
- (void)startBufferingClips
{
    _bufferingPlayerItem = [AVPlayerItem playerItemWithAsset:_clipAssets[0]
                             automaticallyLoadedAssetKeys:@[@"tracks"]];
    _bufferingPlayer = [AVPlayer playerWithPlayerItem:_bufferingPlayerItem];
    _currentlyBufferingIndex = 0;
}
// called every 250 ms via an addPeriodicTimeObserverForInterval on the main player
- (void)updateBufferingProgress
{
    // If the playable (loaded) range is within 100 milliseconds of the clip
    // currently being buffered, load the next clip into the buffering player.
    float playableDuration = [[self calculateBufferedDuration] floatValue];
    CMTime totalDurationTime = [self playerItemDuration:_bufferingPlayer];
    Float64 totalDurationSeconds = CMTimeGetSeconds(totalDurationTime);
    bool bufferingComplete = totalDurationSeconds - playableDuration < 0.1;
    float bufferedSeconds = [self bufferedSeconds:playableDuration];
    float playerTimeSeconds = CMTimeGetSeconds([_player currentTime]);
    __block NSUInteger playingClipIndex = 0;
    // find the index of _player's currently playing clip
    [_clipEndOffsets enumerateObjectsUsingBlock:^(id offset, NSUInteger idx, BOOL *stop) {
        if (playerTimeSeconds < [offset floatValue]) {
            playingClipIndex = idx;
            *stop = YES;
        }
    }];
    // TODO: if bufferedSeconds - playerTimeSeconds <= 0, pause the main player
    if (bufferingComplete && _currentlyBufferingIndex < [_clipAssets count] - 1) {
        // We're done buffering this clip; load the buffering player with the next asset
        _currentlyBufferingIndex += 1;
        _bufferingPlayerItem = [AVPlayerItem playerItemWithAsset:_clipAssets[_currentlyBufferingIndex]
                                 automaticallyLoadedAssetKeys:@[@"tracks"]];
        _bufferingPlayer = [AVPlayer playerWithPlayerItem:_bufferingPlayerItem];
    }
}
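The TODO above could be filled in along these lines, right where the comment sits in updateBufferingProgress (a sketch; _pausedForBuffering is a hypothetical bookkeeping flag and the 0.5 s resume margin is arbitrary):

static const float kResumeMargin = 0.5;
if (!_pausedForBuffering && bufferedSeconds - playerTimeSeconds <= 0) {
    [_player pause];   // the playhead caught up with the buffered range
    _pausedForBuffering = YES;
} else if (_pausedForBuffering && bufferedSeconds - playerTimeSeconds > kResumeMargin) {
    [_player play];    // enough headroom has been rebuilt; resume
    _pausedForBuffering = NO;
}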
- (float)bufferedSeconds:(float)playableDuration {
    __block float seconds = 0.0; // total duration of clips already buffered
    if (_currentlyBufferingIndex > 0) {
        [_clipEndOffsets enumerateObjectsUsingBlock:^(id offset, NSUInteger idx, BOOL *stop) {
            if (idx + 1 >= _currentlyBufferingIndex) {
                seconds = [offset floatValue];
                *stop = YES;
            }
        }];
    }
    return seconds + playableDuration;
}
- (NSNumber *)calculateBufferedDuration {
    AVPlayerItem *video = _bufferingPlayer.currentItem;
    if (video.status == AVPlayerItemStatusReadyToPlay) {
        __block float longestPlayableRangeSeconds = 0.0; // must be initialized before we compare against it
        [video.loadedTimeRanges enumerateObjectsUsingBlock:^(id obj, NSUInteger idx, BOOL *stop) {
            CMTimeRange timeRange = [obj CMTimeRangeValue];
            float seconds = CMTimeGetSeconds(CMTimeRangeGetEnd(timeRange));
            if (seconds > 0.1 && seconds > longestPlayableRangeSeconds) {
                longestPlayableRangeSeconds = seconds;
            }
        }];
        if (longestPlayableRangeSeconds > 0) {
            return [NSNumber numberWithFloat:longestPlayableRangeSeconds];
        }
    }
    return [NSNumber numberWithInteger:0];
}
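(The playerItemDuration: helper called in updateBufferingProgress isn't shown above; a minimal version might look like this, returning kCMTimeInvalid until the item is ready:)

- (CMTime)playerItemDuration:(AVPlayer *)player
{
    AVPlayerItem *item = player.currentItem;
    if (item && item.status == AVPlayerItemStatusReadyToPlay) {
        return item.duration;
    }
    return kCMTimeInvalid; // duration unknown until the item is ready to play
}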
At first this worked like a charm, but then I switched to another set of test clips and buffering was very slow again (the buffering player helped, but not enough). It seems the loadedTimeRanges of the assets loaded into the buffering player didn't match the loadedTimeRanges of the same assets inside the AVMutableComposition: even after the loadedTimeRanges of each item loaded into the buffering player indicated that the entire asset had been buffered, the main player's video kept stuttering (while the audio played seamlessly through to the end). Again, once the main player had made it through all the clips once, playback after rewinding was seamless.
I hope the answer to this, whatever it turns out to be, can serve as a starting point for other iOS developers trying to implement this basic use case. Thanks!
Edit: Since posting this question I've built the following workaround. Hopefully it helps anyone who runs into the same trouble.
What I ended up doing was maintaining two buffering players (two AVPlayers) that start buffering the first two clips, each then moving on to the lowest-indexed unbuffered clip once its loadedTimeRanges indicate that its current clip has finished buffering. I pause/unpause playback based on which clips have been buffered so far and on the loadedTimeRanges of the buffering players, plus a small margin. This takes a few bookkeeping variables, but isn't too complicated.
This is how the buffering players are initialized (I'm omitting the bookkeeping logic here):
- (void)startBufferingClips
{
    _bufferingPlayerItemA = [AVPlayerItem playerItemWithAsset:_clipAssets[0]
                              automaticallyLoadedAssetKeys:@[@"tracks"]];
    _bufferingPlayerA = [AVPlayer playerWithPlayerItem:_bufferingPlayerItemA];
    _currentlyBufferingIndexA = [NSNumber numberWithInt:0];
    if ([_clipAssets count] > 1) {
        _bufferingPlayerItemB = [AVPlayerItem playerItemWithAsset:_clipAssets[1]
                                  automaticallyLoadedAssetKeys:@[@"tracks"]];
        _bufferingPlayerB = [AVPlayer playerWithPlayerItem:_bufferingPlayerItemB];
        _currentlyBufferingIndexB = [NSNumber numberWithInt:1];
        _nextIndexToBuffer = [NSNumber numberWithInt:2];
    } else {
        _nextIndexToBuffer = [NSNumber numberWithInt:1];
    }
}
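The omitted bookkeeping boils down to the following idea, sketched here for player A only (player B gets the mirror image; bufferingCompleteForPlayer: is a hypothetical helper that compares the player's loadedTimeRanges against its item's duration, along the lines of calculateBufferedDuration above):

- (void)advanceBufferingPlayerAIfNeeded
{
    int next = [_nextIndexToBuffer intValue];
    if (next < (int)[_clipAssets count] && [self bufferingCompleteForPlayer:_bufferingPlayerA]) {
        // Hand player A the lowest-indexed clip that hasn't started buffering yet
        _bufferingPlayerItemA = [AVPlayerItem playerItemWithAsset:_clipAssets[next]
                                  automaticallyLoadedAssetKeys:@[@"tracks"]];
        _bufferingPlayerA = [AVPlayer playerWithPlayerItem:_bufferingPlayerItemA];
        _currentlyBufferingIndexA = [NSNumber numberWithInt:next];
        _nextIndexToBuffer = [NSNumber numberWithInt:next + 1];
    }
}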
In addition, I needed to make sure the video and audio tracks weren't merged as they were added to the AVMutableComposition, since this apparently interferes with buffering (perhaps they didn't register as the same video/audio tracks the buffering players were loading, and so never received the new data). Here's the code for building the AVMutableComposition from the AVAssets:
- (AVPlayerItem *)playerItemForAssets:(NSMutableArray *)assets
{
    AVMutableComposition *composition = [AVMutableComposition composition];
    AVMutableCompositionTrack *compVideoTrack = [composition addMutableTrackWithMediaType:AVMediaTypeVideo
                                                                         preferredTrackID:kCMPersistentTrackID_Invalid];
    AVMutableCompositionTrack *compAudioTrack = [composition addMutableTrackWithMediaType:AVMediaTypeAudio
                                                                         preferredTrackID:kCMPersistentTrackID_Invalid];
    CMTime timeOffset = kCMTimeZero;
    for (AVAsset *asset in assets) {
        CMTimeRange editRange = CMTimeRangeMake(CMTimeMake(0, 600), asset.duration);
        NSError *editError;
        NSArray *videoTracks = [asset tracksWithMediaType:AVMediaTypeVideo];
        NSArray *audioTracks = [asset tracksWithMediaType:AVMediaTypeAudio];
        if ([videoTracks count] > 0) {
            AVAssetTrack *videoTrack = [videoTracks objectAtIndex:0];
            [compVideoTrack insertTimeRange:editRange
                                    ofTrack:videoTrack
                                     atTime:timeOffset
                                      error:&editError];
        }
        if ([audioTracks count] > 0) {
            AVAssetTrack *audioTrack = [audioTracks objectAtIndex:0];
            [compAudioTrack insertTimeRange:editRange
                                    ofTrack:audioTrack
                                     atTime:timeOffset
                                      error:&editError];
        }
        if ([videoTracks count] > 0 || [audioTracks count] > 0) {
            timeOffset = CMTimeAdd(timeOffset, asset.duration);
        }
    }
    AVPlayerItem *playerItem = [AVPlayerItem playerItemWithAsset:composition];
    return playerItem;
}
With this approach, buffering on the main player works nicely, at least in my setup.