I have successfully built ffmpeg and iFrameExtractor against iOS 5.1, but when I play a video there is no sound.
// Register all formats and codecs
avcodec_register_all();
av_register_all();
avformat_network_init();

if (avformat_open_input(&pFormatCtx, [@"http://somesite.com/test.mp4" cStringUsingEncoding:NSASCIIStringEncoding], NULL, NULL) != 0) {
    av_log(NULL, AV_LOG_ERROR, "Couldn't open file\n");
    goto initError;
}
Log:
[swscaler @ 0xdd3000] No accelerated colorspace conversion found from yuv420p to rgb24.
2012-10-22 20:42:47.344 iFrameExtractor[356:707] video duration: 5102.840000
2012-10-22 20:42:47.412 iFrameExtractor[356:707] video size: 720 x 576
2012-10-22 20:42:47.454 iFrameExtractor[356:707] Application windows are expected to have a root view
Here is the configure script for ffmpeg 0.11.1:
#!/bin/tcsh -f
rm -rf compiled/*
./configure \
--cc=/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/usr/bin/gcc \
--as='/usr/local/bin/gas-preprocessor.pl /Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/usr/bin/gcc' \
--sysroot=/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS5.1.sdk \
--target-os=darwin \
--arch=arm \
--cpu=cortex-a8 \
--extra-cflags='-arch armv7' \
--extra-ldflags='-arch armv7 -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS5.1.sdk' \
--prefix=compiled/armv7 \
--enable-cross-compile \
--enable-nonfree \
--disable-armv5te \
--disable-swscale-alpha \
--disable-doc \
--disable-ffmpeg \
--disable-ffplay \
--disable-ffprobe \
--disable-ffserver \
--enable-decoder=h264 \
--enable-decoder=svq3 \
--disable-asm \
--disable-bzlib \
--disable-gpl \
--disable-shared \
--enable-static \
--disable-mmx \
--disable-neon \
--disable-decoders \
--disable-muxers \
--disable-demuxers \
--disable-devices \
--disable-parsers \
--disable-encoders \
--enable-protocols \
--disable-filters \
--disable-bsfs \
--disable-postproc \
--disable-debug
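(Note that this configure invocation builds only the h264 and svq3 video decoders; `--disable-decoders` switches everything else off, so no audio decoder is compiled in at all. A build that can also decode a typical audio track would need extra flags along these lines; the decoder and parser names here are assumptions about the stream's audio format, not something taken from the question:)

```shell
./configure \
    # ... all of the flags shown above, plus e.g.: \
    --enable-decoder=aac \
    --enable-decoder=mp3 \
    --enable-parser=aac \
    --enable-parser=mpegaudio
```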
Answer 0 (score: 2)
There is not enough information here.

What URL are you trying to open?

There are messages in the log. I know that with version 0.11 you get some warnings if you don't call network_init, but that won't stop it from working. Some things have changed from previous versions: you used to be able to append ?tcp to the URL to tell ffmpeg to use TCP, but now that has to be done through a dictionary.

If possible, please post the system log and the build log.

Here is an example from one of our apps:
avcodec_register_all();
avdevice_register_all();
av_register_all();
avformat_network_init();

const char *filename = [url UTF8String];
NSLog(@"filename = %@", url);
// err = av_open_input_file(&avfContext, filename, NULL, 0, NULL);
AVDictionary *opts = NULL;
if (usesTcp) {
    av_dict_set(&opts, "rtsp_transport", "tcp", 0);
}
err = avformat_open_input(&avfContext, filename, NULL, &opts);
av_dict_free(&opts);
if (err) {
    NSLog(@"Error: Could not open stream: %d", err);
    return nil;
} else {
    NSLog(@"Opened stream");
}
Answer 1 (score: 2)
Assuming you have code like the following: what are you doing with the audio? You have to use one of the audio APIs to handle it; if you are mostly dealing with known formats, Audio Queues are probably the simplest.

First, during initialization, get the audio stream information from the stream:

// Retrieve stream information
if (av_find_stream_info(pFormatCtx) < 0)
    return; // Couldn't find stream information

// Find the first video and audio streams
videoStream = -1;
audioStream = -1;
for (int i = 0; i < pFormatCtx->nb_streams; i++) {
    if (pFormatCtx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO) {
        videoStream = i;
    }
    if (pFormatCtx->streams[i]->codec->codec_type == AVMEDIA_TYPE_AUDIO) {
        audioStream = i;
        NSLog(@"found audio stream");
    }
}
Then later in your processing loop do something like this.
while (!frameFinished && av_read_frame(pFormatCtx, &packet) >= 0) {
    // Is this a packet from the video stream?
    if (packet.stream_index == videoStream) {
        // Decode the video frame; do something with the video.
    }
    if (packet.stream_index == audioStream) {
        // NSLog(@"audio stream");
        // Do something with the audio packet; here we simply add it to a
        // processing queue to be handled by another thread.
        [audioPacketQueueLock lock];
        audioPacketQueueSize += packet.size;
        [audioPacketQueue addObject:[NSMutableData dataWithBytes:&packet length:sizeof(packet)]];
        [audioPacketQueueLock unlock];
    }
}
To actually play the audio, take a look at some of the Audio Queue examples.