Here is my problem:
I have implemented a server-side application with Red5 that sends an H.264-encoded live stream; on the client side, the stream is received as a byte[].
To decode it on the Android client, I have been following the JavaCV-FFmpeg library. The decoding code is as follows:
public Frame decodeVideo(byte[] data, long timestamp) {
    frame.image = null;
    frame.samples = null;
    avcodec.av_init_packet(pkt);
    BytePointer video_data = new BytePointer(data);
    avcodec.AVCodec codec = avcodec.avcodec_find_decoder(codec_id);
    video_c = null;
    video_c = avcodec.avcodec_alloc_context3(codec);
    video_c.width(320);
    video_c.height(240);
    video_c.pix_fmt(0);
    video_c.flags2(video_c.flags2() | avcodec.CODEC_FLAG2_CHUNKS);
    avcodec.avcodec_open2(video_c, codec, null);
    picture = avcodec.avcodec_alloc_frame();
    pkt.data(video_data);
    pkt.size(data.length);
    int len = avcodec.avcodec_decode_video2(video_c, picture, got_frame, pkt);
    if ((len >= 0) && (got_frame[0] != 0)) {
        // ... process the decoded frame into an IplImage (JavaCV)
        // and render it with an Android ImageView
    }
}
The data received from the server is as follows.
A few frames have the following pattern:
17 01 00 00 00 00 00 00 02 09 10 00 00 00 0F 06 00 01 C0 01 07 09 08 04 9A 00 00 03 00 80 00 00 16 EF 65 88 80 07 00 05 6C 98 90 00 ...
Many frames have the following pattern:
27 01 00 00 00 00 00 00 02 09 00 00 00 00 0C 06 01 07 09 08 05 9A 00 00 03 00 80 00 00 0D 77 41 9A 02 04 15 B5 06 20 E3 11 E2 3C 46 ....
With the H.264 codec for the decoder, the decoder output length is > 0 but got_frames is always 0. With the MPEG1 codec, the output length is > 0 and got_frames > 0, but the output image is green or distorted.
However, following JavaCV's FFmpegFrameGrabber code, I can decode a local file (H.264-encoded) with code similar to the above.
I would like to know what details I am missing: what header-related manipulation of the data, or what codec setup, does the decoder need?
Any suggestions or help are appreciated. Thanks in advance.
Answer 0 (score: 6)
At last... finally got it working after a lot of R&D.
What I was missing was analyzing the video frame structure. The video consists of "I" and "P" frames: the "I" frame is an information frame that carries the decoder configuration for the subsequent frames, while the "P" frames are picture frames that hold the actual video data.
So the "P" frames have to be decoded with respect to the information carried in the "I" frame.
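This structure is visible in the byte patterns shown in the question: the first byte (17 = keyframe/AVC, 27 = inter frame/AVC) and the second byte (0 = decoder configuration, 1 = actual frame data) identify each packet. Below is a minimal standalone sketch of that classification; the class `VideoTagInspector` is a hypothetical helper, not part of the answer's code:

```java
public class VideoTagInspector {
    // Classifies a packet from its first two bytes, matching the
    // dumps in the question (17 01 ... and 27 01 ...).
    static String describe(byte[] data) {
        int frameType = (data[0] >> 4) & 0x0F; // 1 = keyframe, 2 = inter frame
        int codecId = data[0] & 0x0F;          // 7 = AVC (H.264)
        int avcPacketType = data[1];           // 0 = sequence header, 1 = NALU
        if (codecId != 7) return "not AVC";
        if (avcPacketType == 0) return "AVC sequence header (configure decoder)";
        return (frameType == 1 ? "keyframe" : "inter frame") + " NALU (decode)";
    }

    public static void main(String[] args) {
        System.out.println(describe(new byte[] { 0x17, 0x01 })); // keyframe NALU (decode)
        System.out.println(describe(new byte[] { 0x27, 0x01 })); // inter frame NALU (decode)
        System.out.println(describe(new byte[] { 0x17, 0x00 })); // AVC sequence header (configure decoder)
    }
}
```

The configuration packet (second byte 0) does not appear in the question's dumps because it is sent only once, at stream start; the decoding code below detects it with `data[1] == 0`.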
So the final code is as follows:
public IplImage decodeFromVideo(byte[] data, long timeStamp) {
    avcodec.av_init_packet(receivedVideoPacket); // empty AVPacket
    /*
     * Determine the packet type from byte 1 of the video tag:
     * 0 = sequence header (decoder configuration), 1 = actual frame data.
     */
    byte frameFlag = data[1];
    // Skip the 5-byte tag header preceding the payload.
    byte[] subData = Arrays.copyOfRange(data, 5, data.length);
    BytePointer videoData = new BytePointer(subData);
    if (frameFlag == 0) {
        avcodec.AVCodec codec = avcodec
                .avcodec_find_decoder(avcodec.AV_CODEC_ID_H264);
        if (codec != null) {
            videoCodecContext = null;
            videoCodecContext = avcodec.avcodec_alloc_context3(codec);
            videoCodecContext.width(320);
            videoCodecContext.height(240);
            videoCodecContext.pix_fmt(avutil.AV_PIX_FMT_YUV420P);
            videoCodecContext.codec_type(avutil.AVMEDIA_TYPE_VIDEO);
            // The configuration payload becomes the codec's extradata.
            videoCodecContext.extradata(videoData);
            videoCodecContext.extradata_size(videoData.capacity());
            videoCodecContext.flags2(videoCodecContext.flags2()
                    | avcodec.CODEC_FLAG2_CHUNKS);
            avcodec.avcodec_open2(videoCodecContext, codec,
                    (PointerPointer) null);
            if ((videoCodecContext.time_base().num() > 1000)
                    && (videoCodecContext.time_base().den() == 1)) {
                videoCodecContext.time_base().den(1000);
            }
        } else {
            Log.e("test", "Codec could not be opened");
        }
    }
    if ((decodedPicture = avcodec.avcodec_alloc_frame()) != null) {
        if ((processedPicture = avcodec.avcodec_alloc_frame()) != null) {
            int width = getImageWidth() > 0 ? getImageWidth()
                    : videoCodecContext.width();
            int height = getImageHeight() > 0 ? getImageHeight()
                    : videoCodecContext.height();
            switch (imageMode) {
            case COLOR:
            case GRAY:
                int fmt = 3; // AV_PIX_FMT_BGR24
                int size = avcodec.avpicture_get_size(fmt, width, height);
                processPictureBuffer = new BytePointer(avutil.av_malloc(size));
                avcodec.avpicture_fill(new AVPicture(processedPicture),
                        processPictureBuffer, fmt, width, height);
                returnImageFrame = opencv_core.IplImage.createHeader(320,
                        240, 8, 1);
                break;
            case RAW:
                processPictureBuffer = null;
                returnImageFrame = opencv_core.IplImage.createHeader(320,
                        240, 8, 1);
                break;
            default:
                Log.d("showit", "Unsupported image mode: " + imageMode);
            }
            receivedVideoPacket.data(videoData);
            receivedVideoPacket.size(videoData.capacity());
            receivedVideoPacket.pts(timeStamp);
            videoCodecContext.pix_fmt(avutil.AV_PIX_FMT_YUV420P);
            decodedFrameLength = avcodec.avcodec_decode_video2(videoCodecContext,
                    decodedPicture, isVideoDecoded, receivedVideoPacket);
            if ((decodedFrameLength >= 0) && (isVideoDecoded[0] != 0)) {
                // ... process the image the same way JavaCV does ...
            }
        }
    }
    return returnImageFrame;
}
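The two pieces of header handling above (checking byte 1 and skipping the 5-byte tag header) can be isolated into a small standalone class for testing; `FlvAvcPacket` is a hypothetical name, not part of the answer's code:

```java
import java.util.Arrays;

public class FlvAvcPacket {
    final boolean isSequenceHeader;
    final byte[] payload;

    FlvAvcPacket(byte[] data) {
        // Byte 1 of the tag: 0 = decoder configuration, 1 = frame data.
        isSequenceHeader = data[1] == 0;
        // Skip the 5-byte tag header, as the answer's
        // Arrays.copyOfRange(data, 5, data.length) does.
        payload = Arrays.copyOfRange(data, 5, data.length);
    }

    public static void main(String[] args) {
        byte[] tag = { 0x17, 0x01, 0, 0, 0, 0x00, 0x00, 0x00, 0x02, 0x09, 0x10 };
        FlvAvcPacket p = new FlvAvcPacket(tag);
        System.out.println(p.isSequenceHeader); // false
        System.out.println(p.payload.length);   // 6
    }
}
```

Feeding the configuration payload to `extradata` before opening the codec is what makes `got_frame` start returning nonzero in the question's scenario; without it the decoder has no SPS/PPS to parse the NALUs against.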
Hope it helps someone.