I am writing an Android application that records video for a specified amount of time. Everything works fine when I record with the smartphone's rear camera. The app has pause/record functionality, like the Vine app. The problem comes when recording is done with the device's front camera: while the video is being stored/played back, the video surface frame looks fine. There has been plenty of discussion about this issue, but I haven't found any workable solution.
See the code and images mentioned below.
This is the original image captured from the front camera. I have turned it upside down for a better view.
And this is what I actually get after the rotation:
The method:
IplImage copy = cvCloneImage(image);
IplImage rotatedImage = cvCreateImage(cvGetSize(copy), copy.depth(), copy.nChannels());
//Define Rotational Matrix
CvMat mapMatrix = cvCreateMat(2, 3, CV_32FC1);
//Define Mid Point
CvPoint2D32f centerPoint = new CvPoint2D32f();
centerPoint.x(copy.width() / 2);
centerPoint.y(copy.height() / 2);
//Get Rotational Matrix
cv2DRotationMatrix(centerPoint, angle, 1.0, mapMatrix);
//Rotate the Image
cvWarpAffine(copy, rotatedImage, mapMatrix, CV_INTER_CUBIC + CV_WARP_FILL_OUTLIERS, cvScalarAll(170));
cvReleaseImage(copy);
cvReleaseMat(mapMatrix);
I have also tried doing this:
double angleTemp = angle;
angleTemp = ((angleTemp / 90) % 4) * 90;
final int number = (int) Math.abs(angleTemp / 90);
for (int i = 0; i != number; ++i) {
    cvTranspose(rotatedImage, rotatedImage);
    cvFlip(rotatedImage, rotatedImage, 0);
}
which ends up throwing an exception saying that the source and destination do not match in the number of rows and columns.
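Presumably this happens because cvTranspose cannot write into the same non-square image it reads from: after a transpose the width and height are swapped, so the source buffer cannot also serve as the destination. Something along these lines should avoid the size mismatch (rotate90 is just an illustrative name, not an existing method of mine):
//Allocate a destination with swapped width/height, transpose into it,
//then flip to complete the 90-degree rotation.
IplImage rotate90(IplImage src) {
    IplImage dst = IplImage.create(src.height(), src.width(),
            src.depth(), src.nChannels());
    cvTranspose(src, dst);
    cvFlip(dst, dst, 0);
    return dst;
}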
UPDATE
The video is recorded this way:
IplImage newImage = null;
if (cameraSelection == CameraInfo.CAMERA_FACING_FRONT) {
    newImage = videoRecorder.rotate(yuvIplImage, 180);
    videoRecorder.record(newImage);
}
else
    videoRecorder.record(yuvIplImage);
And the rotation is done this way:
IplImage img = IplImage.create(image.height(), image.width(),
        image.depth(), image.nChannels());
for (int i = 0; i < 180; i++) {
    cvTranspose(image, img);
    cvFlip(img, img, 0);
}
Can someone point out what is going wrong here, in case you have come across this before?
Answer 0 (score: 4)
Seeing that you already have an IplImage, you may find this helpful. I modified the onPreviewFrame method of this Open Source Android Touch-To-Record library to transpose and resize the captured frames.
I defined "yuvIplImage" as follows in my setCameraParams() method:
IplImage yuvIplImage = IplImage.create(mPreviewSize.height, mPreviewSize.width, opencv_core.IPL_DEPTH_8U, 2);
Also initialize the videoRecorder object like this, passing width as height and vice versa:
//call initVideoRecorder() method like this to initialize videoRecorder object of FFmpegFrameRecorder class.
initVideoRecorder(strVideoPath, mPreview.getPreviewSize().height, mPreview.getPreviewSize().width, recorderParameters);

//method implementation
public void initVideoRecorder(String videoPath, int width, int height, RecorderParameters recorderParameters)
{
    Log.e(TAG, "initVideoRecorder");
    videoRecorder = new FFmpegFrameRecorder(videoPath, width, height, 1);
    videoRecorder.setFormat(recorderParameters.getVideoOutputFormat());
    videoRecorder.setSampleRate(recorderParameters.getAudioSamplingRate());
    videoRecorder.setFrameRate(recorderParameters.getVideoFrameRate());
    videoRecorder.setVideoCodec(recorderParameters.getVideoCodec());
    videoRecorder.setVideoQuality(recorderParameters.getVideoQuality());
    videoRecorder.setAudioQuality(recorderParameters.getVideoQuality());
    videoRecorder.setAudioCodec(recorderParameters.getAudioCodec());
    videoRecorder.setVideoBitrate(1000000);
    videoRecorder.setAudioBitrate(64000);
}
This is my onPreviewFrame() method:
@Override
public void onPreviewFrame(byte[] data, Camera camera)
{
    long frameTimeStamp = 0L;
    if(FragmentCamera.mAudioTimestamp == 0L && FragmentCamera.firstTime > 0L)
    {
        frameTimeStamp = 1000L * (System.currentTimeMillis() - FragmentCamera.firstTime);
    }
    else if(FragmentCamera.mLastAudioTimestamp == FragmentCamera.mAudioTimestamp)
    {
        frameTimeStamp = FragmentCamera.mAudioTimestamp + FragmentCamera.frameTime;
    }
    else
    {
        long l2 = (System.nanoTime() - FragmentCamera.mAudioTimeRecorded) / 1000L;
        frameTimeStamp = l2 + FragmentCamera.mAudioTimestamp;
        FragmentCamera.mLastAudioTimestamp = FragmentCamera.mAudioTimestamp;
    }
    synchronized(FragmentCamera.mVideoRecordLock)
    {
        if(FragmentCamera.recording && FragmentCamera.rec && lastSavedframe != null && lastSavedframe.getFrameBytesData() != null && yuvIplImage != null)
        {
            FragmentCamera.mVideoTimestamp += FragmentCamera.frameTime;
            if(lastSavedframe.getTimeStamp() > FragmentCamera.mVideoTimestamp)
            {
                FragmentCamera.mVideoTimestamp = lastSavedframe.getTimeStamp();
            }
            try
            {
                yuvIplImage.getByteBuffer().put(lastSavedframe.getFrameBytesData());
                IplImage bgrImage = IplImage.create(mPreviewSize.width, mPreviewSize.height, opencv_core.IPL_DEPTH_8U, 4);// In my case, mPreviewSize.width = 1280 and mPreviewSize.height = 720
                IplImage transposed = IplImage.create(mPreviewSize.height, mPreviewSize.width, yuvIplImage.depth(), 4);
                IplImage squared = IplImage.create(mPreviewSize.height, mPreviewSize.height, yuvIplImage.depth(), 4);
                int[] _temp = new int[mPreviewSize.width * mPreviewSize.height];
                Util.YUV_NV21_TO_BGR(_temp, data, mPreviewSize.width, mPreviewSize.height);
                bgrImage.getIntBuffer().put(_temp);
                opencv_core.cvTranspose(bgrImage, transposed);
                opencv_core.cvFlip(transposed, transposed, 1);
                opencv_core.cvSetImageROI(transposed, opencv_core.cvRect(0, 0, mPreviewSize.height, mPreviewSize.height));
                opencv_core.cvCopy(transposed, squared, null);
                opencv_core.cvResetImageROI(transposed);
                videoRecorder.setTimestamp(lastSavedframe.getTimeStamp());
                videoRecorder.record(squared);
            }
            catch(com.googlecode.javacv.FrameRecorder.Exception e)
            {
                e.printStackTrace();
            }
        }
        lastSavedframe = new SavedFrames(data, frameTimeStamp);
    }
}
This code uses a method "YUV_NV21_TO_BGR", which I found at this link.
Basically I had exactly the same problem as you, the "green devil problem on Android". Before adding the "YUV_NV21_TO_BGR" method, when I just transposed the YuvIplImage, and more importantly used a combination of transpose and flip (with or without resizing), there was green output in the resulting video, almost the same as yours. This "YUV_NV21_TO_BGR" method removed the green output problem. Thanks to @David Han from the google groups thread.
Also you should know that all this processing (transpose, flip and resize) in onPreviewFrame takes a lot of time, which causes a serious hit to your frames-per-second (FPS) rate. When I used this code inside the onPreviewFrame method, the resulting FPS of the recorded video dropped from 30 fps to 3 fps.
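One way to soften that FPS hit, offered purely as a sketch (none of these names come from the Touch-To-Record library), is to move the heavy work off the camera callback thread: copy the preview bytes and hand them to a single background worker so that onPreviewFrame returns almost immediately. processAndRecord() below is a hypothetical helper standing in for the transpose/flip/record code shown above.
import android.hardware.Camera;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

//Inside the preview-callback class:
private final ExecutorService frameWorker = Executors.newSingleThreadExecutor();

public void onPreviewFrame(byte[] data, Camera camera) {
    //The camera may reuse the 'data' buffer, so copy it before leaving the callback.
    final byte[] frameCopy = data.clone();
    frameWorker.submit(new Runnable() {
        @Override
        public void run() {
            processAndRecord(frameCopy); //hypothetical helper wrapping the conversion/rotation/record code above
        }
    });
    //A real implementation should also bound this queue or drop frames when the worker falls behind.
}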
Answer 1 (score: 0)
When you do a transpose, the image width and height values get swapped, just as rotating a rectangle by 90 degrees turns its height into its width and vice versa. So you need to do something like the following:
IplImage rotate(IplImage IplSrc)
{
    IplImage img = IplImage.create(IplSrc.height(),
                                   IplSrc.width(),
                                   IplSrc.depth(),
                                   IplSrc.nChannels());
    cvTranspose(IplSrc, img);
    cvFlip(img, img, 0);
    //cvFlip(img, img, 0);
    return img;
}
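For reference, the flip mode passed to cvFlip after the transpose decides the direction of the resulting 90-degree rotation, which is why different snippets in this thread pass 0 or 1. As a rough sketch of the usual OpenCV convention:
//After cvTranspose(src, dst), use one or the other:
cvFlip(dst, dst, 1); //flip around the vertical axis   -> 90 degrees clockwise
cvFlip(dst, dst, 0); //flip around the horizontal axis -> 90 degrees counter-clockwise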
Answer 2 (score: 0)
//Re-encode the already recorded file (nativePath), rotating each grabbed frame before writing it out again.
private void ChangeOrientation() throws com.googlecode.javacv.FrameGrabber.Exception, com.googlecode.javacv.FrameRecorder.Exception {
    //Initialize Frame Grabber
    File f = new File(nativePath);
    frameGrabber = new FFmpegFrameGrabber(f);
    frameGrabber.start();
    Frame captured_frame = null;
    //Initialize Recorder
    initRecorder();
    //Loop through the grabber
    boolean inLoop = true;
    while (inLoop)
    {
        captured_frame = frameGrabber.grabFrame();
        if (captured_frame == null)
        {
            //break loop
            inLoop = false;
        }
        else if (inLoop)
        {
            // continue looping
            IplSrc = captured_frame.image;
            recorder.record(rotateImg(IplSrc)); //rotateImg() is the transpose+flip helper shown in the other answers
        }
    }
    if (recorder != null)
    {
        recorder.stop();
        recorder.release();
        frameGrabber.stop();
        initRecorder = false;
    }
}

private void initRecorder() throws com.googlecode.javacv.FrameRecorder.Exception
{
    recorder = new FFmpegFrameRecorder(editedPath,
            frameGrabber.getImageWidth(),
            frameGrabber.getImageHeight(),
            frameGrabber.getAudioChannels());
    recorder.setVideoCodec(avcodec.AV_CODEC_ID_H264);
    recorder.setFormat("mp4");
    recorder.setFrameRate(frameGrabber.getFrameRate());
    FrameRate = frameGrabber.getFrameRate();
    recorder.setSampleFormat(frameGrabber.getSampleFormat());
    recorder.setSampleRate(frameGrabber.getSampleRate());
    recorder.start();
    initRecorder = true;
}
Answer 3 (score: 0)
This code will help you handle the problem of rotating an IplImage:
@Override
public void onPreviewFrame(byte[] data, Camera camera) {
    //IplImage newImage = cvCreateImage(cvGetSize(yuvIplimage), IPL_DEPTH_8U, 1);
    if (recording) {
        videoTimestamp = 1000 * (System.currentTimeMillis() - startTime);
        yuvimage = IplImage.create(imageWidth, imageHeight * 3 / 2, IPL_DEPTH_8U, 1);
        yuvimage.getByteBuffer().put(data);
        rgbimage = IplImage.create(imageWidth, imageHeight, IPL_DEPTH_8U, 3);
        opencv_imgproc.cvCvtColor(yuvimage, rgbimage, opencv_imgproc.CV_YUV2BGR_NV21);
        IplImage rotateimage = null;
        try {
            recorder.setTimestamp(videoTimestamp);
            int rot = 0;
            switch (degrees) {
                case 0:
                    rot = 1;
                    rotateimage = rotate(rgbimage, rot);
                    break;
                case 180:
                    rot = -1;
                    rotateimage = rotate(rgbimage, rot);
                    break;
                default:
                    rotateimage = rgbimage;
            }
            recorder.record(rotateimage);
        } catch (FFmpegFrameRecorder.Exception e) {
            e.printStackTrace();
        }
    }
}

IplImage rotate(IplImage IplSrc, int angle) {
    IplImage img = IplImage.create(IplSrc.height(), IplSrc.width(), IplSrc.depth(), IplSrc.nChannels());
    cvTranspose(IplSrc, img);
    cvFlip(img, img, angle);
    return img;
}
}
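The degrees value driving the switch above is not shown here. One common way to obtain it, and this is my assumption rather than part of this answer, is to read the sensor orientation from Camera.CameraInfo and combine it with the current display rotation, following the pattern from the Camera.setDisplayOrientation() documentation:
//Sketch: derive the rotation (in degrees) needed for a given camera.
//displayRotationDegrees is the current display rotation: 0, 90, 180 or 270.
int getCameraRotation(int cameraId, int displayRotationDegrees) {
    android.hardware.Camera.CameraInfo info = new android.hardware.Camera.CameraInfo();
    android.hardware.Camera.getCameraInfo(cameraId, info);
    int result;
    if (info.facing == android.hardware.Camera.CameraInfo.CAMERA_FACING_FRONT) {
        result = (info.orientation + displayRotationDegrees) % 360;
        result = (360 - result) % 360; //compensate for the front camera's mirroring
    } else {
        result = (info.orientation - displayRotationDegrees + 360) % 360;
    }
    return result;
}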