I want to grab frame samples (as JPEGs) from a video file (.mov) using Java. Is there an easy way to do this? All I can find on Google is how to make a .mov from multiple JPGs. Maybe I just can't find the right keywords.
Answer 0 (score: 55)
I know the original question has already been solved, but I'm posting this answer in case anyone else gets stuck the way I did.
Since yesterday I have tried everything, and I mean everything, to get this done. All the available Java libraries are either outdated, no longer maintained, or missing any usable documentation (seriously?!).
I tried JMF (old and useless), JCodec (no documentation at all), JJMpeg (looks promising, but very hard and tedious to use due to the lack of Java-level documentation), the automated Java builds of OpenCV, and a few other libraries I can't remember.
Finally, I decided to take a look at JavaCV's classes (GitHub link), and voilà! It contains FFmpeg bindings and detailed documentation. The Maven dependency:
<dependency>
    <groupId>org.bytedeco</groupId>
    <artifactId>javacv</artifactId>
    <version>1.0</version>
</dependency>
It turns out there is a very simple way to extract video frames into a BufferedImage, and by extension into JPEG files. The FFmpegFrameGrabber class makes it easy to grab individual frames and convert them to a BufferedImage. A code example:
FFmpegFrameGrabber g = new FFmpegFrameGrabber("textures/video/anim.mp4");
g.start();

for (int i = 0; i < 50; i++) {
    ImageIO.write(g.grab().getBufferedImage(), "png",
            new File("frame-dump/video-frame-" + System.currentTimeMillis() + ".png"));
}

g.stop();
Basically, this code dumps the first 50 frames of the video and saves them as PNG files. The nice part is the internal seek functionality, which works on actual frames rather than key frames (a problem I had with JCodec).
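If you are on a newer JavaCV release, grab() returns a Frame and the getBufferedImage() call above may no longer be available; in that case the conversion goes through Java2DFrameConverter. Below is a minimal sketch under that assumption (the input path, frame number, and output folder are placeholders), which also seeks with setFrameNumber() and writes JPEGs, as the question asks:

import java.io.File;
import javax.imageio.ImageIO;
import org.bytedeco.javacv.FFmpegFrameGrabber;
import org.bytedeco.javacv.Frame;
import org.bytedeco.javacv.Java2DFrameConverter;

public class FrameDump {
    public static void main(String[] args) throws Exception {
        // placeholder input path; replace with your own .mov/.mp4 file
        FFmpegFrameGrabber g = new FFmpegFrameGrabber("textures/video/anim.mp4");
        Java2DFrameConverter converter = new Java2DFrameConverter();
        g.start();
        // seek to frame 100 first (seeking works on actual frames, not just key frames)
        g.setFrameNumber(100);
        for (int i = 0; i < 50; i++) {
            Frame frame = g.grab();
            if (frame == null || frame.image == null) {
                continue; // skip audio-only or empty frames
            }
            // convert the native frame to a BufferedImage and save it as JPEG
            ImageIO.write(converter.convert(frame), "jpg",
                    new File("frame-dump/video-frame-" + i + ".jpg"));
        }
        g.stop();
    }
}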
You can refer to JavaCV's home page for more information on the other classes that can be used to capture frames from webcams. Hope this answer helps :-)
Answer 1 (score: 5)
Xuggler does the job. They even provide example code that does exactly what I needed. I modified that example so that it only saves the first frame of the video.
import javax.imageio.ImageIO;
import java.io.File;
import java.awt.image.BufferedImage;

import com.xuggle.mediatool.IMediaReader;
import com.xuggle.mediatool.MediaListenerAdapter;
import com.xuggle.mediatool.ToolFactory;
import com.xuggle.mediatool.event.IVideoPictureEvent;
import com.xuggle.xuggler.Global;

/**
 * @author aclarke
 * @author trebor
 */
public class DecodeAndCaptureFrames extends MediaListenerAdapter
{
    private int mVideoStreamIndex = -1;
    private boolean gotFirst = false;
    private String saveFile;
    private Exception e;

    /** Construct a DecodeAndCaptureFrames which reads and captures
     * frames from a video file.
     *
     * @param videoFile the name of the media file to read
     * @param saveFile  the file the first frame is written to
     */
    public DecodeAndCaptureFrames(String videoFile, String saveFile) throws Exception
    {
        // create a media reader for processing video
        this.saveFile = saveFile;
        this.e = null;
        IMediaReader reader = ToolFactory.makeReader(videoFile);

        // stipulate that we want BufferedImages created in BGR 24bit color space
        reader.setBufferedImageTypeToGenerate(BufferedImage.TYPE_3BYTE_BGR);

        // note that DecodeAndCaptureFrames is derived from
        // MediaReader.ListenerAdapter and thus may be added as a listener
        // to the MediaReader. DecodeAndCaptureFrames implements
        // onVideoPicture().
        reader.addListener(this);

        // read out the contents of the media file; note that nothing else
        // happens here. The action happens in the onVideoPicture() method,
        // which is called when complete video pictures are extracted from
        // the media source
        while (reader.readPacket() == null && !gotFirst);

        if (e != null)
            throw e;
    }

    /**
     * Called after a video frame has been decoded from a media stream.
     * Optionally a BufferedImage version of the frame may be passed
     * if the calling {@link IMediaReader} instance was configured to
     * create BufferedImages.
     *
     * This method blocks, so return quickly.
     */
    public void onVideoPicture(IVideoPictureEvent event)
    {
        try
        {
            // if the stream index does not match the selected stream index,
            // then have a closer look
            if (event.getStreamIndex() != mVideoStreamIndex)
            {
                // if the selected video stream id is not yet set, go ahead and
                // select this lucky video stream
                if (-1 == mVideoStreamIndex)
                    mVideoStreamIndex = event.getStreamIndex();
                // otherwise return, no need to show frames from this video stream
                else
                    return;
            }
            ImageIO.write(event.getImage(), "jpg", new File(saveFile));
            gotFirst = true;
        }
        catch (Exception e)
        {
            this.e = e;
        }
    }
}
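For completeness, a minimal sketch of how the class above could be invoked; the input and output paths are placeholders, not part of the original example:

public class DecodeAndCaptureFramesDemo
{
    public static void main(String[] args) throws Exception
    {
        // placeholder paths: read input.mov and save its first frame as first-frame.jpg
        new DecodeAndCaptureFrames("input.mov", "first-frame.jpg");
    }
}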
Answer 2 (score: 2)
Here is how to do it with BoofCV:
String fileName = UtilIO.pathExample("tracking/chipmunk.mjpeg");
MediaManager media = DefaultMediaManager.INSTANCE;
ConfigBackgroundBasic configBasic = new ConfigBackgroundBasic(30, 0.005f);
ImageType imageType = ImageType.single(GrayF32.class);
BackgroundModelMoving background = FactoryBackgroundModel.movingBasic(configBasic, new PointTransformHomography_F32(), imageType);
SimpleImageSequence video = media.openVideo(fileName, background.getImageType());
ImageBase nextFrame;
while (video.hasNext()) {
    nextFrame = video.next();
    // Now do something with it...
}
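To actually save each frame as a JPEG, one option is the BufferedImage view that the sequence keeps for GUI display. A minimal sketch, assuming your BoofCV version exposes SimpleImageSequence.getGuiImage() and that javax.imageio.ImageIO and java.io.File are imported; the output folder is a placeholder:

int index = 0;
while (video.hasNext()) {
    nextFrame = video.next();
    // assumption: getGuiImage() returns a BufferedImage view of the frame just decoded
    ImageIO.write(video.getGuiImage(), "jpg", new File("frame-dump/frame-" + index++ + ".jpg"));
}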
Answer 3 (score: 1)
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;

import javax.imageio.ImageIO;

import org.bytedeco.javacpp.opencv_core.IplImage;
import org.bytedeco.javacv.FFmpegFrameGrabber;
import org.bytedeco.javacv.FrameGrabber.Exception;

public class Read {

    public static void main(String[] args) throws IOException, Exception
    {
        FFmpegFrameGrabber frameGrabber = new FFmpegFrameGrabber("C:/Users/Digilog/Downloads/Test.mp4");
        frameGrabber.start();
        IplImage i;
        try {
            // grab the first frame, convert it to a BufferedImage and write it out as PNG
            i = frameGrabber.grab();
            BufferedImage bi = i.getBufferedImage();
            ImageIO.write(bi, "png", new File("D:/Img.png"));
            frameGrabber.stop();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
Answer 4 (score: 0)
Maybe this will help you:
Buffer buf = frameGrabber.grabFrame();

// Convert the frame to a buffered image so it can be processed and saved
Image img = (new BufferToImage((VideoFormat) buf.getFormat()).createImage(buf));
buffImg = new BufferedImage(img.getWidth(this), img.getHeight(this), BufferedImage.TYPE_INT_RGB);

// TODO: save buffImg
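A minimal sketch of the saving step left as a TODO above, assuming java.awt.Graphics2D, javax.imageio.ImageIO, and java.io.File are imported; the output path is a placeholder:

// draw the decoded frame into buffImg, then write it out as a JPEG
Graphics2D g2 = buffImg.createGraphics();
g2.drawImage(img, 0, 0, null);
g2.dispose();
ImageIO.write(buffImg, "jpg", new File("frame.jpg"));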
For more information:
Answer 5 (score: 0)
Below is the basic code for requesting frames from a media file.
The complete source code and a video demonstration are available in the "Media File Processing" example, which uses the Marvin Framework.
public class MediaFileExample implements Runnable {

    private MarvinVideoInterface videoAdapter;
    private MarvinImage videoFrame;

    public MediaFileExample() {
        try {
            // Create the VideoAdapter used to load the video file
            videoAdapter = new MarvinJavaCVAdapter();
            videoAdapter.loadResource("./res/snooker.wmv");

            // Start the thread for requesting the video frames
            new Thread(this).start();
        }
        catch (MarvinVideoInterfaceException e) {
            e.printStackTrace();
        }
    }

    @Override
    public void run() {
        try {
            while (true) {
                // Request a video frame
                videoFrame = videoAdapter.getFrame();
            }
        } catch (MarvinVideoInterfaceException e) {
            e.printStackTrace();
        }
    }

    public static void main(String[] args) {
        MediaFileExample m = new MediaFileExample();
    }
}
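To turn the grabbed frames into JPEGs on disk, the loop in run() could be extended along these lines. This is a sketch under the assumption that MarvinImageIO.saveImage(MarvinImage, String) is available in your Marvin version; the output folder and frame limit are placeholders:

// inside run(), replacing the endless loop above
int saved = 0;
while (saved < 50) {
    videoFrame = videoAdapter.getFrame();
    // assumption: MarvinImageIO.saveImage writes the frame to the given path
    MarvinImageIO.saveImage(videoFrame, "frame-dump/frame-" + saved + ".jpg");
    saved++;
}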