TargetDataLine is the simplest way to capture microphone input in Java. I want to encode the audio I capture together with screen video [in a screen-recorder application], so that users can create tutorials, slidecasts, and so on.

I use Xuggler to encode the video. Xuggler has a tutorial on encoding audio together with video, but there the audio comes from a file; in my case the audio is live. I write the output with com.xuggle.mediatool.IMediaWriter. The IMediaWriter object lets me add a video stream and has an encodeAudio(int streamIndex, short[] samples, long timeStamp, TimeUnit timeUnit) method, which I can use if I can get my samples as short[]. But TargetDataLine.read() returns byte[].
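One common way to bridge that gap, assuming the line is opened with 16-bit signed little-endian PCM (the usual case), is to reinterpret the byte[] through a ByteBuffer. A minimal sketch; the class and method names here are illustrative, not from any library:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class PcmConvert {
    // Converts raw 16-bit little-endian PCM bytes (as returned by
    // TargetDataLine.read) into the short[] that encodeAudio expects.
    public static short[] toShorts(byte[] pcm, int length) {
        short[] samples = new short[length / 2];
        ByteBuffer.wrap(pcm, 0, length)
                  .order(ByteOrder.LITTLE_ENDIAN)
                  .asShortBuffer()
                  .get(samples);
        return samples;
    }

    public static void main(String[] args) {
        // Two samples: bytes {0x00, 0x01} -> 256, {0xFF, 0x7F} -> 32767
        byte[] raw = {0x00, 0x01, (byte) 0xFF, 0x7F};
        short[] s = toShorts(raw, raw.length);
        System.out.println(s[0] + " " + s[1]); // prints "256 32767"
    }
}
```

The byte order must match the AudioFormat the line was opened with; if the format is big-endian, use ByteOrder.BIG_ENDIAN instead.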
References:
1. JavaDoc for TargetDataLine: http://docs.oracle.com/javase/1.4.2/docs/api/javax/sound/sampled/TargetDataLine.html
2. Xuggler documentation: http://build.xuggle.com/view/Stable/job/xuggler_jdk5_stable/javadoc/java/api/index.html
public void run() {
    final IRational FRAME_RATE = IRational.make(frameRate, 1);
    final IMediaWriter writer = ToolFactory.makeWriter(completeFileName);
    writer.addVideoStream(0, 0, FRAME_RATE, recordingArea.width, recordingArea.height);
    long startTime = System.nanoTime();

    while (keepCapturing) {
        image = bot.createScreenCapture(recordingArea);
        PointerInfo pointerInfo = MouseInfo.getPointerInfo();
        Point globalPosition = pointerInfo.getLocation();
        int relativeX = globalPosition.x - recordingArea.x;
        int relativeY = globalPosition.y - recordingArea.y;
        BufferedImage bgr = convertToType(image, BufferedImage.TYPE_3BYTE_BGR);
        if (cursor != null) {
            bgr.getGraphics().drawImage(((ImageIcon) cursor).getImage(), relativeX, relativeY, null);
        }
        try {
            writer.encodeVideo(0, bgr, System.nanoTime() - startTime, TimeUnit.NANOSECONDS);
        } catch (Exception e) {
            writer.close();
            JOptionPane.showMessageDialog(null,
                    "Recording will stop abruptly because " +
                    "an error has occurred", "Error", JOptionPane.ERROR_MESSAGE, null);
        }
        try {
            sleep(sleepTime);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
    writer.close();
}
Answer 0 (score: 2)

I recently answered most of this in: Xuggler encoding and muxing

Code sample:
writer.addVideoStream(videoStreamIndex, 0, videoCodec, width, height);
writer.addAudioStream(audioStreamIndex, 0, audioCodec, channelCount, sampleRate);

while (... have more data ...)
{
    BufferedImage videoFrame = ...;
    long videoFrameTime = ...; // this is the time to display this frame
    writer.encodeVideo(videoStreamIndex, videoFrame, videoFrameTime, DEFAULT_TIME_UNIT);

    short[] audioSamples = ...; // the size of this array should be number of samples * channelCount
    long audioSamplesTime = ...; // this is the time to play back this bit of audio
    writer.encodeAudio(audioStreamIndex, audioSamples, audioSamplesTime, DEFAULT_TIME_UNIT);
}
In the case of the TargetDataLine, getMicrosecondPosition() will tell you the time you need for audioSamplesTime. This clock appears to start counting from the moment the TargetDataLine is opened. You need to figure out how to get video timestamps referenced to the same clock, which depends on the video device and/or how you capture video. The absolute values do not matter as long as both use the same clock. You could subtract the initial value (taken at the start of the stream) from both your video and audio times so that the timestamps match up from zero, but that is only an approximate match (in practice probably close enough).
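The rebasing the answer describes can be sketched as follows; the class and field names are my own illustration, not part of any real API:

```java
import java.util.concurrent.TimeUnit;

public class ClockAlign {
    private long audioStartUs = -1;   // first getMicrosecondPosition() value seen
    private long videoStartNs = -1;   // first System.nanoTime() value seen

    // Rebase an audio timestamp so the audio stream starts at 0 microseconds.
    public long audioTimeUs(long microsecondPosition) {
        if (audioStartUs < 0) audioStartUs = microsecondPosition;
        return microsecondPosition - audioStartUs;
    }

    // Rebase a video timestamp to the same zero, converted to microseconds.
    public long videoTimeUs(long nanoTime) {
        if (videoStartNs < 0) videoStartNs = nanoTime;
        return TimeUnit.NANOSECONDS.toMicros(nanoTime - videoStartNs);
    }

    public static void main(String[] args) {
        ClockAlign align = new ClockAlign();
        System.out.println(align.audioTimeUs(5_000));      // prints 0
        System.out.println(align.audioTimeUs(25_000));     // prints 20000
        System.out.println(align.videoTimeUs(1_000_000));  // prints 0
        System.out.println(align.videoTimeUs(34_000_000)); // prints 33000
    }
}
```

As the answer notes, the two zeros are captured at slightly different real moments, so this alignment is approximate; any constant offset between the clocks remains.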
You need to call encodeVideo and encodeAudio in strictly increasing order of time; you may have to buffer some audio and some video to make sure you can do that. More details here.
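One way to guarantee that encode calls happen in timestamp order is to buffer incoming packets in a priority queue keyed by timestamp and drain it up to a safe point. This buffering scheme is my own sketch, not from the linked answer; in a real recorder each drained packet would be passed to encodeVideo or encodeAudio:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

public class EncodeOrderer {
    static class Packet {
        final boolean isAudio;
        final long timeUs;
        Packet(boolean isAudio, long timeUs) {
            this.isAudio = isAudio;
            this.timeUs = timeUs;
        }
    }

    // Min-heap ordered by timestamp, so poll() always yields the earliest packet.
    private final PriorityQueue<Packet> queue =
            new PriorityQueue<>(Comparator.comparingLong(p -> p.timeUs));

    public void submit(Packet p) { queue.add(p); }

    // Drain all buffered packets up to 'untilUs', in chronological order.
    public List<Packet> drain(long untilUs) {
        List<Packet> ordered = new ArrayList<>();
        while (!queue.isEmpty() && queue.peek().timeUs <= untilUs) {
            ordered.add(queue.poll());
        }
        return ordered;
    }

    public static void main(String[] args) {
        EncodeOrderer o = new EncodeOrderer();
        o.submit(new Packet(false, 33_000)); // a video frame arrives first
        o.submit(new Packet(true, 0));       // audio chunks arrive later
        o.submit(new Packet(true, 20_000));
        for (Packet p : o.drain(40_000)) {
            System.out.println((p.isAudio ? "audio " : "video ") + p.timeUs);
        }
        // prints: audio 0, audio 20000, video 33000
    }
}
```

The drain threshold should trail the slower of the two capture threads, so a packet can never arrive with a timestamp earlier than one already encoded.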