I'm trying to add audio to a video created by the following open-source project, specifically https://github.com/madisp/trails/blob/master/app/src/main/java/com/madisp/trails/CaptureService.java
I need to capture audio from the MIC and write it into the encoded file as an audio track; at the moment the file produced with the MediaMuxer only contains a video track.
I can capture audio from the MIC without any problem:
int nChannels = 1;
int minBufferSize = AudioRecord.getMinBufferSize(44100, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT) * 2;
AudioRecord aRecorder = new AudioRecord(MediaRecorder.AudioSource.MIC, 44100, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, minBufferSize);
short[] buffer = new short[44100 * nChannels];
aRecorder.startRecording();
int readSize = 0;
while (recording) {
    readSize = aRecorder.read(buffer, 0, minBufferSize);
    if (readSize < 0) {
        break;
    } else if (readSize > 0) {
        // do stuff with buffer
    }
}
aRecorder.stop();
aRecorder.release();
But I'm not sure how to incorporate it into the encoding loop of CaptureService.java (https://github.com/madisp/trails/blob/master/app/src/main/java/com/madisp/trails/CaptureService.java):
while (running) {
    int index = avc.dequeueOutputBuffer(info, 10000);
    if (index == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
        if (track != -1) {
            throw new RuntimeException("format changed twice");
        }
        track = muxer.addTrack(avc.getOutputFormat());
        muxer.start();
    } else if (index >= 0) {
        if ((info.flags & MediaCodec.BUFFER_FLAG_CODEC_CONFIG) != 0) {
            // ignore codec config
            info.size = 0;
        }
        if (track != -1) {
            ByteBuffer out = avc.getOutputBuffer(index);
            out.position(info.offset);
            out.limit(info.offset + info.size);
            muxer.writeSampleData(track, out, info);
            avc.releaseOutputBuffer(index, false);
        }
    }
}
Yes, I realize I'm essentially asking you to write code for me, but I don't have the specific expertise here.
Any help is appreciated.
Thanks
Answer (score: 1):
First, use a byte[] rather than a short[] as the buffer for AudioRecord; this will simplify things a bit.
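For reference, a minimal sketch of what the capture side from the question could look like with a byte[] buffer (untested, reusing the names and parameters from the question's snippet):

int minBufferSize = AudioRecord.getMinBufferSize(44100, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT) * 2;
AudioRecord aRecorder = new AudioRecord(MediaRecorder.AudioSource.MIC, 44100, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, minBufferSize);
byte[] buffer = new byte[minBufferSize];
aRecorder.startRecording();
while (recording) {
    // With a byte[], readSize is a count of bytes, which is what
    // MediaCodec.queueInputBuffer() expects further down.
    int readSize = aRecorder.read(buffer, 0, buffer.length);
    if (readSize < 0) {
        break;
    } else if (readSize > 0) {
        // hand readSize bytes of PCM to the encoder here
    }
}
aRecorder.stop();
aRecorder.release();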
Then, to encode the received buffers, something like this should work (untested):
while (recording) {
    readSize = aRecorder.read(buffer, 0, minBufferSize);
    if (readSize < 0) {
        break;
    } else if (readSize > 0) {
        boolean done = false;
        while (!done) {
            int index = avc.dequeueInputBuffer(10000);
            if (index >= 0) {
                // Got an input buffer; copy the recorded PCM into it and queue it for encoding.
                ByteBuffer in = avc.getInputBuffer(index);
                in.clear();
                in.put(buffer, 0, readSize);
                avc.queueInputBuffer(index, 0, readSize, System.nanoTime() / 1000, 0);
                done = true; // Done passing the input to the codec, but still check for available output below
            }
            // If we didn't get an input buffer, the codec may be blocked by all output
            // buffers being full, so try to drain them here as well.
            index = avc.dequeueOutputBuffer(info, 10000);
            if (index == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
                if (track != -1) {
                    throw new RuntimeException("format changed twice");
                }
                track = muxer.addTrack(avc.getOutputFormat());
                muxer.start();
            } else if (index >= 0) {
                if ((info.flags & MediaCodec.BUFFER_FLAG_CODEC_CONFIG) != 0) {
                    // ignore codec config
                    info.size = 0;
                }
                if (track != -1 && info.size > 0) {
                    ByteBuffer out = avc.getOutputBuffer(index);
                    out.position(info.offset);
                    out.limit(info.offset + info.size);
                    muxer.writeSampleData(track, out, info);
                }
                // Always return the output buffer to the codec, even if nothing was written.
                avc.releaseOutputBuffer(index, false);
            }
        }
    }
}
I think the plain software AAC encoder should cope with having an arbitrary number of bytes of audio passed to it, but if the encoder turns out to be picky, you may need to pass the recorded data in chunks of 1024 samples (2048 bytes for mono, 4096 bytes for stereo).
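In case it helps, below is a rough sketch of how the audio encoder referred to as avc above might be created and configured as an AAC encoder. This is my assumption, not code from the project; the helper name, bit rate and max-input-size values are placeholders you would tune yourself.

// Hypothetical helper; createEncoderByType() throws IOException if no AAC encoder is available.
private MediaCodec createAudioEncoder() throws IOException {
    // Configure an AAC (LC) encoder for 44100 Hz, mono, 16-bit PCM input.
    MediaFormat audioFormat = MediaFormat.createAudioFormat(MediaFormat.MIMETYPE_AUDIO_AAC, 44100, 1);
    audioFormat.setInteger(MediaFormat.KEY_AAC_PROFILE, MediaCodecInfo.CodecProfileLevel.AACObjectLC);
    audioFormat.setInteger(MediaFormat.KEY_BIT_RATE, 128000);      // placeholder bit rate
    audioFormat.setInteger(MediaFormat.KEY_MAX_INPUT_SIZE, 16384); // placeholder input buffer size

    MediaCodec codec = MediaCodec.createEncoderByType(MediaFormat.MIMETYPE_AUDIO_AAC);
    codec.configure(audioFormat, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
    codec.start();
    return codec;
}

Also note that if this audio track is meant to go into the same MediaMuxer as the video track in CaptureService, muxer.start() may only be called once, after addTrack() has been called for both the video and the audio output formats, and neither loop should call writeSampleData() before that.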