I need help joining two separate audio files together. My first track lasts 7 seconds, and I want my second track to start playing immediately after the first one ends.
Here is my code:
private void initVideo(String fileName) {
    String inputAudioFilePath = "/Users/Document/Desktop/audio.mp3";

    createEmptyMP4(folder + "/" + fileName + ".mp4");
    writer = makeWriter(folder + "/" + fileName + ".mp4");
    screenBounds = Toolkit.getDefaultToolkit().getScreenSize();
    writer.addVideoStream(0, 0, ICodec.ID.CODEC_ID_MPEG4, screenBounds.width, screenBounds.height);
    startTime = System.nanoTime();

    IContainer containerAudio = IContainer.make();
    if (containerAudio.open(inputAudioFilePath, IContainer.Type.READ, null) < 0) {
        throw new IllegalArgumentException("Cant find " + inputAudioFilePath);
    }

    // read audio file and create stream
    IStreamCoder coderAudio = containerAudio.getStream(0).getStreamCoder();
    if (coderAudio.open(null, null) < 0) {
        throw new RuntimeException("Cant open audio coder");
    }
    IPacket packetaudio = IPacket.make();
    writer.addAudioStream(1, 0, coderAudio.getChannels(), coderAudio.getSampleRate());

    int a = -1;
    ArrayList<IAudioSamples> audioPac = new ArrayList<IAudioSamples>();
    while ((a = containerAudio.readNextPacket(packetaudio)) >= 0) {
        if (a >= 0) {
            // audio packet
            IAudioSamples samples = IAudioSamples.make(512, coderAudio.getChannels(), IAudioSamples.Format.FMT_S32);
            coderAudio.decodeAudio(samples, packetaudio, 0);
            audioPac.add(samples);
        }
    }
    for (int j = 0; j < 300; j++) {
        writer.encodeAudio(1, audioPac.get(j));
    }
    coderAudio.close();
    containerAudio.close();
}
Answer 0 (score: 0)
Assuming you have already opened both audio files:
coderAudio1 <--- first.mp3
coderAudio2 <--- second.mp3
Write all of the first audio's samples to writer, and once that is done, start encoding the second audio's samples.
With a few changes and some extra control you can also insert video frames between the two audio tracks. A minimal sketch of this idea follows below.
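A minimal sketch of the sequential approach, assuming containerAudio1/containerAudio2 and coderAudio1/coderAudio2 are the already-opened containers and coders (these names are placeholders), and that writer already has audio stream 1 configured as in the question's code:

// Sketch only: drain the first audio completely, then append the second
// one to the same writer stream (index 1, as in the question).
IPacket packet = IPacket.make();
while (containerAudio1.readNextPacket(packet) >= 0) {
    IAudioSamples samples = IAudioSamples.make(512, coderAudio1.getChannels(), IAudioSamples.Format.FMT_S32);
    coderAudio1.decodeAudio(samples, packet, 0);
    writer.encodeAudio(1, samples);
}
while (containerAudio2.readNextPacket(packet) >= 0) {
    IAudioSamples samples = IAudioSamples.make(512, coderAudio2.getChannels(), IAudioSamples.Format.FMT_S32);
    coderAudio2.decodeAudio(samples, packet, 0);
    writer.encodeAudio(1, samples);
}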
Answer 1 (score: 0)
I wrote this code, but I get an error with large files and I don't know why.
import com.xuggle.mediatool.IMediaWriter;
import com.xuggle.mediatool.ToolFactory;
import com.xuggle.xuggler.IAudioSamples;
import com.xuggle.xuggler.IContainer;
import com.xuggle.xuggler.IPacket;
import com.xuggle.xuggler.IStreamCoder;

/**
 *
 * @author Pasban
 */
public class xuggleMergeAudioTest {

    public static void main(String[] args) {
        // open the first input file and its audio decoder
        IContainer countainer1 = IContainer.make();
        countainer1.open("audio.mp3", IContainer.Type.READ, null);
        IStreamCoder audio1 = countainer1.getStream(0).getStreamCoder();
        audio1.open(null, null);

        // open the second input file and its audio decoder
        IContainer countainer2 = IContainer.make();
        countainer2.open("audio2.mp3", IContainer.Type.READ, null);
        IStreamCoder audio2 = countainer2.getStream(0).getStreamCoder();
        audio2.open(null, null);

        // single output stream, configured from the first input
        IMediaWriter mWriter = ToolFactory.makeWriter("concatinated.mp3");
        mWriter.addAudioStream(0, 0, audio1.getChannels(), audio1.getSampleRate());

        IPacket packet = IPacket.make();

        // encode everything from the first file, then everything from the second
        while (countainer1.readNextPacket(packet) >= 0) {
            IAudioSamples samples = IAudioSamples.make(512, audio1.getChannels(), IAudioSamples.Format.FMT_S32);
            audio1.decodeAudio(samples, packet, 0);
            mWriter.encodeAudio(0, samples);
        }
        while (countainer2.readNextPacket(packet) >= 0) {
            IAudioSamples samples = IAudioSamples.make(512, audio2.getChannels(), IAudioSamples.Format.FMT_S32);
            audio2.decodeAudio(samples, packet, 0);
            mWriter.encodeAudio(0, samples);
        }
        mWriter.close();
    }
}
Have you also tried this demo from Xuggle: https://code.google.com/p/xuggle/source/browse/trunk/java/xuggle-xuggler/src/com/xuggle/mediatool/demos/ConcatenateAudioAndVideo.java?r=929
I think you also have to check that the channel count and sample rate match in both audio files.
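For example, a quick sanity check before encoding could look like this (a sketch only, using the audio1/audio2 coders opened in the code above):

// Sketch: refuse to concatenate if the two inputs do not share the same
// channel count and sample rate, since the writer is configured from audio1 only.
if (audio1.getChannels() != audio2.getChannels()
        || audio1.getSampleRate() != audio2.getSampleRate()) {
    throw new IllegalArgumentException("Input audio streams differ: "
            + audio1.getChannels() + " ch @ " + audio1.getSampleRate() + " Hz vs "
            + audio2.getChannels() + " ch @ " + audio2.getSampleRate() + " Hz");
}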
Answer 2 (score: 0)
I think this is an improvement; it works for n audio files.
import com.xuggle.mediatool.IMediaWriter;
import com.xuggle.mediatool.ToolFactory;
import com.xuggle.xuggler.IAudioSamples;
import com.xuggle.xuggler.IContainer;
import com.xuggle.xuggler.IPacket;
import com.xuggle.xuggler.IStreamCoder;

public class Concatenador_Audios {

    public static void main(String[] args) {
        ConcatenarAudios("D:\\out concatenate.mp3", "D:\\in Audio (1).mp3", "D:\\in Audio (2).mp3", "D:\\in Audio (3).mp3");
    }

    public static void ConcatenarAudios(String Ruta_AudioConcatenado, String... ruta_Audio) {
        int n = ruta_Audio.length;
        IMediaWriter mWriter = ToolFactory.makeWriter(Ruta_AudioConcatenado);
        IPacket packet = IPacket.make();

        for (int i = 0; i < n; i++) {
            // open each input file and its audio decoder
            IContainer container = IContainer.make();
            container.open(ruta_Audio[i], IContainer.Type.READ, null);
            IStreamCoder audio = container.getStream(0).getStreamCoder();
            audio.open(null, null);

            // the output stream is configured once, from the first input file
            if (i == 0) {
                mWriter.addAudioStream(0, 0, audio.getChannels(), audio.getSampleRate());
            }

            // decode the whole file and append its samples to the output
            while (container.readNextPacket(packet) >= 0) {
                IAudioSamples samples = IAudioSamples.make(512, audio.getChannels(), IAudioSamples.Format.FMT_S32);
                audio.decodeAudio(samples, packet, 0);
                mWriter.encodeAudio(0, samples);
            }
            container.close();
            audio.close();
        }
        mWriter.close();
    }
}