I am developing an app that merges mp4 clips using the mp4parser library (isoparser-1.0-RC-27.jar and aspectjrt-1.8.0.jar). When two clips are merged they become a single clip, but as more clips are appended to it, the output mp4 has the audio lagging behind the video.
Here is the code:
Movie[] clips = new Movie[2];
// Location of the movie clip storage
File mediaStorageDir = new File(Environment.getExternalStoragePublicDirectory(
        Environment.DIRECTORY_PICTURES), "TestMerge");
// Build the two clips into movies
Movie firstClip = MovieCreator.build(first);
Movie secondClip = MovieCreator.build(second);
// Add both movie clips
clips[0] = firstClip;
clips[1] = secondClip;
// Lists for the audio and video tracks
List<Track> videoTracks = new LinkedList<Track>();
List<Track> audioTracks = new LinkedList<Track>();
// Iterate over all the movie clips and collect the audio and video tracks
for (Movie movie : clips) {
    for (Track track : movie.getTracks()) {
        if (track.getHandler().equals("soun"))
            audioTracks.add(track);
        if (track.getHandler().equals("vide"))
            videoTracks.add(track);
    }
}
// Resulting movie built from the audio and video of the two clips
Movie result = new Movie();
// Append all audio and video
if (videoTracks.size() > 0)
    result.addTrack(new AppendTrack(videoTracks.toArray(new Track[videoTracks.size()])));
if (audioTracks.size() > 0)
    result.addTrack(new AppendTrack(audioTracks.toArray(new Track[audioTracks.size()])));
// Output the resulting movie to a new mp4 file
String timeStamp = new SimpleDateFormat("yyyyMMdd_HHmmss").format(new Date());
// A file separator and ".mp4" extension were missing here
String outputLocation = mediaStorageDir.getPath() + File.separator + timeStamp + ".mp4";
Container out = new DefaultMp4Builder().build(result);
FileChannel fc = new RandomAccessFile(outputLocation, "rw").getChannel();
out.writeContainer(fc);
fc.close();
// Now set the active URL to play the combined video
setURL(outputLocation);
My guess is that the audio/video synchronization gets thrown off further as more clips are added, because when I merge two longer clips the audio/video is fine. Is there any way to prevent this poor synchronization of video and audio across multiple smaller clips, or has anyone found a solution for doing this with mp4parser? FFmpeg is another solution I am considering, but I have not found anyone else using it for this.
EDIT: I have discovered that the audio track is usually longer than the video track, so as more and more clips are appended into one, the accumulated difference is what causes the final result to be offset. I am going to solve this by cutting off audio samples.
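A minimal sketch of the diagnosis above, using plain arrays in place of mp4parser's track objects (the names `sampleDurations` and `timescale` mirror what `Track.getSampleDurations()` and `TrackMetaData.getTimescale()` return; the clip numbers are made up for illustration):

```java
// Sketch: compute a track's duration in seconds from its per-sample durations
// and its timescale, then compare the audio and video lengths of one clip.
public class DurationCheck {
    static double trackSeconds(long[] sampleDurations, long timescale) {
        double seconds = 0;
        for (long d : sampleDurations) {
            seconds += ((double) d) / timescale;
        }
        return seconds;
    }

    public static void main(String[] args) {
        // Hypothetical clip: AAC audio samples of 1024 units at a 44100 Hz
        // timescale, video samples of 3000 units at a 90000 Hz timescale.
        long[] audio = new long[44];   // 44 * 1024 / 44100 ≈ 1.0217 s
        java.util.Arrays.fill(audio, 1024);
        long[] video = new long[30];   // 30 * 3000 / 90000 = 1.0 s
        java.util.Arrays.fill(video, 3000);

        double audioSec = trackSeconds(audio, 44100);
        double videoSec = trackSeconds(video, 90000);
        // The per-clip audio overshoot; it accumulates on every append.
        System.out.printf("audio=%.4fs video=%.4fs diff=%.4fs%n",
                audioSec, videoSec, audioSec - videoSec);
    }
}
```

Because audio samples come in fixed-size chunks (1024 PCM frames for AAC), a clip's audio duration almost never lands exactly on the video duration, which is why the overshoot grows with every appended clip.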
Answer 0 (score: 1)
To put some code to Lucas's answer above:
1.
LinkedList<Track> videoTracks = new LinkedList<>();
LinkedList<Track> audioTracks = new LinkedList<>();
double[] audioDuration = {0}, videoDuration = {0};
for (Movie m : clips) {
    for (Track t : m.getTracks()) {
        if (t.getHandler().equals("soun")) {
            // Accumulate the audio duration in seconds
            for (long a : t.getSampleDurations())
                audioDuration[0] += ((double) a) / t.getTrackMetaData().getTimescale();
            audioTracks.add(t);
        } else if (t.getHandler().equals("vide")) {
            // Accumulate the video duration in seconds
            for (long v : t.getSampleDurations())
                videoDuration[0] += ((double) v) / t.getTrackMetaData().getTimescale();
            videoTracks.add(t);
        }
    }
    // Trim the longer track after each clip so the difference never accumulates
    adjustDurations(videoTracks, audioTracks, videoDuration, audioDuration);
}
2.
private void adjustDurations(LinkedList<Track> videoTracks, LinkedList<Track> audioTracks,
                             double[] videoDuration, double[] audioDuration) {
    double diff = audioDuration[0] - videoDuration[0];
    // Nothing to do
    if (diff == 0) {
        return;
    }
    // Audio is longer
    LinkedList<Track> tracks = audioTracks;
    double[] duration = audioDuration;
    // Video is longer
    if (diff < 0) {
        tracks = videoTracks;
        duration = videoDuration;
        diff *= -1;
    }
    // Count how many trailing samples of the longer track fit inside the difference
    Track track = tracks.getLast();
    long[] sampleDurations = track.getSampleDurations();
    long counter = 0;
    for (int i = sampleDurations.length - 1; i > -1; i--) {
        double sampleSeconds = ((double) sampleDurations[i]) / track.getTrackMetaData().getTimescale();
        if (sampleSeconds > diff) {
            break;
        }
        diff -= sampleSeconds;
        // Keep the running total for the track actually being trimmed
        // (the original always decremented audioDuration, even when trimming video)
        duration[0] -= sampleSeconds;
        counter++;
    }
    if (counter == 0) {
        return;
    }
    // Drop the trailing samples and update the original reference
    track = new CroppedTrack(track, 0, track.getSamples().size() - counter);
    tracks.removeLast();
    tracks.addLast(track);
}
Answer 1 (score: 0)
I was able to solve this problem using the editing technique described above. The trick is to keep track of how many clips have been merged, and to remove samples from the end of the audio track of the most recently added clip. As the resulting output mp4 grows with more clips, you need to strip more and more. This is partly due to the timing difference between the audio and video tracks: an audio track might be 1020ms while the video is 1000ms, so with 5 clips appended the audio ends up roughly 100ms longer than the video. You have to compensate for that.
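The accumulation described above is simple arithmetic; a sketch (with the 20 ms per-clip overshoot from the example, and an assumed constant AAC sample duration of 1024/44100 s) of how the drift grows and how many trailing audio samples would need to be cropped:

```java
public class DriftEstimate {
    // Seconds of audio overshoot left after appending n clips, given the
    // per-clip overshoot in seconds (e.g. 1.020 s audio vs 1.000 s video).
    static double driftAfter(int clips, double overshootPerClip) {
        return clips * overshootPerClip;
    }

    // How many whole trailing audio samples fit inside the drift, assuming a
    // constant sample duration (e.g. 1024/44100 s for AAC at 44.1 kHz).
    static long samplesToDrop(double driftSeconds, double sampleSeconds) {
        return (long) Math.floor(driftSeconds / sampleSeconds);
    }

    public static void main(String[] args) {
        double drift = driftAfter(5, 0.020);      // 5 clips at +20 ms each
        double aacSample = 1024.0 / 44100.0;      // ≈ 23.2 ms per sample
        System.out.printf("drift=%.3fs drop=%d samples%n",
                drift, samplesToDrop(drift, aacSample));
    }
}
```

This is only an estimate for intuition; the real per-sample durations vary per track, which is why the answer above walks the actual `getSampleDurations()` array from the end instead of dividing.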