I have an AAC-format audio file that I'm trying to convert to a raw PCM file, so that I can mix it with another audio file and play it later with AudioTrack.

After some research I came across this library, which decodes my AAC file properly. However, it only passes the decoded bytes straight to an AudioTrack. When I try to write the decoded bytes to an output stream instead, the resulting file contains only noise.

This is the code I use to decode the AAC file -
public void AACDecoderAndPlay() {
    ByteBuffer[] inputBuffers = mDecoder.getInputBuffers();
    ByteBuffer[] outputBuffers = mDecoder.getOutputBuffers();

    BufferInfo info = new BufferInfo();

    // create an audiotrack object
    AudioTrack audioTrack = new AudioTrack(AudioManager.STREAM_MUSIC, JamboxAudioTrack.FREQUENCY,
            JamboxAudioTrack.CHANNEL_CONFIGURATION, JamboxAudioTrack.AUDIO_ENCODING,
            JamboxAudioTrack.BUFFER_SIZE, AudioTrack.MODE_STREAM);
    audioTrack.play();

    long bytesWritten = 0;
    while (!eosReceived) {
        int inIndex = mDecoder.dequeueInputBuffer(TIMEOUT_US);
        if (inIndex >= 0) {
            ByteBuffer buffer = inputBuffers[inIndex];
            int sampleSize = mExtractor.readSampleData(buffer, 0);
            if (sampleSize < 0) {
                // We shouldn't stop the playback at this point, just pass the EOS
                // flag to mDecoder, we will get it again from the
                // dequeueOutputBuffer
                Log.d(LOG_TAG, "InputBuffer BUFFER_FLAG_END_OF_STREAM");
                mDecoder.queueInputBuffer(inIndex, 0, 0, 0, MediaCodec.BUFFER_FLAG_END_OF_STREAM);
            } else {
                mDecoder.queueInputBuffer(inIndex, 0, sampleSize, mExtractor.getSampleTime(), 0);
                mExtractor.advance();
            }

            int outIndex = mDecoder.dequeueOutputBuffer(info, TIMEOUT_US);
            switch (outIndex) {
                case MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED:
                    Log.d(LOG_TAG, "INFO_OUTPUT_BUFFERS_CHANGED");
                    outputBuffers = mDecoder.getOutputBuffers();
                    break;

                case MediaCodec.INFO_OUTPUT_FORMAT_CHANGED:
                    MediaFormat format = mDecoder.getOutputFormat();
                    Log.d(LOG_TAG, "New format " + format);
                    // audioTrack.setPlaybackRate(format.getInteger(MediaFormat.KEY_SAMPLE_RATE));
                    break;

                case MediaCodec.INFO_TRY_AGAIN_LATER:
                    Log.d(LOG_TAG, "dequeueOutputBuffer timed out!");
                    break;

                default:
                    ByteBuffer outBuffer = outputBuffers[outIndex];
                    Log.v(LOG_TAG, "We can't use this buffer but render it due to the API limit, " + outBuffer);

                    final byte[] chunk = new byte[info.size];
                    outBuffer.get(chunk); // Read the buffer all at once
                    outBuffer.clear(); // ** MUST DO!!! OTHERWISE THE NEXT TIME YOU GET THIS SAME BUFFER BAD THINGS WILL HAPPEN
                    audioTrack.write(chunk, info.offset, info.offset + info.size); // AudioTrack write data
                    if (info.offset > 0) {
                        Log.v(LOG_TAG, "" + info.offset);
                    }
                    try {
                        mOutputStream.write(chunk, info.offset, info.offset + info.size);
                        bytesWritten += info.offset + info.size;
                    } catch (IOException e) {
                        e.printStackTrace();
                    }
                    mDecoder.releaseOutputBuffer(outIndex, false);
                    break;
            }

            // All decoded frames have been rendered, we can stop playing now
            if ((info.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) {
                Log.d(LOG_TAG, "OutputBuffer BUFFER_FLAG_END_OF_STREAM");
                break;
            }
        }
    }
    Log.v(LOG_TAG, "Bytes written: " + bytesWritten);

    mDecoder.stop();
    mDecoder.release();
    mDecoder = null;

    mExtractor.release();
    mExtractor = null;

    audioTrack.stop();
    audioTrack.release();
    audioTrack = null;
}
To play the decoded file, I use a simple AudioTrack that reads from the buffer and plays -
public void start() {
    new Thread(new Runnable() {
        public void run() {
            try {
                Process.setThreadPriority(Process.THREAD_PRIORITY_URGENT_AUDIO);
                InputStream inputStream = new FileInputStream(playingFile);
                BufferedInputStream bufferedInputStream = new BufferedInputStream(inputStream);
                DataInputStream dataInputStream = new DataInputStream(bufferedInputStream);

                AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, FREQUENCY,
                        CHANNEL_CONFIGURATION, AUDIO_ENCODING, BUFFER_SIZE, AudioTrack.MODE_STREAM);
                short[] buffer = new short[BUFFER_SIZE / 4];

                long startTime = System.currentTimeMillis();
                track.play();
                while (dataInputStream.available() > 0) {
                    int i = 0;
                    while (dataInputStream.available() > 0 && i < buffer.length) {
                        buffer[i] = dataInputStream.readShort();
                        i++;
                    }
                    track.write(buffer, 0, buffer.length);
                    if (latency < 0) {
                        latency = System.currentTimeMillis() - startTime;
                    }
                }

                // int i = 0;
                // while ((i = dataInputStream.read(buffer, 0, BUFFER_SIZE)) > -1) {
                //     track.write(buffer, 0, i);
                // }

                track.stop();
                track.release();
                dataInputStream.close();
                inputStream.close();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }).start();
}
What am I missing?
Answer 0 (score: 0)
Your problem seems to be that you write the output as plain bytes (although I don't see the setup of mOutputStream anywhere in your code). Those plain bytes will be in your platform's native endianness (in practice, little-endian), but you then read them back as shorts using DataInputStream, which is platform-independent (specified as big-endian).
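To see the mismatch concretely, here is a small pure-Java sketch (no Android APIs, so the class and variable names are illustrative only): it lays out one 16-bit sample in little-endian byte order, the way native PCM appears on a little-endian device, then reads it back both ways. DataInputStream.readShort() assembles the bytes big-endian, so the value comes back byte-swapped:

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class EndianDemo {
    public static void main(String[] args) throws IOException {
        short sample = 0x1234;

        // Lay the sample out in little-endian byte order, as native PCM would be.
        byte[] pcm = new byte[2];
        ByteBuffer.wrap(pcm).order(ByteOrder.LITTLE_ENDIAN).putShort(sample);
        // pcm is now { 0x34, 0x12 }

        // DataInputStream is specified as big-endian: the bytes come back swapped.
        short wrong = new DataInputStream(new ByteArrayInputStream(pcm)).readShort();
        System.out.printf("readShort():        0x%04X%n", wrong);  // 0x3412

        // An explicitly little-endian ByteBuffer recovers the original sample.
        short right = ByteBuffer.wrap(pcm).order(ByteOrder.LITTLE_ENDIAN).getShort();
        System.out.printf("little-endian read: 0x%04X%n", right);  // 0x1234
    }
}
```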
The simplest way to fix it here is to use a byte array instead of a short array during playback; AudioTrack accepts both byte and short arrays, and when given a byte array it interprets it the right (native) way, which matches the output of MediaCodec. Just make sure the buffer size is an even number of bytes.
If you really do need the values as shorts, you need a reader that reads in little-endian mode (all current Android ABIs are little-endian). There doesn't seem to be any readily available API for this, but in practice it's not too hard. See e.g. the readLittleShort method in Java : DataInputStream replacement for endianness for an example of how to do it.
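For reference, a little-endian readShort along those lines might look like this (my own sketch, not the exact code from that answer):

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.EOFException;
import java.io.IOException;

public class LittleEndian {
    /** Reads a 16-bit value in little-endian order: low byte first. */
    static short readLittleShort(DataInputStream in) throws IOException {
        int low = in.read();
        int high = in.read();
        if ((low | high) < 0) {
            throw new EOFException();
        }
        return (short) ((high << 8) | low);
    }

    public static void main(String[] args) throws IOException {
        // 0x34 0x12 on disk is the little-endian encoding of 0x1234.
        DataInputStream in = new DataInputStream(
                new ByteArrayInputStream(new byte[] { 0x34, 0x12 }));
        System.out.printf("0x%04X%n", readLittleShort(in)); // prints 0x1234
    }
}
```

You would then fill the short[] buffer in the playback loop with readLittleShort(dataInputStream) instead of dataInputStream.readShort().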