Good morning. So I'm working on a project in which I need to stream live video & audio over an RTMP connection, using FFmpeg's libraries from C++. As far as I know, video and audio go out as two separate streams, so right now I'm trying to get an audio-only stream working.
My problem: when I call FFmpeg's avcodec_fill_audio_frame() I get error code -22, which tells me nothing as far as I can see, and the FFmpeg library documentation... leaves a lot to be desired.
My MainForm.h has the following (relevant) members:
private: NAudio::CoreAudioApi::MMDevice^ waveDevice_chat;
private: NAudio::Wave::WaveIn^ waveInput_chat;
private: NAudio::Wave::IWavePlayer^ waveOutput_chat;
private: NAudio::Wave::BufferedWaveProvider^ waveProvider_chat;
The first step is to connect the microphone:
System::Void MainForm::btnConnectMic_Click(System::Object^ sender, System::EventArgs^ e) {
    msclr::interop::marshal_context context;
    waveEnumerator_chat = gcnew NAudio::CoreAudioApi::MMDeviceEnumerator();
    System::Collections::Generic::List<NAudio::CoreAudioApi::MMDevice^>^ wavePorts_chat =
        System::Linq::Enumerable::ToList(waveEnumerator_chat->EnumerateAudioEndPoints(DataFlow::Capture, DeviceState::Active));
    if (wavePorts_chat->Count > 0) {
        waveDevice_chat = (NAudio::CoreAudioApi::MMDevice^)(wavePorts_chat[0]);
        waveDevice_chat->AudioEndpointVolume->Mute = false;
        waveInput_chat = gcnew WaveIn();
        waveInput_chat->BufferMilliseconds = 50;
        waveInput_chat->DeviceNumber = 0;
        waveInput_chat->WaveFormat = gcnew NAudio::Wave::WaveFormat(44100, 1); // 44.1 kHz, mono, 16-bit PCM
        waveInput_chat->DataAvailable += gcnew System::EventHandler<NAudio::Wave::WaveInEventArgs ^>(this, &MainForm::waveInput_data_available);
        waveInput_chat->StartRecording();
        waveProvider_chat = gcnew BufferedWaveProvider(gcnew NAudio::Wave::WaveFormat(44100, 1));
        waveProvider_chat->DiscardOnBufferOverflow = true; // property assignment; the bare access had no effect
    }
}
Here is the event handler that NAudio calls whenever data is available:
void MainForm::waveInput_data_available(System::Object^ sender, WaveInEventArgs^ e) {
    if (waveProvider_chat->BufferedBytes + e->BytesRecorded > waveProvider_chat->BufferLength)
        waveProvider_chat->ClearBuffer();
    else
        waveProvider_chat->AddSamples(e->Buffer, 0, e->BytesRecorded);
}
Finally, here is the snippet that is supposed to fill my audio frame (this runs inside a background worker's loop):
uint8_t* new_buffer;
int result;
AVFrame* a_frame = av_frame_alloc();
AVStream* astrm;
AVCodec* acodec = avcodec_find_encoder(AVCodecID::AV_CODEC_ID_AAC);
/*
*
*
*
*/
// in the loop
if (read_buffer->Length <= 0)
    continue;
new_buffer = (uint8_t*)av_malloc((size_t)waveProvider_chat->BufferedBytes);
for (int i = 0; i < waveProvider_chat->BufferedBytes; i++)
    new_buffer[i] = (uint8_t)read_buffer[i];
AVPacket a_pkt;
av_init_packet(&a_pkt);
a_pkt.data = nullptr;
a_pkt.size = 0;
int got_a_packet = 0;
int a_encode = avcodec_fill_audio_frame(a_frame, astrm->codec->channels, astrm->codec->sample_fmt, new_buffer, read_buffer->Length, 0);
std::cout << "[FILL] encoded response: " << a_encode << std::endl;
Answer (score: 1):
When you look at the source code of avcodec_fill_audio_frame, you will see:
if (buf_size < needed_size)
    return AVERROR(EINVAL);
and EINVAL happens to be 22, so AVERROR(EINVAL) comes back to you as -22.
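As an aside, FFmpeg can turn these numeric codes into readable messages via av_strerror; a minimal sketch, reusing the a_encode variable from your snippet:

    char errbuf[AV_ERROR_MAX_STRING_SIZE];
    av_strerror(a_encode, errbuf, sizeof(errbuf));            // "Invalid argument" for -22
    std::cout << "[FILL] error: " << errbuf << std::endl;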
So in your case, I would guess, the buffer you pass in is simply not big enough.
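For completeness, here is a minimal sketch of how the required size could be computed before filling the frame. It is only an illustration built on the names in your snippet (astrm->codec, a_frame, new_buffer, waveProvider_chat) and assumes the encoder context has already been opened, so frame_size, channels and sample_fmt are valid:

    AVCodecContext* actx = astrm->codec;              // encoder context from your snippet

    // avcodec_fill_audio_frame reads nb_samples and format from the frame,
    // so set them first. One AAC frame holds actx->frame_size samples per channel.
    a_frame->nb_samples     = actx->frame_size;
    a_frame->format         = actx->sample_fmt;
    a_frame->channel_layout = actx->channel_layout;

    // Ask FFmpeg how many bytes such a frame needs in this sample format.
    int needed_size = av_samples_get_buffer_size(nullptr, actx->channels,
                                                 actx->frame_size, actx->sample_fmt, 0);

    // Only fill the frame once at least needed_size bytes are available,
    // otherwise the call fails with AVERROR(EINVAL), i.e. -22.
    if (waveProvider_chat->BufferedBytes >= needed_size) {
        // ... copy needed_size bytes of captured audio into new_buffer here ...
        int a_encode = avcodec_fill_audio_frame(a_frame, actx->channels, actx->sample_fmt,
                                                new_buffer, needed_size, 0);
    }

Note also that the data has to match the encoder's sample format: the native AAC encoder in recent FFmpeg versions expects planar float (AV_SAMPLE_FMT_FLTP), while the NAudio capture above delivers 16-bit interleaved PCM, so a conversion step (e.g. with libswresample) would be needed before the copy.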