In WebRTC, no frames are transmitted from the C++ code to the browser after calling OnFrame in the capturer

Time: 2017-06-28 12:41:37

Tags: c++ webrtc

I'm trying to bring my application up to date with WebRTC. It is basically a desktop streaming application in which video from a C++ application is streamed to a browser using WebRTC. My implementation used to rely on a number of now-deprecated things such as SignalFrameCaptured and cricket::CapturedFrame. Looking at current WebRTC, those classes/signals no longer seem to exist.

Here is my capturer:

class Capturer
    : public cricket::VideoCapturer,
      public webrtc::DesktopCapturer::Callback
{
 public:
   sigslot::signal1<Capturer*> SignalDestroyed;

   Capturer();
   ~Capturer();

   void ResetSupportedFormats(const std::vector<cricket::VideoFormat>& formats);
   bool CaptureFrame();

   // cricket::VideoCapturer overrides.
   cricket::CaptureState Start(const cricket::VideoFormat& format) override;
   void Stop() override;
   bool IsRunning() override;
   bool IsScreencast() const override;
   bool GetPreferredFourccs(std::vector<uint32_t>* fourccs) override;

   // webrtc::DesktopCapturer::Callback override, invoked for every captured frame.
   void OnCaptureResult(webrtc::DesktopCapturer::Result result,
                        std::unique_ptr<webrtc::DesktopFrame> desktop_frame) override;

 private:
   bool running_;
   int64_t initial_timestamp_;
   int64_t next_timestamp_;
   std::unique_ptr<webrtc::DesktopCapturer> _capturer;
};
Capturer::CaptureFrame() is called periodically from another thread, and Capturer::OnCaptureResult is invoked as expected with a DesktopFrame as argument. Now look at the implementation of OnCaptureResult:
void Capturer::OnCaptureResult(webrtc::DesktopCapturer::Result result,
                               std::unique_ptr<webrtc::DesktopFrame> desktopFrame)
{
  if (result != webrtc::DesktopCapturer::Result::SUCCESS)
  {
    return; // Never called, which leads me to conclude the desktop capturer works.
  }

  int width = desktopFrame->size().width();
  int height = desktopFrame->size().height();
  rtc::scoped_refptr<webrtc::I420Buffer> res_i420_frame = webrtc::I420Buffer::Create(width, height);

  webrtc::ConvertToI420(webrtc::VideoType::kABGR,
                        desktopFrame->data(),
                        0, 0,
                        width, height,
                        0,
                        webrtc::kVideoRotation_0,
                        res_i420_frame);

  webrtc::VideoFrame frame = webrtc::VideoFrame(res_i420_frame,
                                                webrtc::kVideoRotation_0,
                                                next_timestamp_ / rtc::kNumNanosecsPerMicrosec);

  this->OnFrame(frame, width, height);
}
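
For completeness, here is a minimal sketch of what Start() and CaptureFrame() could look like in this setup. The member names match the class above, but the bodies are assumptions, since the question does not show these methods:

// Sketch only: assumes the DesktopCapturer is created in Start() and that
// CaptureFrame() is the call made periodically by the capture thread.
cricket::CaptureState Capturer::Start(const cricket::VideoFormat& format)
{
  _capturer = webrtc::DesktopCapturer::CreateScreenCapturer(
      webrtc::DesktopCaptureOptions::CreateDefault());
  _capturer->Start(this);            // Register this object as the callback.
  initial_timestamp_ = rtc::TimeNanos();
  next_timestamp_ = 0;
  running_ = true;
  SetCaptureFormat(&format);         // Protected helper of cricket::VideoCapturer.
  return cricket::CS_RUNNING;
}

bool Capturer::CaptureFrame()
{
  if (!running_)
    return false;

  next_timestamp_ = rtc::TimeNanos() - initial_timestamp_;
  _capturer->CaptureFrame();         // Ends up in OnCaptureResult() above.
  return true;
}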

Looking at chrome://webrtc-internals, I can see that no frames are transmitted to the connected browser:

No incoming video stream

Back to the code, this is how I create the peer connection:

void Conductor::connectToPeer() {

  this->_peerConnectionFactory = webrtc::CreatePeerConnectionFactory();

  if (!this->_peerConnectionFactory.get())
  {
     std::cerr << "Failed to initialize PeerConnectionFactory" << std::endl;
     throw new std::runtime_error("Cannot initialize PeerConnectionFactory");
  }

  webrtc::PeerConnectionInterface::RTCConfiguration config;
  webrtc::PeerConnectionInterface::IceServer server;
  server.uri = "stun:stun.l.google.com:19302";
  config.servers.push_back(server);

  webrtc::FakeConstraints constraints;
  constraints.AddOptional(webrtc::MediaConstraintsInterface::kEnableDtlsSrtp, "true");

  this->_peerConnection = this->_peerConnectionFactory->CreatePeerConnection(config, &constraints, NULL, NULL, this);

  if (!this->_peerConnection.get())
  {
      std::cerr << "Failed to initialize PeerConnection" << std::endl;
      throw new std::runtime_error("Cannot initialize PeerConnection");
  }

  auto capturer = new Capturer();

  CapturerThread *capturerThread = new CapturerThread(capturer); // This thread's sole function is to call CaptureFrame periodically.

  rtc::scoped_refptr<webrtc::VideoTrackSourceInterface> videoSource = this->_peerConnectionFactory->CreateVideoSource(capturer, NULL);
  rtc::scoped_refptr<webrtc::VideoTrackInterface> videoTrack(this->_peerConnectionFactory->CreateVideoTrack("video_label", videoSource));
  rtc::scoped_refptr<webrtc::MediaStreamInterface> stream = this->_peerConnectionFactory->CreateLocalMediaStream("stream_label");

  stream->AddTrack(videoTrack);
  if (!this->_peerConnection->AddStream(stream))
  {
     std::cerr << "Adding stream to PeerConnection failed" << std::endl;
     throw new std::runtime_error("Cannot add stream");
  }

  typedef std::pair<std::string, rtc::scoped_refptr<webrtc::MediaStreamInterface>> MediaStreamPair;

  this->_activeStreams.insert(MediaStreamPair(stream->label(), stream));
  webrtc::SdpParseError error;

  webrtc::SessionDescriptionInterface* sessionDescription(webrtc::CreateSessionDescription("offer", this->_offer, &error));

  if (!sessionDescription)
  {
     std::cerr << "Cannot initialize session description." << std::endl;
     throw new std::runtime_error("Cannot set session description");
  }

  this->_peerConnection->SetRemoteDescription(DummySetSessionDescriptionObserver::Create(), sessionDescription);

  this->_peerConnection->CreateAnswer(this, NULL);
}

My problem is that even though the capturer and the peer connection are both in place as expected, no video frames are transmitted to the browser client. Is there anything I'm missing?

2 answers:

Answer 0 (score: 1)

I got to the bottom of it. After creating the answer, the local description was never being set at all.

Calling CreateAnswer is supposed to trigger the OnSuccess callback of the CreateSessionDescriptionObserver (the Conductor in my case).

I was not saving that answer as the local description. Here is how the OnSuccess method is now implemented:

void Conductor::OnSuccess(webrtc::SessionDescriptionInterface *desc)
{    
  this->_peerConnection->SetLocalDescription(DummySetSessionDescriptionObserver::Create(), desc);
}
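
For reference, DummySetSessionDescriptionObserver is the usual no-op observer from the WebRTC peerconnection example client; a minimal version (an assumption about what this helper looks like in this project) is roughly:

// Minimal set-description observer, modeled on the WebRTC example client;
// it only logs failures.
class DummySetSessionDescriptionObserver
    : public webrtc::SetSessionDescriptionObserver
{
 public:
  static DummySetSessionDescriptionObserver* Create()
  {
    return new rtc::RefCountedObject<DummySetSessionDescriptionObserver>();
  }
  void OnSuccess() override {}
  void OnFailure(const std::string& error) override
  {
    std::cerr << "SetSessionDescription failed: " << error << std::endl;
  }
};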

Doing this eventually triggers OnIceGatheringChange with kIceGatheringComplete as argument, which means both sides are ready to go.
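
As an illustration, that callback is part of webrtc::PeerConnectionObserver; a sketch of how the Conductor might react to it (the body is an assumption, not code from the question):

void Conductor::OnIceGatheringChange(
    webrtc::PeerConnectionInterface::IceGatheringState new_state)
{
  if (new_state == webrtc::PeerConnectionInterface::kIceGatheringComplete)
  {
    // Both descriptions are set and all local candidates have been gathered;
    // the answer SDP can now be sent to the browser over the signaling channel.
  }
}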

Answer 1 (score: 1)

According to a thread on the discuss-webrtc Google group:

You should implement VideoTrackSourceInterface. In my case I used the base class rtc::AdaptedVideoTrackSource and created a method OnFrameCaptured that is called from my thread. In OnFrameCaptured I call the base-class method OnFrame. It works fine!

 class StreamSource : public rtc::AdaptedVideoTrackSource
 {
  public:
   void OnFrameCaptured(const webrtc::VideoFrame& frame);
 };

 void StreamSource::OnFrameCaptured(const webrtc::VideoFrame& frame)
 {
   OnFrame(frame);  // Protected base-class method; forwards the frame to the sinks.
 }
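
A slightly fuller sketch of such a source, assuming the 2017-era API (the exact set of pure-virtual methods inherited from webrtc::VideoTrackSourceInterface can differ between WebRTC revisions):

 class StreamSource : public rtc::AdaptedVideoTrackSource
 {
  public:
   // Called from the capture thread; forwards the frame to the registered sinks.
   void OnFrameCaptured(const webrtc::VideoFrame& frame)
   {
     OnFrame(frame);
   }

   // Remaining VideoTrackSourceInterface / MediaSourceInterface requirements.
   bool is_screencast() const override { return true; }
   rtc::Optional<bool> needs_denoising() const override
   {
     return rtc::Optional<bool>(false);
   }
   webrtc::MediaSourceInterface::SourceState state() const override
   {
     return webrtc::MediaSourceInterface::kLive;
   }
   bool remote() const override { return false; }
 };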