Custom video capture native webrtc


According to a topic on the webrtc-discuss Google group, cricket::VideoCapturer will be deprecated soon. To customize a video source, we should implement VideoTrackSourceInterface. I tried implementing the interface, but it didn't work. When I have a frame, I call OnFrame(const webrtc::VideoFrame& frame) as follows:

void StreamSource::OnFrame(const webrtc::VideoFrame& frame) {
  rtc::scoped_refptr<webrtc::VideoFrameBuffer> buffer(frame.video_frame_buffer());
}

In AddStreams() I create a video source with the following code:

rtc::scoped_refptr<webrtc::VideoTrackInterface> video_track(
    peer_connection_factory_->CreateVideoTrack(kVideoLabel, new mystream::StreamSource()));

My video does not play in the browser. What am I doing wrong?

I used the base class AdaptedVideoTrackSource and created a method OnFrameCaptured that is called from my capture thread; inside it I call OnFrame. It works fine!

class StreamSource : public rtc::AdaptedVideoTrackSource {
 public:
  void OnFrameCaptured(const webrtc::VideoFrame& frame);
};

void StreamSource::OnFrameCaptured(const webrtc::VideoFrame& frame) {
  OnFrame(frame);  // Hand the frame to AdaptedVideoTrackSource's sinks.
}


I got the answer in the Google group.

VideoFrameBuffer has an enum Type:

class VideoFrameBuffer : public rtc::RefCountInterface {
  // New frame buffer types will be added conservatively when there is an
  // opportunity to optimize the path between some pair of video source and
  // video sink.
  enum class Type {


To elaborate on user1658843's answer: create a custom video source class and define all the abstract methods. Here is an example:

class CustomVideoSource : public rtc::AdaptedVideoTrackSource {
 public:
  void OnFrameCaptured(const webrtc::VideoFrame& frame);
  void AddRef() const override;
  rtc::RefCountReleaseStatus Release() const override;
  SourceState state() const override;
  bool remote() const override;
  bool is_screencast() const override;
  absl::optional<bool> needs_denoising() const override;

 private:
  mutable volatile int ref_count_ = 0;
};

And the implementations:

void CustomVideoSource::OnFrameCaptured(const webrtc::VideoFrame& frame) {
  OnFrame(frame);  // Forward the frame to the track's sinks.
}

void CustomVideoSource::AddRef() const {
  rtc::AtomicOps::Increment(&ref_count_);
}

rtc::RefCountReleaseStatus CustomVideoSource::Release() const {
  const int count = rtc::AtomicOps::Decrement(&ref_count_);
  if (count == 0) {
    return rtc::RefCountReleaseStatus::kDroppedLastRef;
  }
  return rtc::RefCountReleaseStatus::kOtherRefsRemained;
}

webrtc::MediaSourceInterface::SourceState CustomVideoSource::state() const {
  return kLive;
}

bool CustomVideoSource::remote() const {
  return false;
}

bool CustomVideoSource::is_screencast() const {
  return false;
}

absl::optional<bool> CustomVideoSource::needs_denoising() const {
  return false;
}
Keep in mind this is just enough to get it working, not a full implementation. You should implement the abstract methods properly instead of returning hard-coded values. To send a frame, simply call OnFrameCaptured with the frame.

To add the stream:

custom_source = new rtc::RefCountedObject<CustomVideoSource>();
// Create a video track from our custom source.
rtc::scoped_refptr<webrtc::VideoTrackInterface> custom_video_track(
    g_peer_connection_factory->CreateVideoTrack(kVideoLabel, custom_source));
// Add the track to the stream.

I'm not an expert, just doing a project on my own and implementing things along the way. Feel free to correct me or add to this code.





  • Can you elaborate on what you did to make it work? Or how you got there, maybe resources that helped you? I have the same problem and have no idea where to start.
  • I'm also trying to achieve the same thing in C++. I'm pretty new to this and have been struggling with it for quite some time. If you don't mind, could you please direct me to a sample?