
WebRTC Playback

You can play any stream via WebRTC, including streams published via WHIP (as described in WebRTC Publishing) or any other stream configured on your Flussonic server (see Data Source Types). You can use our embed.html player for playback, any other player with WebRTC support, or your own application.

The playback is carried out via the WHEP standard. For more details about WebRTC, WHIP and WHEP, see Using WebRTC protocol.

URLs for playback via WebRTC

To play the stream, use our embed.html player by opening the following URL in the browser:


http://FLUSSONIC-IP/STREAM_NAME/embed.html?proto=webrtc

where:

  • FLUSSONIC-IP is the IP address of your Flussonic server
  • STREAM_NAME is the name of your WebRTC stream

You can also use another player that supports WebRTC or develop your own application; in that case use the following URL:

http://FLUSSONIC-IP:PORT/STREAM_NAME/whep

See Streaming API reference.

This code must run on the client side that plays video from that URL. Below you will find recommendations on developing such a client application, including a full code example.
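If you develop your own client instead of using our library, note that WHEP playback is a single HTTP exchange: the client POSTs an SDP offer to the /whep URL and applies the SDP answer returned in the response body. A minimal browser sketch is below; the function names are illustrative, not part of the Flussonic library:

```javascript
// Build the playback URL in the http://FLUSSONIC-IP:PORT/STREAM_NAME/whep form.
function whepUrl(server, streamName) {
  return `${server}/${streamName}/whep`;
}

// Hypothetical hand-rolled WHEP client: negotiate a receive-only
// peer connection and attach the incoming tracks to a <video> element.
async function playWhep(videoElement, url) {
  const pc = new RTCPeerConnection();
  pc.addTransceiver("video", { direction: "recvonly" });
  pc.addTransceiver("audio", { direction: "recvonly" });
  pc.ontrack = (event) => {
    videoElement.srcObject = event.streams[0];
  };

  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);

  // WHEP: the SDP offer goes in the POST body, the answer comes back as text.
  const response = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/sdp" },
    body: offer.sdp,
  });
  const answer = await response.text();
  await pc.setRemoteDescription({ type: "answer", sdp: answer });
  return pc;
}

// Browser usage (placeholders as in the URL scheme above):
// playWhep(document.getElementById("player"), whepUrl("http://FLUSSONIC-IP:PORT", "STREAM_NAME"));
```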

Recommendations on developing the client application

To write the code, use the Flussonic WebRTC player library. It can be installed in one of the ways described below.

The description of the library classes and example code can be found on npm.

Installing the library components via NPM and webpack

To import our library to your project with webpack, download the package:


npm install --save @flussonic/flussonic-webrtc-player

Then import components to your application:


import {
  PUBLISHER_EVENTS,
  PLAYER_EVENTS,
  Player,
  Publisher,
} from "@flussonic/flussonic-webrtc-player";

See also the demo application.

Installing the library components without NPM and webpack

Add this line to the script section of your HTML page:

<script src="https://cdn.jsdelivr.net/npm/@flussonic/flussonic-webrtc-player/dist/index.min.js"></script>

The example of a webpage containing the player code is below.

Player examples — with Webpack and without Webpack

Our demo application that uses Webpack to import components:

  • Sample app with Webpack and our WebRTC player.

    In this example, the components are imported by Webpack into the application. You can download the application code and study how the player is implemented.

Demo WebRTC player in JavaScript that obtains components via <script>:

  • The Flussonic WebRTC player library code is available on the CDN https://www.jsdelivr.com, and you can import it into your web page. To do this, add the following line to the script section of your HTML file: <script src="https://cdn.jsdelivr.net/npm/@flussonic/flussonic-webrtc-player/dist/index.min.js"></script>

An example of a page with the player in JavaScript (similar code is included in the demo application):


<!DOCTYPE html>
<html>
  <head>


        <style>
      .app {
        display: flex;
        flex-direction: column;
        justify-content: space-between;
        height: calc(100vh - 16px);
      }
      .container {
        margin-bottom: 32px;
      }
      .video-container {
        display: flex;
      }
      .controls {
      }
      .config {
      }
      #player {
        width: 640px; height: 480px; border-radius: 1px
      }
      .button {
        height: 20px;
        width: 96px;
      }
      .preview {
        position: absolute;
        right: 0;
        bottom: 0;
        z-index: 100;
      }

      .preview-text {
        position: absolute;
        left: 0;
        top: 0;
        padding: 8px;
        background: black;
        color: white;
        z-index: 10;
      }

      #preview-video {
        width: 320px;
        height: auto;
        max-width: 320px;
        max-height: 240px;
      }
    </style>
    <script src="https://cdn.jsdelivr.net/npm/@flussonic/flussonic-webrtc-player/dist/index.js"></script>
  </head>
  <body>
    <div class="app">
    <div class="preview">
        <div class="preview-text">preview</div>
        <video id="preview-video" muted autoplay playsinline></video>
    </div>
      <div class="video-container">
        <video
                id="player"
                controls
                muted
                autoplay
                playsinline
        >
        </video>
        <pre id="debug"></pre>
      </div>

    <div class="container">
      <div class="config" id="config">
        <span id="hostContainer">
          <label for="host">Host: </label><input name="host" id="host" value="" />
        </span>
        <span id="nameContainer">
          <label for="name">Stream: </label><input name="name" id="name" value="" />
        </span>
      </div>
      <div class="controls" id="controls">
        <select id="quality">
          <option value="4:3:240">4:3 320x240</option>
          <option value="4:3:360">4:3 480x360</option>
          <option value="4:3:480">4:3 640x480</option>
          <option value="16:9:360" selected>16:9 640x360</option>
          <option value="16:9:540">16:9 960x540</option>
          <option value="16:9:720">16:9 1280x720 HD</option>
        </select>
        <button id="publish" class="button">Publish</button>
        <button id="play" class="button">Play</button>
        <button id="stop" class="button">Stop all</button>
      </div>
      <div class="errorMessageContainer" id="errorMessageContainer"></div>
    </div>

  <script>
    let wrtcPlayer = null;
    let publisher = null;

    const { Player, Publisher, PUBLISHER_EVENTS, PLAYER_EVENTS } = window.FlussonicWebRTC;

    const getHostElement = () => document.getElementById('host');
    const getHostContainerElement = () => document.getElementById('hostContainer');
    const getNameElement = () => document.getElementById('name');
    const getNameContainerElement = () => document.getElementById('nameContainer');
    const getPlayerElement = () => document.getElementById('player');
    const getPlayElement = () => document.getElementById('play');
    const getPublishElement = () => document.getElementById('publish');
    const getStopElement = () => document.getElementById('stop');
    const getQualityElement = () => document.getElementById('quality');

    const getStreamUrl = (
      hostElement = getHostElement(),
      nameElement = getNameElement(),
    ) =>
      `${hostElement && hostElement.value}/${nameElement && nameElement.value}`;
    const getPublisherOpts = () => {
      const [, , height] = document.getElementById('quality').value.split(/:/);
      return {
        preview: document.getElementById('preview-video'),
        constraints: {
          // video: {
          //   height: { exact: height }
          // },
          video: true,
          audio: true,
        },
        canvasCallback: (canvasElement) => {
            window.myCanvasElement = canvasElement;
        },
      };
    };

    const getPlayer = (
      playerElement = getPlayerElement(),
      streamUrl = getStreamUrl(),
      playerOpts = {
        retryMax: 10,
        retryDelay: 1000,
      },
      shouldLog = true,
      log = (...defaultMessages) => (...passedMessages) =>
        console.log(...[...defaultMessages, ...passedMessages]),
    ) => {
      const player = new Player(playerElement, streamUrl, playerOpts, shouldLog);
      player.on(PLAYER_EVENTS.PLAY, log('Started playing', streamUrl));
      player.on(PLAYER_EVENTS.DEBUG, log('Debugging play'));
      return player;
    };

    const stopPublishing = () => {
      if (publisher) {
        publisher.stop && publisher.stop();
        publisher = null;
      }
    };

    const stopPlaying = () => {
      if (wrtcPlayer) {
        wrtcPlayer.destroy && wrtcPlayer.destroy();
        wrtcPlayer = null;
      }
    };

    const stop = () => {
      stopPublishing();
      stopPlaying();

      getPublishElement().innerText = 'Publish';
      getPlayElement().innerText = 'Play';
    };

    const play = () => {
      wrtcPlayer = getPlayer();
      getPlayElement().innerText = 'Playing...';
      wrtcPlayer.play();
    };

    const publish = () => {
      if (publisher) publisher.stop();

      publisher = new Publisher(getStreamUrl(), getPublisherOpts(), true);
      publisher.on(PUBLISHER_EVENTS.STREAMING, streaming);
      publisher.start();
    };

    // Default values for the config inputs: set your server URL and stream name here
    const config = { host: '', name: '' };

    const setDefaultValues = () => {
        getHostElement().value = config.host;
        getNameElement().value = config.name;
    };

    const setEventListeners = () => {
      // Set event listeners
      getPublishElement().addEventListener('click', publish);
      getPlayElement().addEventListener('click', play);
      getStopElement().addEventListener('click', stop);
      getQualityElement().onchange = publish;
    };

    const main = () => {
      setDefaultValues();
      setEventListeners();
    };

    const streaming = () => {
        getPublishElement().innerText = 'Publishing...';
        getPublishElement().disabled = true;

        // drawing on publishing canvas
        if (window.myCanvasElement) {
          const ctx = window.myCanvasElement.getContext('2d', { alpha: false });
          ctx.filter = 'sepia(0.75)'; // Testing filters
          ctx.font = '128px sans-serif';
          ctx.fillStyle = 'white';
          ctx.textAlign = 'center';
          ctx.textBaseline = 'middle';
          (function loop() {
            ctx.fillText(
              `It's publishing to Flussonic!`,
              window.myCanvasElement.width / 2,
              window.myCanvasElement.height / 2,
              window.myCanvasElement.width - 100,
            );
            setTimeout(() => requestAnimationFrame(loop), 1000 / 30); // drawing at ~30fps
          })();
      }
    };

    window.addEventListener('load', main);
  </script>
    </body>
</html>

Copy this code to a file, for example index.html, and open it in a browser to check how the player works.

Load balancing with WHEP playback

Since WHEP is based on HTTP POST requests, you can use our load balancer to distribute play requests between servers in a cluster. The balancer will redirect POST requests to servers in the cluster using the 307 HTTP redirect code.

ABR and WebRTC

Flussonic supports adaptive bitrate streaming for WebRTC. ABR (Adaptive Bitrate) is an algorithm designed to deliver video efficiently to a wide range of devices. In adaptive bitrate streaming, multiple bitrate renditions of the same source are used by client players so that stream quality can adjust to the user's current network speed.
Flussonic Media Server automatically switches between the stream resolutions based on the user's network conditions. With continuous data transmission, the user receives the video in the highest possible quality. This way, you can provide the best experience for a viewer with a fluctuating internet connection on any device.

To compute a suitable bitrate for the user, Flussonic retrieves data from the browser using NACK (Negative ACKnowledgement) packet loss indicators.

Additionally, Flussonic can use the REMB or TWCC mechanisms to decide whether it is possible to switch to a higher bitrate (see Using REMB or TWCC for ABR).

The ABR option is enabled for WebRTC by default. This means that players will work in auto mode until a user chooses the resolution manually. To enable the auto mode again, select it in the player's settings.

You can disable the ABR mode by removing the webrtc_abr option from the stream settings.

Make sure to configure the transcoder to define the tracks for ABR switching, for example:


stream webrtc-abr {
  input fake://;
  webrtc_abr;
  transcoder vb=1000k size=1920x1080 bf=0 vb=300k size=320x240 bf=0 ab=64k acodec=opus;
}

If you prefer to have more control over the adaptive bitrate streaming, specify additional parameters for webrtc_abr:

  • start_track – Video track number from which playback starts. Possible values: v1, v2, v3, and so on. If not specified, or an audio track is specified (start_track=a3), or the specified video track does not exist, playback starts with the track in the middle of the list (e.g. v2 if you have tracks v1, v2, and v3) and then adjusts to the available bandwidth. If some tracks are excluded by the query parameter ?filter=tracks:..., Flussonic searches for an available track with a lower number, down to v0. If no track with a lower number is found, Flussonic takes the closest track with a higher number. Example: start_track=v4

  • loss_count – Packet loss counter, specified as an integer. The default value is 2. Example: loss_count=3

  • up_window – Switch bitrate to a higher value if in the last up_window seconds there were fewer than loss_count lost packets. The default value is 20. Example: up_window=17

  • down_window – Switch bitrate to a lower value if in the last down_window seconds there were more than loss_count lost packets. The default value is 5. Example: down_window=6

  • ignore_remb – If true, Flussonic ignores the REMB (Receiver Estimated Maximum Bitrate) reported by a peer when switching bitrate to a higher value. If false, the bitrate will not exceed the one sent by the client in the REMB. The default value is false. Example: ignore_remb=true

  • bitrate_prober – If true, Flussonic periodically sends probe packets to measure available bandwidth and switches bitrate to a higher value if possible. Learn more here: Using REMB or TWCC for ABR. The default value is false. Example: bitrate_prober=true

  • bitrate_probing_interval – The time interval of sending probe packets, in seconds. Learn more here: Using REMB or TWCC for ABR. Example: bitrate_probing_interval=2
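The switching rule that loss_count, up_window, and down_window describe can be sketched as a sliding-window loss counter. This is an illustrative model of the documented behavior, not Flussonic's actual implementation:

```javascript
// Decide whether to switch the rendition up, down, or stay, given the
// timestamps (in seconds) of recently lost packets and the current time.
function abrDecision(lostPacketTimestamps, now, opts) {
  const { lossCount = 2, upWindow = 20, downWindow = 5 } = opts || {};
  const lossesIn = (seconds) =>
    lostPacketTimestamps.filter((t) => now - t <= seconds).length;

  // More than loss_count losses in the last down_window seconds: go down.
  if (lossesIn(downWindow) > lossCount) return "down";
  // Fewer than loss_count losses in the last up_window seconds: try going up.
  if (lossesIn(upWindow) < lossCount) return "up";
  return "stay";
}

// With the defaults (loss_count=2, up_window=20, down_window=5):
console.log(abrDecision([98, 99, 100], 100, {})); // "down" (3 losses in 5 s)
console.log(abrDecision([10], 100, {}));          // "up" (quiet for 20 s)
console.log(abrDecision([85, 90, 95], 100, {}));  // "stay"
```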

Using REMB or TWCC for ABR

When playing streams via WebRTC, Flussonic uses the RTP protocol for sending video and audio frames. This protocol provides two mechanisms for measuring available bandwidth. Flussonic can use one of these mechanisms in its ABR algorithm to decide whether it is possible to switch to a higher bitrate.

REMB

The first mechanism uses REMB (Receiver Estimated Maximum Bitrate) reported by the client. The bitrate of the sent video will not exceed the one reported by the client in the REMB. However, if the REMB grows, Flussonic can switch to a track with a higher bitrate. Learn more about REMB.

This is a simple mechanism, however it has some drawbacks:

  • After temporary packet loss (for example, due to a network connection failure), REMB falls dramatically and then recovers very slowly, over 5-15 minutes. During this time, Flussonic cannot switch to a better-quality track even though the client is able to play it.
  • Flussonic cannot control this value because it is calculated on the client’s side.
  • This mechanism is marked as deprecated and is unlikely to be developed further.

The REMB mechanism is used in Flussonic by default, but you can switch it off by specifying ignore_remb=true in the stream's configuration. In this case, REMB values reported by the client will be ignored.

TWCC

It is possible to switch on the second mechanism, available as an RTP extension: TWCC (Transport-wide Congestion Control). Learn more about the extension.

In this case, Flussonic adds to each sent packet an RTP header extension that contains the extension ID and the packet sequence number. The client sends back an RTCP feedback message containing the arrival times and sequence numbers of the packets received on a connection. Thus, Flussonic knows the sending time and receiving time of each packet and can calculate the difference between them. Also, Flussonic knows the size of each packet, so it can calculate the bitrate the packets were actually sent with.

To estimate the maximum possible bitrate, Flussonic sends groups of so-called probe packets at regular intervals. These packets are sent at a bitrate higher than the current one. When the packets are received, Flussonic calculates their actual bitrate as described above. If after some iteration the calculated bitrate exceeds the bitrate of the next (higher quality) track by 10%, Flussonic switches to that track.
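The bitrate estimate described above can be sketched as follows. This is an illustrative model, not Flussonic internals: from TWCC feedback the sender knows each probe packet's arrival time, and it already knows each packet's size, so the achieved bitrate is total bits divided by elapsed time:

```javascript
// packets: [{ bytes, arrivalMs }], sorted by arrival time.
function achievedBitrate(packets) {
  const bits = packets.reduce((sum, p) => sum + p.bytes * 8, 0);
  const seconds =
    (packets[packets.length - 1].arrivalMs - packets[0].arrivalMs) / 1000;
  return bits / seconds; // bits per second
}

// Switch up only when the estimate beats the next track's bitrate by 10%.
function shouldSwitchUp(packets, nextTrackBitrate) {
  return achievedBitrate(packets) > nextTrackBitrate * 1.1;
}

// Ten 1200-byte probe packets arriving over 90 ms ≈ 1.07 Mbit/s:
const probes = Array.from({ length: 10 }, (_, i) => ({ bytes: 1200, arrivalMs: i * 10 }));
console.log(shouldSwitchUp(probes, 900000));  // true: 10% above a 900 kbit/s track
console.log(shouldSwitchUp(probes, 1000000)); // false: not 10% above a 1 Mbit/s track
```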

This mechanism provides more control and flexibility because most of its logic works on the sender side.

Note

Currently, Flussonic uses this mechanism in test mode and for the WHEP protocol only.

To use the TWCC mechanism, add the following parameters to the webrtc_abr directive in a stream configuration:

  • bitrate_prober=true – switches to using TWCC
  • bitrate_probing_interval – the time interval of sending probe packets, in seconds

For example:


webrtc_abr bitrate_prober=true bitrate_probing_interval=2;
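Putting this together with the transcoder example from the ABR section, a complete stream configuration might look like the sketch below (a combination of the examples above, shown for illustration):

```
stream webrtc-abr {
  input fake://;
  webrtc_abr bitrate_prober=true bitrate_probing_interval=2;
  transcoder vb=1000k size=1920x1080 bf=0 vb=300k size=320x240 bf=0 ab=64k acodec=opus;
}
```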