
WebRTC Playback

About WebRTC

WebRTC is a P2P protocol for communication between two clients over an already established connection. For example, for two browsers to communicate with each other via WebRTC, they need to be connected by opening the same website on the Internet. The connection can also be established by means of a mediator, the so-called signaling server.

So there are two clients and a signaling server that connects them. Before starting to transmit video data, the clients need to establish the connection. To do so, they exchange two types of connection data:

  • Textual descriptions of media streams in the SDP format
  • ICE Candidates as part of an SDP

The signaling server (the mediator) makes it possible to transfer the data about the connection from one client to the other.
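To illustrate, the two kinds of connection data and the mediator's role can be sketched in JavaScript. The message shapes and function names here are assumptions for illustration only, not part of any Flussonic API:

```javascript
// Envelope for an SDP description (offer or answer) relayed via signaling.
function sdpMessage(from, description) {
  // description: { type: 'offer' | 'answer', sdp: '...' }
  return { kind: 'sdp', from, payload: description };
}

// Envelope for an ICE candidate relayed via signaling.
function iceMessage(from, candidate) {
  return { kind: 'ice', from, payload: candidate };
}

// The signaling server itself only forwards messages: it delivers each
// message to every connected client except the sender.
function relay(message, clients) {
  Object.entries(clients)
    .filter(([id]) => id !== message.from)
    .forEach(([, send]) => send(message));
}
```

Once both peers have exchanged SDP descriptions and ICE candidates through such a relay, the media itself flows directly between them.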

WebRTC is ideal for webinars, online communication, and video chats.

For more details, see Using WebRTC protocol.

How to organize the playback of published streams via WebRTC

On the Flussonic server, configure a published stream to which clients can publish video and from which Flussonic will take it for playback:

stream STREAM_NAME {
  input publish://;
}

ABR and WebRTC

Flussonic supports adaptive bitrate streaming for WebRTC. ABR (Adaptive Bitrate) is an algorithm designed to deliver video efficiently to a wide range of devices. In adaptive bitrate streaming, client players use multiple bitrate renditions of the same source so that the stream quality can adjust to the user's current network speed.
Flussonic Media Server automatically switches between the stream resolutions based on the user's network conditions. With continuous data transmission, the user receives the video in the highest possible quality. This way, you can provide the best user experience for a viewer with a fluctuating internet connection on any device.

To compute a suitable bitrate for the user, Flussonic retrieves data from the browser using NACK (Negative ACKnowledgement) packet loss indicators.

Additionally, Flussonic can use the REMB or TWCC mechanisms to decide whether it is possible to switch to a higher bitrate (see Using REMB or TWCC for ABR).

If the ABR option is enabled for WebRTC, players will work in auto mode until a user chooses the resolution manually. To enable the auto mode again, select it in the player's settings.

To enable ABR mode for WebRTC in Flussonic, add webrtc_abr in the stream settings:

stream webrtc-abr {
  input fake://;
  transcoder vb=1000k size=1920x1080 bf=0 vb=300k size=320x240 bf=0 ab=64k acodec=opus;
  webrtc_abr;
}

If you prefer to have more control over the adaptive bitrate streaming, specify additional parameters for webrtc_abr:

  • start_track – Video track number from which playback starts. Possible values: v1, v2, v3, and so on. The default value is v1.
    If the parameter is not specified, or if an audio track is specified (start_track=a3), playback starts from the v1 track and then adjusts to the available bandwidth.
    If the specified video track number does not exist, playback starts from the closest lower track number and then adjusts to the available bandwidth.
    If some tracks are excluded by the query parameter ?filter=tracks:..., Flussonic searches for an available track with a lower number down to v0. If no track with a lower number is found, Flussonic takes the closest track with a higher number.
  • loss_count – Packet loss counter, specified as an integer. The default value is 2.
  • up_window – Switch to a higher bitrate if fewer than loss_count packets were lost during the last up_window seconds. The default value is 20.
  • down_window – Switch to a lower bitrate if more than loss_count packets were lost during the last down_window seconds. The default value is 5.
  • ignore_remb – If true, Flussonic ignores the REMB (Receiver Estimated Maximum Bitrate) reported by a peer when switching to a higher bitrate. If false, the bitrate will not exceed the one sent by the client in the REMB. The default value is false.
  • bitrate_prober – If true, Flussonic periodically sends probe packets to measure the available bandwidth and switches to a higher bitrate if possible. Learn more here: Using REMB or TWCC for ABR. The default value is false.
  • bitrate_probing_interval – The time interval between sending probe packets, in seconds, for example bitrate_probing_interval=2. Learn more here: Using REMB or TWCC for ABR.
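The switching rules defined by loss_count, up_window, and down_window can be sketched as follows. This is a simplified illustration with hypothetical names, not Flussonic's actual implementation:

```javascript
// Decide whether to switch the rendition up, down, or stay, based on
// NACK-reported packet losses within the configured time windows.
function abrDecision(lossEvents, nowSec, { lossCount = 2, upWindow = 20, downWindow = 5 } = {}) {
  // lossEvents: timestamps (in seconds) of reported packet losses
  const lossesIn = (windowSec) =>
    lossEvents.filter((t) => nowSec - t <= windowSec).length;

  // More than loss_count losses in the last down_window seconds: go down.
  if (lossesIn(downWindow) > lossCount) return 'down';
  // Fewer than loss_count losses in the last up_window seconds: try going up.
  if (lossesIn(upWindow) < lossCount) return 'up';
  return 'stay';
}
```

For example, a burst of three losses in the last five seconds triggers a downswitch with the default settings, while a clean 20-second window allows an upswitch.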

To play the stream, you can use our embed.html player by opening the following URL in the browser:


where:

  • FLUSSONIC-IP is an IP address of your Flussonic server
  • STREAM_NAME is the name of your WebRTC stream

Alternatively, use any other player that supports WebRTC and specify the URL like so:

  • wss://FLUSSONIC-IP/STREAM_NAME/webrtc?transport=tcp — to deliver WebRTC stream over TCP.

The player code runs on the client side that plays video from the published stream. To write the code, use the Flussonic WebRTC player library.

The description of the library classes and the example code can be found at npm.

Installing the library components via NPM and webpack

To import our library to your project with webpack, download the package:

npm install --save @flussonic/flussonic-webrtc-player

Then import components to your application:

import {
} from "@flussonic/flussonic-webrtc-player";

The description of the library classes can be found at npm.

See also the demo application.

Installing the library components without NPM and webpack

Add this line to the script section of your HTML page:

<script src=""></script>

An example of a webpage containing the player code is shown below.

Player examples — with Webpack and without Webpack

Our demo application that uses Webpack to import components:

In this example, the components are imported by Webpack into the application. You can download the application code and study how the player is implemented.

Demo WebRTC player on JavaScript that obtains components via <script>:

  • The Flussonic WebRTC player library code for implementing the WebRTC player is available on the CDN, and you can import it into your web page. To do this, add the following line to the script section of your HTML file: <script src=""></script>.

An example of a page with the player in JavaScript (similar code is included in the demo application):

<!DOCTYPE html>
<html>
  <head>
    <style>
      .app {
        display: flex;
        flex-direction: column;
        justify-content: space-between;
        height: calc(100vh - 16px);
      }
      .container {
        margin-bottom: 32px;
      }
      .video-container {
        display: flex;
      }
      .controls {
      }
      .config {
      }
      #player {
        width: 640px;
        height: 480px;
        border-radius: 1px;
      }
      .button {
        height: 20px;
        width: 96px;
      }
    </style>
    <script src=""></script>
  </head>
  <body>
    <div class="app">
      <div class="video-container">
        <video id="preview" autoplay muted playsinline></video>
        <video id="player" controls autoplay playsinline></video>
        <pre id="debug"></pre>
      </div>

      <div class="container">
        <div class="config" id="config">
          <span id="hostContainer">
            <label for="host">Host: </label><input name="host" id="host" value="" />
          </span>
          <span id="nameContainer">
            <label for="name">Stream: </label><input name="name" id="name" value="" />
          </span>
        </div>
        <div class="controls" id="controls">
          <select id="quality">
            <option value="4:3:240">4:3 320x240</option>
            <option value="4:3:360">4:3 480x360</option>
            <option value="4:3:480">4:3 640x480</option>
            <option value="16:9:360" selected>16:9 640x360</option>
            <option value="16:9:540">16:9 960x540</option>
            <option value="16:9:720">16:9 1280x720 HD</option>
          </select>
          <button id="publish" class="button">Publish</button>
          <button id="play" class="button">Play</button>
          <button id="stop" class="button">Stop all</button>
        </div>
        <div class="errorMessageContainer" id="errorMessageContainer"></div>
      </div>
    </div>

    <script>
      let wrtcPlayer = null;
      let publisher = null;

      const { Player, Publisher, PUBLISHER_EVENTS, PLAYER_EVENTS } = this.FlussonicWebRTC;

      const getHostElement = () => document.getElementById('host');
      const getHostContainerElement = () => document.getElementById('hostContainer');
      const getNameElement = () => document.getElementById('name');
      const getNameContainerElement = () => document.getElementById('nameContainer');
      const getPlayerElement = () => document.getElementById('player');
      const getPlayElement = () => document.getElementById('play');
      const getPublishElement = () => document.getElementById('publish');
      const getStopElement = () => document.getElementById('stop');
      const getQualityElement = () => document.getElementById('quality');

      const getStreamUrl = (
        hostElement = getHostElement(),
        nameElement = getNameElement(),
      ) =>
        `${hostElement && hostElement.value}/${nameElement && nameElement.value}`;

      const getPublisherOpts = () => {
        const [, , height] = getQualityElement().value.split(/:/);
        return {
          preview: document.getElementById('preview'),
          constraints: {
            // video: {
            //   height: { exact: height },
            // },
            video: true,
            audio: true,
          },
        };
      };

      const getPlayer = (
        playerElement = getPlayerElement(),
        streamUrl = getStreamUrl(),
        playerOpts = {
          retryMax: 10,
          retryDelay: 1000,
        },
        shouldLog = true,
        log = (...defaultMessages) => (...passedMessages) =>
          console.log(...[...defaultMessages, ...passedMessages]),
      ) => {
        const player = new Player(playerElement, streamUrl, playerOpts, true);
        player.on(PLAYER_EVENTS.PLAY, log('Started playing', streamUrl));
        player.on(PLAYER_EVENTS.DEBUG, log('Debugging play'));
        return player;
      };

      const stopPublishing = () => {
        if (publisher) {
          publisher.stop && publisher.stop();
          publisher = null;
        }
      };

      const stopPlaying = () => {
        if (wrtcPlayer) {
          wrtcPlayer.destroy && wrtcPlayer.destroy();
          wrtcPlayer = null;
        }
      };

      const stop = () => {
        stopPublishing();
        stopPlaying();
        getPublishElement().innerText = 'Publish';
        getPlayElement().innerText = 'Play';
      };

      const play = () => {
        wrtcPlayer = getPlayer();
        wrtcPlayer.play();
        getPlayElement().innerText = 'Playing...';
      };

      const publish = () => {
        if (publisher) publisher.stop();

        publisher = new Publisher(getStreamUrl(), getPublisherOpts(), true);
        publisher.on(PUBLISHER_EVENTS.STREAMING, () => {
          getPublishElement().innerText = 'Publishing...';
        });
        publisher.start();
      };

      const setDefaultValues = () => {
        getHostElement().value = ''; // for example, wss://your-flussonic-host
        getNameElement().value = ''; // the name of your stream
      };

      const setEventListeners = () => {
        // Set event listeners
        getPublishElement().addEventListener('click', publish);
        getPlayElement().addEventListener('click', play);
        getStopElement().addEventListener('click', stop);
        getQualityElement().onchange = publish;
      };

      const main = () => {
        setDefaultValues();
        setEventListeners();
      };

      window.addEventListener('load', main);
    </script>
  </body>
</html>

Copy this code to a file, for example index.html, and open it in the browser to check how the player works.

Playing streams via WHAP

For a long time, WebRTC was not adopted in the broadcasting and streaming industry because it has no standard signaling protocol and is too complex to implement in broadcasting tools and applications. To solve this problem, the WHAP protocol was designed.

WHAP (WebRTC-HTTP access protocol) provides a simple and media server-agnostic way of playing WebRTC streams that can be easily integrated into existing broadcasting tools. The whole WebRTC negotiation process in WHAP is reduced to an HTTP POST request that sends the SDP offer and a 200/202 response from the media server that returns the SDP answer. At the same time, WHAP keeps all the advantages of WebRTC, such as low latency, resilience, bandwidth adaptation, encryption, support for common codecs, adaptive bitrate, and so on.
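A minimal sketch of this negotiation in browser JavaScript follows. The helper names buildWhapRequest and whapPlay are hypothetical, introduced for illustration; only the POST-with-SDP exchange itself is taken from the description above:

```javascript
// Build the single HTTP request that WHAP needs: a POST whose body is
// the raw SDP offer. The 200/202 response body is the SDP answer.
function buildWhapRequest(offerSdp) {
  return {
    method: 'POST',
    headers: { 'Content-Type': 'application/sdp' },
    body: offerSdp,
  };
}

// In a browser, the whole playback setup then looks roughly like this
// (pc is an RTCPeerConnection created by the caller):
async function whapPlay(whapUrl, pc) {
  pc.addTransceiver('video', { direction: 'recvonly' });
  pc.addTransceiver('audio', { direction: 'recvonly' });

  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);

  const response = await fetch(whapUrl, buildWhapRequest(offer.sdp));
  await pc.setRemoteDescription({ type: 'answer', sdp: await response.text() });
  return pc;
}
```

There is no separate signaling channel: one HTTP round trip replaces the whole offer/answer exchange.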


This protocol works similarly to WHIP, which is used for publishing streams.

Flussonic Media Server allows you to play streams via WHAP without any specific configuration. Just follow the steps described above for WebRTC playback, but in the WebRTC player configuration set the option whipwhap: true in the Player settings.

import Player from '../player';

const player = new Player(playerElement, streamUrl, {
  whipwhap: true,
});

The description of the Player class and all its parameters can be found at npm.

After that, you can use the following URL for playing the stream in your application:


See Streaming API reference.

Using REMB or TWCC for ABR

When playing streams via WebRTC, Flussonic uses the RTP protocol to send video and audio frames. RTP provides two mechanisms for measuring available bandwidth. Flussonic can use one of these mechanisms in its ABR algorithm to decide whether it is possible to switch to a higher bitrate.


The first mechanism uses REMB (Receiver Estimated Maximum Bitrate) reported by the client. The bitrate of the sent video will not exceed the one sent by the client in the REMB. However, if the REMB grows, Flussonic can switch to a track with a higher bitrate. Learn more about REMB.

This is a simple mechanism, but it has some drawbacks:

  • After temporary packet loss (for example, due to a network connection failure), REMB drops dramatically and then recovers very slowly (over 5-15 minutes). During this time, Flussonic cannot switch to a better-quality track even though the client is able to play it.
  • Flussonic cannot control this value because it is calculated on the client's side.
  • This mechanism is marked as deprecated and will probably not be developed further.

The REMB mechanism is used in Flussonic by default, but you can switch it off by specifying ignore_remb=true in the stream's configuration. In this case, REMB values reported by the client are ignored.


It is possible to switch on the second mechanism, available as an RTP extension: TWCC (Transport-wide Congestion Control). Learn more about the extension.

In this case, Flussonic adds to each sent packet an RTP header extension that contains the extension ID and the packet sequence number. The client sends back an RTCP feedback message containing the arrival times and sequence numbers of the packets received on a connection. Thus, Flussonic knows the sending time and receiving time of each packet and can calculate the difference between them. Also, Flussonic knows the size of each packet, so it can calculate the bitrate the packets were actually sent with.

To estimate the maximum possible bitrate, Flussonic sends groups of so-called probe packets at regular intervals. These packets are sent with a bitrate higher than the current one. When the packets are received, Flussonic calculates their actual bitrate as described above. If after some iteration the calculated bitrate exceeds the bitrate of the next (higher-quality) track by 10%, Flussonic switches to the next track.
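The calculation described above can be sketched as follows. This is a simplified illustration, not Flussonic's code; the function names and packet data are assumptions:

```javascript
// From TWCC feedback we know each probe packet's size and arrival time,
// so the delivered bitrate over a probe group is total bits / elapsed time.
function estimateBitrate(packets) {
  // packets: [{ bytes, arrivalMs }], sorted by arrival time
  const totalBytes = packets.reduce((sum, p) => sum + p.bytes, 0);
  const elapsedSec =
    (packets[packets.length - 1].arrivalMs - packets[0].arrivalMs) / 1000;
  return (totalBytes * 8) / elapsedSec; // bits per second
}

// Switching up is allowed once the estimate exceeds the next track's
// bitrate by 10%.
function canSwitchUp(estimatedBps, nextTrackBps) {
  return estimatedBps > nextTrackBps * 1.1;
}
```

For example, 125000 bytes delivered over one second gives an estimate of 1 Mbps, which would permit a switch to a 900 kbps track but not to a 950 kbps one.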

This mechanism provides more control and flexibility because most of its logic works on the sender side.


Currently, Flussonic uses this mechanism in test mode and for the WHAP protocol only.

To use the TWCC mechanism, add the following parameters to the webrtc_abr directive in a stream configuration:

  • bitrate_prober=true – switches to using TWCC
  • bitrate_probing_interval – the time interval between sending probe packets, in seconds

For example:

webrtc_abr bitrate_prober=true bitrate_probing_interval=2;