WebRTC Publishing
On this page:
- About WebRTC
- How to organize publication using WebRTC
- WebRTC publication options (audio podcasts through WebRTC)
- Publishing streams via WHIP
About WebRTC
WebRTC is a peer-to-peer protocol for communication between two clients over an already established connection. For example, for two browsers to communicate with each other via WebRTC, they can be connected by opening the same website on the Internet. The connection can also be established through a mediator, the so-called signaling server.
So there are two clients and a signaling server that connects them. Before transmitting video data, the clients need to establish a connection. To do so, they exchange two types of connection data:
- Textual descriptions of media streams in the SDP format
- ICE Candidates as part of an SDP
The signaling server (the mediator) transfers this connection data from one client to the other.
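For illustration, here is roughly what this exchange looks like on the publishing side using the browser's standard RTCPeerConnection API. The WebSocket signaling transport and the message format are assumptions made for this sketch; the Flussonic WebRTC player library described below performs this exchange for you.
// Publisher side: create an SDP offer and send it over a signaling channel.
// The signaling server address and message format here are assumptions.
const signaling = new WebSocket("wss://example.com/signaling");
const pc = new RTCPeerConnection();

// ICE candidates gathered for this connection are sent to the other client.
pc.onicecandidate = (event) => {
  if (event.candidate) {
    signaling.send(JSON.stringify({ type: "candidate", candidate: event.candidate }));
  }
};

async function publish() {
  // Capture the camera and microphone and attach the tracks to the connection.
  const media = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
  media.getTracks().forEach((track) => pc.addTrack(track, media));

  // Create the SDP offer and send it to the other client via the signaling server.
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  signaling.send(JSON.stringify({ type: "offer", sdp: offer.sdp }));
}

// Apply the SDP answer received from the other client via the signaling server.
signaling.onmessage = async (event) => {
  const message = JSON.parse(event.data);
  if (message.type === "answer") {
    await pc.setRemoteDescription({ type: "answer", sdp: message.sdp });
  }
};

signaling.onopen = () => publish();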
WebRTC is ideal for webinars, online communication, and video chats.
For more details, see Using WebRTC protocol.
How to organize publication using WebRTC
Warning
Some browsers allow video and audio publishing through WebRTC only over a secure connection. The browser might deny access to the camera and microphone from a page served over HTTP rather than HTTPS. However, this is allowed on local addresses (localhost, 127.0.0.1).
On the Flussonic server, add a published stream to the configuration, that is, a stream with the source publish://:
stream published {
input publish://;
}
You can also add a stream through the Flussonic UI:
- Go to the Media tab and add a stream by clicking the Add button next to the Streams section.
- Then, in the stream settings, go to the Input tab and specify `publish://` in the URL field. Make sure that Published input is set to accept:
Now, the code that publishes video to the created stream must run on the client side. Use the Flussonic WebRTC player library to write it.
To configure publishing through WebRTC:
- Go to the options on the same Input tab:
- Set the necessary values:
Installing the library components via NPM and webpack
To import our library into your project with webpack, install the package:
npm install --save @flussonic/flussonic-webrtc-player
Then import components to your application:
import {
PUBLISHER_EVENTS,
PLAYER_EVENTS,
Player,
Publisher,
} from "@flussonic/flussonic-webrtc-player";
The description of the library classes can be found on npm.
See also the demo application, whose code can also be found further on this page.
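For orientation, a minimal publishing sketch built on these imports might look like the following. The server URL is a placeholder, and the exact constructor signature, option names, and event constants should be checked against the package description on npm.
import { PUBLISHER_EVENTS, Publisher } from "@flussonic/flussonic-webrtc-player";

// Placeholder URL of the "published" stream configured above on your server.
const url = "https://flussonic.example.com/published";

// Assumed options shape; see the package description on npm for the full list.
const publisher = new Publisher(url, {
  constraints: { video: true, audio: true },
});

// Assumed event constant and event API; log when publishing actually starts.
publisher.on(PUBLISHER_EVENTS.STREAMING, () => console.log("publishing"));

publisher.start();  // begin capturing and publishing
// publisher.stop(); // call later to stop publishing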
Installing the library components without NPM and webpack
Add this line to the script section of your HTML page:
<script src="https://cdn.jsdelivr.net/npm/@flussonic/flussonic-webrtc-player/dist/index.min.js"></script>
An example of a web page containing the player code can be found further on this page.
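When the bundle is included this way, the classes are exposed on a global object instead of being imported. A minimal sketch, assuming the bundle exposes a FlussonicWebRTC global (check the demo page or the bundle itself for the exact name):
// Runs on a page that includes the script tag above.
// The global name FlussonicWebRTC is an assumption for this sketch.
const { Publisher } = window.FlussonicWebRTC;

const publisher = new Publisher("https://flussonic.example.com/published", {
  constraints: { video: true, audio: true },
});
publisher.start();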
WebRTC publication options
Audio podcasts through WebRTC
To publish only the audio track without video, use the following configuration in constraints within the options of the publisher instance:
import { Publisher } from "@flussonic/flussonic-webrtc-player";
// ...
publisher = new Publisher(
  // ...
  {
    constraints: {
      video: false,
      audio: true,
    },
  },
  // ...
);
If you omit the video option altogether, the result will be the same: only the audio track will be published to Flussonic.
To play such a stream, no additional configuration is needed.
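To check such a stream from a browser, you can use the Player class from the same package. A minimal sketch, with constructor arguments modeled on the demo application (verify them against the package description on npm):
import { Player } from "@flussonic/flussonic-webrtc-player";

// Assumed constructor arguments: a <video> element, the stream URL,
// player options, and a logging flag.
const video = document.getElementById("video");
const player = new Player(video, "https://flussonic.example.com/published", {}, true);
player.play();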
Muting a publication
To mute a publication, use the mute method:
import { Publisher } from "@flussonic/flussonic-webrtc-player";
// ...
publisher = new Publisher(/* your options */);
// ...
publisher.mute();
// ...
If you bind the mute method to a button in your client app, the user will be able to disable the sound in the output stream while publishing. The demo application has an example of such a button.
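For example, a hypothetical binding could look like this; only the mute() call comes from the library, the button element and handler are illustrative:
// Hypothetical button binding; only publisher.mute() comes from the library.
const muteButton = document.getElementById("mute-button");
muteButton.addEventListener("click", () => {
  publisher.mute(); // disable audio in the published stream
});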
Capturing screen
The WebRTC player allows you to publish a captured screen, which can be useful for demonstrations. To do this, use the shareScreen method:
import { Publisher } from "@flussonic/flussonic-webrtc-player";
// ...
publisher = new Publisher(/* your options */);
// ...
publisher.shareScreen();
// ...
To switch back to capturing video from the camera, call this method once again. If you bind the shareScreen method to a button in your client app, the user will be able to switch between capturing the screen and capturing from the camera during publishing.
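A hypothetical button binding for this could look like the following; each click calls shareScreen(), which switches between the screen and the camera:
// Hypothetical button binding; the button element and handler are illustrative.
const screenButton = document.getElementById("share-screen-button");
screenButton.addEventListener("click", () => {
  publisher.shareScreen(); // toggles between screen capture and the camera
});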
Publishing streams via WHIP
For a long time, WebRTC was not adopted by the broadcasting and streaming industry because it had no standard signaling protocol and was too complex to implement in broadcasting tools and applications. To solve this problem, the WHIP protocol was designed.
WHIP (WebRTC-HTTP ingest protocol) provides a simple, media-server-agnostic way of ingesting WebRTC streams that can be easily integrated into existing broadcasting tools. The whole WebRTC negotiation process in WHIP is reduced to an HTTP POST request that sends the SDP offer and a 200/202 response from the media server that returns the SDP answer. At the same time, WHIP keeps all the advantages of WebRTC, such as low latency, resilience, bandwidth adaptation, encryption, support for common codecs, adaptive bitrate, and so on.
Note
This protocol works similarly to WHAP, which is used for playing streams.
Flussonic Media Server allows you to publish streams via WHIP and does not require any specific configuration for it. Just follow the steps described above for publishing via WebRTC, but add the whipwhap: true option to the Publisher options in the WebRTC player configuration:
import { Publisher } from "@flussonic/flussonic-webrtc-player";
// ...
publisher = new Publisher(
  // ...
  {
    whipwhap: true,
  },
  // ...
);
The description of the Publisher class and all its parameters can be found on npm.
After that, you can use the following URL for the published stream in your application:
http://FLUSSONIC-IP:PORT/STREAM_NAME/whip
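For illustration, the WHIP negotiation described above can also be performed directly with the browser's standard APIs, without the player library. This simplified sketch sends the SDP offer to the Flussonic WHIP endpoint and applies the SDP answer; ICE trickling, authorization, and error handling are omitted:
// Simplified WHIP publication with standard browser APIs (no player library).
async function publishViaWhip(whipUrl) {
  const pc = new RTCPeerConnection();

  // Capture the camera and microphone and attach the tracks.
  const media = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
  media.getTracks().forEach((track) => pc.addTrack(track, media));

  // Send the SDP offer in a single HTTP POST request.
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  const response = await fetch(whipUrl, {
    method: "POST",
    headers: { "Content-Type": "application/sdp" },
    body: pc.localDescription.sdp,
  });

  // The response body contains the SDP answer from the media server.
  const answer = await response.text();
  await pc.setRemoteDescription({ type: "answer", sdp: answer });
}

publishViaWhip("http://FLUSSONIC-IP:PORT/STREAM_NAME/whip");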