Flussonic UGC Implementation Guideline

This guideline explains how to build a UGC service or platform with Flussonic Media Server. Flussonic Media Server is a multi-purpose software solution for launching high-load video streaming services of any scale. This document is intended for people who want to create their own UGC service or platform, are curious to learn how one is built, or are already taking their first steps in that direction. This document does not cover installing or configuring Flussonic Media Server. For information on installing Flussonic Media Server and getting started with it, see Installation and Quick Start.

"Introduction to UGC platforms" gives a brief overview of what a UGC platform is, its features, and its applications. In "Basic architecture design of a UGC platform", we discuss the basic architecture of a UGC platform or service, its components, the communication between those components, and ways to make your system resilient and scalable. A real-life project is examined in "Plan and Deploy: Real-Life Example".

Table of contents:

  1. Introduction to UGC platforms
  2. Basic architecture design of a UGC platform
    2.1. Components
    2.1.1. Publishing server
    2.1.2. Transcoding server (optional)
    2.1.3. DVR server (optional)
    2.1.4. Origin server
    2.1.5. Creator control panel
    2.1.6. Viewer control panel
  3. Workflow
  4. Making a reliable service
    4.1. Several Publishing Points
    4.2. Loadbalanced Transcoding Cluster
    4.3. DVR Cluster
    4.4. CDN
  5. Plan and Deploy: Real-Life Example
    5.1. Analysis Stage
    5.1.1. Step 1. Define a Niche
    5.1.2. Step 2. Define Your Target Audience
    5.1.3. Step 3. Define the Type of Content and Profiles of Creators
    5.1.4. Step 4. Define the Key Features
    5.2. Estimate Technical Requirements
    5.2.1. Network
    5.2.2. Creator Control Panel
    5.2.3. Viewer Control Panel
    5.2.4. Ingest Server(s)
    5.2.5. Transcoding Server(s)
    5.2.6. DVR Server(s)
    5.2.7. Origin Server(s)

Introduction to UGC platforms

User-generated Content (UGC) is any form of content (videos, images, reviews, posts, etc.) created by people and published online.

UGC is a great marketing method for promoting products and services. It helps to build a brand community, gain consumers' trust, raise awareness, and engage with the audience. Encouraging consumers to share their experiences and thoughts about the product or service helps to establish an online presence, build the brand reputation, and boost conversion.

You can come across plenty of examples of user-generated content on the Internet. We watch it on a daily basis: video game walkthroughs, smartphone reviews, podcasts about finance and investing, recipes, style tips, etc. Webcam broadcasts can also be considered UGC. For instance, a webcam at the docks can show ships passing by, while a webcam on the beach shows the current weather, surf conditions, and beach activity. There are even websites where you can watch what is happening in a particular part of the world in real time.
The amount of user-generated content grows every day, as does the number of web resources where this content can be published. UGC platforms are needed to gather, manage, explore, and arrange this content.

A user-generated content (UGC) platform is a software-as-a-service (SaaS) solution used to collect and manage the content shared by consumers.

UGC platforms may focus more on a particular form of content, like reviews, comments, ratings, visuals, etc. We will focus on live streaming and broadcasts — UGC streaming platforms, for instance, Twitch. Such platforms are used to stream and host videos for various purposes: from video game live streaming to business events.

The range of applications of UGC platforms is limited only by Internet availability. Today, UGC platforms are used in:

  • gaming (live broadcasts of tournaments and competitions in cybersports, walkthroughs and reviews of video games, etc.)
  • sports (live broadcasts and recordings of tournaments and competitions in various kinds of sports)
  • education (live broadcasts and recordings of lectures, seminars, webinars, etc., recordings of courses and specializations)
  • religion (live broadcasts and recordings of worship services)
  • events (presentation of a new smartphone model, broadcasts of cultural events, etc.)

Here are some of the features UGC platforms/services offer to their customers:

  • Streaming to multiple platforms
  • Monetization (subscriptions)
  • Live and VOD streaming
  • Pre-recorded live streams
  • Recording and storing streams
  • Live to VOD
  • Rewinding, and more.

Basic architecture design of a UGC platform

Let's dive into an architectural level of a UGC platform to see what is happening under the hood. In this section, you will discover:

  • the stream's journey from the creator to the viewer
  • what elements make up a UGC platform
  • how these elements connect
  • how to make your system resilient and scalable

The diagram below shows a recommended UGC platform architecture design which, in our experience, allows you to build the most effective system:

Diagram 1. UGC platform architecture design

UGC platform architecture design

First, we will have a look at the key components that keep UGC platforms up and running.

Key components

Here are the key components that are used to build up a UGC platform/service:

  • Publishing server
  • Transcoding server (optional)
  • DVR server (optional)
  • Origin server
  • Creator control panel
  • Viewer control panel

We will explore each of them in depth.

Publishing server

A publishing server is required to receive a stream from streaming software or a web browser. It takes an RTMP, SRT, or WebRTC stream in and then forwards it either to the transcoding server (if required) or to the origin server.

You can multistream the content as-is to multiple streaming platforms, such as YouTube or Twitch, to increase your reach. This way, you provide viewers with the opportunity to choose the best streaming platform for watching.

Flussonic Media Server can receive streams from various sources and of different formats. For more information about the types of sources and their configuration in Flussonic, see Data source types.

To learn how to push the stream from OBS to Flussonic Media Server, see Publishing a stream from OBS Studio to Flussonic Media Server.

We summarized all the information in the table below:

Inputs: RTMP, WebRTC, SRT streams
Features:
  1. receiving streams
  2. sending streams to the transcoding or origin server
  3. multistreaming to other streaming platforms and socials
Outputs: RTMP, WebRTC, SRT, M4S* streams


* M4S is a real-time streaming protocol for transmitting video between Flussonic servers only. It is a codec-agnostic protocol, which means it supports any codec. Refer to M4F and M4S protocols to learn more about M4S.

Transcoding server (optional)

A transcoding server is essential if you want your content to reach as many viewers as possible. It takes the incoming stream from the publishing server and transcodes it into an array of streams at different resolutions and bitrates. This enables adaptive bitrate streaming, which delivers content that matches viewers' available bandwidth and devices.

Let's look into what transcoding is and what it is not.

Transcoding implies both decoding and encoding. Decoding is the process of decompressing compressed data into a raw format. Encoding is the process of compressing raw, uncompressed data so it can be sent over the Internet. During encoding, various video parameters, such as resolution, bitrate, and type of compression, are determined. Both the encoder and the decoder rely on a codec — an algorithm for video/audio compression. A codec affects file size and image quality. H.264 and AAC are the most commonly used video and audio codecs for live streaming.


Remember that different protocols support different containers and codecs.

So transcoding is the process of decoding the stream, modifying some video and/or audio parameters of the content, and then re-encoding the stream for transmission over the Internet.

Transcoding is commonly used as an umbrella term for the following media tasks:

  • Transcoding

Transcoding refers to the process of changing the video and/or audio codec, or the type of compression. For instance, you can take MPEG-2 video and convert it to H.264 video.

  • Transsizing

Transsizing refers to the process of resizing the video frame and changing the resolution, such as bringing 3840×2160 (4K) down to 1920×1080 (1080p), 1280×720 (720p), 640×480 (480p), etc.

  • Transrating

Transrating refers to the process of lowering the bitrate, such as taking a 4K video at 45 Mbps and converting it to one or more lower-bitrate streams: 4K at 15 Mbps, Full HD (1080p) at 5 Mbps, HD (720p) at 2 Mbps, 480p at 1 Mbps, etc.

Transcoding can mean any one of the preceding tasks or a combination of them.

Diagram 2. Example of transcoding

Transcoding diagram

After the video and audio streams are compressed with a codec, they are packed into a container — a wrapper that holds the video stream, the audio stream, and metadata (bitrate, resolution, subtitles, etc.). The result is a video file that can be delivered via a streaming protocol such as HLS or MPEG-DASH.

The process of changing the container and streaming protocol without modifying the actual video and/or audio content is called transmuxing (also referred to as packetizing, repackaging, or rewrapping). Receiving an RTMP stream from an IP camera and transforming it into an HLS stream for playback is an example of transmuxing.

Transcoding ≠ transmuxing: they operate on different levels (transcoding on the content itself, transmuxing on the container and streaming protocol) and should not be confused.

Transcoding is computationally expensive and puts a heavy workload on the CPU and GPU, unlike transmuxing, which requires far fewer resources.

Suppose you want to stream a cybersport event online in 4K at 45 Mbps using the H.264 codec over SRT. If you attempt to deliver such a stream to your viewers directly, your audience will have trouble playing it. Here is why:

  • Not every device supports 4K resolution. Displays that cannot render 4K will fail to play the content.
  • Lack of bandwidth to watch 4K. An unstable or slow Internet connection, or insufficient bandwidth, causes 4K chunks to load slowly (or not at all), which means constant buffering in the player.

Consequently, without transcoding you will cut off almost everyone with slower Internet speeds, tablets, mobile phones, gaming consoles, and STBs. Transcoding ensures your stream is watchable on slower connections and on a variety of devices.
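The idea behind adaptive bitrate delivery can be sketched in a few lines. The ladder below is illustrative (the profile names and bitrates are example values, not Flussonic settings); it shows how a player picks the highest rendition that fits the available bandwidth:

```python
# Illustrative ABR ladder: each rendition pairs a resolution with a target
# video bitrate. The values are examples, not Flussonic defaults.
LADDER = [
    {"name": "4K",    "height": 2160, "video_kbps": 15000},
    {"name": "1080p", "height": 1080, "video_kbps": 5000},
    {"name": "720p",  "height": 720,  "video_kbps": 2000},
    {"name": "480p",  "height": 480,  "video_kbps": 1000},
]

def pick_rendition(available_kbps: int, ladder=LADDER) -> dict:
    """Return the highest-bitrate rendition that fits the available
    bandwidth, falling back to the lowest one if nothing fits."""
    for rendition in ladder:  # ladder is ordered highest to lowest bitrate
        if rendition["video_kbps"] <= available_kbps:
            return rendition
    return ladder[-1]

print(pick_rendition(6000)["name"])  # a 6 Mbps link gets the 1080p rendition
print(pick_rendition(500)["name"])   # a very slow link falls back to 480p
```

This is exactly the selection that an HLS or MPEG-DASH player performs against the multi-bitrate stream the transcoder produces.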

With Flussonic, you can perform all the necessary transcoding and transmuxing operations on video and audio streams.

For more information about transcoding and supported codecs and protocols, see Transcoding and Supported protocols and codecs.

To sum up:

Inputs: RTMP, WebRTC, SRT, M4S streams
Features:
  1. changing audio and video parameters (codec, bitrate, container, etc.)
  2. creating a multi-bitrate stream
  3. overlaying a logo
  4. transporting streams via a private IP network (M4S protocol highly recommended)
Outputs: transcoded streams

DVR server (optional)

A DVR server takes all incoming video streams and archives them for the VOD system. It gives your viewers the opportunity to pause, rewind, fast-forward, and re-watch any live stream, or record it and watch it later.

Flussonic Media Server provides a wide range of features for working with the archive, such as recording and saving live streams as VOD files (the live-to-VOD feature), broadcasting in different time zones (timeshift), and more.

For more information about DVR and its features, see: DVR.

DVR is optional in a UGC platform's workflow. If you do not want to provide the live-to-VOD feature for your viewers, you can skip the DVR. However, nowadays it is hard to imagine a UGC service without rewinding.

In conclusion:

Inputs:
  1. streams from the publishing server (if no transcoding server is used)
  2. transcoded streams (if a transcoding server is used)
Features:
  1. recording and storing copies of incoming streams
  2. working with the archive and its features (live-to-VOD, timeshift, etc.)
  3. playing streams from the archive (rewinding, watching the current live stream from the beginning)
Outputs:
  1. transcoded streams (for live viewers)
  2. copies of transcoded streams (for VOD users)

Origin server

The origin server delivers the content directly to viewers. It captures the streams from the transcoding server or the publishing server, and viewers fetch the streams directly from it.


Inputs:
  1. transcoded streams (for live viewers)
  2. copies of transcoded streams (for DVR users)
  3. streams from the publishing server (if no transcoding and/or DVR server is used)
Features: delivering content to viewers

Creator control panel

The creator control panel is an orchestration system that manages stream configuration and creator authentication and authorization.

Before a creator starts streaming, they have to request a URL and a key from the streaming platform. This is where the creator control panel comes into play: it verifies the creator's identity and checks whether they have permission to stream. Once the creator passes the authentication and authorization stages, the control panel returns the URL and stream key.

The creator control panel uses the system management database, which stores the pipeline configuration, stream configurations, stream keys for creators, and session keys for viewers. Based on the pipeline configuration and the available hardware resources, the creator control panel defines the configuration for the servers in the pipeline and then provisions the stream configurations to this pool of servers. Provisioning is the process of turning the desired configuration into the actual one and uploading it to a server.

Summarizing the above:

Inputs:
  1. requests for URLs and stream keys from creators
  2. requests for session keys from the viewer control panel
  3. requests for stream configuration from the servers in the pipeline
Features:
  1. defining stream configurations
  2. provisioning stream configurations to the servers in the pipeline
  3. providing stream keys for creators and session keys for viewers
Outputs:
  1. URLs and stream keys for creators
  2. session keys for viewers
  3. stream configurations for the pool of servers in the pipeline
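The key-issuing and provisioning flow summarized above can be sketched in a few lines. This is a hypothetical control-panel backend, not Flussonic's actual API: the field names, the `publish_url` format, and the rendition list are assumptions for illustration only.

```python
import secrets

def issue_stream_key() -> str:
    """Generate an unguessable stream key for a creator."""
    return secrets.token_urlsafe(24)

def build_stream_config(creator_id: str, publish_host: str) -> dict:
    """Define a stream configuration to be saved in the system management
    database and provisioned to the servers in the pipeline.
    The structure is illustrative, not a real Flussonic config."""
    return {
        "creator_id": creator_id,
        "stream_key": issue_stream_key(),
        # The URL + key pair is what the creator pastes into the
        # streaming software (OBS, vMix, etc.).
        "publish_url": f"rtmp://{publish_host}/live",
        "renditions": ["1080p", "720p", "360p"],
    }

config = build_stream_config("creator-42", "publish.example.com")
print(config["publish_url"])  # -> rtmp://publish.example.com/live
```

In a real deployment, this record would be written to the system management database and pushed out to the publishing, transcoding, and origin servers.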

Viewer control panel

The viewer control panel is a system that manages viewers' play sessions and viewer authentication and authorization.

To watch a live stream, viewers must retrieve a playback URL, so they request one from the viewer control panel. The viewer control panel then requests a session key for the streaming session from the system management database. If the viewer is allowed to access the stream, the system returns a playback URL that can be opened in a browser or media player to watch the stream.

To sum up:

Inputs: requests for session keys
Features: providing session keys for viewers
Outputs: session keys for viewers
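One common way to implement the session keys described above is a signed, expiring playback URL. The sketch below uses a plain HMAC token; Flussonic's actual authorization mechanisms work differently, so the URL layout, parameter names, and secret handling here are assumptions.

```python
import hashlib
import hmac
import time

# Example value: a secret shared between the control panel and the origin.
SECRET = b"shared-secret-between-panel-and-origin"

def make_playback_url(host: str, stream: str, ttl_s: int = 3600) -> str:
    """Build a playback URL carrying an expiry timestamp and an HMAC
    signature over the stream name + expiry, so the origin can verify it."""
    expires = int(time.time()) + ttl_s
    payload = f"{stream}:{expires}".encode()
    token = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return f"https://{host}/{stream}/index.m3u8?e={expires}&token={token}"

def verify(stream: str, expires: int, token: str) -> bool:
    """Origin-side check: recompute the HMAC and ensure the link is fresh."""
    payload = f"{stream}:{expires}".encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token) and expires > time.time()
```

The origin would reject any request whose token fails verification or whose expiry has passed, so leaked URLs stop working after a bounded time.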


Workflow

In this chapter, we will delve into how a UGC platform works: specifically, the steps the content goes through to reach the viewer's screen.

Let's get back to our diagram of a UGC platform:

UGC platform architecture design


In the diagram above, we present our recommended UGC platform layout, which, in our experience, works most effectively.

1. Creator <-> Creator control panel

The communication between the creator and the creator control panel is bidirectional and is based on a request-response model. And here is how it goes:

  1. Requesting a URL and a stream key

Before streaming starts, a creator needs to set up their streaming software, and to finish that setup they need a URL and a stream key. The URL and the stream key tell the software where to send the content (the publishing point). So the creator requests the URL and the stream key from the streaming platform.

  2. Saving the stream configs

The creator control panel defines the configuration for an incoming stream based on the pipeline configuration and available hardware resources. Once the stream configuration is defined, it is saved to the system management database.

  3. Returning the URL and the stream key

Then the creator control panel returns the URL and the stream key to access the publishing point.

2. Creator control panel <-> Publishing server, origin server

Having saved the stream configuration in the system management database, the creator control panel provisions the static stream configuration to all the servers in the pipeline. It creates the publishing point on the publishing server to receive the incoming stream.

3. Creator -> Streaming software or Web Browser

The creator then enters the URL and the stream key into the streaming software or browser to start streaming.

4. Streaming software or Web Browser -> Publishing server

Once the software setup is finished, the creator can start streaming. The stream is pushed to the publishing server via WebRTC (from a browser) or via RTMP or SRT (from streaming software).

5. Publishing server -> Transcoding server (optional)

Repackaging the incoming stream into the proprietary Flussonic-to-Flussonic M4S protocol, the publishing server pushes the stream to the transcoding, DVR, or origin server, depending on the pipeline.

The publishing server can also multistream to socials and other platforms, like YouTube, LinkedIn, etc.


One of the major features of Flussonic is that every time Flussonic Media Server receives a stream, the stream is unpacked and then packetized again. If the input stream has minor issues, Flussonic makes the output stream more stable, improving the end users' QoE while playing the stream.

6. Transcoding server -> DVR server (optional)

The transcoding server receives the M4S stream, unpacks and decodes it down to raw video and audio, and then creates multiple renditions of the same stream at different bitrates and resolutions.

The DVR server ingests the M4S streams from the transcoding server and stores copies of those streams.

7. DVR server -> Origin server (optional)

The origin server ingests M4S streams from the DVR server.

8. Viewer <-> Viewer control panel

The communication between the viewer and the viewer control panel is bidirectional and based on a request-response model (the same as between the creator and the creator control panel). It looks as follows:

  1. Requesting a URL to watch a streaming session

To watch the stream, a viewer needs a URL, so they request it from the viewer control panel.

  2. Returning the URL

After the playback URL is created, the viewer control panel returns it to a viewer.

9. Viewer control panel <-> Creator control panel

The viewer control panel requests a session key (PLAY_KEY) from the system management database in the creator control panel to create a playback URL.

10. Players, Web Browsers <-> Origin server

Finally, the origin server delivers the streams to players and/or web browsers. Depending on the application, the viewer can open a URL in a browser or media player.

Making a reliable service

This section describes architecture design approaches that let your system handle failures and scale in response to changes in workload and user demand. A reliable service must keep responding to customer requests despite high demand or maintenance events. The following features of Flussonic Media Server will help you build a stable system, so that your service does not suddenly fail and stays available at every stage of the content delivery pipeline.

Several Publishing Points

To ensure the system stays reliable and functions effectively when accepting published streams, publishing point redundancy is needed.

The redundancy of publishing points implies allocating a separate pool of servers to receive publishing streams based on server load or the creator's geographical location. Here is how we have implemented the mechanism of publishing point redundancy in Flussonic Cloud:

We use at least two DNS entries so that the creator always connects to at least one active server. We also have a separate sub-domain for each project. This way, we can provide a pool of servers based on server load or, for example, the creator's geographical location.

As a result:

  • multiple DNS entries for redundancy
  • a separate domain for each project to balance requests for publishing streams more accurately and allocate resources.
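The failover idea behind multiple DNS entries can be sketched as a client-side loop that tries each publishing endpoint in order. The hostnames are hypothetical, and the reachability probe is injected as a function (in practice it would open a TCP connection to the RTMP/SRT port) so the logic stays testable offline.

```python
from typing import Callable, Sequence

def choose_publish_endpoint(
    endpoints: Sequence[str],
    is_reachable: Callable[[str], bool],
) -> str:
    """Return the first reachable publishing endpoint.

    `endpoints` comes from the project's DNS entries; `is_reachable`
    stands in for an actual connection attempt."""
    for host in endpoints:
        if is_reachable(host):
            return host
    raise RuntimeError("no publishing endpoint is reachable")

# Example: the first DNS entry is down, so the second one is used.
endpoints = ["publish-1.example.com", "publish-2.example.com"]
print(choose_publish_endpoint(endpoints, lambda h: h.endswith("2.example.com")))
```

Ordering the endpoint list by server load or by proximity to the creator gives the load- and geography-aware behavior described above.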

Loadbalanced Transcoding Cluster

To create a redundant architecture for failover of transcoding server instances, a clustering mechanism is used. If one transcoding server instance fails, the other instances can start the processes on their end and perform the transcoding, so the service keeps working as expected.

To learn how to configure a redundant transcoding cluster in Flussonic, see Redundant transcoder configuration with cluster ingest.

Requests from publishing server instances should be load balanced across the cluster of transcoding server instances to prevent overload. See Load balancing in Flussonic to learn how to configure load balancing in Flussonic.
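As a minimal sketch of the balancing idea (not Flussonic's built-in balancer, whose configuration is covered in the linked article), here is a least-loaded assignment of new streams to transcoder instances, using stream count as a stand-in load metric:

```python
def assign_stream(loads: dict[str, int]) -> str:
    """Pick the transcoder instance currently running the fewest streams
    and record the new assignment. `loads` maps instance name -> count."""
    target = min(loads, key=loads.get)
    loads[target] += 1
    return target

# Hypothetical cluster state: transcoder-1 is busier than the other two.
cluster = {"transcoder-1": 4, "transcoder-2": 2, "transcoder-3": 2}
print(assign_stream(cluster))  # picks one of the least-loaded instances
```

Real balancers usually weight CPU/GPU load rather than a bare stream count, but the selection step is the same.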

DVR Cluster

It is critical for a reliable service to back up the DVR archive and avoid data loss caused by a sudden outage, system failure, or any other issue.

Flussonic provides a wide range of tools to back up a DVR archive and keep it available at all times, such as:

  • replication

The DVR archive is stored on two or more server instances, where one is the primary server and the others are secondary servers. Streams are pulled from the source and stored on the primary server, then replicated to the secondary servers.

To learn more about replication and its configuration, see Replication.

  • cross-replication

Two servers, a primary and a secondary, are used to record the archive. Both servers can access the source and retrieve streams from it, as well as retrieve the archive from each other. Cross-replication allows you to restore missing parts of the archive after the temporary unavailability of one of the servers and to synchronize data between them. Thus, continuous availability of the archive and data redundancy are ensured.

Refer to DVR Cross Replication to learn more about cross replication and its configuration in Flussonic.

  • Flussonic RAID

Flussonic RAID is an application-level RAID (Redundant Array of Independent Disks) offering high reliability, efficiency, and convenience when writing video data to dozens of disks, creating a single array. RAID allows you to increase performance and data redundancy of a system as well as increase storage capacity.

To learn more about Flussonic RAID, its features and configuration, see Flussonic RAID for DVR.

  • cluster DVR and segmented cache

To record the archive, several DVR servers are used: one primary and the others secondary. According to statistics, up to 90% of all views of live broadcast recordings happen within 24 hours after the live stream, so when broadcasting large-scale events it is necessary to use an SSD to take load off the HDD. Flussonic can use a separate segment cache on the SSD for this purpose. You can use the segment cache on a secondary server to store the DVR, while the primary server manages the entire archive.

To learn more, see Cluster DVR.


CDN

As the number of viewers and the amount of output traffic grow, at some point one server becomes incapable of handling stream delivery. As viewers from all over the world tune in, it becomes challenging to provide them with a good, reliable service, and the origin server gets overwhelmed serving more and more simultaneous requests.

This is where a CDN (Content Delivery Network) comes in. With multiple servers distributed across various geographic locations, a CDN delivers content efficiently worldwide, shortening the distance between the edge server and the viewer and thus minimizing latency and speeding up access to the content. A CDN caches the content in different PoPs (points of presence) to deliver it directly to the viewer. It also improves overall streaming performance by distributing the workload over multiple servers and offloading the origin server to keep it up and running.

Thus, a CDN provides availability, reliability, scalability, and efficiency while delivering the content to viewers.

To learn more about setting up a CDN in Flussonic, see Setting up CDN. Flussonic also works with third-party CDNs like Akamai; refer to Example: pushing a stream to the Akamai CDN to learn more.

Plan and Deploy: Real-Life Example

This chapter is devoted to developing a strategy for a small UGC streaming service — a virtual event platform. You will explore the business analysis stage as well as the technical requirements for the service.


No programming code or configs for this project are provided.

Analysis Stage

In the analysis stage, you'll figure out what you're going to do, for whom, and how. Let's move step-by-step:

Step 1. Define a Niche

The first step in building your own UGC streaming service or platform is to choose the market area your service will serve. Will it be health & fitness, online learning, arts and crafts, gaming, or something else? Will it cover only one use case or several? Defining your niche is crucial because it determines the main purpose of your service.

The example platform that we will build along the way focuses on virtual events (virtual conferences and summits, webinars, online workshops, virtual corporate events, and so on) for both big and small companies.

Step 2. Define Your Target Audience

Once the niche is found, you need to define who your audience is and what they want. You have to answer the questions like:

  • What do your viewers have in common?
  • What does your audience get from watching the content (education, entertainment, sport, etc.)?
  • How do viewers prefer to consume it (laptop, mobile phone, and so on)?

By answering these questions, you will get a better understanding of your audience and what they expect from your UGC streaming service or platform.

For the virtual events platform, the audience depends on the type of hosted event. If it is a corporate event, then the audience includes employees, stakeholders, board members, or even clients and customers.

Step 3. Define the Type of Content and Profiles of Creators

Now that the niche is found and you know who your target audience is, it is time to determine what type of content you want to provide and who will create it. Determine your target audience's needs and preferences to define what content they will watch.

Let's consider Twitch, for example. Twitch was initially created as a live streaming platform for gamers. So the primary types of content are video game live streaming and eSports competitions. In this case, gamers and tournament organizers are the content creators.

The virtual event platform focuses on various virtual or online events: from conferences and webinars to corporate parties. The content creators are the ones who host the events: individual employees, companies, or groups of companies.

Step 4. Define the Key Features

When creating a service or a platform, you must provide features for your customers that meet their needs and demands.

Below are some of the features of UGC platforms:

  • support for using vMix, OBS, ATEM, HDMI-to-RTMP converters, video editing consoles, and various browsers to publish streams to the server
  • recording live streams and storing them
  • multistreaming to social media
  • failover and backup
  • preparing multi-bitrate streams
  • scalability and load balancing
  • accessibility from any device and browser
  • data and analytics
  • embedded media player
  • uploading pre-recorded streams
  • authorization system
  • monetization
  • subscription system, and so on.

Depending on the goals and objectives of the platform/service, allocated budget, and available hardware and software resources, the set of features will vary.

For example, our virtual event platform will:

  • support vMix and OBS
  • record live streams and store them so the viewers can watch them later
  • provide failover and backup to keep the platform/service up and running if any issues occur
  • prepare multi-bitrate streams to ensure viewers with slow internet connection can still watch the stream
  • manage the available resources efficiently at uneven server loads
  • make watching the content accessible from any mobile devices and browsers
  • provide statistics of a system performance
  • authenticate and authorize viewers and creators.

Estimate Technical Requirements

Once the analysis stage is finished, you can begin to draw up the technical requirements for the project. This part is dedicated to:

  • Estimating the required input and output network bandwidths for each step of the video delivery pipeline (ingest server, transcoder, DVR, origin server) on the example of an online event broadcasting platform
  • Building a network architecture diagram using online event broadcasting platform as an example

It is essential to gather the technical details about the project to create a network architecture diagram and estimate the required input and output network bandwidths. Here we provide the questions arranged by topics to help you to gather that information:

Table 1. Questions to determine the network requirements

Topic Questions
Source (input) 1) What is the type of signal source (camera, software encoder, hardware encoder, browser)?
2) What are the parameters of the input stream (codec, bitrate, resolution, FPS)?
3) Which streaming protocol is used for transmitting video from the source (RTMP, SRT, WebRTC, and so on)?
4) What is the total number of incoming streams?
Transcoder 1) What are the output video stream profiles (codec, bitrate, resolution)?
2) What is the total number of output streams for distribution?
3) Is audio transcoding required? If yes, what are the output audio stream profiles (codec, bitrate)? (For example, Opus (WebRTC) -> AAC (HLS, DASH))
DVR archive (recording and storing) 1) Is stream recording required? If yes, how do you plan to use DVR (export recordings to VOD, implement catch-up, etc.)?
2) What is the required depth of the archive (one day, week, month)?
3) Where will streams be stored (cloud storage, disk, RAID)?
Origin server (output) 1) What are the types of client devices (smartphones, PCs, and so on)?
2) What is the streaming protocol (HLS, MPEG-DASH, WebRTC, etc.) for content distribution?
3) What is the estimated number of viewers?
4) Are you planning to distribute content using your own resources or third-party CDN (Akamai or others)?
Security 1) Is authorization needed? If yes, by what means: external backend or built-in Flussonic tools?
2) Is content encryption necessary?
Other requirements 1) Are multiple audio tracks in different languages required?
2) Do you need additional control over any video parameters? And so on.

For our online event streaming platform, the table will look like so:

Topic Description
Source (input) 1) Source type: a hardware encoder or a software encoder like vMix.
2) Stream parameters: H.264, 5,000 Kbps, 1920×1080, 25 FPS.
3) Streaming protocols: RTMP or SRT.
4) Number of input streams: up to 10.
Transcoder 1) Transcoding in three profiles: Full HD (1080p), HD (720p), SD (360p).
2) Number of output streams: up to 30.
3) Audio transcoding is not required.
DVR archive (recording and storing) 1) Recording and storing of streams is required. Viewers will have the opportunity to watch an event wherever and whenever they choose and to download the recording to their device.
2) Up to 10 hours for all 10 streams, stored in a separate data center. It must be possible to export archive fragments as MP4 files.
3) S3 cloud storage is used to store the recorded events.
Origin server (output) 1) Client devices: PC and smartphones.
2) Streaming protocol: HLS.
3) Estimated number of viewers: up to 100,000.
4) Using CDN Akamai to deliver the content.
Security 1) Authorization of users is required.
2) Encryption is not necessary.
Other requirements 1) It must be possible to create streams via the API.
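The last requirement, creating streams via the API, can be scripted from the control panel backend. The sketch below is illustrative only: the endpoint path, payload schema, and credentials are placeholders, not the documented Flussonic routes — consult the Flussonic API reference for the exact paths and fields.

```python
# Sketch of creating a stream over the server's HTTP API.
# NOTE: BASE, the route, the JSON schema, and the credentials are all
# placeholder assumptions, not the documented Flussonic API.
import base64
import json
import urllib.request

BASE = "https://flussonic.example.com"                   # placeholder server address
AUTH = base64.b64encode(b"admin:password").decode()      # placeholder credentials

def build_request(name: str) -> urllib.request.Request:
    """Build an authenticated PUT request that creates a publish-only stream."""
    body = json.dumps({"inputs": [{"url": "publish://"}]}).encode()  # assumed schema
    req = urllib.request.Request(
        f"{BASE}/streamer/api/v3/streams/{name}", data=body, method="PUT")
    req.add_header("Authorization", f"Basic {AUTH}")
    req.add_header("Content-Type", "application/json")
    return req

# To actually create the stream:
# with urllib.request.urlopen(build_request("event1")) as resp:
#     print(resp.status)
```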

Based on the data above, let's make a network diagram:

Pic 3. Event streaming network architecture diagram


The diagram above shows that four Flussonic servers are required. Two servers receive and transcode the streams (transcoding server №1 and transcoding server №2), and the other two (origin server №1 and origin server №2) record the streams and deliver the content to the clients.


Let's start by calculating the network bandwidth for the input. The data above will serve as a basis. The total value of the input bandwidth will be:

10 streams * (5,000 + 192)Kbps ≈ 52 Mbps

Let's calculate the output bandwidth of the transcoder. Each of the ten Full HD (1080p, 5,000 Kbps, H.264) input streams is transcoded into three video profiles, each with an audio track:

  • Full HD (1080p), H.264, 4,000 Kbps; AAC, 192 Kbps
  • HD (720p), H.264, 2,000 Kbps; AAC, 128 Kbps
  • SD (360p), H.264, 1,000 Kbps; AAC, 96 Kbps

Counting each of the 30 output renditions once (ten per profile), the total is:

10 streams * (4,000 + 2,000 + 1,000 + 192 + 128 + 96) Kbps ≈ 74 Mbps

Now let's see how much output bandwidth will be used at the playback stage. If all 100,000 viewers watch the Full HD stream at the same time, the output bandwidth will be:

100,000 * (4,000 + 192)Kbps ≈ 420 Gbps
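These estimates can be reproduced with a short script. Bitrates are taken from the tables above; the transcoder figure counts each of the 30 output renditions exactly once:

```python
# Rough bandwidth estimates for the example platform (values from the tables).
KBPS_PER_MBPS = 1_000
KBPS_PER_GBPS = 1_000_000

streams = 10
# Ingest: 5,000 Kbps video + 192 Kbps audio per contribution stream
ingest_mbps = streams * (5_000 + 192) / KBPS_PER_MBPS

# Transcoder egress: three video + three audio renditions per input stream
renditions_kbps = (4_000 + 192) + (2_000 + 128) + (1_000 + 96)
egress_mbps = streams * renditions_kbps / KBPS_PER_MBPS

# Playback worst case: 100,000 viewers, all on the Full HD rendition
playback_gbps = 100_000 * (4_000 + 192) / KBPS_PER_GBPS

print(round(ingest_mbps), round(egress_mbps), round(playback_gbps))  # → 52 74 419
```

For provisioning, round the playback figure up (419.2 Gbps → 420 Gbps) and leave headroom for protocol overhead.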

Creator Control Panel

The creator control panel is used for authorizing creators and providing URLs to publish streams to the publishing points. You can implement the authorization by means of Flussonic or with an external backend.

If you do not have a backend for authorizing creators, or do not want to create one, you can use Flussonic's built-in authorization tools.

If you already have an authorization backend, you can connect Flussonic to it: Flussonic can authorize publishing sessions via an external backend, querying it to check whether a creator has permission to publish the content.
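As a sketch of what such a backend could look like: the snippet below assumes Flussonic passes the stream name and a creator token as query parameters and treats HTTP 200 as "allow" and 403 as "deny". The actual request fields and response handling are defined in the Flussonic authorization documentation.

```python
# Minimal publish-authorization backend sketch.
# ASSUMPTIONS: Flussonic calls this endpoint once per publish attempt with
# ?name=<stream>&token=<creator token>; 200 allows, 403 denies. Verify the
# exact request format against the Flussonic docs.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

# Hypothetical creator database: token -> stream the creator may publish
ALLOWED = {"creator-token-123": "event1"}

def is_allowed(query: dict) -> bool:
    token = query.get("token", [""])[0]
    name = query.get("name", [""])[0]
    return ALLOWED.get(token) == name

class AuthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        q = parse_qs(urlparse(self.path).query)
        self.send_response(200 if is_allowed(q) else 403)
        self.end_headers()

# To run the backend:
# HTTPServer(("0.0.0.0", 8080), AuthHandler).serve_forever()
```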

Viewer Control Panel

A viewer control panel authorizes viewers and provides them with a link to play the stream. Viewer authorization can be implemented using Flussonic or with an external backend.

If you already have a backend to authorize viewers, you can configure Flussonic to access it.

Flussonic also has mechanisms to authorize viewers without a backend.

Our platform will authorize viewers by token, so an external backend is not needed. The website generates a unique token for each viewer so that the token cannot be passed on to others; Flussonic then checks the token, the viewer's IP address, and other parameters. This method works well for closed events.
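The token scheme can be sketched as follows. This is an illustration only: Flussonic's built-in token formats are defined in its documentation, and the secret, the signed fields, and the URL layout below are assumptions.

```python
# Sketch of per-viewer token generation on the website side.
# ASSUMPTIONS: the shared SECRET, the signed fields (stream, IP, expiry),
# and the URL layout are illustrative, not Flussonic's documented format.
import hashlib
import hmac
import time

SECRET = b"change-me"  # shared secret known to the website and the server

def make_token(stream: str, viewer_ip: str, lifetime_s: int = 3600) -> str:
    expires = int(time.time()) + lifetime_s
    payload = f"{stream}:{viewer_ip}:{expires}"
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{expires}-{sig}"

def check_token(token: str, stream: str, viewer_ip: str) -> bool:
    try:
        expires_s, sig = token.split("-", 1)
        expires = int(expires_s)
    except ValueError:
        return False
    payload = f"{stream}:{viewer_ip}:{expires}"
    good = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, good) and expires > time.time()

# The website embeds the token in the playback URL it hands to the viewer:
url = f"https://origin.example.com/event1/index.m3u8?token={make_token('event1', '203.0.113.7')}"
```

Because the viewer's IP address is part of the signed payload, a token forwarded to another person stops working.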

Ingest Server(s)

Ingest servers should support a set of streaming protocols that will be used to receive a publishing stream from a source. Flussonic Media Server supports RTMP, SRT, WebRTC (WHIP) (learn more at Publishing video to the server) and others for publishing streams.

Supporting RTMP for publishing streams allows creators to use open-source solutions and almost any equipment as a source. SRT provides seamless, lossless, real-time stream publishing with low latency. WebRTC (WHIP) support for publishing streams allows creators to use browsers instead of special hardware and software.

For our project, we use transcoding servers №1 and №2 to receive RTMP and SRT streams and transcode them. Transcoding server №1 is the main server, and transcoding server №2 is the standby. If the main server becomes unavailable, the standby takes its place. If both servers become unavailable, a fallback video file plays on a loop until one of the servers comes back online; the fallback video is also written to the archive. This ensures the system is fault-tolerant and can keep working in emergencies.

Transcoding Server(s)

Transcoding is the most computationally expensive process in the whole video delivery pipeline.

Transcoding can be done with specialized dedicated hardware, a CPU, or a graphics card (discrete or integrated). Flussonic Media Server has a built-in transcoder that supports transcoding on a GPU or a CPU. You can run Flussonic on your own server or on a rented one.

There is no hardware configuration that fits every use case. There are many factors to consider, for example, available resources, allocated budget, profiles of input and output streams, etc. Therefore, the hardware configuration is chosen separately for each project. If you plan to buy your own server, we suggest the following approach: take any server capable of transcoding 5-20 channels and perform stress testing. If you are not satisfied with the result, take another server and test it the same way. Repeat until you reach the desired performance.


If you need expert advice and assistance in choosing the right equipment for your project, contact our technical support team at

We offer an appliance server for transcoding streams — Flussonic Coder. Learn more about the features on our website.
A single Flussonic Coder unit consists of eight NVIDIA Jetson modules and can transcode 48 Full HD (FHD) streams into three profiles (FHD, HD, and SD); one NVIDIA Jetson module is capable of transcoding approximately six Full HD streams. We recommend performing stress testing in your environment to determine the exact number of Flussonic Coder units or individual NVIDIA Jetson modules needed, and to see if Flussonic Coder's performance suits you.


If you need expert advice and assistance configuring Flussonic Coder, contact our technical support team at

To transcode ten Full HD (1080p) streams into three profiles (1080p, 720p, 360p), two NVIDIA Jetson modules are needed.
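That module count follows from the per-module capacity quoted above:

```python
# Sizing check: one Jetson module handles ~6 FHD inputs (each transcoded
# into three profiles), so for 10 input streams round up the quotient.
from math import ceil

inputs, per_module = 10, 6
modules = ceil(inputs / per_module)
print(modules)  # → 2
```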


If you want to try Flussonic Coder in action, fill out the form on our website to request a demo.

To ensure that the system is fault-tolerant and reliable, we enable transcoder redundancy. So if the main transcoding server fails, the standby server will start working.

DVR Server(s)

To store the recordings of live events, local or cloud storage is used. The choice between local and cloud storage for your project is up to you.

Flussonic Media Server can work with cloud storage: it can both upload files to the cloud storage and download them from it. Flussonic Media Server also lets you export archive fragments in MP4 format. For details, see Export DVR segment to MP4 file and Download the DVR segment to MP4 or MPEG-TS file to a local computer.

The number of views of a live event's recording peaks in the first 24 hours after the event. Viewers who cannot join the live event on time must have the opportunity to watch the recording later; this technology is called catch-up (learn more at Catch-up TV). The more viewers request a recording, the greater the load on the storage. To reduce the load on the storage and speed up the distribution of VOD content, set up SSD caching.

For our example, the process of recording and storing the streams is as follows:

Two servers (origin server №1 and origin server №2) record live events and write them to the archive while also delivering the streams to clients. The servers capture streams from the transcoding server over the M4S protocol and record them. After a live event finishes, Flussonic automatically uploads the recording as an MP4 file to S3 cloud storage, so viewers can watch it later.

To speed up VOD delivery, we enable SSD caching. The first time an MP4 recording is requested, the file is cached locally on the SSD, and all subsequent requests are served from the cache. If a cached file is not accessed for 24 hours, it is removed from the cache. If the total size of cached files exceeds 100 GB, Flussonic deletes files from the cache, starting with the oldest. The maximum cache size and the cache expiration time can be adjusted to your needs.
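The eviction policy described above can be sketched as follows. This is illustrative only: in a real deployment Flussonic's cache settings implement it, and the 24-hour expiry and 100 GB limit are simply the values chosen for this project.

```python
# Sketch of the SSD cache-eviction policy: expire files unused for 24 h,
# then, if the cache still exceeds 100 GB, drop least-recently-accessed
# files until it fits. (Values are this project's choices, not defaults.)
import time

EXPIRY_S = 24 * 3600
MAX_BYTES = 100 * 1024**3

def evict(files, now=None):
    """files: {path: {"size": bytes, "last_access": unix_ts}} -> set of paths to delete."""
    now = now if now is not None else time.time()
    # 1) Drop files not accessed within the expiry window.
    doomed = {p for p, f in files.items() if now - f["last_access"] > EXPIRY_S}
    # 2) If the remainder still exceeds the limit, drop oldest-accessed first.
    kept = sorted((f["last_access"], p) for p, f in files.items() if p not in doomed)
    total = sum(files[p]["size"] for _, p in kept)
    for _, p in kept:
        if total <= MAX_BYTES:
            break
        doomed.add(p)
        total -= files[p]["size"]
    return doomed
```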


If you need expert advice and assistance in setting up a distributed DVR service, contact our technical support team at

Origin Server(s)

You can build your own CDN or use an existing solution, such as Akamai, CDNvideo, and others. The choice depends on the number of viewers, their geographical location, the allocated budget, etc. With a CDN, you can increase the number of viewers without buying more hardware. It is also necessary to balance the load and distribute requests evenly between the servers.

We use two origin servers in our project — origin server №1 and origin server №2 — with a hybrid approach to load balancing:

  • A load balancer on the CDN Akamai side handles play requests for large public events. CDN Akamai captures the HLS and LL-HLS streams from origin server №1 and origin server №2.
  • The Flussonic load balancer distributes the load between the two origin servers for small events. In this case, origin server №1 serves as the load balancer.

HLS and LL-HLS streaming protocols are used to deliver the streams to viewers. LL-HLS is used to deliver live streams, and HLS is used for VOD.