
CDN Organization

When one server is no longer enough to distribute your video, you have to organize a content delivery network (CDN).

Flussonic has a number of features that simplify this task. Of course, this article cannot claim to be a complete guide to building an income-generating CDN, but we can offer some advice on how Flussonic can help.

In this article, we will consider a small network of 3-10 servers broadcasting live shows.

Regional distribution

We will consider a situation where video is captured from a satellite in Russia/Europe and delivered to Europe/America for rebroadcasting.

The video will have to travel long distances over the public Internet, so the quality of the channel cannot be guaranteed.

The organization will be as follows:

  • in the capture region, there will be at least two redundant servers
  • in the broadcasting region, the servers will capture video from one of the two sources
  • each channel will be transmitted between the regions only once, so as not to generate extra traffic
  • rarely used channels will be transmitted only upon user request
  • in the capture region, video will be recorded to prevent losses in case of a channel outage
  • in the broadcasting region, video will also be recorded for archive distribution.

Using this scheme, we will show Flussonic's capabilities.

Capturing

Stream capture in the network can be set up in various ways, depending on whether the video can be taken from the source more than once.

In the simplest case, when the video arrives as multicast over UDP, you can simply configure capture of the same video on different servers (hereafter called grabber1.cdn.tv and grabber2.cdn.tv):

http 80;
cluster_key mysecretkey;

stream ort {
 url udp://239.0.0.1:1234;
 dvr /storage 3d;
}
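
With this config in place, either grabber can already serve the stream to players directly, for example over HLS (a usage sketch; mono.m3u8 is the same playlist that is used for inter-server capture later in this article):

http://grabber1.cdn.tv/ort/mono.m3u8
http://grabber2.cdn.tv/ort/mono.m3u8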

Here and below, we assume that the servers have correct hostnames that can be resolved.

Another important point is that all servers must share the same cluster key. Here we have chosen mysecretkey, but any value can be used.

In this mode, the capturing servers run completely independently: the archive is written on both servers, and both are constantly available. However, this scheme requires capturing from the source multiple times, which is not always convenient or possible. For example, if a package of channels received via HTTP takes 500 to 800 Mbit/s, capturing it twice means 1 to 1.6 Gbit/s of input traffic and may require seriously expanding the input channel beyond 1 Gbit/s.

If you do not wish to capture the video from the source several times, you can configure cluster capturing.

The same config with the stream is added to both capturing servers:

http 80;
cluster_key mysecretkey;

stream ort {
 url tshttp://origin/ort/mpegts;
 cluster_ingest capture_at=grabber1.cdn.tv;
 dvr /storage 3d;
}

With this config on both capturing servers, each stream will be captured by a single server, while the second one runs in hot standby mode. The capture_at option tells the servers that grabber1 has first priority for capturing. If it is not specified, the streams will be distributed evenly between the servers, which can also be a good idea.

If grabber1.cdn.tv fails, grabber2.cdn.tv will detect this and automatically pick up the streams.

In this configuration, the second server stays idle and writes no archive; it starts capturing only when the first server is down.
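
If you prefer the even distribution mentioned above, a minimal sketch would look like this (assuming cluster_ingest may be used without the capture_at option, as the description above implies):

http 80;
cluster_key mysecretkey;

stream ort {
 url tshttp://origin/ort/mpegts;
 # no capture_at: the cluster spreads streams evenly between the servers
 cluster_ingest;
 dvr /storage 3d;
}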

If the archive should be completely backed up, a different configuration is required.

If you wish to keep a single point of video capture but still have a redundant archive, the second server should constantly pick up and record the streams. To do so, the two servers need different configs.

At grabber1.cdn.tv, the configuration will be as follows:

http 80;
cluster_key mysecretkey;

stream ort {
 url tshttp://origin/ort/mpegts;
 dvr /storage 3d;
}

Video is captured from the source and written to the hard disk.

At grabber2.cdn.tv, the configuration is different:

http 80;
cluster_key mysecretkey;

stream ort {
 url hls://grabber1.cdn.tv/ort/mono.m3u8;
 url tshttp://origin/ort/mpegts;
 dvr /storage 3d;
}

grabber2 will try to capture the video from the first server, but if it is down, it will access the source directly.
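
The url lines are tried in the order they are listed, so the same pattern extends to more fallbacks. A sketch with a hypothetical third source named origin2 (not part of the setup above):

stream ort {
 url hls://grabber1.cdn.tv/ort/mono.m3u8;
 url tshttp://origin/ort/mpegts;
 # origin2 is a hypothetical extra backup, tried last
 url tshttp://origin2/ort/mpegts;
 dvr /storage 3d;
}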

Transit from capturing to streaming

From the point of view of the servers in the distribution region, the capturing servers are a source that usually cannot be captured more than once, so the advice given above applies here as well.

However, there is no need to configure every channel manually and keep an eye on it; you can use Flussonic's cluster capabilities instead.

At the streamer1.cdn.tv server, which is receiving the captured video, it is sufficient to write the following into the configuration file:

http 80;
cluster_key mysecretkey;

source grabber1.cdn.tv grabber2.cdn.tv {
 dvr /storage 7d replicate;
}

With this configuration, Flussonic will pick up the channels from one server or the other, record them to the local archive and, if necessary, fetch data that is available remotely but missing locally.
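
The distribution section below assumes a second transit server, streamer2.cdn.tv. Giving it the identical config (a sketch under that assumption) yields a redundant pair of transit servers with replicated archives:

http 80;
cluster_key mysecretkey;

# same config as streamer1.cdn.tv: both transit servers capture and replicate independently
source grabber1.cdn.tv grabber2.cdn.tv {
 dvr /storage 7d replicate;
}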

If some channels are not needed in continuous operation, they may be marked as on-demand channels:

http 80;
cluster_key mysecretkey;

source grabber1.cdn.tv grabber2.cdn.tv {
 except ort 2x2;
 dvr /storage 7d replicate;
}
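
With this config, ort and 2x2 are not pulled continuously. If our reading of except is correct, the first viewer request, for example:

http://streamer1.cdn.tv/ort/mono.m3u8

makes the streamer pick the channel up from a grabber on demand.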

Distribution

When distributing a large amount of video content, you have to solve the problem of load balancing.

It is optimal when middleware handles the distribution of clients among servers. From the clients' point of view this is the most reliable scheme (not all clients support redirects), but other options can be used as well.

It makes sense to organize the streamers in the same way as the transit servers, but the content should be picked up from the local servers:

http 80;
cluster_key mysecretkey;

source streamer1.cdn.tv streamer2.cdn.tv {
 cache /cache 2d;
}

In this case, we use a segment cache rather than DVR. Flussonic will put segments into the cache and, when needed, serve them from there. Of course, it makes no sense to place the cache on spinning disks; only SSDs should be used.

Live broadcasts are still served from memory and can easily fill 10 Gbit/s, but the cache on a single SATA SSD is limited by the 6 Gbit/s SATA bus. This can be solved by combining several SSDs into a RAID 0.

The important point here is that the segments captured by the grabber reach the last streamer in the chain unchanged and under the same names, and they stay in the same form for both live broadcasting and the archive. This behavior differs significantly from that of other video streaming servers.