Input Monitoring¶
The Input Monitoring dashboard is one of the most important in the entire Retroview service, because what happens at the input determines everything that will happen to the signal later.
In our 15 years of experience, most video processing issues go away once you fix the input problems. A huge part of our most complex code exists to handle input problems that cannot be fixed.
Problem Stream Indication¶

This is the most important graph for monitoring input status. Out of all the streams in your service (from a handful to hundreds of thousands), the most problematic ones are displayed here. You can then pick streams from the top of the list and analyze the details.
It's important to use different time ranges: short ones to see recent issues, and multi-day ranges to see daily periodicity.
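If you export per-stream error counters from the service (for example, into a monitoring database or a spreadsheet), the ranking this graph performs can be reproduced in a few lines. This is a minimal sketch: the input format and field names below are assumptions for illustration, not the actual Retroview export schema.

```python
from collections import Counter

def top_problem_streams(samples, start, end, n=10):
    """Rank streams by total errors within a time range.

    `samples` is an iterable of dicts such as
    {"stream": "cam-17", "time": 1700000000, "errors": 3}
    (hypothetical field names, not the real export format).
    """
    totals = Counter()
    for s in samples:
        if start <= s["time"] < end:
            totals[s["stream"]] += s["errors"]
    # Streams with the most errors come first, as on the dashboard.
    return totals.most_common(n)
```

Run it over a short range to see recent issues and over several days to spot daily periodicity.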
After getting the initial picture, you need to select a specific stream and study its situation:

What Evening Internet Peak Looks Like¶

On a graph like this we can see that every evening, approximately from 7 PM to 1 AM, there are reception problems. The reason is very simple: traffic from the source travels over the same network that serves this internet provider's users, so their evening internet consumption leads to packet losses from the surveillance cameras.
The solution is simple: separate the traffic either physically or with VLANs, QoS, and similar approaches.
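A quick way to confirm this kind of daily pattern is to bucket a stream's errors by hour of day over several days: an evening peak shows up as a hump around 19:00–01:00. A minimal sketch, again with an assumed input format rather than the real export:

```python
from collections import defaultdict
from datetime import datetime

def errors_by_hour(samples):
    """Sum errors per hour of day (0-23) to reveal daily periodicity.

    `samples` is an iterable of (unix_time, error_count) pairs for one
    stream collected over several days (assumed format). Uses the local
    timezone so the peak lines up with actual evening hours.
    """
    buckets = defaultdict(int)
    for ts, errors in samples:
        buckets[datetime.fromtimestamp(ts).hour] += errors
    return dict(sorted(buckets.items()))
```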
Episodic Network Failures¶

This is what episodic network bandwidth failures look like: errors grow on many channels simultaneously. Most often you need to examine the load indicators on your switches and expand network capacity.
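Episodic failures are easy to detect programmatically, because many streams go bad in the same time bucket. A rough sketch under the same assumptions about exported counters:

```python
from collections import defaultdict

def correlated_failures(samples, bucket=60, min_streams=20):
    """Find moments when many streams report errors at once.

    `samples`: iterable of (unix_time, stream_name, error_count) tuples
    (assumed format). Returns {bucket_start: affected_stream_count} for
    every `bucket`-second window where at least `min_streams` different
    streams had errors - usually a sign of a shared network problem
    rather than of individual sources.
    """
    per_bucket = defaultdict(set)
    for ts, stream, errors in samples:
        if errors > 0:
            per_bucket[ts - ts % bucket].add(stream)
    return {t: len(s) for t, s in per_bucket.items() if len(s) >= min_streams}
```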
What Provider Failure Looks Like¶

This is what a channel provider failure looks like; it is fixed with a phone call. Everything was normal and then suddenly broke, for example because equipment failed or something changed in the descrambling settings.
If the picture looks like this and subscribers aren't calling, perhaps you don't need these channels at all.
Set up alerts: to learn about mass problems instantly, configure a mass stream failure alert. For critical channels, set up a specific stream failure alert so you can react before subscribers call.
Single Stream Status¶
Below, the Stream details group contains several graphs: errors, traffic, warnings, and dvr. All of them are needed for systematic work with streams. We separated errors from warnings so you can more easily distinguish critical issues from potential ones.
Stream Errors¶
The most important graph is errors. Errors should be at zero, and this is achievable; anything above zero is a problem that needs to be solved. In most cases you can be confident that an error on this graph means picture breakup, loss of audio or subtitles, and so on.

This is what a bad camera stream looks like; it is fixed by improving the network between the camera and the media server.

This is what SRT losses look like; they feed into each other: packet loss leads to CC errors. If you eliminate the losses (i.e., the channel bandwidth shortage), the other errors will likely go away too.

This is what TV signal reception with a large number of losses looks like. There is no point in showing such video to clients, yet in practice operators often try to transcode it and then ask for help with the poor picture quality. You need to fix the reception first.
Set up alerts: the Increase in bad streams alert will warn you about quality degradation before subscribers start complaining about video artifacts. The Unstable streams alert helps detect periodic network issues such as the evening peak or episodic failures.
Known Errors List¶
Below is a list of known errors that can be seen on the graphs.
- lost_packets - on SRT, RTSP, WebRTC, ST2110 protocols you can reliably count the number of lost packets
- src_404 - the number of times the source returned HTTP 404 (or another error code). Change or fix the source
- src_403 - the source returned a 403 code, i.e., authorization failed. Change the authorization key
- src_500 - the source returned error 500. Fix the source urgently
- broken_payload - this error occurs on different protocols. For example, in SDI it means a broken frame; when unpacking RTP, this counter can grow because of a broken H264/H265 NAL-unit structure. If there are no packet losses but the counter keeps growing, the provider needs to fix the source
- dropped_frames - this can occur when capturing an MPEG-TS stream, when one track is declared but disappears for a long time. The track reordering mechanism cannot wait for it any longer and drops the entire accumulated queue. Contact the provider to fix the source
- ts_stuck_restarts - some cameras reach the maximum packet timecode value and then, instead of wrapping around, keep sending the maximum value. This counter indicates the issue; it is fixed by reconnecting to the camera. Contact the camera manufacturer
- desync - indicates that the incoming byte stream lost its packet start/end structure and bytes have to be dropped until synchronization (a clear frame structure) reappears
- ts_pat - there was no PAT in the incoming MPEG-TS stream for a long time
- ts_pmt - there was no PMT in the incoming MPEG-TS stream for a long time. Reported separately per stream PID
- ts_service_lost - how many times an MPEG-TS PAT was received referencing programs that do not exist
- adaptation_broken - how many times the MPEG-TS adaptation field was invalid. As a rule, this error means complete garbage on the input
- ts_scrambled - the MPEG-TS stream is scrambled. Urgently sort out the CAM modules
- ts_cc - incorrect MPEG-TS Continuity Counter, most likely packet loss. Fix the network (see the sketch after this list)
- ts_tei - an MPEG-TS packet arrived with the error indicator explicitly set by an upstream source. Deal with that source
- ts_psi_checksum - checksum mismatch in MPEG-TS service structures (PSI). Fix the source or the antenna
- broken_pes_count - a PES packet inside MPEG-TS did not start with a start code. Fix the bad source
- discarded_buffer_count - how many times data in MPEG-TS was discarded because a frame couldn't be assembled from it. Fix the source
- ts_crashed - an internal error occurred in the media server while processing MPEG-TS. Contact technical support
- too_large_dts_jump - a time jump of more than 30 seconds occurred in the MPEG-TS stream. This can happen, for example, because the source is assembled from files that are rotated without time restamping; another reason is mixing audio and video from different streams. If such errors are rare, the playback disruption may be only local, but in any case such a stream should be fixed
- rtp_pt_reject - packets in an RTSP (RTP) stream arrive with an unknown, foreign Payload Type and have to be dropped. Request a camera firmware update
- dts_stuck - the RTSP camera started returning the same timestamp over and over. Replace the camera or update its firmware
- discarded_not_allowed_nal_count - an unsupported H264 packetization type was received, for example STAP-B or MTAP. Most likely you should update the firmware or replace the camera, but if you are confident this is what you need, contact support
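For reference on what ts_cc actually counts (see the ts_cc item above): every 188-byte MPEG-TS packet carries a 4-bit continuity counter per PID that increments with each packet that has a payload, so a gap in the sequence almost always means network packet loss. The sketch below is a simplified checker, not the media server's implementation: it ignores duplicate packets and the discontinuity indicator, which a real demuxer must handle.

```python
def count_cc_errors(ts_data: bytes) -> int:
    """Count MPEG-TS continuity counter gaps in a buffer of 188-byte packets."""
    last_cc = {}   # PID -> last seen continuity counter
    errors = 0
    for off in range(0, len(ts_data) - 187, 188):
        pkt = ts_data[off:off + 188]
        if pkt[0] != 0x47:                     # lost sync byte, skip this packet
            continue
        pid = ((pkt[1] & 0x1F) << 8) | pkt[2]
        if pid == 0x1FFF:                      # null packets: counter is undefined
            continue
        afc = (pkt[3] >> 4) & 0x03             # adaptation field control
        cc = pkt[3] & 0x0F
        if afc in (1, 3):                      # counter increments only with payload
            prev = last_cc.get(pid)
            if prev is not None and cc != (prev + 1) % 16:
                errors += 1
            last_cc[pid] = cc
    return errors
```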
Stream Input Warning Details¶
Warnings that are corrected by the server:
- ts_stuck - TS stuck issue restarts
- sr_ts_stuck - SR packets with repeated RTP timestamp
- sender_clock_deviation - Sender clock is ahead of or behind the server time (see the sketch after this list)
- ts_goes_backwards - Time jumped back on the channel
- ts_jump_forward - Time jumped forward
- no_marker_mode_flag - Decoder works in no marker mode
- fu_pattern_is_broken_count - Broken FU pattern
- fu_has_both_start_end_bits_count - FU with both Start and End bits set
- fu_end_then_middle_workaround_count - FU workaround applied
- dts_stuck - Repeated DTS
- dts_goes_backwards - DTS jumped back
- dts_jump_forward - DTS jumped forward
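For context on the sender_clock_deviation warning above: an RTCP Sender Report carries the sender's wall clock as a 64-bit NTP timestamp (seconds since 1900 plus a 32-bit fraction), so the deviation is simply that time minus the server's own clock at the moment the report arrives. A rough sketch, not the server's actual implementation:

```python
import time

NTP_UNIX_OFFSET = 2_208_988_800  # seconds between 1900-01-01 and 1970-01-01

def sender_clock_deviation(ntp_msw, ntp_lsw, received_at=None):
    """Deviation (in seconds) of the sender's clock from the server clock.

    `ntp_msw` / `ntp_lsw` are the two 32-bit halves of the NTP timestamp
    taken from an RTCP Sender Report. A positive result means the sender's
    clock is ahead of the server time.
    """
    sender_unix = (ntp_msw - NTP_UNIX_OFFSET) + ntp_lsw / 2**32
    server_unix = received_at if received_at is not None else time.time()
    return sender_unix - server_unix
```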
Set up an alert: the Increase in offline streams alert allows you to react to a growing problem before it affects a critical mass of channels.
DVR Recording Status¶
The "DVR recording issues" graph shows recording problems.
This is not directly related to capture, but the recorded data itself will become unusable if there are errors on this graph.

Pay attention to the following errors:
- discontinuity - there are gaps in the stream, so playback will not be seamless. Fix the source
- failed - a storage write attempt ended with an error. Replace the hard disk urgently
- skipped - the server couldn't write in time and started dropping segments. Abandon network storage, reduce the hard disk load, or abandon hardware RAID and switch to Flussonic RAID
- slow and delayed - the write succeeded but took dangerously long: more than half of the segment duration (see the sketch after this list)
- collapsed - several segments had to be written together. Not fatal, but this system won't handle further load growth
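To illustrate the slow and delayed rule mentioned above: a write counts as dangerously long when it takes more than half of the segment's own duration. A minimal timing sketch with a hypothetical write function, just to show the check:

```python
import time

def classify_segment_write(write_fn, segment_bytes, segment_duration):
    """Write one DVR segment and classify how long the write took.

    `write_fn` is whatever actually stores the data (hypothetical callback).
    A write that takes more than half of `segment_duration` (in seconds)
    is flagged as "slow" - the rule used by the graph above.
    """
    started = time.monotonic()
    write_fn(segment_bytes)
    elapsed = time.monotonic() - started
    return "slow" if elapsed > segment_duration / 2 else "ok"
```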