The Video from an IP Camera is Distorted. Why?
The majority of cheap IP cameras (below $200) are made in China at a small number of factories and ship with firmware from one or two suppliers, all carrying the same network-related bug.
This bug makes the quality of the video picture dependent on network conditions. You watch the video from the camera in the office using VLC and everything is fine. Then you move the camera to the street, and the video starts breaking up.
The same thing can happen when you check the video via the camera's native application: it shows a perfect picture, while Flussonic shows broken video.
The problem is caused by a single bug present in millions of cameras sold: they all share the same firmware supplier.
The on-board RTSP streamer switches its network socket to non-blocking mode. In this mode, Linux copies data from the application into the outgoing network buffer only as long as there is space left. A non-blocking socket does not pause the program until all data is sent; instead, the write call returns immediately with the number of bytes actually written, which can be smaller than the number requested.
So the RTSP streamer prepares a packet of about 1450 bytes, writes its length to the socket, starts sending the payload, and manages to write only 300 bytes.
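This behaviour is easy to reproduce outside any camera. The sketch below is plain C; a socketpair() stands in for the camera's TCP connection, and the buffer sizes are illustrative, not taken from real firmware:

```c
/* Sketch: a non-blocking socket accepts only as much data as fits in
 * the kernel send buffer. socketpair() stands in for the camera's TCP
 * connection; buffer and packet sizes are illustrative. */
#include <errno.h>
#include <fcntl.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Write 1450-byte "packets" without draining the peer until the kernel
 * refuses to take a whole one. Returns 0 once a short write (or
 * EAGAIN) is observed, -1 if that never happens. */
static int provoke_short_write(void) {
    int sv[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) != 0) return -1;
    fcntl(sv[0], F_SETFL, O_NONBLOCK);       /* what the firmware does */

    int small = 4096;                        /* shrink the send buffer */
    setsockopt(sv[0], SOL_SOCKET, SO_SNDBUF, &small, sizeof small);

    char packet[1450];
    memset(packet, 'x', sizeof packet);

    for (int i = 0; i < 1000; i++) {
        ssize_t n = write(sv[0], packet, sizeof packet);
        if (n >= 0 && (size_t)n < sizeof packet)
            return 0;    /* kernel took e.g. 300 of the 1450 bytes */
        if (n < 0 && errno == EAGAIN)
            return 0;    /* buffer completely full, nothing taken */
    }
    return -1;
}
```

The buggy firmware takes that short return value and simply moves on to the next packet, discarding the unsent tail.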
This is proper behaviour for an event-oriented style of programming: the program must keep track of how many bytes were actually sent and buffer the rest for later delivery. However, these cameras use a very old live555 server from 2005 that does not implement this, so the unsent bytes are simply lost. Remarkably, such a badly implemented program manages to lose data over a TCP connection, a transport that guarantees delivery.
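For contrast, here is a minimal sketch of the bookkeeping an event-driven sender must do after a short write. The names (struct pending, send_buffered, flush_pending) are made up for this illustration, not taken from live555:

```c
/* Correct handling: on a short write, keep the unsent tail and retry
 * when the socket becomes writable again, instead of dropping it as
 * the buggy firmware does. Real servers drive the retry from an event
 * loop (poll/epoll); this sketch only shows the buffering itself. */
#include <assert.h>
#include <errno.h>
#include <fcntl.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

struct pending { char buf[65536]; size_t len; };

/* Try to send `len` bytes; stash whatever the kernel did not take. */
static void send_buffered(int fd, const char *data, size_t len,
                          struct pending *p) {
    ssize_t n = write(fd, data, len);
    if (n < 0 && errno == EAGAIN) n = 0;
    assert(n >= 0);
    size_t left = len - (size_t)n;
    assert(p->len + left <= sizeof p->buf);
    memcpy(p->buf + p->len, data + n, left);   /* keep the tail */
    p->len += left;
}

/* Called when the event loop reports the socket writable again. */
static void flush_pending(int fd, struct pending *p) {
    while (p->len > 0) {
        ssize_t n = write(fd, p->buf, p->len);
        if (n < 0 && errno == EAGAIN) return;  /* wait for next event */
        assert(n >= 0);
        memmove(p->buf, p->buf + n, p->len - (size_t)n);
        p->len -= (size_t)n;
    }
}
```

With this in place no byte is ever silently discarded: the tail either sits in the pending buffer or in the kernel, and the receiver eventually gets everything.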
A standards-compliant RTSP client must close the connection immediately after such data loss, which on a loaded network means a reconnect every 3-10 seconds.
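Why the connection must be dropped follows from how RTSP interleaves RTP over TCP (RFC 2326): every packet is framed as a '$' byte, a channel byte, and a 16-bit length. Once bytes go missing, the parser lands in the middle of a payload and the framing can no longer be trusted. A minimal parser sketch:

```c
/* RTSP interleaved frame: '$' <channel:1> <length:2 big-endian>
 * <payload>. Returns the size of one complete frame, 0 if more data
 * is needed, or -1 when framing is lost and the only safe reaction is
 * to close the connection. */
#include <stddef.h>
#include <stdint.h>

static long next_frame(const uint8_t *buf, size_t len) {
    if (len < 4) return 0;                 /* need more data */
    if (buf[0] != '$') return -1;          /* bytes were lost upstream */
    size_t payload = ((size_t)buf[2] << 8) | buf[3];
    if (len < 4 + payload) return 0;
    return (long)(4 + payload);
}
```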
An RTSP client that has a workaround for this bug will try to restore the connection, spending a lot of CPU on it. It can restore the connection, but it cannot recover the lost bytes, so the video breaks up.
You can catch this moment with tcpdump: the instant the camera receives notification that the receiver's buffer has overflowed, the video breaks.
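One way to watch for it (the interface name and camera address below are placeholders, and 554 is the standard RTSP port): the TCP window field occupies bytes 14-15 of the TCP header, so this filter shows the zero-window packets the receiver sends when its buffer is full.

```shell
# Zero-window packets from the receiver to the camera: the moment one
# of these appears, the camera's streamer starts dropping bytes and
# the picture breaks. Adjust the interface and address to your setup.
tcpdump -i eth0 -nn 'host 192.168.1.64 and tcp port 554 and tcp[14:2] = 0'
```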
When you use the native application supplied with the camera, you are using not RTSP but a proprietary protocol with a better implementation: the Chinese engineers take more care of it.
Most of this data loss hits keyframes, because these frames are large (so statistically they are more likely to run into a full buffer) and because sending a large keyframe produces a traffic spike. The spike causes a buffer overrun, which is why you see errors in the bottom half of the picture.
Errors in keyframes mean that you never get full video quality, because the base frames that the rest of the stream is predicted from are broken.
How to fix it
This problem is very hard to fix on the server side, and Flussonic does as much as it can: large input buffers, smart restoration of the network stream, and so on.
You also need to take care of your network: avoid overload, watch out for microbursts, and try to avoid wireless links.
Use Flussonic Agent
The best solution here is to use our on-camera agent. The key is to replace the L2 transport with an L7 one, so that the camera's RTSP streamer never receives a buffer-overrun notification.
Our agent is written so that it always reads data from the camera and never loses anything.
The same effect can be achieved with alternative cloud technologies (but not with P2P) or with SSH tunnelling.
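As a sketch of the SSH variant (hostnames, addresses, and ports are placeholders): run the tunnel from a machine on the camera's LAN, so that the camera's own TCP connection stays local and fast, while the slow long-haul hop is carried by SSH, which buffers properly.

```shell
# On a box on the camera's LAN: expose the camera's RTSP port on a
# remote server. The camera only ever talks to this nearby box, so it
# never sees a congested send buffer.
ssh -N -R 8554:192.168.1.64:554 user@server.example.com

# On the server, pull the stream through the tunnel, e.g.:
# ffplay rtsp://127.0.0.1:8554/stream1
```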