Video analytics installation¶
Video analytics is the process of using computer vision algorithms to record episodes with a description of the objects in the frame. To use it, you need to install special software, referred to below as the analytics module.
Note
This guide is for new installations only.
To update the module from flussonic-vision package version 23.12 and older, follow the instructions in Video analytics upgrade from version 23.12 and older.
The analytics module requires a significant amount of computational resources, so we strongly recommend dedicating a separate server to it and using hardware accelerators.
Note
For testing purposes, it is permissible to:
- run the analytics on the same server where the IP cameras are ingested and recorded
- run the analytics on a CPU
The analytics performs well on a CPU, but it leaves no free resources for other programs.
Requirements to the server for video analytics¶
- OS: x64 Ubuntu 20.04 or 22.04.
- GPU: NVIDIA with at least 6 GB VRAM.
- Processor: at least 4 cores.
- Memory: at least 8 GB RAM.
The main characteristic when selecting a compatible graphics card is the Compute Capability version: 6.1 or higher is required. You can select a graphics card on the official NVIDIA website.
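If a server with an NVIDIA card is already available, a quick way to check its model, memory, and Compute Capability is nvidia-smi. This is a sketch only: the compute_cap query field is available in recent driver versions, so on older drivers check the card model against the Compute Capability table on the NVIDIA website instead.
nvidia-smi --query-gpu=name,memory.total,compute_cap --format=csv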
Installing video analytics¶
Run the commands:
apt update
apt install --no-install-recommends nvidia-driver-525
apt install flussonic-vision
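Before moving on to Watcher, you can optionally verify the installation. This is a sketch, not part of the official procedure; note that nvidia-smi may require a reboot after a fresh driver installation before it can see the GPU.
# check that the NVIDIA driver is loaded and the GPU is visible
nvidia-smi
# check that the analytics package is installed
dpkg -s flussonic-vision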
After installing the packages, go to the Watcher UI and add new streamers with the analytics roles as described below.
Adding analytics to Watcher¶
The analytics module consists of two components: Inference and Identification. Inference captures images from the stream, and Identification identifies the objects in them.
If you have only one analytics server, install both components on it. If you have multiple servers, installing both components on each of them is not mandatory but advisable: it improves the system's performance and fault tolerance.
Add Inference¶
- Open the Streamers page.
- Click + to add a streamer.
- In the form, specify the server name. You can use the actual hostname or any arbitrary string. Specify inference1, which will work for the first installation.
- Choose the Inference role. Save.
- Go to the streamer settings page and specify the server's API_URL: http://secret@vision.example.com:9030
Remember the name inference1 and the API key secret as you'll need them in the last step.
Adding Identification¶
The actions are similar to adding Inference, but you should specify a different role and port.
Note that the server name must be unique. If you installed two components, they will appear as separate streamers in Watcher.
- Open the Streamers page.
- Click + to add a streamer.
- In the form, specify the server name. You can use the actual hostname or any arbitrary string. Specify identification1, which will work for the first installation.
- Choose the Identification role. Save.
- Go to the streamer settings page and specify the server's API_URL: http://secret@vision.example.com:9050.
Remember the API key secret as you'll need it in the last step.
In this example, secret is the API_KEY and it is specified as part of the URL. Do not enter it in the Cluster key field.
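The value secret is only an example. For a production setup, consider a less guessable key; any sufficiently random string works, for instance one generated like this (a sketch, not a requirement):
openssl rand -hex 16
Use the generated string instead of secret both in the streamer's API_URL in Watcher and in the API_KEY option of the corresponding configuration file described below.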
Example configuration for a test environment¶
- Hostname: streamer.lab. API_URL: http://streamer.lab. Role: Streamer.
- Hostname: inference1. API_URL: http://secret@vision.lab:9030. Role: Inference.
- Hostname: id1. API_URL: http://secret@vision.lab:9050. Role: Identification.
Video analytics settings¶
On the analytics module server, set the configuration in the /etc/vision/vision-inference.conf and /etc/vision/vision-identification.conf files.
Inference settings¶
Open the configuration file /etc/vision/vision-inference.conf and specify the following:
- API_KEY that you used in the previous steps.
- HW. By default, HW=cuda is specified in the config. This means that the module will try to work with the NVIDIA video card. Specify HW=cpu if you don't have a video card yet.
- In CONFIG_EXTERNAL, specify the address of the server with Watcher so that the analytics receives information about which cameras to analyze.
In the example below, you should change the following data:
- CENTRAL_KEY is the API key that can be found in /etc/central/central.conf.
- watcher.lab should be your real Watcher hostname.
- NAME is the server's name. In the example above we used inference1.
HTTP_PORT=9030
API_KEY=secret
HW=cuda
CONFIG_EXTERNAL=http://CENTRAL_KEY@watcher.lab/central/api/v3/streamers/NAME/streams
HTTP_PORT should be the same as the port you used in Watcher for the streamer with the Inference role.
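Before starting the service, you can check that the Watcher hostname and CENTRAL_KEY are correct by requesting the CONFIG_EXTERNAL URL manually, substituting your real values. This is a sketch only: the option name in /etc/central/central.conf and the response format may differ between versions, but an authentication error usually means the key is wrong.
# look up the Watcher API key (the option name may differ in your installation)
grep -i key /etc/central/central.conf
# request the stream list that the inference module will poll
curl -i "http://CENTRAL_KEY@watcher.lab/central/api/v3/streamers/inference1/streams"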
Identification settings¶
Open the configuration file /etc/vision/vision-identification.conf and specify the following:
- API_KEY that you used in the previous steps.
- In CENTRAL_URL, specify the address of the server with Watcher so that the analytics receives information about which cameras to analyze.
In the example below, you should change the following data:
- CENTRAL_KEY is the API key that can be found in /etc/central/central.conf.
HTTP_PORT=9050
API_KEY=secret
CENTRAL_URL=http://CENTRAL_KEY@watcher.lab/central/api/v3
HTTP_PORT should be the same as the port you used in Watcher for the streamer with the Identification role.
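As a quick sanity check before starting the services, you can confirm that the ports in both configuration files match the ports used in the Watcher API_URL values (9030 and 9050 in this example):
grep HTTP_PORT /etc/vision/vision-inference.conf /etc/vision/vision-identification.conf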
Startup and testing¶
After this, you can run both components:
systemctl start vision-inference vision-identification
If all the settings are correct, you will see green indicators on the Streamers page in the UI.
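If an indicator stays red, the standard systemd tools can help locate the problem (a sketch; the exact log messages depend on the module version):
# service state and recent errors
systemctl status vision-inference vision-identification
# logs of both components for the last 10 minutes
journalctl -u vision-inference -u vision-identification --since "10 minutes ago"
# confirm that both modules listen on the configured ports
ss -tlnp | grep -E ':9030|:9050'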