The Inference server detects and tracks objects of interest (people, faces, vehicles) in video and forms episodes. See the Flussonic Vision Identification Server API for more information regarding face identification and persons.
The Inference server obtains the list of streams to be processed from the external configuration backend API. In most cases this external configuration is provided by Flussonic Central, which holds the entire cluster configuration.
The Inference server captures and decodes the configured streams to get raw, uncompressed frames. Because processing is computationally expensive, it cannot handle every frame of every configured stream. Instead, the Inference server processes the most recent frame from each stream as soon as it has finished processing the previous one.
The result of processing a frame is a detection. It contains the confidence of the detection, the coordinates of the object within the frame, and the object's class (e.g., face, vehicle). After detection, the system recognizes the object: for example, by calculating a digital fingerprint for a face or extracting the text of a license plate. Those attributes help identify whether a sequence of detections corresponds to the same person or the same vehicle.
From those sequences, episodes are formed. An episode characterizes the continuous presence of a single object in the video. It has a start timestamp (when the object first appeared) and an end timestamp (when the object leaves the scene).
An episode is called open if the object is still in the scene.
A list of episodes is available via the episodes_list API operation. It supports long polling when the corresponding query parameter is set. Clients connected via long polling receive every update for open episodes. Whenever an episode's attributes change, the server adjusts the episode's updated_at field. This can be used to request only recently updated episodes by passing the updated_at_gt=<sometimestamp> query parameter to the episodes_list operation.
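As a sketch of this pattern, a client can remember the newest updated_at value it has seen and pass it back as updated_at_gt on each poll. The base URL and endpoint path below are assumptions; check the OpenAPI specification for the actual episodes_list route and parameter names.

```python
import json
import urllib.parse
import urllib.request

# Hypothetical base URL; look up the real episodes_list route
# in the OpenAPI specification.
BASE_URL = "http://inference.example.com/vision/api"

def episodes_query(updated_at_gt, poll_timeout=30):
    """Query parameters asking for episodes updated after a timestamp,
    with long polling enabled for up to poll_timeout seconds."""
    return {"updated_at_gt": str(updated_at_gt),
            "poll_timeout": str(poll_timeout)}

def advance_cursor(episodes, cursor):
    """The next updated_at_gt value: the newest updated_at seen so far."""
    return max([cursor] + [e["updated_at"] for e in episodes])

def fetch_updated_episodes(cursor, poll_timeout=30):
    """One long-poll iteration against episodes_list."""
    qs = urllib.parse.urlencode(episodes_query(cursor, poll_timeout))
    with urllib.request.urlopen(f"{BASE_URL}/episodes?{qs}") as resp:
        return json.load(resp)["episodes"]
```

A client loop would then call fetch_updated_episodes repeatedly, feeding each result through advance_cursor so already-seen episodes are not returned again.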
An episode characterizes the continuous presence of a single object in the video and is described by the following attributes:
The episode's lifecycle is defined by three states: "opened", "started", "closed".
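A minimal sketch of deriving the lifecycle state from the timestamp fields shown in the response samples. This mapping is an assumption based on the field names (opened_at, started_at, closed_at); the API may report the state differently.

```python
def episode_state(episode):
    """Classify an episode's lifecycle state from its timestamps.
    Assumption: a set closed_at means "closed", a set started_at
    means "started", otherwise the episode is merely "opened"."""
    if episode.get("closed_at"):
        return "closed"
    if episode.get("started_at"):
        return "started"
    return "opened"
```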
Streams information.
This API provides a read-only list of the streams being processed by the server. Each record in the list includes useful metrics about the stream's health and video analytics status.
The configuration of streams, their input URLs, and which video analytics to run on each of them is set by providing the external configuration backend's URL to the service. This URL is specified in the service's configuration file as the CONFIG_EXTERNAL parameter. See the Vision Configuration Backend API for more information.
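For example, the parameter could look like this in the configuration file (the file path and the backend URL are illustrative; consult your deployment's documentation for the actual location and route):

```shell
# Sketch of a service configuration file entry; the path of the file and
# the URL are hypothetical and must point to your configuration backend.
CONFIG_EXTERNAL=http://central.example.com/config/api
```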
This API method returns information about the streams being processed by the server.
{
  "estimated_count": 5,
  "next": "JTI0cG9zaXRpb25fZ3Q9MA==",
  "prev": "JTI0cG9zaXRpb25fbHQ9MSYlMjRyZXZlcnNlZD10cnVl",
  "timing": { },
  "server_id": "820efca4-4a15-4ab7-82fc-9e76f6d61325",
  "streams": [
    {
      "name": "string",
      "vision": {
        "alg": "faces",
        "areas": "string",
        "detectors": [
          {
            "detector_type": "faces",
            "region_id": "string",
            "region_coordinates": [
              { "x": 1, "y": 1 },
              { "x": 1, "y": 1 },
              { "x": 1, "y": 1 }
            ]
          }
        ],
        "stats": {
          "last_detection_at": 1643789953
        }
      },
      "stats": {
        "status": "running"
      }
    }
  ]
}
Returns the current configuration of the analytics modules on your server.
{
  "listeners": {
    "http": [
      {
        "port": 80,
        "address": "10.0.35.1",
        "api": true
      }
    ],
    "https": [
      {
        "port": 80,
        "address": "10.0.35.1",
        "api": true,
        "ssl_protocols": [
          "tlsv1.1",
          "tlsv1.2"
        ]
      }
    ]
  },
  "api_key": "secret",
  "devices": [
    {
      "hw": "jetson",
      "device_id": 0
    }
  ],
  "loglevel": "info",
  "stats": {
    "server_version": "21.12",
    "build": 235,
    "schema_version": "5e5e91d8",
    "now": 1639337825000,
    "started_at": 1639337825,
    "devices": [
      {
        "hw": "jetson",
        "device_id": 0,
        "device_title": "string",
        "stats": {
          "ram_total_bytes": 34359738368,
          "ram_used_bytes": 22548578304,
          "utilization_percent": 87
        }
      }
    ],
    "available_modules": { }
  }
}
Analyzes the supplied image. Detects objects and computes digital fingerprints of the detected objects (if fingerprints are supported for the object type).
Image
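A sketch of calling this operation with Python's standard library. The endpoint URL and the exact shape of the response are assumptions (the helper below relies only on detections carrying a confidence field, as described earlier), so verify both against the OpenAPI specification.

```python
import json
import urllib.request

def analyze_image(url, image_bytes, content_type="image/jpeg"):
    """POST raw image bytes to the (hypothetical) analyze endpoint and
    return the decoded JSON result."""
    req = urllib.request.Request(
        url,
        data=image_bytes,
        headers={"Content-Type": content_type},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def confident_detections(result, threshold=0.8):
    """Keep only detections whose confidence meets the threshold."""
    return [d for d in result.get("detections", [])
            if d["confidence"] >= threshold]
```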
{
  "episodes": [
    {
      "episode_id": 0,
      "media": "string",
      "close_reason": "timeout",
      "opened_at": 1000000000000,
      "started_at": 1000000000000,
      "updated_at": 1000000000000,
      "closed_at": 1000000000000,
      "preview_timestamp": 1000000000000,
      "preview": "string",
      "episode_appearance_timestamps": {
        "inference_timestamp": 1637094994000
      },
      "episode_type": "generic"
    }
  ]
}
Your client application should make this long poll request to the Inference node(s). The request returns the list of episodes registered during operation on your server.
How to make a long poll:
1. Specify poll_timeout and updated_at_gt in the query.
2. Episodes already matching the updated_at_gt filter are returned instantly (as in a regular request).
3. Otherwise, the request waits for up to poll_timeout seconds.
4. As soon as a new episode matches the updated_at_gt filter, it is returned immediately.
The returned list includes episodes of all types, i.e. license plate recognition (LPR), face detection, etc. LPR results can be used right away, while face episodes must be passed to the Identification service via episodes_identify in order to match face fingerprints against the face database.
There may be several Inference nodes and one Identification service to ensure that the same face is recognized on all your cameras. Request all your Inference nodes before supplying the results to Identification.
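The fan-in step described above can be sketched as follows. The episode_type value for face episodes ("face") is an assumption, since the response sample only shows "generic"; check the schema for the actual type names.

```python
def collect_face_episodes(node_responses, face_type="face"):
    """Merge face episodes from every Inference node's episodes_list
    response so they can be submitted to episodes_identify together,
    letting the Identification service match the same person across
    all cameras."""
    faces = []
    for response in node_responses:
        faces.extend(e for e in response.get("episodes", [])
                     if e.get("episode_type") == face_type)
    return faces
```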
{
  "estimated_count": 5,
  "next": "JTI0cG9zaXRpb25fZ3Q9MA==",
  "prev": "JTI0cG9zaXRpb25fbHQ9MSYlMjRyZXZlcnNlZD10cnVl",
  "timing": { },
  "episodes": [
    {
      "episode_id": 0,
      "media": "string",
      "close_reason": "timeout",
      "opened_at": 1000000000000,
      "started_at": 1000000000000,
      "updated_at": 1000000000000,
      "closed_at": 1000000000000,
      "preview_timestamp": 1000000000000,
      "preview": "string",
      "episode_appearance_timestamps": {
        "inference_timestamp": 1637094994000
      },
      "episode_type": "generic"
    }
  ]
}
Provides information about the running instance, such as version, available hardware, and utilization.
{
  "server_version": "21.12",
  "build": 235,
  "schema_version": "5e5e91d8",
  "now": 1639337825000,
  "started_at": 1639337825,
  "devices": [
    {
      "hw": "jetson",
      "device_id": 0,
      "device_title": "string",
      "stats": {
        "ram_total_bytes": 34359738368,
        "ram_used_bytes": 22548578304,
        "utilization_percent": 87
      }
    }
  ]
}
Provides an endpoint for a Prometheus scraper. Each record represents per-stream metrics.
Additionally, there is a set of per-worker records containing aggregated metrics for the streams served by each worker. Per-worker metrics are marked with the media=all attribute.
A JSON representation of the metrics is not implemented. Its schema can be used to get the list of metrics with descriptions, for reference.
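As an illustration of the media=all convention, this sketch splits samples from a Prometheus text exposition into per-stream and per-worker groups. The parsing covers only the simple name{labels} value line shape and is not a full exposition-format parser.

```python
import re

# Matches the simple exposition line form: metric_name{labels} value
SAMPLE_RE = re.compile(r'^(\w+)\{([^}]*)\}\s+(\S+)$')

def split_metrics(exposition_text):
    """Separate per-stream samples from the per-worker aggregates,
    which the server marks with the media="all" label."""
    per_stream, per_worker = [], []
    for line in exposition_text.splitlines():
        match = SAMPLE_RE.match(line)
        if not match:
            continue  # skip HELP/TYPE comments and blank lines
        name, labels, value = match.groups()
        sample = (name, labels, float(value))
        if 'media="all"' in labels:
            per_worker.append(sample)
        else:
            per_stream.append(sample)
    return per_stream, per_worker
```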
{
  "input_frames_count": 0,
  "decoded_frames_count": 0,
  "decoder_restarts_count": 0,
  "decoder_errors_count": 0,
  "processed_frames_count": 0,
  "processing_errors_count": 0,
  "decoding_time": 0,
  "frame_preprocessing_time": 0,
  "processing_wait_time": 0,
  "processing_time": 0,
  "detections_count": 0,
  "detections_accepted_count": 0,
  "detections_confidence_rejected_count": 0,
  "detections_orientation_rejected_count": 0,
  "detections_area_rejected_count": 0,
  "detections_confidence": 0,
  "detections_accepted_confidence": 0,
  "detections_area": 0,
  "detections_accepted_area": 0,
  "fingerprints_count": 0,
  "fingerprints_accepted_count": 0,
  "fingerprints_confidence": 0,
  "fingerprints_accepted_confidence": 0,
  "device_transfer_time": 0,
  "device_transferred_bytes": 0,
  "detection_time": 0,
  "fingerprinting_time": 0,
  "host_transfer_time": 0,
  "host_transferred_bytes": 0,
  "fingerprint_serialization_time": 0,
  "episodes_created_count": 0,
  "episodes_updated_count": 0,
  "episodes_detections_count": 0,
  "episodes_duration": 0,
  "rejected_episodes_count": 0,
  "rejected_episodes_detections_count": 0,
  "rejected_episodes_duration": 0,
  "episode_creation_distance": 0,
  "episode_update_distance": 0,
  "episodes_forming_time": 0,
  "episodes_preview_encoding_time": 0,
  "episode_creation_latency": 0
}
Counters provide historical data about changes in the number of objects of interest, in the form of records. Each record represents statistical information such as how many new visitors arrived in the area within a timeframe, what the maximum and minimum numbers of visitors in the area were, etc.
The returned list includes counter records of all types, i.e. humans, vehicles, etc. Each record represents metrics aggregated within some timeframe, for instance a minute. Specify a set of collection filters (such as media, region, or period of time) to pick the records of interest.
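A sketch of assembling those filters and summarizing the returned records. The filter parameter names (media, region_id, opened_at_gte, opened_at_lt) are assumptions modeled on the record fields, so confirm them against the OpenAPI specification.

```python
def counters_query(media=None, region_id=None,
                   opened_at_gte=None, opened_at_lt=None):
    """Collection filters for the counters list; None values are omitted."""
    params = {
        "media": media,
        "region_id": region_id,
        "opened_at_gte": opened_at_gte,
        "opened_at_lt": opened_at_lt,
    }
    return {k: v for k, v in params.items() if v is not None}

def total_entries(records, kind="humans"):
    """Sum new-visitor entries over the aggregated records for one
    counter kind ("humans" or "vehicles")."""
    return sum(r[kind]["entries"] for r in records if kind in r)
```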
{
  "estimated_count": 5,
  "next": "JTI0cG9zaXRpb25fZ3Q9MA==",
  "prev": "JTI0cG9zaXRpb25fbHQ9MSYlMjRyZXZlcnNlZD10cnVl",
  "timing": { },
  "records": [
    {
      "media": "string",
      "opened_at": 1000000000000,
      "duration": 0,
      "counter_type": "region",
      "region_id": "string",
      "humans": {
        "entries": 0,
        "occupancy_min": 0,
        "occupancy_average": 0,
        "occupancy_max": 0
      },
      "vehicles": {
        "entries": 0,
        "occupancy_min": 0,
        "occupancy_average": 0,
        "occupancy_max": 0
      }
    }
  ]
}