Flussonic Vision API (24.11-261)


Support team: support@flussonic.com

Flussonic Vision Inference Server API.

Inference server detects and tracks objects of interest (people, faces, vehicles) on video and forms episodes. See Flussonic Vision Identification Server API for more information regarding face identification and persons.

Inference server obtains the list of streams to be processed from the external configuration backend API. In most cases this external configuration is provided by Flussonic Central, which holds the entire cluster configuration.

Inference server captures and decodes configured streams to get raw, uncompressed frames. Because processing is computationally expensive, it cannot handle every frame from every configured stream. Instead, the Inference server processes the most recent frame from each stream after completing the previous processing.

The result of processing a frame is a detection. A detection contains the confidence of the detection, the coordinates of the object within the frame, and the object's class (e.g., face, vehicle). After detection, the system recognizes the object: for example, by calculating a digital fingerprint for a face or extracting the text of a license plate. These attributes help identify whether a sequence of detections corresponds to the same person or the same vehicle.

From those sequences, episodes are formed. An episode characterizes the continuous presence of a single object in the video. It has a start timestamp (when the object first appeared) and an end timestamp (when the object leaves the scene). An episode is called open if the object is still in the scene.

A list of episodes is available via episodes_list API operation. It supports long polling when the corresponding parameter in the query is set. Clients connected using long polling receive each update for open episodes.

Whenever an episode's attributes are updated, the server adjusts the episode's updated_at field. This can be used to request only recently updated episodes by specifying the updated_at_gt=<sometimestamp> query parameter for the episodes_list operation.
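A minimal client-side sketch (Python; the episode field name updated_at follows the description above, everything else is illustrative) of tracking the updated_at cursor so that each episodes_list request fetches only episodes updated since the previous batch:

```python
def advance_cursor(episodes, last_updated_at):
    """Return the next updated_at_gt value after processing a batch.

    Each episode is assumed to carry an integer updated_at field
    (UTC milliseconds), as described above.
    """
    for ep in episodes:
        last_updated_at = max(last_updated_at, ep["updated_at"])
    return last_updated_at

# Hypothetical batch returned by an episodes_list call
batch = [{"updated_at": 1639337825000}, {"updated_at": 1639337826000}]
cursor = advance_cursor(batch, 0)
# Next request: GET .../episodes?updated_at_gt=1639337826000&poll_timeout=30
```

Passing the highest updated_at seen so far as updated_at_gt guarantees no episode update is fetched twice, while long polling (poll_timeout) keeps the loop lightweight.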

Key Terms

  • Detection: finds objects in the video frame and determines their coordinates and object type (e.g., face, license plate, person).
  • License plate recognition: recognizes the characters of the license plate.
  • Face recognition: generates a digital representation of the face, known as a digital fingerprint.
  • Episode: represents the continuous tracking of a single object (such as a person or a vehicle) in the video.
  • Face identification: matches the recognized face with known persons.
    See Flussonic Vision Identification Server API for more information regarding face identification and persons.

Episode

An episode characterizes the continuous presence of a single object in the video and is described by the following attributes:

  • start time - when the object first appeared
  • end time - when the object disappeared
  • detection data - the object class, confidence level, keypoints, coordinates in the frame, etc.
  • recognition data - a string containing characters of the detected license plate or text representation of digital fingerprints for faces
  • identification data - identifiers of persons matched with the recognized face in the episode.

Episode's Lifecycle

The episode's lifecycle is defined by three states: "opened", "started", "closed".

  • opened - a new object has just appeared in the frame, but there's not yet enough confidence to rule out a false detection
  • started - sufficient confidence in the object's detection (typically, confident detection and recognition over a sequence of frames)
  • closed - the object is no longer in the frame, and its continuous presence tracking has ended
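The three states above form a simple one-way lifecycle. A sketch (Python; the assumption that a false detection in the "opened" state is discarded rather than started is mine, suggested by the rejected_episodes metrics below) of the allowed transitions:

```python
# Allowed transitions of an episode's lifecycle: opened -> started -> closed.
# An "opened" episode that turns out to be a false detection is assumed to
# close (be rejected) without ever reaching "started".
TRANSITIONS = {
    "opened": {"started", "closed"},
    "started": {"closed"},
    "closed": set(),  # terminal state
}

def can_transition(src, dst):
    """Check whether an episode may move from state src to state dst."""
    return dst in TRANSITIONS.get(src, set())
```

For example, a closed episode never reopens; a new appearance of the same object produces a new episode.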

stream

Streams information.
This API provides a read-only list of the streams being processed by the server.
Each record in the list includes useful metrics about the stream's health and video analytics status.

The configuration of streams, their input URLs, and which video analytics should run on each of them is set by providing the external configuration backend's URL to the service. This URL should be provided in the service's configuration file as the CONFIG_EXTERNAL parameter.
See Vision Configuration Backend API for more information.
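The exact configuration file format is not shown here; assuming an environment-style file, the parameter might look like the following (the URL is purely illustrative):

```ini
# Hypothetical configuration fragment: point the Inference server at the
# external configuration backend (e.g. Flussonic Central).
CONFIG_EXTERNAL=http://central.example.com/vision/config
```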

List streams

This API method returns information about the streams being processed by the server.

Authorizations:
basicAuth, bearerAuth

Responses

Response samples

Content type
application/json
{
  "estimated_count": 5,
  "next": "JTI0cG9zaXRpb25fZ3Q9MA==",
  "prev": "JTI0cG9zaXRpb25fbHQ9MSYlMjRyZXZlcnNlZD10cnVl",
  "timing": { },
  "server_id": "820efca4-4a15-4ab7-82fc-9e76f6d61325",
  "streams": [ ]
}
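The next and prev values are opaque pagination cursors: clients should pass them back to the server verbatim rather than construct them. Purely for illustration, the sample cursors happen to be Base64-encoded, URL-escaped query fragments, which a short Python snippet can reveal:

```python
import base64
from urllib.parse import unquote

# Sample "next" cursor from the response above; treat it as opaque in
# real client code and simply echo it back to the server.
next_cursor = "JTI0cG9zaXRpb25fZ3Q9MA=="
decoded = unquote(base64.b64decode(next_cursor).decode())
print(decoded)  # a position-based query fragment
```

This is an implementation detail of the sample and may change; rely only on the documented contract that next/prev select the following and preceding pages.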

config

Service configuration

Get configuration

Returns the current configuration of the analytics modules at your server.

Authorizations:
basicAuth, bearerAuth

Responses

Response samples

Content type
application/json
{
  "listeners": { },
  "api_key": "secret",
  "config_external": { },
  "devices": [ ],
  "loglevel": "info",
  "stats": { }
}

process

Image analytics

Image analysis

Analyzes the supplied image. Detects objects and computes digital fingerprints of the detected objects (if fingerprints are supported for the object type).

Authorizations:
basicAuth, bearerAuth
Request Body schema: image/jpeg
string <binary>

Image

Responses

Response samples

Content type
application/json
{
  "episodes": [ ]
}
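Since the request body is a raw JPEG with content type image/jpeg, a client sketch (Python standard library; the /process path and bearer token usage are assumptions, substitute the actual route from your OpenAPI specification) might build the request like this:

```python
import urllib.request

def build_process_request(server, api_key, jpeg_bytes):
    """Build (but do not send) the image-analysis request.

    The "/process" path is hypothetical; bearerAuth is assumed for the
    Authorization header (basicAuth is also accepted by the server).
    """
    return urllib.request.Request(
        url=f"{server}/process",  # hypothetical path
        data=jpeg_bytes,          # raw JPEG body, not multipart
        method="POST",
        headers={
            "Content-Type": "image/jpeg",
            "Authorization": f"Bearer {api_key}",
        },
    )

req = build_process_request("http://vision.example.com", "secret", b"\xff\xd8...")
```

Sending the prepared request with urllib.request.urlopen(req) would return the JSON body shown in the sample above.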

episodes

Episodes

Episodes list

Your client application makes this long-poll request to the Inference node(s). The request returns the list of episodes registered during operation on your server.

How to make long poll:

  • Set poll_timeout and updated_at_gt in the query.
  • The episodes that match the updated_at_gt filter will be returned instantly (as in a regular request).
  • If there are no such episodes, the connection will last poll_timeout seconds.
  • If during this connection interval an episode appears that matches the updated_at_gt filter, it is returned immediately.

The returned list includes episodes of all types, i.e. license plate recognition (LPR), face detection, etc. LPR results can be used right away, while face episodes should be passed to the Identification service via episodes_identify in order to match face fingerprints against the face database.

There may be several Inference nodes and one Identification service, so that the same face is recognized on all your cameras. Query all your Inference nodes before supplying the results to Identification.
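A sketch (Python; the episode field names "id" and "type" are assumptions for illustration) of merging episode lists fetched from several Inference nodes and selecting the face episodes to forward to episodes_identify:

```python
def merge_face_episodes(per_node_results):
    """Merge episodes fetched from several Inference nodes, deduplicate
    them, and keep only the face episodes for identification.

    per_node_results: one list of episode dicts per Inference node.
    """
    seen, faces = set(), []
    for episodes in per_node_results:
        for ep in episodes:
            if ep["id"] in seen:  # already collected from another node
                continue
            seen.add(ep["id"])
            if ep["type"] == "face":
                faces.append(ep)
    return faces

nodes = [
    [{"id": "e1", "type": "face"}, {"id": "e2", "type": "vehicle"}],
    [{"id": "e1", "type": "face"}, {"id": "e3", "type": "face"}],
]
to_identify = merge_face_episodes(nodes)  # e1 and e3, deduplicated
```

The merged face list would then be submitted to the single Identification service, so one face database covers all cameras.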

Authorizations:
basicAuth, bearerAuth
query Parameters
media
string
Example: media=cam-045

Filter the collection by the stream name on which the episode is registered.

q
string
Example: q=yellow car

Requests a search across the streams being processed by the inference server. The query is free-form text that describes an object to search for and its attributes. The query may include color properties ("yellow") or appearance attributes ("beard"). Results may be inaccurate and should be reviewed by inspecting the corresponding video fragments. In order to use the "context search" feature, the vision-context-search package must be installed on the inference server to produce digital fingerprints of streams. When you use the "q" parameter to get a list of episodes, only the following collection filters are supported: "media", "opened_at_gte", "opened_at_lte".

"face" (string) or "vehicle" (string)

Request a specific episode type

updated_at
integer <utc_ms> [ 1000000000000 .. 10000000000000 ]

Filter results by the timestamp when the episode was updated.
To specify a timestamp range, these suffixes may be used:
_gt: greater than value
_lt: less than value
_gte: greater than or equal to value
_lte: less than or equal to value

poll_timeout
integer <seconds>
Example: poll_timeout=30

The client may ask to delay the response if there are no episodes to fetch. This should be used as a long-poll mechanism for lightweight fetching of episodes from the origin.

Responses

Response samples

Content type
application/json
{
  "estimated_count": 5,
  "next": "JTI0cG9zaXRpb25fZ3Q9MA==",
  "prev": "JTI0cG9zaXRpb25fbHQ9MSYlMjRyZXZlcnNlZD10cnVl",
  "timing": { },
  "episodes": [ ]
}

monitoring

Server metrics and status

Server info and runtime stats

Provides information about the running instance, such as version, available hardware, and utilization.

Authorizations:
basicAuth, bearerAuth

Responses

Response samples

Content type
{
  "server_version": "21.12",
  "build": 235,
  "schema_version": "5e5e91d8",
  "now": 1639337825000,
  "started_at": 1639337825,
  "devices": [ ]
}

Prometheus metrics

Provides an endpoint for a Prometheus scraper. Each record represents per-stream metrics.
Additionally, there are per-worker records containing aggregated metrics for the streams served by each worker.
Per-worker metrics are marked with the media=all attribute.

A JSON representation of the metrics is not implemented.
The schema below can be used as a reference to get the list of metrics with descriptions.
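When aggregating scraped values yourself, the media="all" per-worker records must be skipped, or every stream would be counted twice. A sketch (Python; the "vision_" metric-name prefix and the exact label layout are assumptions, the metric name itself comes from the schema below):

```python
import re

# Hypothetical exposition-format excerpt; media="all" marks the
# per-worker aggregate described above.
SAMPLE = """\
vision_processed_frames_count{media="cam-045"} 1200
vision_processed_frames_count{media="cam-046"} 800
vision_processed_frames_count{media="all"} 2000
"""

LINE = re.compile(r'(\w+)\{media="([^"]+)"\}\s+(\S+)')

def per_stream_total(text, metric):
    """Sum a metric over individual streams, excluding media="all"."""
    total = 0.0
    for name, media, value in LINE.findall(text):
        if name == metric and media != "all":
            total += float(value)
    return total
```

In PromQL the equivalent guard would be a label filter such as media!="all" in the selector.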

Authorizations:
basicAuth, bearerAuth

Responses

Response samples

Content type
{
  "input_frames_count": 0,
  "decoded_frames_count": 0,
  "decoder_restarts_count": 0,
  "decoder_errors_count": 0,
  "processed_frames_count": 0,
  "processing_errors_count": 0,
  "decoding_time": 0,
  "frame_preprocessing_time": 0,
  "processing_wait_time": 0,
  "processing_time": 0,
  "detections_count": 0,
  "detections_accepted_count": 0,
  "detections_confidence_rejected_count": 0,
  "detections_orientation_rejected_count": 0,
  "detections_area_rejected_count": 0,
  "detections_confidence": 0,
  "detections_accepted_confidence": 0,
  "detections_area": 0,
  "detections_accepted_area": 0,
  "fingerprints_count": 0,
  "fingerprints_accepted_count": 0,
  "fingerprints_confidence": 0,
  "fingerprints_accepted_confidence": 0,
  "device_transfer_time": 0,
  "device_transferred_bytes": 0,
  "detection_time": 0,
  "fingerprinting_time": 0,
  "host_transfer_time": 0,
  "host_transferred_bytes": 0,
  "fingerprint_serialization_time": 0,
  "episodes_created_count": 0,
  "episodes_updated_count": 0,
  "episodes_detections_count": 0,
  "episodes_duration": 0,
  "rejected_episodes_count": 0,
  "rejected_episodes_detections_count": 0,
  "rejected_episodes_duration": 0,
  "episode_creation_distance": 0,
  "episode_update_distance": 0,
  "episodes_forming_time": 0,
  "episodes_preview_encoding_time": 0,
  "episode_creation_latency": 0
}
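Metrics such as processing_time and processed_frames_count read like cumulative counters, so a derived per-frame average between two scrapes can be computed from their deltas. A sketch (Python; the assumption that both values grow monotonically, and the time unit, are not stated in this reference):

```python
def avg_processing_time(prev, curr):
    """Average per-frame processing time between two scrapes.

    Assumes processing_time and processed_frames_count are monotonically
    increasing counters; the result is in whatever unit the server
    reports processing_time.
    """
    frames = curr["processed_frames_count"] - prev["processed_frames_count"]
    if frames <= 0:
        return 0.0  # no new frames (or a counter reset) between scrapes
    return (curr["processing_time"] - prev["processing_time"]) / frames
```

The same delta-over-delta pattern applies to pairs like decoding_time / decoded_frames_count or detection_time / detections_count.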

Liveness probe

Liveness probe.

Authorizations:
basicAuth, bearerAuth

Responses

Response samples

Content type
application/json
{
  "server_version": "21.12",
  "build": 235,
  "schema_version": "5e5e91d8",
  "now": 1639337825000,
  "started_at": 1639337825
}
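A probe consumer can derive the instance's uptime from this response. Note the units in the sample (Python; the observation that "now" is in milliseconds while "started_at" appears to be in seconds is based only on the sample values and should be verified against your server):

```python
def uptime_seconds(probe):
    # In the sample response, "now" is UTC milliseconds while
    # "started_at" appears to be UTC seconds; adjust the scaling if your
    # server reports both fields in the same unit.
    return probe["now"] / 1000 - probe["started_at"]

sample = {"now": 1639337825000, "started_at": 1639337825}
```

A liveness check would typically treat any successful response as healthy and use the derived uptime only for monitoring.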

counters

Counters provide historical data about changes in the number of objects of interest, in the form of records.
Each record represents statistical information such as how many new visitors arrived in the area within a timeframe, and what the maximum and minimum numbers of visitors in the area were.

Counter records list

The returned list includes records of counters of all types, i.e. humans, vehicles, etc. Each record represents metrics aggregated within some timeframe, for instance a minute. Specify a set of collection filters (such as media, region, or period of time) to pick the records of interest.

Authorizations:
basicAuth, bearerAuth
query Parameters
media
string
Example: media=cam-045

Filter results by the stream name on which the counter is acting.

opened_at
integer <utc_ms> [ 1000000000000 .. 10000000000000 ]

Filter results by the timestamp when the record was created.

Responses

Response samples

Content type
application/json
{
  "estimated_count": 5,
  "next": "JTI0cG9zaXRpb25fZ3Q9MA==",
  "prev": "JTI0cG9zaXRpb25fbHQ9MSYlMjRyZXZlcnNlZD10cnVl",
  "timing": { },
  "records": [ ]
}
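Once records are fetched, a client typically reduces them further, for example to find the busiest timeframe. A sketch (Python; the record field names "max" and "opened_at" are hypothetical, since this reference does not spell out the record schema):

```python
def busiest_timeframe(records):
    """Pick the record with the highest peak visitor count.

    Field names ("max", "opened_at") are assumptions for illustration;
    check the actual record schema of your server.
    """
    return max(records, key=lambda r: r["max"])

# Hypothetical per-minute records
records = [
    {"opened_at": 1639337820000, "max": 3},
    {"opened_at": 1639337880000, "max": 7},
]
```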