Auxiliary Video Stream Support In Savant

Savant recently took a major step toward hardware-accelerated video transcoding and composition. Previous versions did not let users produce auxiliary or custom video streams inside modules, which made it difficult to create customized dashboards, change video resolution, or build video compositions such as 2×2 video walls that combine four streams into a single image.

Compatibility note: the minimum supported Savant version is 0.4.1.

Previously, users could generate auxiliary streams with OpenCV and other custom approaches, but those streams were not integrated into the Savant downstream communication protocol, were not delivered to downstream pipelines, and did not carry metadata.

Now users can create auxiliary video streams containing both video and metadata directly from pyfunc elements with very simple syntax:

# create the auxiliary stream

aux_stream = self.auxiliary_stream(
    source_id=aux_source_id,
    width=resolution.width,
    height=resolution.height,
    codec_params=self.codec_params,
)

# use it: allocate a frame and fill it on the GPU

aux_frame, aux_buffer = aux_stream.create_frame(
    pts=frame_meta.pts,
    duration=frame_meta.duration,
)

with nvds_to_gpu_mat(aux_buffer, batch_id=0) as aux_mat:
    # resize the original frame into the auxiliary frame
    cv2.cuda.resize(
        src=frame_mat,
        dst=aux_mat,
        dsize=(resolution.width, resolution.height),
        stream=stream,
    )

In the snippet above, frame_mat is the GpuMat view of the original frame, stream is the CUDA stream, and resolution is the target resolution, all available in the surrounding pyfunc code. These streams can be created and destroyed on demand. They are built on the well-known OpenCV GpuMat interface, which allows efficient content composition from GPU-allocated source frames without excessive GPU-CPU transfers. You can also fill their metadata with all required values using the VideoFrame API.
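
As an example, a 2×2 video wall can be composed by resizing each source frame into a quadrant of a single auxiliary frame. The sketch below is illustrative rather than taken from Savant: it assumes the pyfunc has already created a 1280×720 auxiliary stream (aux_stream), keeps the GpuMat of each of four source frames in source_mats, has the pts, duration, and CUDA stream of the current frame at hand, and imports cv2 and nvds_to_gpu_mat as in the snippet above.

# a minimal sketch of 2x2 composition; aux_stream, source_mats (four GpuMats),
# pts, duration, and stream are assumed to exist in the surrounding pyfunc code
wall_width, wall_height = 1280, 720
quad_width, quad_height = wall_width // 2, wall_height // 2

wall_frame, wall_buffer = aux_stream.create_frame(pts=pts, duration=duration)
with nvds_to_gpu_mat(wall_buffer, batch_id=0) as wall_mat:
    for idx, src_mat in enumerate(source_mats):
        x = (idx % 2) * quad_width
        y = (idx // 2) * quad_height
        # a GpuMat ROI shares memory with its parent, so resizing into it
        # writes directly into the corresponding quadrant of the wall frame
        quadrant = cv2.cuda.GpuMat(wall_mat, (x, y, quad_width, quad_height))
        cv2.cuda.resize(
            src=src_mat,
            dst=quadrant,
            dsize=(quad_width, quad_height),
            stream=stream,
        )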

To help users adopt the feature, we provide a sample showing video transcoding from the original resolution to 720p, 480p, and 360p.
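
In rough terms, the per-frame logic of such a transcoding pyfunc could look like the sketch below. It is not the sample's exact code: the per-source stream cache, the source-ID suffixes, and the target resolutions are illustrative assumptions, while auxiliary_stream, create_frame, and nvds_to_gpu_mat are used exactly as in the snippet above.

# sketch: transcode each incoming frame to several lower resolutions;
# self._aux_streams is a hypothetical per-source cache of auxiliary streams,
# frame_mat and stream come from the surrounding pyfunc code as above
for width, height in ((1280, 720), (854, 480), (640, 360)):
    key = (frame_meta.source_id, height)
    aux_stream = self._aux_streams.get(key)
    if aux_stream is None:
        aux_stream = self.auxiliary_stream(
            source_id=f'{frame_meta.source_id}-{height}p',
            width=width,
            height=height,
            codec_params=self.codec_params,
        )
        self._aux_streams[key] = aux_stream

    aux_frame, aux_buffer = aux_stream.create_frame(
        pts=frame_meta.pts,
        duration=frame_meta.duration,
    )
    with nvds_to_gpu_mat(aux_buffer, batch_id=0) as aux_mat:
        # downscale the original frame into the auxiliary frame on the GPU
        cv2.cuda.resize(
            src=frame_mat,
            dst=aux_mat,
            dsize=(width, height),
            stream=stream,
        )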

Do not hesitate to join our Discord server to get more Savant-related information and help.