Auxiliary Video Stream Support In Savant

Savant recently took a major step towards hardware-accelerated video transcoding and composition. Previous versions did not allow users to produce auxiliary or custom video streams inside modules, which made it difficult to create custom dashboards, change video resolution, or build video compositions such as 2×2 video walls that combine four streams into a single image.

Continue reading Auxiliary Video Stream Support In Savant

Savant 0.4.0: Advanced Adapters, Developer Tools, Video Archiving and Re-Streaming

Savant 0.4.0 is out. This release focuses on system usability, interfacing, advanced computer vision, and video analytics. It is based on the state-of-the-art DeepStream 6.4. Savant moves towards the frontier of creating omnipresent computer vision and video analytics systems working in hybrid mode on the edge and in data centers.

Unlike commonly used computer vision frameworks like PyTorch, TensorFlow, OpenVINO/DlStreamer, and DeepStream, Savant offers its users not only inference and image manipulation tools but also an advanced architecture for building distributed edge/datacenter computer vision applications communicating over the network. Thus, Savant users focus on computer vision rather than reinventing the wheel when productizing their pipelines. So, what is new in Savant 0.4.0?

Continue reading Savant 0.4.0: Advanced Adapters, Developer Tools, Video Archiving and Re-Streaming

Working With Dynamic Video Sources in Savant

It is difficult to work efficiently with multiple video sources in a computer vision pipeline. The system must handle their connection, disconnection, and outages, negotiate codecs, and spin up the corresponding decoder and encoder pipelines based on known hardware features and limitations.

That is why Savant promotes plug-and-play technology for video streams, which takes care of the nuances of source management, automatic dead-source eviction, codec negotiation, etc. Developers do not need to care how the framework implements this: they just attach and detach sources on demand.

This article demonstrates how to dynamically attach sources to and detach them from a pipeline with plain Docker.

To fully understand how Savant works with adapters, please first read the article related to the Savant protocol in the documentation. In this article, we will show how to connect and disconnect a source from a running module without diving into how it works internally.

In our samples, we use docker-compose to simplify execution and help users launch the code quickly. However, this often causes misunderstandings among those who do not understand the Docker machinery well. So, let us begin by “decomposing” a sample. Savant supports Unix domain sockets and TCP/IP sockets for component communication, and we will try both.
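As a rough sketch of what docker-compose does under the hood, a file-based source adapter can be started with plain Docker along the following lines. The image name, environment variables, and socket path below are assumptions based on typical Savant adapter conventions, not a verbatim command; consult the Savant documentation for the exact invocation.

```shell
# Hypothetical launch of a Savant source adapter feeding a module over a
# Unix domain socket; all names below are illustrative placeholders.
docker run --rm -it \
  -v /tmp/zmq-sockets:/tmp/zmq-sockets \
  -v "$(pwd)/data:/data:ro" \
  -e SOURCE_ID=camera-1 \
  -e LOCATION=/data/video.mp4 \
  -e ZMQ_ENDPOINT=pub+connect:ipc:///tmp/zmq-sockets/input-video.ipc \
  ghcr.io/insight-platform/savant-adapters-gstreamer:latest
```

Detaching the source is then simply stopping the container; the module evicts the dead source automatically.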

Continue reading Working With Dynamic Video Sources in Savant

Savant 0.3.11

We are happy to announce a new release of Savant – 0.3.11. This release is 100% API compatible with 0.2.11 but is built on DeepStream 6.4 and JetPack 6.0 DP. Thus, it does not support the Jetson Xavier family. Both 0.3.x and 0.2.x branches are maintained under long-term support conditions. They do not receive new features, only bug fixes related to stable operation.

All new functionality is developed in the upcoming 0.4.x branch, which will succeed 0.3.x. The roadmap for 0.4.0 is here. The release focuses on better interaction with data sources and sinks.

Emulating USB Camera In Linux With FFmpeg and V4L2 Loopback

This short article discusses simulating an MJPEG USB camera in Linux with FFmpeg and a V4L2 loopback device. USB and CSI cameras, alongside GigE Vision cameras, are the main visual data sources in robotics and other industrial applications.

They have significant advantages over RTSP cameras:

  • low latency;
  • simple bulletproof stack;
  • zero-configuration;
  • cross-camera video synchronization;
  • high-FPS options;
  • high-bandwidth interface.
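The emulation itself boils down to two commands: load the loopback module and feed it with FFmpeg. The device number, card label, and input file below are placeholders, so adjust them to your setup.

```shell
# Create a loopback device at /dev/video10 (requires the v4l2loopback
# module, e.g. from the v4l2loopback-dkms package).
sudo modprobe v4l2loopback devices=1 video_nr=10 \
     card_label="FakeCam" exclusive_caps=1

# Stream a file to the device as MJPEG, looping forever at real-time speed.
ffmpeg -re -stream_loop -1 -i sample.mp4 \
       -c:v mjpeg -f v4l2 /dev/video10
```

Any V4L2-aware consumer (OpenCV, GStreamer, a browser) can then open /dev/video10 as if it were a physical MJPEG camera.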
Continue reading Emulating USB Camera In Linux With FFmpeg and V4L2 Loopback

Serving YOLO V8: Savant is More Than Three Times Faster Than PyTorch

Everybody loves benchmarking, and we love it, too! We always claim that Savant is fast and highly optimized for Nvidia hardware because it uses TensorRT inference under the hood. However, without numbers and benchmarks, the claim may sound unfounded. Thus, we decided to publish a benchmark demonstrating the inference performance of three technologies:

  1. PyTorch on CUDA + video processing with OpenCV;
  2. PyTorch on CUDA + hardware accelerated (NVDEC) video processing with Torchaudio (weirdly, video processing primitives lie in the Torchaudio library);
  3. Savant.

The first is the de facto approach most developers use. The second is used rarely because it requires a custom build, and developers often underestimate hardware-accelerated video decoding/encoding as the critical enabling factor for CUDA-based processing.
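To make the first approach concrete, here is a minimal sketch of per-frame PyTorch inference. The model is a stand-in convolution rather than YOLOv8, and the frame is synthesized with NumPy instead of being decoded with OpenCV's cv2.VideoCapture, so this illustrates only the shape of the loop, not the benchmark code itself.

```python
import numpy as np
import torch

# Stand-in model; the benchmark used YOLOv8, which is not reproduced here.
model = torch.nn.Conv2d(3, 8, kernel_size=3, padding=1)
model.eval()
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

def infer_frame(frame_bgr: np.ndarray) -> torch.Tensor:
    """Convert an HxWx3 uint8 frame to NCHW float and run the model."""
    tensor = (
        torch.from_numpy(frame_bgr)
        .permute(2, 0, 1)   # HWC -> CHW
        .unsqueeze(0)       # add the batch dimension
        .float()
        .div(255.0)
        .to(device)
    )
    with torch.no_grad():
        return model(tensor)

# In the real pipeline frames come from cv2.VideoCapture; here we fake one.
frame = np.zeros((64, 64, 3), dtype=np.uint8)
out = infer_frame(frame)
print(tuple(out.shape))  # (1, 8, 64, 64)
```

Note that every frame crosses the CPU/GPU boundary here, which is exactly the overhead hardware-accelerated decoding (approaches 2 and 3) avoids.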

Continue reading Serving YOLO V8: Savant is More Than Three Times Faster Than PyTorch

🚀 0.2.7 Release Notes

Savant 0.2.7 was released on February 7, 2024. The release includes several bug fixes, four new demos, and other enhancements, including documentation and benchmarking.

Savant crossed the 400-star mark on GitHub, and our Discord is now a must-join place. The work on the release took three months. In the following sections, we will cover the essential parts of the release in detail.

IMPORTANT: Savant 0.2.7 is the last feature release in the 0.2.X branch. The following releases in the 0.2.X branch will be maintenance and bugfix releases. Feature development switches to the 0.3.X branch, which is based on DeepStream 6.4 and WILL NOT support the Jetson Xavier family because Nvidia does not support it in DS 6.4.


Continue reading 🚀 0.2.7 Release Notes

Serving PyTorch Models Efficiently and Asynchronously With Savant

Savant gives developers highly efficient inference based on TensorRT, which you should usually use when developing efficient pipelines. However, for particular needs, you may have to integrate a Savant pipeline with another inference technology. In this article, we show how Savant integrates with GPU-accelerated PyTorch inference.

You can also use this approach if you are PyTorch-centric and happy with it but need efficient infrastructure for video processing: transfer, decoding, and encoding.
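The core idea of serving a model asynchronously can be sketched with stdlib primitives alone: frames are pushed into a bounded queue and processed by a worker thread, so the video loop never blocks on inference. The dummy model below is a placeholder for a PyTorch forward pass; in Savant the pipeline would stay in charge of transfer and decoding.

```python
import queue
import threading

def dummy_model(frame: bytes) -> int:
    # Stand-in for a PyTorch forward pass.
    return len(frame)

results = []
frames: "queue.Queue[bytes | None]" = queue.Queue(maxsize=8)

def worker() -> None:
    while True:
        frame = frames.get()
        if frame is None:          # sentinel: shut down the worker
            break
        results.append(dummy_model(frame))

t = threading.Thread(target=worker)
t.start()
for i in range(3):
    frames.put(b"x" * (i + 1))     # "decoded frames" from the pipeline
frames.put(None)
t.join()
print(results)  # [1, 2, 3]
```

The bounded queue also gives natural backpressure: when inference lags, the producer blocks instead of accumulating unbounded frames in memory.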

Continue reading Serving PyTorch Models Efficiently and Asynchronously With Savant