Savant 0.4.1 is Out. Spotlight: Advanced Video Processing Features
Savant 0.4.1 continues the 0.4.x release cycle, introducing several new features, multiple bug fixes, and sample updates. It is built on DeepStream 6.4 / JetPack 6.0 and widely tested on Jetson Orin Nano, Orin NX, and on Turing, Ampere, and Ada Lovelace discrete GPUs. In this release, we focused on addressing problems that our customers and community users discovered in 0.4.0. We also developed an auxiliary watchdog service for pipeline health monitoring.
Dump and Replay Video Traffic with Buffer Adapter
In video streaming applications, reproducing results is crucial for quality estimation, troubleshooting, and code improvement. Unfortunately, this is hard to do because the data arrives as a stream. Developers therefore need utilities that record traffic and replay it with the same rate and timing to simulate real sources. Sometimes developers can use video files instead of live sources like RTSP or USB cameras.
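As a framework-agnostic illustration of the "same rate and timing" requirement, here is a minimal Python sketch that replays recorded frames while preserving their original inter-frame intervals; the dump format (a PTS plus a payload) is an assumption for the example, not the buffer adapter's actual format.

```python
import time


def replay(frames, send):
    """Replay recorded frames, preserving the original timing.

    `frames` is an iterable of (pts_seconds, payload) tuples captured
    from a live source; `send` pushes a payload downstream.
    """
    start_wall = time.monotonic()
    start_pts = None
    for pts, payload in frames:
        if start_pts is None:
            start_pts = pts
        # Sleep until the wall-clock offset matches the recorded PTS offset,
        # so the consumer observes the same rate and timing as the live source.
        delay = (pts - start_pts) - (time.monotonic() - start_wall)
        if delay > 0:
            time.sleep(delay)
        send(payload)
```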
Auxiliary Video Stream Support In Savant
Savant recently took a major step towards hardware-accelerated video transcoding and composition. Previous versions did not allow users to produce auxiliary or custom video streams inside modules, which made it difficult to create customized dashboards, change video resolution, or build video compositions such as a 2×2 video wall combining four streams in a single image.
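To make the 2×2 example concrete, here is a minimal, framework-agnostic sketch that composes four frames into one image on the CPU with OpenCV; it is not Savant's hardware-accelerated auxiliary-stream API, and the tile size is an arbitrary assumption.

```python
import cv2
import numpy as np


def compose_2x2(frames, tile_w=640, tile_h=360):
    """Compose four BGR frames into a single 2x2 video-wall image."""
    assert len(frames) == 4, "a 2x2 wall needs exactly four frames"
    # Resize every input to the same tile size, then stack into a grid.
    tiles = [cv2.resize(f, (tile_w, tile_h)) for f in frames]
    top = np.hstack(tiles[:2])
    bottom = np.hstack(tiles[2:])
    return np.vstack([top, bottom])
```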
Savant 0.4.0: Advanced Adapters, Developer Tools, Video Archiving and Re-Streaming
Savant 0.4.0 is out. This release focuses on system usability, interfacing, advanced computer vision, and video analytics. It is based on the state-of-the-art DeepStream 6.4. Savant moves towards the frontier of creating omnipresent computer vision and video analytics systems that work in hybrid mode on the edge and in data centers.
Unlike commonly used computer vision frameworks such as PyTorch, TensorFlow, OpenVINO/DlStreamer, and DeepStream, Savant offers its users not only inference and image manipulation tools but also an advanced architecture for building distributed edge/datacenter computer vision applications that communicate over the network. Thus, Savant users can focus on computer vision instead of reinventing the wheel when productizing their pipelines. So, what is new in Savant v0.4.0?
A New Integration Opens The Way In The Cloud: Meet Amazon KVS Adapters
We finally merged two new adapters into the Savant framework that integrate Amazon Kinesis Video Streams (KVS) with Savant. KVS is a great technology that combines video-optimized streaming with video storage.
Working With Dynamic Video Sources in Savant
It is difficult to work efficiently with multiple video sources in a computer vision pipeline. The system must handle their connection, disconnection, and outages, negotiate codecs, and spin up the corresponding decoder and encoder pipelines based on known hardware features and limitations.
That is why Savant promotes plug-and-play handling of video streams, which takes care of the nuances of source management: automatic dead source eviction, codec negotiation, and so on. Developers do not need to care how the framework implements this; they simply attach and detach sources on demand.
This article demonstrates how to dynamically attach sources to and detach them from a pipeline with plain Docker.
To fully understand how Savant works with adapters, please first read the documentation article on the Savant protocol. Here, we show how to connect a source to and disconnect it from a running module without diving into how it works internally.
In our samples, we use docker-compose to simplify execution and help users launch the code quickly. However, this often causes confusion among those who are not familiar with Docker machinery. So let us begin by "decomposing" a sample. Savant supports Unix domain sockets and TCP/IP sockets for component communication, and we will try both.
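To give a flavor of what attaching a source with plain Docker looks like, here is a hedged Python sketch that starts and later stops a file-source adapter container. The image name, environment variables, and socket path are illustrative assumptions; take the exact values from the sample's docker-compose.yml and the Savant documentation.

```python
import subprocess

# Assumed values; the real ones come from the sample's docker-compose.yml.
ADAPTER_IMAGE = "ghcr.io/insight-platform/savant-adapters-gstreamer:latest"
ZMQ_SOCKET = "dealer+connect:ipc:///tmp/zmq-sockets/input-video.ipc"


def attach_source(source_id: str, host_video: str) -> str:
    """Start an adapter container feeding a video file into the running module.

    `host_video` must be an absolute path on the host. Returns the container id.
    NOTE: the adapter image may require selecting a specific source entrypoint;
    see the Savant docs for the adapter you use.
    """
    return subprocess.check_output([
        "docker", "run", "-d", "--rm",
        "-e", f"SOURCE_ID={source_id}",
        "-e", "LOCATION=/data/input.mp4",
        "-e", f"ZMQ_ENDPOINT={ZMQ_SOCKET}",
        "-v", "/tmp/zmq-sockets:/tmp/zmq-sockets",
        "-v", f"{host_video}:/data/input.mp4:ro",
        ADAPTER_IMAGE,
    ], text=True).strip()


def detach_source(container_id: str) -> None:
    """Stop the adapter container; the module evicts the dead source."""
    subprocess.run(["docker", "stop", container_id], check=True)
```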
Savant 0.3.11
We are happy to announce a new release of Savant: 0.3.11. This release is 100% API-compatible with 0.2.11 but is built on DeepStream 6.4 and JetPack 6.0 DP; thus, it does not support the Jetson Xavier family. Both the 0.3.x and 0.2.x branches are maintained under long-term support conditions: they do not receive new features, only bug fixes related to stable operation.
All new functionality is being developed in the upcoming 0.4.x branch, which will succeed 0.3.x. The roadmap for 0.4.0 is here. The release focuses on better interaction with data sources and sinks.
How To Work With MJPEG USB Camera in Savant
Many USB cameras offer MJPEG as the primary format for video streams. MJPEG is a lossy compressed format that combines low latency with efficient compression, allowing USB cameras to support high-resolution, high-FPS streaming. It is also a very popular choice for synchronized stereo cameras, as depicted in the lead image.
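As a quick sanity check outside of Savant, a few lines of OpenCV can confirm that a camera actually negotiates MJPEG; the device index, resolution, and FPS below are assumptions to adjust for your hardware.

```python
import cv2

# Device index, resolution, and FPS are assumptions; adjust for your camera.
cap = cv2.VideoCapture(0, cv2.CAP_V4L2)
cap.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*"MJPG"))
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1920)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 1080)
cap.set(cv2.CAP_PROP_FPS, 30)

ok, frame = cap.read()
fourcc = int(cap.get(cv2.CAP_PROP_FOURCC))
# Decode the FOURCC integer back into a four-character string (e.g. "MJPG").
print("got frame:", ok, "fourcc:", fourcc.to_bytes(4, "little").decode())
cap.release()
```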
Emulating USB Camera In Linux With FFmpeg and V4L2 Loopback
This short article discusses simulating an MJPEG USB camera in Linux with FFmpeg and a V4L2 loopback device; a minimal command sketch appears after the list below. USB and CSI cameras, alongside GigE Vision cameras, are the main visual data sources in robotics and other industrial applications.
They have significant advantages over RTSP cameras:
- low latency;
- simple bulletproof stack;
- zero-configuration;
- cross-camera video synchronization;
- high-FPS options;
- high-bandwidth interface.
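Here is a minimal Python sketch of the emulation under the following assumptions: the v4l2loopback kernel module is installed, the loopback device number is 10, and sample.mp4 is the file to loop; the exact commands in the full article may differ.

```python
import subprocess

# Assumed loopback device number and input file; adjust to your setup.
DEVICE = "/dev/video10"
INPUT = "sample.mp4"

# Create the loopback device (requires the v4l2loopback kernel module).
subprocess.run(
    ["sudo", "modprobe", "v4l2loopback",
     "video_nr=10", "card_label=FakeUSBCam", "exclusive_caps=1"],
    check=True,
)

# Feed the file to the device as MJPEG at source rate (-re), looping forever,
# so it appears as a live MJPEG USB camera to V4L2 clients.
subprocess.run(
    ["ffmpeg", "-re", "-stream_loop", "-1", "-i", INPUT,
     "-c:v", "mjpeg", "-f", "v4l2", DEVICE],
    check=True,
)
```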
Serving YOLO V8: Savant is More Than Three Times Faster Than PyTorch
Everybody loves benchmarking, and we love it too! We always claim that Savant is fast and highly optimized for Nvidia hardware because it uses TensorRT inference under the hood. However, without numbers and benchmarks, the claim may sound unfounded. So we decided to publish a benchmark demonstrating the inference performance of three technologies:
- PyTorch on CUDA + video processing with OpenCV;
- PyTorch on CUDA + hardware-accelerated (NVDEC) video processing with Torchaudio (oddly, the video processing primitives live in the Torchaudio library);
- Savant.
The first is the de-facto approach most developers use. The second is rarely used because it requires a custom build, and developers often underestimate hardware-accelerated video decoding/encoding as a critical enabling factor for CUDA-based processing.
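For reference, here is a minimal sketch of the first setup (PyTorch/Ultralytics YOLOv8 on CUDA with CPU-side OpenCV decoding); the model file and input path are assumptions, and this is not the exact code used in the published benchmark.

```python
import cv2
from ultralytics import YOLO  # pip install ultralytics

# Assumed model and input; the published benchmark may use different ones.
model = YOLO("yolov8m.pt")
cap = cv2.VideoCapture("input.mp4")

frames = 0
while True:
    ok, frame = cap.read()  # CPU decode with OpenCV
    if not ok:
        break
    # Inference on the GPU; pre/postprocessing is handled by Ultralytics.
    model.predict(frame, device=0, verbose=False)
    frames += 1

cap.release()
print("processed frames:", frames)
```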