We have finally merged two new adapters into the Savant Framework, which help integrate Amazon Kinesis Video Streams (KVS) with Savant. KVS is a compelling technology that combines video-optimized streaming with video storage.
Working With Dynamic Video Sources in Savant
Working efficiently with multiple video sources in a computer vision pipeline is difficult. The system must handle their connection, disconnection, and outages, negotiate codecs, and spin up the corresponding decoder and encoder pipelines based on known hardware features and limitations.
That is why Savant promotes plug-and-play technology for video streams, which takes care of the nuances of source management: automatic dead-source eviction, codec negotiation, etc. Developers do not need to care how the framework implements that; they just attach and detach sources on demand.
This article demonstrates how to dynamically attach sources to and detach them from a pipeline using plain Docker.
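For a taste of what "plain Docker" means here, attaching a file source to a running module might look like the commands below. The image name, entrypoint path, and environment variables follow the Savant adapter documentation for the 0.2.x line and may differ in your version; the socket directory, data path, and sample file are assumptions for illustration.

```shell
# Attach a new source: run a source adapter container that pushes frames
# into the module's input socket (a Unix domain socket shared via a volume)
docker run --rm -it \
  -e SOURCE_ID=cam-1 \
  -e LOCATION=/data/sample.mp4 \
  -e FILE_TYPE=video \
  -e ZMQ_ENDPOINT=dealer+connect:ipc:///tmp/zmq-sockets/input-video.ipc \
  -v /tmp/zmq-sockets:/tmp/zmq-sockets \
  -v "$(pwd)/data:/data:ro" \
  --entrypoint /opt/savant/adapters/gs/sources/media_files.sh \
  ghcr.io/insight-platform/savant-adapters-gstreamer:latest

# Detach the source: simply stop the adapter container; the module
# evicts the dead source automatically
docker stop <container-id>
```

Detaching requires no pipeline restart: stopping the adapter is enough, and the module's dead-source eviction handles the rest.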
To fully understand how Savant works with adapters, please first read the documentation article on the Savant protocol. Here, we show how to connect and disconnect a source from a running module without diving into the internals.
In our samples, we use Docker Compose to simplify execution and help users launch the code quickly. However, this often causes misunderstandings among those who do not know Docker machinery well. So, let us begin by “decomposing” a sample. Savant supports Unix domain sockets and TCP/IP sockets for component communication, and we will try both.
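The underlying difference between the two transports can be seen with plain Python sockets. This is a generic illustration, not Savant code (Savant components actually talk over ZeroMQ endpoints such as `ipc://` and `tcp://`); the payload and socket paths are arbitrary.

```python
import os
import socket
import tempfile
import threading

def echo_server(sock):
    """Accept one connection and echo the received bytes back."""
    conn, _ = sock.accept()
    conn.sendall(conn.recv(1024))
    conn.close()

def roundtrip(server_sock, connect_addr, family):
    """Send a payload to an echo server over the given transport and return the reply."""
    server_sock.listen(1)
    t = threading.Thread(target=echo_server, args=(server_sock,))
    t.start()
    client = socket.socket(family, socket.SOCK_STREAM)
    client.connect(connect_addr)
    client.sendall(b"frame-metadata")
    reply = client.recv(1024)
    client.close()
    t.join()
    server_sock.close()
    return reply

# Unix domain socket: same-host only, lowest overhead (like ipc:// endpoints)
path = os.path.join(tempfile.mkdtemp(), "demo.sock")
uds = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
uds.bind(path)
print(roundtrip(uds, path, socket.AF_UNIX))  # b'frame-metadata'

# TCP socket: works across hosts at the cost of the network stack (like tcp://)
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.bind(("127.0.0.1", 0))
print(roundtrip(tcp, ("127.0.0.1", tcp.getsockname()[1]), socket.AF_INET))  # b'frame-metadata'
```

The practical takeaway: Unix domain sockets need a shared filesystem path (hence volume mounts in Docker), while TCP sockets need a reachable host and port (hence published ports or a shared network).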
Savant 0.3.11
We are happy to announce a new release of Savant – 0.3.11. This release is 100% API compatible with 0.2.11 but is built on DeepStream 6.4 and JetPack 6.0 DP; thus, it does not support the Jetson Xavier family. Both the 0.3.x and 0.2.x branches are maintained under long-term support conditions: they do not receive new features, only bug fixes related to stable operation.
All new functionality is being developed in the upcoming 0.4.x branch, which will succeed 0.3.x. The roadmap for 0.4.0 is here. The release focuses on better interaction with data sources and sinks.
How To Work With MJPEG USB Camera in Savant
Many USB cameras offer MJPEG as the primary format for video streams. MJPEG is a lossy compressed format that combines low latency with optimized compression, allowing USB cameras to support high-resolution, high-FPS streaming. It is also a very popular solution for synchronized stereo cameras, as depicted in the lead image.
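MJPEG is simply a sequence of standalone JPEG images, each delimited by the SOI (0xFFD8) and EOI (0xFFD9) markers. A simplified stdlib sketch of splitting such a stream follows; real parsers must also skip markers embedded inside APP segments such as thumbnails, and the "frames" here are fake payloads, not valid JPEGs.

```python
SOI = b"\xff\xd8"  # JPEG start-of-image marker
EOI = b"\xff\xd9"  # JPEG end-of-image marker

def split_mjpeg(buf: bytes):
    """Yield individual JPEG frames from a raw MJPEG byte stream."""
    start = 0
    while True:
        s = buf.find(SOI, start)
        if s < 0:
            return
        e = buf.find(EOI, s + 2)
        if e < 0:
            return  # incomplete trailing frame
        yield buf[s:e + 2]
        start = e + 2

# Two tiny fake "JPEG" frames back to back (payloads are illustrative)
stream = SOI + b"frame-1" + EOI + SOI + b"frame-2" + EOI
frames = list(split_mjpeg(stream))
print(len(frames))  # 2
```

This per-frame independence is what gives MJPEG its low latency: any frame can be decoded without waiting for reference frames, unlike H.264/H.265 GOP-based streams.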
Emulating USB Camera In Linux With FFmpeg and V4L2 Loopback
This short article discusses simulating an MJPEG USB camera in Linux with FFmpeg and a V4L2 loopback device. USB and CSI cameras, alongside GigE Vision cameras, are the main visual data sources in robotics and other industrial applications.
They have significant advantages over RTSP cameras:
- low latency;
- simple bulletproof stack;
- zero-configuration;
- cross-camera video synchronization;
- high-FPS options;
- high-bandwidth interface.
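The emulation itself boils down to a couple of commands. The device number, card label, and sample file below are arbitrary choices; the v4l2loopback kernel module must be installed, and your FFmpeg build must include the v4l2 output device.

```shell
# Create a virtual video device /dev/video10
sudo modprobe v4l2loopback devices=1 video_nr=10 card_label="EmulatedCam" exclusive_caps=1

# Loop a video file into the device as MJPEG; -re throttles reading
# to the source frame rate, mimicking a real camera
ffmpeg -re -stream_loop -1 -i sample.mp4 -c:v mjpeg -f v4l2 /dev/video10

# Check that the device is up and advertises the format
v4l2-ctl --device /dev/video10 --list-formats
```

Any V4L2 consumer (OpenCV, GStreamer's v4l2src, or a Savant USB camera adapter) can then open `/dev/video10` exactly as it would a physical camera.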
Serving YOLO V8: Savant is More Than Three Times Faster Than PyTorch
Everybody loves benchmarking, and we love it, too! We always claim that Savant is fast and highly optimized for Nvidia hardware because it uses TensorRT inference under the hood. However, without numbers and benchmarks, the declaration may sound unfounded. Thus, we decided to publish a benchmark demonstrating the inference performance for three technologies:
- PyTorch on CUDA + video processing with OpenCV;
- PyTorch on CUDA + hardware-accelerated (NVDEC) video processing with Torchaudio (oddly, the video processing primitives live in the Torchaudio library);
- Savant.
The first is the de facto approach most developers use. The second is used rarely because it requires a custom build, and developers often underestimate hardware-accelerated video decoding/encoding as the critical enabling factor for CUDA-based processing.
🚀 0.2.7 Release Notes
Savant 0.2.7 was released on February 7, 2024. The release includes several bug fixes, four new demos, and other enhancements, including documentation and benchmarking.
Savant crossed the 400-star mark on GitHub, and our Discord is now a must-join place. The work on the release took three months. In the following sections, we cover the essential parts of the release in detail.
IMPORTANT: Savant 0.2.7 is the last feature release in the 0.2.X branch. The following releases in the 0.2.X branch will be maintenance and bugfix releases. The feature development switches to the 0.3.X branch based on DeepStream 6.4 and WILL NOT support the Jetson Xavier family because Nvidia does not support them with DS 6.4.
GitHub: https://github.com/insight-platform/Savant/releases/tag/v0.2.7
Serving PyTorch Models Efficiently and Asynchronously With Savant
Savant gives developers highly efficient inference based on TensorRT, which you should usually rely on when developing efficient pipelines. However, particular needs may require integrating a Savant pipeline with another inference technology. In this article, we show how Savant integrates with GPU-accelerated PyTorch inference.
You can also use this approach if you are PyTorch-centric and happy with it but need an efficient infrastructure for video processing: transfer, decoding, and encoding.
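The general shape of asynchronous serving, independent of Savant's actual API, is a queue plus a worker: the pipeline enqueues frames without blocking, and a worker drains the queue and runs the model. A stdlib asyncio sketch with a mock "model" (a sum over a list standing in for a PyTorch forward pass; all names are ours):

```python
import asyncio

async def inference_worker(queue: asyncio.Queue, results: dict):
    """Drain the queue and run the (mock) model on each item."""
    while True:
        frame_id, tensor = await queue.get()
        if frame_id is None:  # shutdown sentinel
            queue.task_done()
            break
        # Stand-in for a GPU forward pass executed off the pipeline's hot path
        results[frame_id] = sum(tensor)
        queue.task_done()

async def main():
    queue: asyncio.Queue = asyncio.Queue(maxsize=8)  # bounded: applies backpressure
    results: dict = {}
    worker = asyncio.create_task(inference_worker(queue, results))
    for i in range(3):
        await queue.put((i, [i, i + 1]))  # pipeline thread enqueues frames
    await queue.put((None, None))
    await worker
    return results

print(asyncio.run(main()))  # {0: 1, 1: 3, 2: 5}
```

The bounded queue is the important design choice: when inference falls behind, the pipeline blocks (or drops frames) instead of growing memory without limit.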
A New Feature: Accelerated Model Output Post-Processing With CuPy
When deep neural networks are evaluated with the CUDA runtime, the model's input and output are allocated in GPU memory. The next step is to extract high-level data like bounding boxes, attributes, or masks from the raw GPU-allocated tensors.
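To make the step concrete, here is a pure-Python sketch of what such post-processing does: raw detector rows of `(x, y, w, h, confidence)` are filtered by a confidence threshold and turned into structured boxes. The layout and function names are our assumptions for illustration; the actual Savant feature performs this on GPU memory with CuPy.

```python
def extract_boxes(raw, conf_threshold=0.5):
    """Turn a raw output tensor (rows of x, y, w, h, confidence) into box dicts."""
    boxes = []
    for x, y, w, h, conf in raw:
        if conf >= conf_threshold:
            boxes.append({"bbox": (x, y, w, h), "confidence": conf})
    return boxes

raw_output = [
    (10, 20, 50, 80, 0.92),
    (0, 0, 5, 5, 0.11),      # below threshold, dropped
    (200, 40, 64, 64, 0.73),
]
print(len(extract_boxes(raw_output)))  # 2
```

With CuPy, the same filtering stays on the GPU as a vectorized slice (e.g. `raw[raw[:, 4] >= threshold]`), avoiding a device-to-host copy of the full raw tensor.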
Running The RT-DETR Detection Model Efficiently With Savant
Transformer models are gradually becoming more popular in computer vision. Even a couple of years ago, nobody broadly used transformers for computer vision. However, transformers have significantly changed the landscape of sophisticated deep learning solutions, primarily in natural language processing and generative AI.