TensorRT is a high-performance deep learning inference optimizer and runtime library developed by NVIDIA. It’s specifically designed for production environments and optimized for NVIDIA GPUs. The primary goal of TensorRT is to accelerate deep learning inference, which is the process of using a trained neural network model to make predictions based on new data.
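For a sense of what the optimizer side looks like, below is a minimal sketch of building an engine from an ONNX model with the TensorRT Python API (TensorRT 8.x and the file names are assumptions for illustration, not taken from the article):

```python
import tensorrt as trt  # assumes the TensorRT 8.x Python bindings are installed

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
)
parser = trt.OnnxParser(network, logger)

# "model.onnx" is a placeholder for an exported model
with open("model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        raise RuntimeError("Failed to parse the ONNX model")

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)  # enable FP16 kernels where supported

# Build an engine optimized for the current GPU and save it for the runtime
engine_bytes = builder.build_serialized_network(network, config)
with open("model.engine", "wb") as f:
    f.write(engine_bytes)
```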
Continue reading How To Deploy And Serve Computer Vision With TensorRT Efficiently
Why Choose Savant Instead Of DeepStream For High-Performance Computer Vision
NVIDIA DeepStream SDK is a game-changing technology for deep neural network inference in computer vision served with NVIDIA hardware. The optimized architecture accounts for the specifics of NVIDIA accelerators and edge devices, making pipelines work blazingly fast. The core of the technology is TensorRT, which consists of two major parts: the model optimizer, transforming the model into an “engine” heavily optimized for particular hardware, and the inference library, allowing for extremely fast inference.
Another of DeepStream’s killer features is connected with the CUDA data processing model: computations are carried out with SIMD operations over data residing in separate GPU memory. The advantage is that GPU memory is heavily optimized for such operations, but you pay for it by uploading the data to the GPU and downloading the results. This can be a costly process involving delays, PCI-E bus saturation, and CPU and GPU idling. In the ideal situation, you upload a moderate amount of data to the GPU, handle it intensively, and download a moderate amount of results from the GPU at the end of the processing. DeepStream is optimized for this model and provides developers with tools for implementing such processing efficiently.
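To illustrate the upload-process-download pattern itself (not DeepStream code; CuPy is used here purely as a stand-in for GPU-side processing):

```python
import numpy as np
import cupy as cp

# Host-side frame, e.g., decoded video data
frame = np.random.randint(0, 255, (1080, 1920, 3), dtype=np.uint8)

gpu_frame = cp.asarray(frame)                       # upload: host -> GPU memory
normalized = gpu_frame.astype(cp.float32) / 255.0   # intensive work stays on the GPU
means = cp.asnumpy(normalized.mean(axis=(0, 1)))    # download only a small result

print(means)  # per-channel means computed on the GPU
```

The point is that only the frame goes up and only a tiny result comes back; everything in between happens in GPU memory.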
So why do developers hesitate to use DeepStream in their computer vision endeavors? There are reasons for that, which we will discuss in the following sections, along with ways to overcome them.
Continue reading Why Choose Savant Instead Of DeepStream For High-Performance Computer Vision
🚀 0.2.6 Release Notes
Savant 0.2.6 was released on November 8, 2023. The release includes multiple bug fixes, seven new demos, and many other enhancements, including documentation, benchmarking, and Jetson Orin Nano support.
Savant crossed the 300-star mark on GitHub, and the Discord community is now active. The work on the release took 1.5 months. In the following sections, we will cover the essential parts of the release in detail.
GitHub: https://github.com/insight-platform/Savant/releases/tag/v0.2.6
Continue reading 🚀 0.2.6 Release Notes
New Feature: GPU-less Always-On RTSP Run Mode
Certain Nvidia platforms do not support the Nvidia encoder (NVENC) device. Datacenter accelerators like V100, A30, A100, and H100 cannot encode video streams. The same applies at the edge: the Nvidia Jetson Orin Nano does not include NVENC either.
Continue reading New Feature: GPU-less Always-On RTSP Run Mode
How to Implement High-Performance Keypoint Detection
When estimating an object’s pose, you usually deal with keypoint detection. It comes up in various situations, such as facial recognition, where keypoints are used to align the face based on the estimated pose, or action recognition, where they help estimate and analyze actions.
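As a simple illustration of how such keypoints are used, the sketch below (plain OpenCV, not a Savant API) rotates an image so that two detected eye keypoints become horizontal:

```python
import math
import cv2

def align_face(image, left_eye, right_eye):
    """Rotate the image so the line between the eye keypoints becomes horizontal."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    angle = math.degrees(math.atan2(dy, dx))  # roll angle of the face
    center = ((left_eye[0] + right_eye[0]) / 2.0, (left_eye[1] + right_eye[1]) / 2.0)
    rotation = cv2.getRotationMatrix2D(center, angle, 1.0)
    h, w = image.shape[:2]
    return cv2.warpAffine(image, rotation, (w, h))
```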
Continue reading How to Implement High-Performance Keypoint Detection
How To Count People In Polygonal Areas with Savant
In an era where data-driven decisions are paramount, the ability to accurately count and monitor people in specific areas has become invaluable. Nvidia’s PeopleNet neural network offers a solution for this. Aside from PeopleNet, other models, such as those from the YOLO family, can be used. As a component of Nvidia’s DeepStream SDK, which specializes in AI-driven video analytics, PeopleNet boasts training on extensive datasets, ensuring high precision in detecting individuals even in intricate or densely populated scenes.
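The counting itself boils down to a point-in-polygon test over the detections. Here is a minimal sketch using Shapely (an illustrative assumption, not the demo's actual code):

```python
from shapely.geometry import Point, Polygon

# Example zone in pixel coordinates
zone = Polygon([(100, 100), (500, 100), (500, 400), (100, 400)])

def count_people(detections, zone):
    """Count detections whose bbox bottom-center lies inside the zone."""
    count = 0
    for x1, y1, x2, y2 in detections:
        anchor = Point((x1 + x2) / 2.0, y2)  # approximate feet position
        if zone.contains(anchor):
            count += 1
    return count

print(count_people([(120, 150, 180, 380), (600, 150, 660, 380)], zone))  # -> 1
```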
Continue reading How To Count People In Polygonal Areas with Savant
Facial ReID Demo Update: Index Builder Now Uses ClientSDK
Savant ClientSDK is a toolkit that enables building simple bridges between applications and Savant. With a simple Pythonic interface, ClientSDK helps developers ingest images into the pipeline and retrieve computer vision results from it. The SDK is available in both synchronous and asynchronous modes and is tightly integrated with the OpenTelemetry and logging subsystems, allowing developers to associate ingested data with tracing information and logged messages.
The project template demonstrates how to use ClientSDK; the Kafka/Redis source and sink adapters are also implemented with ClientSDK classes.
ClientSDK enables not only sending frames with metadata but also supports control flow messages like EOS or Shutdown. The Shutdown capability makes ClientSDK ideal in situations when the pipeline needs to be finalized after data processing. One such case is the ReID index builder used in the Facial ReID demo: it feeds the pipeline with images to build the database, and when all images are sent, it completes the pipeline by sending the Shutdown message.
The code is pretty straightforward and easy to understand. Before ClientSDK, we used the file source adapter, and there was no good way to shut down the compose bundle after the index was ready. Now, it can be done in a simple and standardized way.
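A rough sketch of that flow is shown below; the function names are hypothetical placeholders standing in for the actual ClientSDK calls:

```python
from pathlib import Path

def build_reid_index(source, image_dir: str) -> None:
    """Feed gallery images into the pipeline, then ask it to finalize."""
    for image_path in sorted(Path(image_dir).glob("*.jpeg")):
        # Placeholder for the ClientSDK call that sends a frame with metadata
        source.send_frame(image_path)
    # Placeholder for the ClientSDK call that emits the Shutdown control message,
    # so the pipeline completes once all frames have been processed
    source.send_shutdown()
```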
How To Do Car License Plate Recognition With YOLOv8 And LPD/LPR Models From Nvidia NGC
In the realm of computer vision, the ability to recognize and read license plates from vehicles is a significant advancement. This technology has numerous applications, from traffic management to security and surveillance. Three prominent models that have made this possible are YOLOv8 and the License Plate Detection/Recognition (LPD/LPR) models from Nvidia NGC.
Continue reading How To Do Car License Plate Recognition With YOLOv8 And LPD/LPR Models From Nvidia NGC
New Feature: Video Pass-Through
Media Pass-Through allows copying the incoming video stream to the output without transcoding. The feature is beneficial if the pipeline does not use the draw functionality, or when drawing occupies a separate pipeline in a chain.
Continue reading New Feature: Video Pass-Through
Ten Reasons To Consider Savant For Your Computer Vision Project
This article answers the question of why you may find it beneficial to use Savant instead of DeepStream, OpenVino, PyTorch, or OpenCV in your next computer vision project. It is not an easy question, because computer vision is a tough field with many caveats and difficulties. You start by finding a way to make certain things achievable from the quality point of view, but later you also need to serve the solution in a commercially efficient way, processing data in real time rather than at a pathetic 2 FPS on hardware that costs as much as a Boeing wing.
Continue reading Ten Reasons To Consider Savant For Your Computer Vision Project