Pipeline Observability With Prometheus And Grafana

Monitoring, as a part of software observability, is crucial for understanding the state and health of a system. Video analytics and computer vision pipelines also benefit from monitoring, allowing SREs to understand and predict system behavior and to reason about problems based on anomalies and deviations.

Computer vision pipelines are complex software operating in the wild, requiring continuous observation to understand trends and correlations between internal and external factors.
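
As a quick illustration of the kind of metrics a pipeline can expose to Prometheus, here is a minimal sketch using the prometheus_client Python package; the metric names and the processing loop are hypothetical placeholders rather than the metrics a real Savant module exports.

```python
import time

from prometheus_client import Counter, Histogram, start_http_server

# Hypothetical pipeline metrics; a real Savant module exposes its own set.
FRAMES_TOTAL = Counter("frames_processed_total", "Frames processed by the pipeline")
FRAME_LATENCY = Histogram("frame_processing_seconds", "Per-frame processing time")


def process_frame(frame):
    # Placeholder for the real per-frame work (inference, tracking, etc.).
    time.sleep(0.01)


def main():
    start_http_server(8000)  # Prometheus scrapes http://<host>:8000/metrics
    while True:
        with FRAME_LATENCY.time():  # records processing time into the histogram
            process_frame(None)
        FRAMES_TOTAL.inc()


if __name__ == "__main__":
    main()
```

Grafana then only needs a dashboard over these time series to visualize trends and spot deviations.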

Continue reading Pipeline Observability With Prometheus And Grafana

How To Deploy And Serve Computer Vision With TensorRT Efficiently

TensorRT is a high-performance deep learning inference optimizer and runtime library developed by NVIDIA. It’s specifically designed for production environments and optimized for NVIDIA GPUs. The primary goal of TensorRT is to accelerate deep learning inference, which is the process of using a trained neural network model to make predictions based on new data.
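
To make the optimization step concrete, here is a minimal sketch of converting an ONNX model into a TensorRT engine with the TensorRT 8.x Python API; the model path, FP16 flag, and workspace size are assumptions chosen for illustration, not a prescription.

```python
import tensorrt as trt

LOGGER = trt.Logger(trt.Logger.WARNING)


def build_engine(onnx_path: str, engine_path: str) -> None:
    """Convert an ONNX model into a serialized TensorRT engine."""
    builder = trt.Builder(LOGGER)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
    )
    parser = trt.OnnxParser(network, LOGGER)

    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            errors = [str(parser.get_error(i)) for i in range(parser.num_errors)]
            raise RuntimeError("ONNX parsing failed: " + "; ".join(errors))

    config = builder.create_builder_config()
    config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, 1 << 30)  # 1 GiB
    config.set_flag(trt.BuilderFlag.FP16)  # assumes the GPU supports FP16

    serialized_engine = builder.build_serialized_network(network, config)
    with open(engine_path, "wb") as f:
        f.write(serialized_engine)


build_engine("model.onnx", "model.engine")
```

The resulting engine is tied to the GPU it was built on, which is why the build step is usually performed on the target hardware.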

Continue reading How To Deploy And Serve Computer Vision With TensorRT Efficiently

Why Choose Savant Instead Of DeepStream For High-Performance Computer Vision

NVIDIA DeepStream SDK is a game-changing technology for deep neural network inference in computer vision served with NVIDIA hardware. The optimized architecture accounts for the specifics of NVIDIA accelerators and edge devices, making pipelines blazingly fast. The core of the technology is TensorRT, which consists of two major parts: the model optimizer, transforming the model into an “engine” heavily optimized for particular hardware, and the inference library, allowing for very fast inference.

Another killer feature of DeepStream is connected with the CUDA data processing model: computations are carried out with SIMD operations over data residing in separate GPU memory. The advantage is that GPU memory is heavily optimized for such operations, but you pay for it by uploading the data to the GPU and downloading the results. This can be a costly process involving delays, PCI-E bus saturation, and CPU and GPU idling. In the ideal situation, you upload a moderate amount of data to the GPU, process it intensively, and download a moderate amount of results from the GPU at the end. DeepStream is optimized for this and provides developers with tools to implement such processing efficiently.
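
This upload-process-download pattern can be sketched in a few lines of CuPy (the library choice, array sizes, and the "processing" itself are purely illustrative; DeepStream manages its GPU buffers internally):

```python
import cupy as cp
import numpy as np

# A moderate amount of input data on the host (CPU) side: 8 Full HD frames.
host_frames = np.random.rand(8, 1080, 1920, 3).astype(np.float32)

# 1. Upload to GPU memory (crosses the PCI-E bus once).
gpu_frames = cp.asarray(host_frames)

# 2. Heavy, data-parallel processing stays entirely in GPU memory.
gpu_normalized = (gpu_frames - gpu_frames.mean()) / (gpu_frames.std() + 1e-6)
gpu_scores = gpu_normalized.sum(axis=(1, 2, 3))  # stand-in for real inference

# 3. Download only the compact results back to the host.
host_scores = cp.asnumpy(gpu_scores)
print(host_scores.shape)  # (8,)
```

The expensive transfers happen exactly twice, while the bulk of the work never leaves the GPU.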

So why do developers hesitate to use DeepStream in their computer vision endeavors? There are reasons for that, which we will discuss in the following sections, along with ways to overcome them.

Continue reading Why Choose Savant Instead Of DeepStream For High-Performance Computer Vision

🚀 0.2.6 Release Notes

Savant 0.2.6 was released on November 8, 2023. The release includes multiple bug fixes, seven new demos, and many other enhancements, including documentation, benchmarking, and Jetson Orin Nano support.

Savant crossed the 300-star mark on GitHub, and Discord is now active. The work on the release took 1.5 months. In the following sections, we will cover the essential parts of the release in detail.

GitHub: https://github.com/insight-platform/Savant/releases/tag/v0.2.6

Continue reading 🚀 0.2.6 Release Notes

How To Count People In Polygonal Areas With Savant

In an era where data-driven decisions are paramount, the ability to accurately count and monitor people in specific areas has become invaluable. Nvidia’s PeopleNet neural network offers a solution for this. Aside from PeopleNet, other models, such as those from the YOLO family, can also be used. As a component of Nvidia’s DeepStream SDK, which specializes in AI-driven video analytics, PeopleNet boasts training on extensive datasets, ensuring high precision in detecting individuals even in intricate or densely populated scenes.
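
The area-counting logic itself is simple once detections are available. The sketch below counts detections whose bottom-center point falls inside a polygon using Shapely; the polygon coordinates and the detection boxes are made up for illustration.

```python
from shapely.geometry import Point, Polygon

# Hypothetical polygonal area of interest, in pixel coordinates.
AREA = Polygon([(100, 400), (800, 380), (900, 900), (50, 950)])

# Hypothetical detections as (x_min, y_min, x_max, y_max) boxes from PeopleNet/YOLO.
detections = [(120, 300, 180, 500), (700, 100, 760, 280), (400, 600, 470, 820)]


def count_in_area(boxes, area: Polygon) -> int:
    """Count boxes whose bottom-center anchor point lies inside the area."""
    count = 0
    for x_min, y_min, x_max, y_max in boxes:
        anchor = Point((x_min + x_max) / 2, y_max)  # approximate feet position
        if area.contains(anchor):
            count += 1
    return count


print(count_in_area(detections, AREA))  # -> 2 with the sample data above
```

Using the bottom-center of the box as the anchor approximates where a person stands, which usually matches the floor-level polygon better than the box center.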

Continue reading How To Count People In Polygonal Areas With Savant

Facial ReID Demo Update: Index Builder Now Uses ClientSDK

Savant ClientSDK is a toolkit that enables building simple bridges between applications and Savant. With a simple Pythonic interface, ClientSDK helps developers ingest images into the pipeline and retrieve computer vision results from it. The SDK is available in both synchronous and asynchronous modes and is tightly integrated with the OpenTelemetry and logging subsystems, allowing developers to associate ingested data with tracing information and logged messages.

The project template demonstrates how to use ClientSDK; the Kafka/Redis source and sink adapters are also implemented with ClientSDK classes.

ClientSDK not only enables sending frames with metadata but also supports control flow messages such as EOS and Shutdown. The Shutdown capability makes ClientSDK ideal for situations when the pipeline needs to be finalized after data processing. One such case is the ReID index builder used in the Facial ReID demo. It feeds the pipeline with images to build the database; when all images are sent, it needs to complete the pipeline, and it does so by sending the Shutdown message.

The code is pretty straightforward and easy to understand. Before ClientSDK, we used the file source adapter, and there was no good way to shut down the compose bundle after the index was ready. Now, it can be done in a simple and standardized way.
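
For illustration, here is a condensed sketch of that pattern. The class and method names (SourceBuilder, JpegSource, send_eos, send_shutdown) follow typical ClientSDK usage, but the exact signatures, socket address, gallery path, and shutdown auth key are assumptions, so treat it as an outline rather than the demo's actual code.

```python
from pathlib import Path

# Class and method names below follow typical Savant ClientSDK usage;
# the socket address, gallery path, and shutdown auth key are assumptions.
from savant.client import JpegSource, SourceBuilder

source = (
    SourceBuilder()
    .with_socket('pub+connect:ipc:///tmp/zmq-sockets/input-video.ipc')
    .build()
)

# Feed the pipeline with gallery images to build the ReID index.
for image_path in sorted(Path('gallery').glob('*.jpeg')):
    source(JpegSource('index-builder', str(image_path)))

# Signal the end of the stream, then ask the pipeline to finalize itself.
source.send_eos('index-builder')
source.send_shutdown('index-builder', 'shutdown-auth-key')
```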

How To Do Car License Plate Recognition With YOLOv8 And LPD/LPR Models From Nvidia NGC

In the realm of computer vision, the ability to recognize and read license plates from vehicles is a significant advancement. This technology has numerous applications, from traffic management to security and surveillance. Three prominent models that have made this possible are YOLOv8 and the License Plate Detection/Recognition (LPD/LPR) models from Nvidia NGC.

Continue reading How To Do Car License Plate Recognition With YOLOv8 And LPD/LPR Models From Nvidia NGC