# Elements
Links under the GStreamer element name (first column of the table) lead to descriptions of each element's properties, in the format generated by the gst-inspect-1.0 utility.
Element | Description
---|---
gvadetect | Performs object detection and, optionally, classification, segmentation, or pose estimation. Input: full frame or ROIs (regions of interest). Output: object bounding boxes along with prediction metadata. Example: `gst-launch-1.0 … ! decodebin ! gvadetect model=$mDetect device=GPU ! …` (`device` can also be `CPU` or `NPU`)
gvaclassify | Performs object classification, segmentation, or pose estimation. Input: full frame or ROIs. Output: prediction metadata. Example: `gst-launch-1.0 … ! decodebin ! gvadetect model=$mDetect device=GPU ! gvaclassify model=$mClassify device=CPU ! …`
gvainference | Runs inference with an arbitrary model and outputs raw results; it does not interpret the data and does not generate metadata. Example: `gst-launch-1.0 … ! decodebin ! gvadetect model=$mDetect device=GPU ! gvainference model=$mHeadPoseEst device=CPU ! …`
gvaaudiodetect | Legacy plugin. Performs audio event detection using the AclNet model.
gvatrack | Tracks objects across video frames using zero-term or short-term tracking algorithms. Zero-term tracking assigns unique object IDs and requires object detection to run on every frame. Short-term tracking tracks objects between detections, reducing the need to run object detection on each frame. Example: `gst-launch-1.0 … ! decodebin ! gvadetect model=$mDetect device=GPU ! gvatrack tracking-type=short-term-imageless ! …`
gvametaconvert | Converts the metadata structure to JSON or raw text format; can write the output to a file.
gvametapublish | Publishes JSON metadata to MQTT or Kafka message brokers, or to files. Example: `gst-launch-1.0 … ! decodebin ! gvadetect model=$mDetect device=GPU ! … ! gvametaconvert format=json ! … ! gvametapublish … ! …`
gvametaaggregate | Aggregates inference results from multiple pipeline branches.
gvapython | Provides a callback to execute user-defined Python functions on every frame; used to extend DLStreamer with user-defined algorithms (e.g. metadata conversion, inference post-processing). Example: `gst-launch-1.0 … ! gvaclassify ! gvapython module={gvapython.callback_module.classAge_pp} ! …`
gvawatermark | Overlays metadata on the video frame to visualize inference results. Example: `gst-launch-1.0 … ! decodebin ! gvadetect … ! gvawatermark ! …`
gvafpscounter | Measures frames per second across multiple video streams in a single GStreamer process. Example: `gst-launch-1.0 … ! decodebin ! gvadetect … ! gvafpscounter ! …`
gvaattachroi | Attaches user-defined regions of interest so that inference runs on them instead of the full frame. Use cases: monitoring traffic on one road in a city camera feed, or splitting a large image into smaller tiles and running inference on each tile (e.g. healthcare cell analytics). Example: `gst-launch-1.0 … ! decodebin ! gvaattachroi roi=xtl,ytl,xbr,ybr ! gvadetect inference-region=1 ! …`
 | Assigns a unique ID to each ROI using the DeepSORT algorithm.
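The elements above compose into a single pipeline. A minimal end-to-end sketch, not a verbatim DLStreamer example: the input file name, output file name, and the `gvametapublish` property names used here (`method`, `file-path`) are assumptions that should be verified with `gst-inspect-1.0 gvametapublish`; `$mDetect` is the placeholder model path used throughout the table.

```sh
# Hedged sketch: detect objects on GPU, assign track IDs,
# convert metadata to JSON, and publish it to a file.
gst-launch-1.0 filesrc location=input.mp4 ! decodebin ! \
  gvadetect model=$mDetect device=GPU ! \
  gvatrack tracking-type=short-term-imageless ! \
  gvametaconvert format=json ! \
  gvametapublish method=file file-path=out.json ! \
  fakesink sync=false
```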
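The gvapython element relies on a user-supplied Python module. The sketch below shows what such a module could look like; the module name, function name, and the relabeling logic are illustrative assumptions, and the only contract taken from the table is that gvapython invokes the named function once per frame (with a `gstgva.VideoFrame` when run inside a pipeline) and that the return value decides whether the frame continues downstream.

```python
# Hypothetical user module for gvapython (e.g. saved as postproc.py).
# Nothing here imports DLStreamer: the function only assumes the frame
# object exposes regions() whose items have label()/set_label().

def process_frame(frame) -> bool:
    """Illustrative per-frame callback: post-process classification labels."""
    for region in frame.regions():
        # Example post-processing step: map a raw model label
        # to a friendlier display label.
        if region.label() == "age":
            region.set_label("age_estimate")
    return True  # True keeps the frame in the pipeline
```

It would then be wired into a pipeline with something like `gvapython module=postproc function=process_frame` (hypothetical names).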