Improve performance of a GStreamer pipeline for WebRTC on a Jetson AGX
I have one application in C++ that captures video from a camera using GStreamer and sends it via UDP to a second C++ application, which receives the video and restreams it using WebRTC. Everything runs on a Jetson AGX.
If I take the data from the camera in H264 and send it directly, the video works perfectly in 4K:
First pipeline, to get the video:
pipe_source = "rtspsrc location=rtsp://192.168.1.162/z3-1.mp4 ! application/x-rtp,encoding-name=H264,profile=baseline ! ";
pipe_sink = "udpsink host=224.1.1.1 port=5000 sync=false auto-multicast=true";
launch_pipeline = pipe_source + pipe_sink;
Second pipeline, to receive the video and send it via WebRTC:
pipeline = "udpsrc multicast-group=224.1.1.1 auto-multicast=true port=5000 ! application/x-rtp,encoding-name=H264,profile=baseline,media=video,clock-rate=90000,payload=96 ! webrtcbin async-handling=true name=sendrecv";
However, I cannot do it in 4K if I want to do some processing on the input video, since I need to decode (and then re-encode) the frames before sending the video via UDP:
pipe_source = "rtspsrc location=rtsp://192.168.1.162/z3-1.mp4 ! application/x-rtp,encoding-name=H265 !";
pipe_decode = "rtph265depay ! video/x-h265 ! nvv4l2decoder enable-max-performance=true ! ";
pipe_process = "nvvidconv output-buffers=5 name=myconv ! video/x-raw(memory:NVMM), format=RGBA ! nvvidconv output-buffers=5 ! video/x-raw(memory:NVMM), format=NV12 ! queue max-size-bytes=0 max-size-time=500 !";
pipe_encode ="nvv4l2vp9enc maxperf-enable=true ! video/x-vp9 ! rtpvp9pay !";
pipe_sink = "udpsink host=224.1.1.1 port=5000 sync=false auto-multicast=true";
launch_pipeline = pipe_source + pipe_decode + pipe_process + pipe_encode + pipe_sink;
In this pipeline I have tried both H264 and H265 for the source. Moreover, for the encoding I have tried H264 instead of VP9, but H264 seems to be much slower; this is why I have used VP9 in the encoding part.
In this case the second pipeline is:
pipeline = "udpsrc multicast-group=224.1.1.1 auto-multicast=true port=5000 ! application/x-rtp,media=video,clock-rate=90000,encoding-name=VP9,payload=96, framerate=25/1 ! queue max-size-bytes=0 max-size-time=0 ! webrtcbin async-handling=true name=sendrecv";
My problem is that with this configuration I cannot get 4K video in good quality. I get the video, but in poor quality; I assume VP9 is lowering the bitrate to keep the video continuous without losing frames. I have tried setting the bitrate explicitly in the encoding part, which improves the image quality, but then I lose some frames.
If I use 1080p I get the video in good quality, so I have the feeling it is a matter of the processing capability of the hardware (I am using a Jetson AGX) for the decoding/encoding.
Does anyone know a way to improve the performance of the pipeline? I am not sure whether I am doing something redundant in the pipeline that makes the whole process slow for 4K video.
Solution 1:
I'm not sure what your real use case is, but the following might help you investigate further.
I don't have a 4K IP cam, so here I'll simulate one using a CSI camera capturing 1080p@30 fps, upscaling to 3840x2160 and streaming as H265, encoded with the RTSP server test-launch:
./test-launch "nvarguscamerasrc ! video/x-raw(memory:NVMM), width=1920, height=1080, framerate=30/1, format=NV12 ! nvvidconv ! video/x-raw(memory:NVMM), width=3840, height=2160, pixel-aspect-ratio=1/1 ! nvv4l2h265enc insert-vui=true insert-sps-pps=1 insert-aud=1 maxperf-enable=1 bitrate=30000000 ! h265parse ! video/x-h265, stream-format=byte-stream ! rtph265pay name=pay0 pt=96 "
Note that this encodes into H265 at a 30 Mb/s bitrate. You may first check whether you can get a good-quality image from your source and adjust the source bitrate to its best. Assuming your monitor supports 1080p@30:
gst-launch-1.0 rtspsrc location=rtsp://127.0.0.1:8554/test latency=500 ! application/x-rtp,encoding-name=H265 ! rtph265depay ! h265parse ! nvv4l2decoder ! nvvidconv ! video/x-raw,width=1920,height=1080 ! xvimagesink
When ok, let's go further.
Here decoding RTSP H265 source and re-encoding into VP9/RTP/UDP:
gst-launch-1.0 rtspsrc location=rtsp://127.0.0.1:8554/test latency=500 ! application/x-rtp,encoding-name=H265 ! rtph265depay ! h265parse ! nvv4l2decoder enable-max-performance=1 ! queue ! nvv4l2vp9enc maxperf-enable=true bitrate=30000000 ! video/x-vp9 ! rtpvp9pay ! udpsink host=224.1.1.1 port=5000 auto-multicast=true buffer-size=32000000
Note the VP9 30 Mb/s bitrate. You may have to adjust as well.
For checking, you may display it with (assuming X is running):
gst-launch-1.0 udpsrc multicast-group=224.1.1.1 auto-multicast=true port=5000 buffer-size=32000000 ! application/x-rtp,encoding-name=VP9 ! rtpjitterbuffer latency=500 ! rtpvp9depay ! video/x-vp9 ! nvv4l2decoder ! nvvidconv ! video/x-raw,width=1920,height=1080 ! xvimagesink
EDIT Jan 29th, 2022:
You may further try the following, which seems to work fine on my AGX Xavier running L4T R32.6.1:
- Application reading an RTSP stream with H265 video, decoding it, re-encoding into VP9, and streaming to localhost with RTP/UDP:
#include <gst/gst.h>
#include <stdlib.h>

int main (gint argc, gchar * argv[])
{
    gst_init (&argc, &argv);
    GMainLoop *loop = g_main_loop_new (NULL, FALSE);

    /* Create the pipeline... this will negotiate unspecified caps between plugins */
    const gchar *pipeline1 = "rtspsrc location=rtsp://127.0.0.1:8554/test latency=500 ! application/x-rtp,encoding-name=H265 ! rtph265depay ! h265parse ! nvv4l2decoder enable-max-performance=1 ! queue ! nvv4l2vp9enc maxperf-enable=true bitrate=30000000 ! video/x-vp9 ! rtpvp9pay ! udpsink host=127.0.0.1 port=5000 auto-multicast=0 buffer-size=32000000 ";
    GstElement *pipeline = gst_parse_launch (pipeline1, NULL);
    if (!pipeline) {
        g_error ("Failed to create pipeline\n");
        exit (-1);
    }

    /* Ok, successfully created the pipeline, now start it */
    gst_element_set_state (pipeline, GST_STATE_READY);
    gst_element_set_state (pipeline, GST_STATE_PLAYING);

    /* Wait until it's up and running or failed */
    if (gst_element_get_state (pipeline, NULL, NULL, -1) == GST_STATE_CHANGE_FAILURE) {
        g_error ("Failed to go into PLAYING state");
        exit (-2);
    }

    g_print ("Running ...\n");
    g_main_loop_run (loop);
    return 0;
}
Build with: gcc -Wall -o gst_testlaunch1 -I/usr/include/gstreamer-1.0 -I/usr/include/glib-2.0 -I/usr/lib/aarch64-linux-gnu/glib-2.0/include gst_testlaunch1.cpp -lgstreamer-1.0 -lgobject-2.0 -lglib-2.0
- Application reading VP9-encoded video from RTP/UDP on localhost, decoding and rescaling it to 1080p with nvvidconv, then displaying in X while measuring fps:
#include <gst/gst.h>
#include <stdlib.h>

int main (gint argc, gchar * argv[])
{
    gst_init (&argc, &argv);
    GMainLoop *loop = g_main_loop_new (NULL, FALSE);

    /* Create the pipeline... this will negotiate unspecified caps between plugins */
    const gchar *pipeline2 = "udpsrc auto-multicast=0 port=5000 buffer-size=32000000 ! application/x-rtp,encoding-name=VP9 ! rtpjitterbuffer latency=500 ! rtpvp9depay ! video/x-vp9 ! nvv4l2decoder ! nvvidconv ! video/x-raw,width=1920,height=1080 ! fpsdisplaysink video-sink=xvimagesink text-overlay=0 ";
    GstElement *pipeline = gst_parse_launch (pipeline2, NULL);
    if (!pipeline) {
        g_error ("Failed to create pipeline\n");
        exit (-1);
    }

    /* This outputs property changes and is required to display fps in the
       terminal; you may remove it later to make it quiet. */
    g_signal_connect (pipeline, "deep-notify",
        G_CALLBACK (gst_object_default_deep_notify), NULL);

    /* Ok, successfully created the pipeline, now start it */
    gst_element_set_state (pipeline, GST_STATE_READY);
    gst_element_set_state (pipeline, GST_STATE_PLAYING);

    /* Wait until it's up and running or failed */
    if (gst_element_get_state (pipeline, NULL, NULL, -1) == GST_STATE_CHANGE_FAILURE) {
        g_error ("Failed to go into PLAYING state");
        exit (-2);
    }

    g_print ("Running ...\n");
    g_main_loop_run (loop);
    return 0;
}
Build with: gcc -Wall -o gst_testlaunch2 -I/usr/include/gstreamer-1.0 -I/usr/include/glib-2.0 -I/usr/lib/aarch64-linux-gnu/glib-2.0/include gst_testlaunch2.cpp -lgstreamer-1.0 -lgobject-2.0 -lglib-2.0
Having the 4K H265 RTSP source available, running first gst_testlaunch1 in one terminal and then gst_testlaunch2 in a second terminal shows the image with correct quality, and it keeps 30 fps.
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow