Hi,
We are working with GStreamer plugins v1.16, using the cameraundistort element. It runs on the CPU and we get 30FPS; the problem is that the frames have to be moved from the GPU to the CPU and then back to the GPU, although even with these copies a 30FPS video still produces 30FPS output.
When we use DeepStream 5.0 with the nvdewarper element, the whole pipeline stays on the GPU, but the frame rate drops to 15FPS. Even when we stop all the networks and let the GPU be used only for the camera undistortion, the rate stays at 15FPS, so the latency appears to come from the nvdewarper code itself.
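For reference, here is a minimal sketch of the two pipeline variants we are comparing, built with gst_parse_launch. This is not our exact pipeline: the camera source, the converter elements, the caps, and the config_dewarper.txt path are placeholders, and the calibration settings for cameraundistort are omitted.

```cpp
// Minimal sketch of the two pipeline variants, assuming a Jetson-style setup.
// Element names and caps are placeholders, not our exact production pipeline.
#include <gst/gst.h>

int main(int argc, char *argv[]) {
    gst_init(&argc, &argv);

    // CPU path: frames leave NVMM memory for the OpenCV-based cameraundistort
    // element and are copied back afterwards; this still sustains 30FPS.
    // (cameraundistort also needs its calibration passed via its "settings"
    // property, omitted here.)
    const char *cpu_pipeline =
        "nvarguscamerasrc ! video/x-raw(memory:NVMM) "
        "! nvvidconv ! video/x-raw,format=BGRx "   /* NVMM -> system memory */
        "! videoconvert ! cameraundistort "        /* undistort on the CPU  */
        "! videoconvert ! nvvidconv ! video/x-raw(memory:NVMM) "
        "! fakesink";

    // GPU-only path: nvdewarper keeps frames in NVMM, yet drops to 15FPS.
    const char *gpu_pipeline =
        "nvarguscamerasrc ! video/x-raw(memory:NVMM) "
        "! nvvideoconvert ! video/x-raw(memory:NVMM),format=RGBA "
        "! nvdewarper config-file=config_dewarper.txt "
        "! fakesink";
    (void)gpu_pipeline;

    GError *err = NULL;
    GstElement *pipe = gst_parse_launch(cpu_pipeline, &err);  // or gpu_pipeline
    if (!pipe) {
        g_printerr("parse error: %s\n", err->message);
        return 1;
    }
    gst_element_set_state(pipe, GST_STATE_PLAYING);

    // Run until error or EOS; swap fakesink for fpsdisplaysink to measure FPS.
    GstBus *bus = gst_element_get_bus(pipe);
    gst_bus_timed_pop_filtered(bus, GST_CLOCK_TIME_NONE,
        (GstMessageType)(GST_MESSAGE_ERROR | GST_MESSAGE_EOS));
    gst_element_set_state(pipe, GST_STATE_NULL);
    gst_object_unref(bus);
    gst_object_unref(pipe);
    return 0;
}
```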
When we review the math needed for distortion correction, only the intrinsic calibration is used, and that math can be done in advance: after creating the pixel remapping table, applying it should cost O(1) per pixel per frame.
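To show what we mean by "the math is done in advance", here is a minimal OpenCV sketch of the precompute-once / remap-per-frame pattern; the frame size, camera matrix, and distortion coefficients are placeholder values, and the GPU variant assumes OpenCV was built with the cudawarping module.

```cpp
// Sketch of precomputed undistortion: the expensive math runs once, and each
// frame only pays for a single remap (one gather per pixel).
#include <opencv2/core.hpp>
#include <opencv2/calib3d.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/cudawarping.hpp>

int main() {
    cv::Size size(1920, 1080);                 // placeholder frame size
    cv::Mat K = (cv::Mat_<double>(3, 3) <<     // placeholder intrinsics
                 1000, 0, 960,
                 0, 1000, 540,
                 0,    0,   1);
    cv::Mat dist = (cv::Mat_<double>(1, 5) << -0.3, 0.1, 0, 0, 0);

    // Done once, in advance: build the per-pixel remapping tables.
    cv::Mat mapX, mapY;
    cv::initUndistortRectifyMap(K, dist, cv::Mat(), K, size,
                                CV_32FC1, mapX, mapY);

    // Per frame on the CPU (effectively what cameraundistort does):
    cv::Mat frame = cv::Mat::zeros(size, CV_8UC3), out;
    cv::remap(frame, out, mapX, mapY, cv::INTER_LINEAR);

    // Per frame on the GPU: upload the maps once, then reuse them.
    cv::cuda::GpuMat gMapX(mapX), gMapY(mapY), gFrame(frame), gOut;
    cv::cuda::remap(gFrame, gOut, gMapX, gMapY, cv::INTER_LINEAR);
    return 0;
}
```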
Our question is about this latency: is there a way to make the nvdewarper code more usable for our pipeline?
Thanks.