Hi, I'm having some issues with calibrating my snapshots to generate the perception spreadsheet. I'm following this guide: https://devblogs.nvidia.com/calibration-translate-video-data/ and the one in the DeepStream SDK guide.
"The first step in calibration is to get snapshot images from all cameras."
How do I get these dewarped snapshots, and what resolutions are needed?
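(To clarify what I'm asking: my current assumption is that "dewarped" just means remapping the fisheye frame to a rectilinear view, roughly like the OpenCV sketch below. The intrinsics and file names are placeholders I made up, so please correct me if the DeepStream dewarper output is something different.)

```python
import cv2
import numpy as np

# Assumption: "dewarping" = fisheye-to-rectilinear remapping.
# K and D below are placeholder intrinsics, NOT real calibration values.
K = np.array([[400.0,   0.0, 960.0],
              [  0.0, 400.0, 540.0],
              [  0.0,   0.0,   1.0]])
D = np.array([0.1, -0.05, 0.001, 0.0])   # fisheye distortion coefficients (made up)

img = cv2.imread("camera0_snapshot.png")              # hypothetical snapshot file
undistorted = cv2.fisheye.undistortImage(img, K, D, Knew=K)
cv2.imwrite("camera0_dewarped.png", undistorted)
```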
And with QGIS, there's this line "Make a note of the longitude and latitude of the origin (in this case, the center of the building)."
Does this mean I have to get the coordinates of the center of the referenced TIF image?
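If so, is the idea something like the sketch below? This is just my guess at what "center of the referenced TIF" would mean, computed from the raster's geotransform with GDAL; the file name is hypothetical.

```python
from osgeo import gdal

# Guess: the origin is the geographic center of the georeferenced TIF.
ds = gdal.Open("site_map.tif")            # hypothetical file name
gt = ds.GetGeoTransform()                  # (origin_x, px_w, rot, origin_y, rot, px_h)
cx = gt[0] + gt[1] * ds.RasterXSize / 2 + gt[2] * ds.RasterYSize / 2
cy = gt[3] + gt[4] * ds.RasterXSize / 2 + gt[5] * ds.RasterYSize / 2
print(cx, cy)   # longitude, latitude of the center if the raster is in EPSG:4326
```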
In the nvaisle/nvspot.csv files, what do the ROI and H values represent?
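My working assumption, which I haven't been able to confirm from the docs, is that the H values are the entries of a 3x3 image-to-ground homography applied roughly like this (all numbers below are placeholders, not real calibration values):

```python
import numpy as np

# Assumption: H maps image pixels (u, v) to ground-plane coordinates (gx, gy).
H = np.array([[0.02,   0.001, -5.0],
              [0.0005, 0.025, -3.0],
              [1e-5,   1e-4,   1.0]])   # placeholder values

u, v = 640.0, 360.0                     # an example pixel in the camera image
p = H @ np.array([u, v, 1.0])           # homogeneous transform
gx, gy = p[0] / p[2], p[1] / p[2]       # divide by the scale term
print(gx, gy)
```

Is that roughly right, or do the H columns mean something else?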
How do I find the gx,gy coordinates? I tried just importing the latitude and longitude, but they don't work.
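For reference, I assume gx,gy are supposed to be local metric offsets (east, north) from the chosen origin rather than raw longitude/latitude, something like the equirectangular approximation below. The origin values are made up; is this the right kind of conversion?

```python
import math

# Assumption: gx, gy = metres east/north of the origin (placeholder coordinates).
ORIGIN_LAT, ORIGIN_LON = 37.3720, -121.9630

def latlon_to_gxgy(lat, lon):
    """Approximate lat/lon -> local metres relative to the origin."""
    m_per_deg_lat = 111_320.0                                    # metres per degree latitude
    m_per_deg_lon = 111_320.0 * math.cos(math.radians(ORIGIN_LAT))
    gx = (lon - ORIGIN_LON) * m_per_deg_lon                      # east offset
    gy = (lat - ORIGIN_LAT) * m_per_deg_lat                      # north offset
    return gx, gy

print(latlon_to_gxgy(37.3721, -121.9628))
```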
Thank you.