I recently used SphericalWarperGpu in OpenCV, together with masks, to merge the images from two cameras into a single image. The merge works well, and after running several detections on the merged image, I now want to map the detected points back to their actual positions in the two raw images. I can use the masks from the merging step to decide which raw image a point came from, but I find it hard to understand the source code of SphericalWarperGpu well enough to perform the projection backwards.
I noticed that cv::detail::RotationWarper has a warpBackward() function, but it takes whole images as input, which is likely much slower than projecting just a few points, and it apparently cannot be obtained from SphericalWarperGpu. Is there a way to project points back to their original positions after they have been transformed by cv::detail::SphericalWarperGpu (or SphericalWarper)? Thanks for any thoughts!
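For reference, here is my current understanding of the projection math, sketched in pure NumPy. The two functions mirror what I believe cv::detail::SphericalProjector::mapForward and mapBackward do in the OpenCV source (the K, R, and scale values below are made-up placeholders, not my real calibration), so please correct me if I have misread the source:

```python
import numpy as np

def map_forward(x, y, K, R, scale):
    """Project a source-image pixel (x, y) onto the spherical image.

    Mirrors my reading of cv::detail::SphericalProjector::mapForward.
    """
    # Lift the pixel into a 3D ray: R * K^-1 * [x, y, 1]^T
    x_, y_, z_ = (R @ np.linalg.inv(K)) @ np.array([x, y, 1.0])
    u = scale * np.arctan2(x_, z_)
    w = y_ / np.sqrt(x_ * x_ + y_ * y_ + z_ * z_)
    v = scale * (np.pi - np.arccos(np.clip(w, -1.0, 1.0)))
    return u, v

def map_backward(u, v, K, R, scale):
    """Invert map_forward: spherical coords back to a source pixel."""
    u /= scale
    v /= scale
    # Rebuild the unit ray from the two spherical angles
    sinv = np.sin(np.pi - v)
    ray = np.array([sinv * np.sin(u),
                    np.cos(np.pi - v),
                    sinv * np.cos(u)])
    # Apply K * R^-1 (R is orthonormal, so R^-1 == R.T)
    x, y, z = (K @ R.T) @ ray
    if z <= 0:
        return -1.0, -1.0  # ray points behind the camera
    return x / z, y / z

# Round-trip check with placeholder intrinsics and identity rotation
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])
R = np.eye(3)
u, v = map_forward(100.0, 150.0, K, R, scale=800.0)
x, y = map_backward(u, v, K, R, scale=800.0)
print(x, y)  # recovers (100.0, 150.0) up to floating-point error
```

One thing I am unsure about: the warp() call returns a top-left corner for each warped image, so I assume the coordinates of a detection on the merged canvas would first need that corner offset added back before being fed into something like map_backward above. Is that the right mental model?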