I'm trying to write a calibration program with OpenCV in Python. I'm using two webcams, a ChArUco board, and the Python bindings for OpenCV.
One of my main problems is that I'm not even sure whether my approach is correct and whether I apply the necessary math correctly. So I will first describe the general flow and the calculations I do. I'll omit the full code, because at the moment it's around 1000 LOC; instead I'll add stripped-down sketches of the individual steps.
1. I start capturing image pairs. I have some "quality" checking functions which consider things like sharpness and the number of detected points. If those criteria are met for both images, I save the pair; I gather around 30-40 image pairs this way.
I also collect object_points for the intersection of the corners detected in image 1 and image 2, i.e. I only consider points that were detected in both images. Likewise for image_points and ids. Stripped down it looks like the sketch below.
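Roughly (names like `board`, `dictionary`, `img1`/`img2` are placeholders for my real objects; I'm on the pre-4.7 aruco module, newer OpenCV versions expose this via `ArucoDetector`/`CharucoDetector` instead):

```python
import cv2
import numpy as np

def detect_charuco(img, board, dictionary):
    # Detect the ArUco markers, then interpolate the ChArUco chessboard corners.
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    marker_corners, marker_ids, _ = cv2.aruco.detectMarkers(gray, dictionary)
    if marker_ids is None:
        return None, None
    n, corners, ids = cv2.aruco.interpolateCornersCharuco(
        marker_corners, marker_ids, gray, board)
    if corners is None or n < 6:  # my real quality check also looks at sharpness
        return None, None
    return corners, ids

corners1, ids1 = detect_charuco(img1, board, dictionary)
corners2, ids2 = detect_charuco(img2, board, dictionary)

# Keep only the corners whose ids were detected in BOTH images.
common = np.intersect1d(ids1.ravel(), ids2.ravel())
sel1 = [np.where(ids1.ravel() == i)[0][0] for i in common]
sel2 = [np.where(ids2.ravel() == i)[0][0] for i in common]
img_points1 = corners1[sel1]                  # (N, 1, 2) pixel coordinates
img_points2 = corners2[sel2]
obj_points = board.chessboardCorners[common]  # (N, 3) board coordinates
```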
2. I run cv2.calibrateCamera for each camera with the corresponding object/image points to get the intrinsics. I calculate the reprojection error, which is usually < 0.01, so I'm pretty confident that my intrinsic calibration is quite good (sketch below).
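Per camera it's essentially this (obj_points_all, img_points_all1/img_points_all2 and image_size are placeholder names for the per-image lists and resolution from step 1):

```python
# Intrinsic calibration per camera; the per-view rvecs/tvecs are reused in step 3.
rms1, K1, dist1, rvecs1, tvecs1 = cv2.calibrateCamera(
    obj_points_all, img_points_all1, image_size, None, None)
rms2, K2, dist2, rvecs2, tvecs2 = cv2.calibrateCamera(
    obj_points_all, img_points_all2, image_size, None, None)
```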
3. I calculate the relative poses. For each image pair I have a rotation/translation vector describing the relative pose from the ChArUco board to the corresponding camera. I convert each rvec with cv2.Rodrigues to get the 3×3 rotation matrix, then I calculate:
R_diff = R2^T * R1
t_diff = R2^T * (t1 - t2)
which should give me, for each image pair, the transformation from camera 2's coordinate system to camera 1's. I collect these in a list and then average them: rotation matrices are averaged by converting them to quaternions, normalizing each quaternion, computing the average, normalizing the result, and converting back to a rotation matrix; the translation vectors I simply average component-wise.
To my understanding I now have an [R|t] which allows me to transform points from cam2 to cam1 (see the sketch below).
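Condensed, step 3 looks like this (I use scipy's Rotation.mean here, which as far as I understand does the same normalized quaternion averaging I described):

```python
import cv2
import numpy as np
from scipy.spatial.transform import Rotation

R_diffs, t_diffs = [], []
for rvec1, tvec1, rvec2, tvec2 in zip(rvecs1, tvecs1, rvecs2, tvecs2):
    R1, _ = cv2.Rodrigues(rvec1)
    R2, _ = cv2.Rodrigues(rvec2)
    R_diffs.append(R2.T @ R1)               # R_diff = R2^T * R1
    t_diffs.append(R2.T @ (tvec1 - tvec2))  # t_diff = R2^T * (t1 - t2)

# Rotations are averaged via quaternions, translations by the arithmetic mean.
R_calib = Rotation.from_matrix(np.stack(R_diffs)).mean().as_matrix()
t_calib = np.mean(np.stack(t_diffs), axis=0)
```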
4. I run bundle adjustment. I won't go into much detail here, since even if I don't run it the results are similarly off, so that does not seem to be the problem.
5. I validate the results. I take a new image pair and acquire the image points etc. pretty much the same way as before. Then I estimate the pose of the calibration board in cam2's view with cv2.solvePnP, using the object points, the corners from cam2, and cam2's intrinsic matrix and distortion coefficients.
Then I calculate:

R_diff = R_calib * R2
t_diff = R_calib * tvec2 + t_calib

with:

- R_calib: rotation matrix from the calibration
- t_calib: translation vector from the calibration
- R2: rotation matrix calculated by solvePnP
- tvec2: translation vector calculated by solvePnP
So R_diff, t_diff should be the rotation/translation that maps the calibration board (points) from camera 2's coordinate system into camera 1's.
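In code the validation is roughly this (corners1/corners2/obj_points come from the new pair, collected as in step 1):

```python
# Pose of the board in cam2's frame, mapped into cam1 with the calibration result.
ok, rvec2, tvec2 = cv2.solvePnP(obj_points, corners2, K2, dist2)
R2, _ = cv2.Rodrigues(rvec2)

R_diff = R_calib @ R2               # board rotation expressed in cam1
t_diff = R_calib @ tvec2 + t_calib  # board translation expressed in cam1

# Reproject into cam1's image and compare against the detected corners.
proj, _ = cv2.projectPoints(obj_points,
                            cv2.Rodrigues(R_diff)[0], t_diff, K1, dist1)
errors = np.linalg.norm(proj.reshape(-1, 2) - corners1.reshape(-1, 2), axis=1)
print("mean validation reprojection error [px]:", errors.mean())
```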
After refining the poses I get a reprojection error of 2-3 pixels, but in the validation step I get 500-800 pixels; scale and rotation are clearly off when I visualize the result, but I can't determine where the error comes from.
Is the approach sound overall? Or am I doing something completely wrong?