Proper way to handle sequential data for visual odometry when using PyTorch DDP
I’m working on predicting camera motion trajectories using RGB images or other intermediate representations such as optical flow. I’m predicting relative motion: my targets are the transformation matrices (or any equivalent representation) between consecutive frames (image1 -> image2 -> image3).
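To make the setup concrete, here is a minimal sketch of how one sample could look. `frames` and `abs_poses` are placeholder names, and the exact pose convention (camera-to-world vs. world-to-camera) is just illustrative:

```python
import numpy as np
import torch
from torch.utils.data import Dataset

class ConsecutivePairDataset(Dataset):
    """Yields (frame_t, frame_t+1) pairs with the relative transform as the target.

    `frames` is a list/array of images and `abs_poses` a list of 4x4 absolute
    pose matrices; both names are placeholders for illustration.
    """
    def __init__(self, frames, abs_poses):
        assert len(frames) == len(abs_poses)
        self.frames = frames
        self.abs_poses = abs_poses

    def __len__(self):
        # One sample per consecutive pair: (0,1), (1,2), ...
        return len(self.frames) - 1

    def __getitem__(self, idx):
        img_a = torch.as_tensor(self.frames[idx])
        img_b = torch.as_tensor(self.frames[idx + 1])
        # Relative motion between frame idx and idx+1, e.g. T_rel = inv(T_b) @ T_a.
        # The exact formula depends on whether poses are cam-to-world or world-to-cam.
        T_a = self.abs_poses[idx]
        T_b = self.abs_poses[idx + 1]
        T_rel = np.linalg.inv(T_b) @ T_a
        return img_a, img_b, torch.as_tensor(T_rel, dtype=torch.float32)
```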