I’ve been working a lot with OpenPnP recently (which is great!), and it has piqued my interest enough to try to really understand its workings.
I am trying to produce a basic transformation of a PCB with 3 fiducials and 3 features, in C# with OpenCvSharp.
I’ve modelled the geometry in some CAD software as a sanity check; here is my ‘PCB’.
F1-3 are the 3 fiducials, with F1 being 0,0 of the PCB coordinate system. P1-3 are the 3 ‘features’ on the PCB that I’m interested in.
Here is that ‘PCB’ superimposed on a ‘machine’, where the green dot is the machine’s 0,0. So from this I already know, essentially, where the 3 features should end up relative to the machine’s 0,0.
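Before touching OpenCV, I can work the expected numbers out by hand: F1 pins down the translation, F2 differs from F1 only in x, and F3 differs from F2 only in y, which together pin down the four linear coefficients. A quick plain-Python sanity check (numbers taken straight from my CAD model):

```python
# Solve the six affine coefficients x' = a*x + b*y + tx, y' = c*x + d*y + ty
# from the three fiducial correspondences, then map the features.

pcb = [(0, 0), (100, 0), (100, -80)]                               # F1, F2, F3
machine = [(190.62, -83.70), (290.24, -74.99), (297.21, -154.68)]  # measured

tx, ty = machine[0]                       # F1 is the PCB origin -> translation
a = (machine[1][0] - tx) / 100.0          # F2 is F1 moved +100 in x
c = (machine[1][1] - ty) / 100.0
b = (machine[2][0] - machine[1][0]) / -80.0   # F3 is F2 moved -80 in y
d = (machine[2][1] - machine[1][1]) / -80.0

def to_machine(x, y):
    return (a * x + b * y + tx, c * x + d * y + ty)

for x, y in [(24, -18), (35, -62), (74, -28)]:    # P1, P2, P3
    print("%.5f, %.5f" % to_machine(x, y))
```

The three X values this prints (216.097…, 230.888…, 266.778…) are exactly what I read off the CAD drawing.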
I think I need to do an ‘affine transform’, and over a few hours have pulled this together.
using OpenCvSharp;

namespace affinetransform
{
    class Program
    {
        static void Main(string[] args)
        {
            // Define fiducials in PCB coordinates
            Point2f[] pcbFiducials = new Point2f[]
            {
                new Point2f(0, 0),
                new Point2f(100, 0),
                new Point2f(100, -80)
            };

            // Corresponding fiducial points as measured in machine coordinates
            Point2f[] cameraFiducials = new Point2f[]
            {
                new Point2f(190.62f, -83.7f),
                new Point2f(290.24f, -74.99f),
                new Point2f(297.21f, -154.68f)
            };

            // Compute the affine transformation matrix
            Mat affineTransform = Cv2.GetAffineTransform(InputArray.Create(pcbFiducials), InputArray.Create(cameraFiducials));

            // Define the 3 'features' on the PCB we're interested in, in PCB coordinates
            Point2f[] pcbFeatures = new Point2f[]
            {
                new Point2f(24, -18),
                new Point2f(35, -62),
                new Point2f(74, -28)
            };

            // Convert feature points to a Mat object, because we need to pass Mat type to the Transform method
            Mat pcbFeaturesMat = new Mat(pcbFeatures.Length, // Rows
                                         2,                  // Columns
                                         MatType.CV_32FC2,
                                         pcbFeatures);       // The features on the PCB we want to transform

            // Transform feature points to machine/camera coordinates
            Mat cameraFeaturesMat = new Mat();
            Cv2.Transform(pcbFeaturesMat, cameraFeaturesMat, affineTransform);

            // Save the transformed points to a CSV file
            SavePointsToCsv(cameraFeaturesMat, @"C:\users\user\desktop\cameraFeatures.csv");
        }

        static void SavePointsToCsv(Mat points, string filename)
        {
            using (var writer = new System.IO.StreamWriter(filename))
            {
                writer.WriteLine("X,Y");
                for (int i = 0; i < points.Rows; i++)
                {
                    float x = points.At<float>(i, 0);
                    float y = points.At<float>(i, 1);
                    writer.WriteLine($"{x},{y}");
                }
            }
            Console.WriteLine($"Saved transformed points to {filename}");
        }
    }
}
And this is the transformed data it outputs, which is not completely wrong. As you can see from my images, 216.09, 230.89 and 266.78 are the three correct transformed X values, but they land in the wrong positions in the table, and the remaining three numbers are just the machine X position of F1, repeated three times.
X,Y
216.09705,230.88875
266.7783,190.62
190.62,190.62
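Trying to make sense of that table, here is a guess (unverified): a 3×2 CV_32FC2 Mat holds six two-channel elements, but I only supplied three points’ worth of floats, so perhaps the last three ‘points’ end up as (0, 0); and At&lt;float&gt;(i, 0) / At&lt;float&gt;(i, 1) on a two-channel Mat would then read the X channel of the two elements in row i. Simulating that layout in plain Python (with affine coefficients worked out from the fiducials) reproduces my table exactly:

```python
# Simulate the suspected memory layout: a 3x2 two-channel Mat holds six
# "points"; only three were supplied, so pad the rest with (0, 0).
a, b, tx = 0.9962, -0.087125, 190.62   # affine row 1, derived from the fiducials
c, d, ty = 0.0871, 0.996125, -83.70    # affine row 2

points = [(24, -18), (35, -62), (74, -28), (0, 0), (0, 0), (0, 0)]
out = [(a * x + b * y + tx, c * x + d * y + ty) for x, y in points]

# The CSV loop reads At<float>(i, 0) and At<float>(i, 1): on a two-channel
# Mat that would be the X channel of the two elements in row i.
print("X,Y")
for row in range(3):
    print("%s,%s" % (round(out[2 * row][0], 5), round(out[2 * row + 1][0], 5)))
```

If that’s right, the repeated 190.62 is just F1’s machine X, i.e. the transform of the padded (0, 0) points.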
Completely stumped, and hoping some smart OpenCV folk can point me in the right direction!