I’m working on an ARKit face tracking application using the front-facing camera. I need to get the 3D coordinates of the center of the physical screen in world space, as accurately as possible. Here’s what I’m trying to achieve:
- Get the 3D coordinates of the center of the physical screen in world space.
- Ensure these coordinates are as close to the actual screen center as possible, not at a fixed distance.
- Be able to convert between 2D screen coordinates and 3D world coordinates accurately.
In other words, what’s the best way to get accurate 3D coordinates of the physical screen center in an ARFaceTrackingConfiguration setup? And how can I reliably convert between 2D screen coordinates and 3D world coordinates in this context?
Any help or guidance would be greatly appreciated!
I’ve tried using unprojectPoint() like this:
guard let camera = sceneView.session.currentFrame?.camera else { return } // Current ARCamera
let planeTransform = camera.transform // Plane coincident with the camera
let screenSize = sceneView.bounds.size
let midpoint = CGPoint(x: screenSize.width / 2, y: screenSize.height / 2) // Screen midpoint
// Attempt to unproject the 2D midpoint back into the 3D world on the defined plane.
let worldPoint = camera.unprojectPoint(midpoint, ontoPlane: planeTransform, orientation: .portrait, viewportSize: screenSize)
However, I’ve discovered that the plane needs to be offset by approximately 0.025 units (meters, in ARKit’s world space) along the camera’s forward vector to achieve accurate results when converting between 2D and 3D coordinates. Without this offset, the conversion yields significant inaccuracies.
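For reference, here’s a sketch of the workaround with the offset applied (reusing camera, midpoint, and screenSize from the snippet above; 0.025 is just my empirically found value):

var planeTransform = camera.transform
// Offset the plane along the camera's forward axis (-Z in camera space) by ~0.025 m.
let forward = -simd_make_float3(planeTransform.columns.2)
planeTransform.columns.3 += simd_float4(forward * 0.025, 0)
// Unprojecting onto the offset plane now gives a much better screen-center estimate.
let adjustedWorldPoint = camera.unprojectPoint(midpoint, ontoPlane: planeTransform, orientation: .portrait, viewportSize: screenSize)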
What’s causing this need for an offset, and is there a more robust way to handle this conversion without relying on a manual adjustment? Are there any ARKit functions or techniques that can provide more precise results in this scenario?
Edit: I’m looking for a 1:1 mapping from 2D screen coordinates to 3D world coordinates, and vice versa.
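Concretely, this is the round trip I want to hold 1:1 (reusing camera, planeTransform, midpoint, and screenSize from above):

// 2D -> 3D: unproject the screen point onto the screen plane.
if let world = camera.unprojectPoint(midpoint, ontoPlane: planeTransform, orientation: .portrait, viewportSize: screenSize) {
    // 3D -> 2D: projecting back should reproduce the original screen point exactly.
    let reprojected = camera.projectPoint(world, orientation: .portrait, viewportSize: screenSize)
    print("round-trip error: \(hypot(reprojected.x - midpoint.x, reprojected.y - midpoint.y)) pt")
}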