There are a few posts around this, but they're all quite outdated. Furthermore, I'm using SwiftUI and would like to adapt the solution to work in a SwiftUI environment. I should preface by saying that I don't know much about UIKit.
I'm trying to crop an AVCaptureVideoPreviewLayer to a small box in the middle of the screen. I'd like to be able to use it like a normal SwiftUI view:
CameraPreview()
.frame(width: 200, height: 200)
Here’s my MRE. In this setup, the preview is shifted to the bottom right of the screen:
import UIKit
import AVFoundation

class CameraViewController: UIViewController {
    var captureSession: AVCaptureSession!
    var previewLayer: AVCaptureVideoPreviewLayer!

    override func viewDidLoad() {
        super.viewDidLoad()
        captureSession = AVCaptureSession()
        captureSession.sessionPreset = .photo
        guard let backCamera = AVCaptureDevice.default(for: .video) else { return }
        do {
            let input = try AVCaptureDeviceInput(device: backCamera)
            if captureSession.canAddInput(input) {
                captureSession.addInput(input)
            }
        } catch {
            print("Error: \(error.localizedDescription)")
        }
        previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
        previewLayer.videoGravity = .resizeAspect
        previewLayer.frame = view.layer.bounds
        view.layer.addSublayer(previewLayer)
        captureSession.startRunning()
    }
}
struct CameraView: UIViewControllerRepresentable {
    func makeUIViewController(context: Context) -> CameraViewController {
        CameraViewController()
    }

    func updateUIViewController(_ uiViewController: CameraViewController, context: Context) {}
}
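I suspect the right direction is to back the SwiftUI view with a UIView whose `layerClass` is `AVCaptureVideoPreviewLayer`, so the layer automatically tracks whatever frame SwiftUI assigns. Something like this sketch (`PreviewView` and `CameraPreview` are names I made up, and I haven't been able to verify this end to end):

```swift
import SwiftUI
import UIKit
import AVFoundation

// Hypothetical: a UIView whose backing layer *is* the preview layer,
// so it always matches the view's bounds with no manual frame math.
class PreviewView: UIView {
    override class var layerClass: AnyClass { AVCaptureVideoPreviewLayer.self }
    var previewLayer: AVCaptureVideoPreviewLayer { layer as! AVCaptureVideoPreviewLayer }
}

struct CameraPreview: UIViewRepresentable {
    let session: AVCaptureSession

    func makeUIView(context: Context) -> PreviewView {
        let view = PreviewView()
        view.previewLayer.session = session
        view.previewLayer.videoGravity = .resizeAspectFill
        return view
    }

    func updateUIView(_ uiView: PreviewView, context: Context) {}
}
```

In theory, `CameraPreview(session: captureSession).frame(width: 200, height: 200)` would then clip the preview to exactly that box, but I'm not sure this is the idiomatic approach.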
Another approach I’ve tried is using a mask:
CameraView()
    .mask(
        Rectangle()
            .ignoresSafeArea()
            .opacity(0.0)
            .overlay(
                RoundedRectangle(cornerRadius: 25)
                    .frame(width: 100, height: 100)
                    .opacity(1)
            )
    )
This looks fine, except the view still actually occupies the whole screen, so it intercepts gesture input meant for the views behind it.
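I considered punching touches through with `.allowsHitTesting(false)`, but that also disables touch inside the visible preview box, which I still need:

```swift
CameraView()
    // Lets gestures reach the background views again,
    // but also kills touch on the preview box itself.
    .allowsHitTesting(false)
```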
Furthermore, I need to be able to actually capture photos in this box. I can manually crop the captured photo using:
extension UIImage {
    func crop(toPreviewLayer layer: AVCaptureVideoPreviewLayer, withRect rect: CGRect) -> UIImage {
        // Convert the on-screen rect to a normalized (0–1) rect in capture space.
        let outputRect = layer.metadataOutputRectConverted(fromLayerRect: rect)
        var cgImage = self.cgImage!
        let width = CGFloat(cgImage.width)
        let height = CGFloat(cgImage.height)
        let cropRect = CGRect(
            x: outputRect.origin.x * width,
            y: outputRect.origin.y * height,
            width: outputRect.size.width * width,
            height: outputRect.size.height * height)
        cgImage = cgImage.cropping(to: cropRect)!
        let croppedUIImage = UIImage(
            cgImage: cgImage,
            scale: self.scale,
            orientation: self.imageOrientation
        )
        return croppedUIImage
    }
}
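At least the normalized-to-pixel scaling step can be sanity-checked in isolation (pure Foundation; `pixelCropRect` is just that math factored out into a hypothetical helper):

```swift
import Foundation

// Hypothetical helper: scale a normalized (0–1) metadata-output rect
// into pixel coordinates for an image of the given dimensions.
func pixelCropRect(normalized r: CGRect, imageWidth: Int, imageHeight: Int) -> CGRect {
    let w = CGFloat(imageWidth)
    let h = CGFloat(imageHeight)
    return CGRect(x: r.origin.x * w,
                  y: r.origin.y * h,
                  width: r.size.width * w,
                  height: r.size.height * h)
}

// A centered box covering the middle 50% of a 4000x3000 photo:
print(pixelCropRect(normalized: CGRect(x: 0.25, y: 0.25, width: 0.5, height: 0.5),
                    imageWidth: 4000, imageHeight: 3000))
```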
But it's error-prone to try to get the on-screen CGRect of the RoundedRectangle in the first place.
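The best idea I have for reading that rect is a GeometryReader in the global coordinate space, roughly like this (untested sketch; `CropBoxReader` is a name I made up):

```swift
import SwiftUI

struct CropBoxReader: View {
    @State private var cropRect: CGRect = .zero

    var body: some View {
        RoundedRectangle(cornerRadius: 25)
            .frame(width: 100, height: 100)
            .background(
                GeometryReader { proxy in
                    // Read the rectangle's frame in screen coordinates.
                    Color.clear
                        .onAppear { cropRect = proxy.frame(in: .global) }
                }
            )
    }
}
```

But I'm not confident `.global` here actually matches the preview layer's coordinate space, which is exactly where I keep going wrong.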
This seems like a very common use case but I’m going nuts trying to find a solution.