The most relevant Stack Overflow post I found is from several years ago, where they attempt to use ARKit with multiple cameras, but in my situation I do need ARKit functionality running at the same time as the selfie camera feed: ARKit and AVCamera simultaneously
I’d like to know if it’s possible to run ARKit alongside a second camera session that uses the front-facing camera. I’m also open to a lower-level solution that bypasses some of ARKit’s predefined functionality, as long as it still lets me use ARKit on the rear camera while a video feed runs from the front camera. I do not need ARKit functionality on the front-facing camera; I just want a picture-in-picture of a regular front-facing/selfie video feed while an ARKit session runs in the rear-camera feed. It’s essentially the simultaneous capture shown in Apple’s sample code here, except that sample doesn’t run an ARSession with it: https://developer.apple.com/documentation/avfoundation/capture_setup/avmulticampip_capturing_from_multiple_cameras
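For context, the part of Apple’s sample that matters here boils down to something like the sketch below (my own rough outline, not code from the sample; makeFrontCameraPiPSession is a name I made up). It gates everything on AVCaptureMultiCamSession.isMultiCamSupported before building the session:

import AVFoundation

// Rough sketch of the multi-cam gating/setup pattern from Apple's sample
// (hypothetical helper, front camera only)
func makeFrontCameraPiPSession() -> AVCaptureMultiCamSession? {
    // Multi-cam capture is only available on certain devices
    guard AVCaptureMultiCamSession.isMultiCamSupported else { return nil }

    let session = AVCaptureMultiCamSession()
    session.beginConfiguration()
    defer { session.commitConfiguration() }

    guard let frontCamera = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .front),
          let frontInput = try? AVCaptureDeviceInput(device: frontCamera),
          session.canAddInput(frontInput) else { return nil }
    session.addInput(frontInput)

    return session
}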
The ViewController.swift script I have below “works”, but there seems to be a resource conflict: the front-facing camera starts its feed, but it freezes once the rear-facing camera using ARKit begins its feed, so I’m essentially left with a still image for the front-facing camera (see image below).
What I’ve tried (I got the same result with both of these methods; see image below):
- Originally, I tried an AVCaptureMultiCamSession with only the front/selfie camera, because the ARView (back-camera view) from ARKit automatically runs its own capture session
- I then tried adding both cameras to a single AVCaptureMultiCamSession: the rear camera underneath the ARView and the front/selfie camera for the UIView (both attempts are sketched right after this list)
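In outline, the two attempts only differ in which inputs end up in the capture session (a simplified sketch; addWideAngleCamera is a hypothetical helper, not part of my project):

import AVFoundation

// Hypothetical helper: add a wide-angle camera at the given position to a multi-cam session
func addWideAngleCamera(to session: AVCaptureMultiCamSession, position: AVCaptureDevice.Position) -> Bool {
    guard let device = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: position),
          let input = try? AVCaptureDeviceInput(device: device),
          session.canAddInput(input) else { return false }
    session.addInput(input)
    return true
}

let session = AVCaptureMultiCamSession()
// Attempt 1: only the front camera goes into the capture session,
// while the ARView keeps running its own ARSession for the back camera.
_ = addWideAngleCamera(to: session, position: .front)
// Attempt 2: the back camera is added to the same AVCaptureMultiCamSession as well,
// with its preview layer inserted underneath the AR content.
_ = addWideAngleCamera(to: session, position: .back)

The full ViewController.swift follows: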
import SwiftUI
import RealityKit
import UIKit
import ARKit
import AVFoundation
class ViewController: UIViewController {

    @IBOutlet var arView: ARView!
    @IBOutlet var frontCameraView: UIView! // small PiP view

    // Capture session for the front camera
    var captureSession: AVCaptureMultiCamSession?
    var rearCameraLayer: AVCaptureVideoPreviewLayer?
    var frontCameraLayer: AVCaptureVideoPreviewLayer?

    override func viewDidLoad() {
        super.viewDidLoad()

        // The AR session for the rear camera is already handled by arView;
        // start plane detection from ARKit data
        startPlaneDetection()

        // Set up a gesture recognizer for placing 3D objects from a 2D tap point
        arView.addGestureRecognizer(UITapGestureRecognizer(target: self, action: #selector(handleTap(recognizer:))))

        // Set up the front camera for PiP
        setupFrontCamera()
    }

    // Alternative AR setup (not called in viewDidLoad; startPlaneDetection() does the same)
    func setupARSession() {
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = [.horizontal]
        configuration.environmentTexturing = .automatic
        arView.session.run(configuration)
    }
    // Attempt 2: set up both the rear and selfie cameras in one multi-cam session
    // (not called in viewDidLoad in the version shown here)
    func setupDualCameraSession() {
        captureSession = AVCaptureMultiCamSession()

        // Rear camera (main AR view)
        guard let rearCamera = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back),
              let rearInput = try? AVCaptureDeviceInput(device: rearCamera),
              captureSession?.canAddInput(rearInput) == true else {
            return
        }
        captureSession?.addInput(rearInput)

        let rearOutput = AVCaptureVideoDataOutput()
        if captureSession?.canAddOutput(rearOutput) == true {
            captureSession?.addOutput(rearOutput)
        }

        // Add the rear camera preview to arView (ensure it does not interfere with ARKit)
        rearCameraLayer = AVCaptureVideoPreviewLayer(session: captureSession!)
        rearCameraLayer?.frame = arView.bounds
        rearCameraLayer?.videoGravity = .resizeAspectFill
        arView.layer.insertSublayer(rearCameraLayer!, at: 0) // rear camera under the AR content

        // Front camera (PiP view)
        guard let frontCamera = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .front),
              let frontInput = try? AVCaptureDeviceInput(device: frontCamera),
              captureSession?.canAddInput(frontInput) == true else {
            return
        }
        captureSession?.addInput(frontInput)

        let frontOutput = AVCaptureVideoDataOutput()
        if captureSession?.canAddOutput(frontOutput) == true {
            captureSession?.addOutput(frontOutput)
        }

        // Add the front camera preview to frontCameraView (PiP)
        frontCameraLayer = AVCaptureVideoPreviewLayer(session: captureSession!)
        frontCameraLayer?.frame = frontCameraView.bounds
        frontCameraLayer?.videoGravity = .resizeAspectFill
        frontCameraView.layer.addSublayer(frontCameraLayer!)

        // Start the session
        captureSession?.startRunning()
    }
    // Attempt 1: front/selfie camera only; the ARView keeps its own session for the rear camera
    func setupFrontCamera() {
        captureSession = AVCaptureMultiCamSession()

        // Front camera (PiP view)
        guard let frontCamera = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .front),
              let frontInput = try? AVCaptureDeviceInput(device: frontCamera),
              captureSession?.canAddInput(frontInput) == true else {
            return
        }
        captureSession?.addInput(frontInput)

        let frontOutput = AVCaptureVideoDataOutput()
        if captureSession?.canAddOutput(frontOutput) == true {
            captureSession?.addOutput(frontOutput)
        }

        // Add the front camera preview to the small UIView (PiP view)
        frontCameraLayer = AVCaptureVideoPreviewLayer(session: captureSession!)
        frontCameraLayer?.frame = frontCameraView.bounds
        frontCameraLayer?.videoGravity = .resizeAspectFill
        frontCameraView.layer.addSublayer(frontCameraLayer!)

        // Start the session
        captureSession?.startRunning()
    }
    func createSphere() -> ModelEntity {
        // Mesh; why do we want these as immutables?
        let sphere = MeshResource.generateSphere(radius: 0.5)

        // Assign material
        let sphereMaterial = SimpleMaterial(color: .blue, roughness: 0, isMetallic: true)

        // Model entity; what's the difference between an entity and a mesh + material
        // in the context of Swift and how it handles these?
        let sphereEntity = ModelEntity(mesh: sphere, materials: [sphereMaterial])
        return sphereEntity
    }
    @objc
    func handleTap(recognizer: UITapGestureRecognizer) {
        // Touch location on screen
        let tapLocation = recognizer.location(in: arView)

        // Raycast (2D -> 3D)
        let results = arView.raycast(from: tapLocation, allowing: .estimatedPlane, alignment: .horizontal)

        // The raycast can return multiple results; use the first one
        if let firstResult = results.first {
            // 3D point (x, y, z)
            let worldPos = simd_make_float3(firstResult.worldTransform.columns.3)

            // Create the sphere
            let sphere = createSphere()

            // Place the sphere
            placeObject(object: sphere, at: worldPos)
        }
    }
    func placeObject(object: ModelEntity, at location: SIMD3<Float>) {
        // Anchor
        let objectAnchor = AnchorEntity(world: location)

        // Tie the model to the anchor
        objectAnchor.addChild(object)

        // Add the anchor to the scene
        arView.scene.addAnchor(objectAnchor)
    }

    func startPlaneDetection() {
        arView.automaticallyConfigureSession = true
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = [.horizontal]
        configuration.environmentTexturing = .automatic
        arView.session.run(configuration)
    }
}
Additionally, when running the application from Xcode, I get the following warnings:
Could not locate file 'default-binaryarchive.metallib' in bundle.
Registering library (/System/Library/PrivateFrameworks/CoreRE.framework/default.metallib) that already exists in shader manager. Library will be overwritten.
Could not resolve material name 'engine:BuiltinRenderGraphResources/AR/suFeatheringCreateMergedOcclusionMask.rematerial' in bundle at
I also get the following lines appearing in my console:
<<<< FigCaptureSourceRemote >>>> Fig assert: "err == 0 " at bail (FigCaptureSourceRemote.m:275) - (err=-12784)
<<<< FigCaptureSourceRemote >>>> Fig assert: "err == 0 " at bail (FigCaptureSourceRemote.m:511) - (err=-12784)
<<<< FigCaptureSourceRemote >>>> Fig assert: "err == 0 " at bail (FigCaptureSourceRemote.m:275) - (err=-12784)