I’m following Apple’s “AVCam: Building a camera app” tutorial (new in iOS 18), and if I’m not mistaken, setting up the capture session requires selecting a single capture device. Going through the list of AVCaptureDevice.DeviceType values, I see that devices like builtInLiDARDepthCamera are separate device types from camera types like builtInTripleCamera, which implies that I have to choose between the two.
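For reference, here’s roughly how I’m selecting a device right now, simplified from the AVCam sample (the specific fallback order is just my guess at a sensible priority):

```swift
import AVFoundation

// Ask for the device types I care about, in priority order.
// Note that builtInLiDARDepthCamera and builtInTripleCamera are
// separate entries in DeviceType, which is what prompted my question.
let discovery = AVCaptureDevice.DiscoverySession(
    deviceTypes: [.builtInTripleCamera, .builtInDualWideCamera, .builtInWideAngleCamera],
    mediaType: .video,
    position: .back
)

let session = AVCaptureSession()

// DiscoverySession returns devices in the order the types were listed,
// so .first is the highest-priority device this phone actually has.
if let device = discovery.devices.first,
   let input = try? AVCaptureDeviceInput(device: device),
   session.canAddInput(input) {
    session.addInput(input)
}
```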
Now, please correct me if I’m wrong, but my high-level understanding of what Apple has shown in their iOS demos (I don’t have sources for this) is that newer iPhones are capable of combining multiple inputs, including depth information, to capture higher-quality photos. My iPhone 15 Pro has the option to save photos as ProRAW, which, to my limited knowledge, is basically a photo taken using more “sensors”, including depth information.
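For what it’s worth, from skimming the AVCapturePhotoOutput docs I think opting into ProRAW looks something like the sketch below. This is untested on my part, and I’m assuming the output has already been added to a correctly configured session (photoDelegate is just a placeholder for my AVCapturePhotoCaptureDelegate):

```swift
import AVFoundation

let photoOutput = AVCapturePhotoOutput()
// ... photoOutput added to the session, session configured ...

// ProRAW has to be opted into before capturing.
if photoOutput.isAppleProRAWSupported {
    photoOutput.isAppleProRAWEnabled = true
}

// Pick a ProRAW pixel format from whatever the current configuration offers.
if let proRAWFormat = photoOutput.availableRawPhotoPixelFormatTypes.first(where: {
    AVCapturePhotoOutput.isAppleProRAWPixelFormat($0)
}) {
    let settings = AVCapturePhotoSettings(rawPixelFormatType: proRAWFormat)
    photoOutput.capturePhoto(with: settings, delegate: photoDelegate) // placeholder delegate
}
```

But even with that, I don’t see where depth or the other sensors come in, which is the part I’m confused about.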
Could someone help me understand where I’m mistaken? More generally, how would someone go about making apps that are capable of taking awesome photos? In my opinion, it’s non-trivial to bridge the gap between what Apple’s marketing team has put out and how to actually achieve any of it as an iOS dev.