I’m building a camera app that applies real-time effects to the viewfinder. One of those effects is a variable blur, so to improve performance I am scaling down the input image using CIFilter.lanczosScaleTransform(). This works fine and runs at 30 FPS, but when running the Metal profiler I can see that the scaling transforms use a lot of GPU time, almost as much as the variable blur itself. Is there a more efficient way to do this?
The simplified chain looks like this (rough code sketch below):
- Scale down viewFinder CVPixelBuffer (CIFilter.lanczosScaleTransform)
- Scale up depthMap CVPixelBuffer to match viewFinder size (CIFilter.lanczosScaleTransform)
- Create CIImages from both CVPixelBuffers
- Apply VariableDepthBlur (CIFilter.maskedVariableBlur)
- Scale up the final image to the Metal view size (CIFilter.lanczosScaleTransform)
- Render the CIImage to an MTKView using CIRenderDestination
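
For reference, this is roughly what that chain looks like in code. It's a simplified sketch: the function name, the `downscale` factor, and the blur radius are placeholders, and buffer management, error handling, and the exact scale factors are omitted.

```swift
import CoreImage
import CoreImage.CIFilterBuiltins
import CoreVideo

// Rough per-frame sketch of the chain above (placeholder values, no error handling).
func blurredFrame(viewFinder: CVPixelBuffer,
                  depthMap: CVPixelBuffer,
                  downscale: Float) -> CIImage? {
    let viewImage = CIImage(cvPixelBuffer: viewFinder)
    let depthImage = CIImage(cvPixelBuffer: depthMap)

    // Scale the view finder image down so the blur works on fewer pixels.
    let scaleDown = CIFilter.lanczosScaleTransform()
    scaleDown.inputImage = viewImage
    scaleDown.scale = downscale          // e.g. 0.5
    scaleDown.aspectRatio = 1.0

    // Scale the depth map so it matches the (downscaled) view finder extent.
    let depthScale = downscale * Float(viewImage.extent.width / depthImage.extent.width)
    let scaleUp = CIFilter.lanczosScaleTransform()
    scaleUp.inputImage = depthImage
    scaleUp.scale = depthScale
    scaleUp.aspectRatio = 1.0

    // Variable blur driven by the depth mask.
    let blur = CIFilter.maskedVariableBlur()
    blur.inputImage = scaleDown.outputImage
    blur.mask = scaleUp.outputImage
    blur.radius = 10

    // The result is later scaled up to the drawable size with another
    // lanczosScaleTransform and rendered to the MTKView via CIRenderDestination.
    return blur.outputImage
}
```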
From some research, I wonder if scaling the CVPixelBuffers using the Accelerate framework would be faster. Also, instead of scaling up the final image, perhaps I could offload that step to the Metal view?
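
For the Accelerate idea, this is the kind of thing I had in mind. It's a hypothetical sketch: it assumes both CVPixelBuffers are 32BGRA and that the destination buffer is already allocated at the target size, and the scaling runs on the CPU via vImage.

```swift
import Accelerate
import CoreVideo

// Hypothetical sketch: scale a 32BGRA pixel buffer on the CPU with vImage.
// Assumes `destination` is already allocated at the target size.
func scaleWithVImage(from source: CVPixelBuffer, into destination: CVPixelBuffer) {
    CVPixelBufferLockBaseAddress(source, .readOnly)
    CVPixelBufferLockBaseAddress(destination, [])
    defer {
        CVPixelBufferUnlockBaseAddress(source, .readOnly)
        CVPixelBufferUnlockBaseAddress(destination, [])
    }

    var src = vImage_Buffer(data: CVPixelBufferGetBaseAddress(source),
                            height: vImagePixelCount(CVPixelBufferGetHeight(source)),
                            width: vImagePixelCount(CVPixelBufferGetWidth(source)),
                            rowBytes: CVPixelBufferGetBytesPerRow(source))

    var dst = vImage_Buffer(data: CVPixelBufferGetBaseAddress(destination),
                            height: vImagePixelCount(CVPixelBufferGetHeight(destination)),
                            width: vImagePixelCount(CVPixelBufferGetWidth(destination)),
                            rowBytes: CVPixelBufferGetBytesPerRow(destination))

    // Higher-quality (slower) resampling; pass kvImageNoFlags for the default.
    let error = vImageScale_ARGB8888(&src, &dst, nil,
                                     vImage_Flags(kvImageHighQualityResampling))
    assert(error == kvImageNoError)
}
```

And for the last step, instead of the final lanczosScaleTransform I was wondering about rendering the blurred image at its smaller size and letting the MTKView scale it up to the view bounds (e.g. autoResizeDrawable = false with a smaller drawableSize).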
Any pointers greatly appreciated!