I’m currently working on an iOS project where I’m using AVFoundation to capture video, and I want to integrate WebRTC functionality into this setup. Specifically, I’m using the GoogleWebRTC package.
Here’s a snippet of my current setup:
private func setupAVCaptureSession() {
    captureSession.sessionPreset = .high

    guard let camera = AVCaptureDevice.default(for: .video) else {
        print("No camera found")
        return
    }

    do {
        let input = try AVCaptureDeviceInput(device: camera)
        if captureSession.canAddInput(input) {
            captureSession.addInput(input)
        }
    } catch {
        print("Error adding camera input: \(error)")
        return
    }

    previewLayer.videoGravity = .resizeAspectFill
    view.layer.addSublayer(previewLayer)
    previewLayer.frame = view.frame
    print("Preview layer added and configured")

    // Deliver sample buffers to this class on a dedicated serial queue.
    let videoOutput = AVCaptureVideoDataOutput()
    videoOutput.setSampleBufferDelegate(self, queue: DispatchQueue(label: "videoQueue"))
    if captureSession.canAddOutput(videoOutput) {
        captureSession.addOutput(videoOutput)
    }

    setupCapturer()
    captureSession.startRunning()
}
func setupCapturer() {
    let videoSource = peerConnectionFactory.videoSource()
    // RTCCameraVideoCapturer opens the camera itself and feeds the video source.
    videoCapturer = RTCCameraVideoCapturer(delegate: videoSource)

    guard let camera = AVCaptureDevice.default(for: .video) else {
        print("No camera found")
        return
    }

    let formats = RTCCameraVideoCapturer.supportedFormats(for: camera)
    guard let format = formats.first else {
        print("No supported formats found")
        return
    }

    videoCapturer?.startCapture(with: camera, format: format, fps: 30)
}
While this setup captures video and displays it in the previewLayer, I’m struggling to pass the captured video frames from my AVCaptureSession into WebRTC’s RTCVideoCapturer pipeline.
I understand that RTCCameraVideoCapturer is designed to work with camera devices directly, but I need to know how to either:
- Pass the entire AVCaptureSession directly to WebRTC’s capturer, or
- Feed individual frames from my AVCaptureVideoDataOutputSampleBufferDelegate to WebRTC for transmission (rough sketch of my idea below).

Is there a recommended approach or any workaround that would allow me to use my existing AVCaptureSession while leveraging WebRTC’s video streaming capabilities? Any insights or code examples would be greatly appreciated!
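For reference, here is a rough sketch of what I imagine the frame-feeding approach could look like, based on the types I’ve seen in the GoogleWebRTC headers (RTCCVPixelBuffer, RTCVideoFrame, and the fact that RTCVideoSource conforms to RTCVideoCapturerDelegate). CaptureViewController is a placeholder for my actual class, and it assumes videoSource is stored as a property rather than a local in setupCapturer(). I’m not sure whether calling the delegate method manually like this is actually supported:

extension CaptureViewController: AVCaptureVideoDataOutputSampleBufferDelegate {
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer),
              let capturer = videoCapturer else { return }

        // Wrap the CVPixelBuffer in WebRTC's buffer type.
        let rtcPixelBuffer = RTCCVPixelBuffer(pixelBuffer: pixelBuffer)

        // RTCVideoFrame expects a presentation timestamp in nanoseconds.
        let pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
        let timestampNs = Int64(CMTimeGetSeconds(pts) * Double(NSEC_PER_SEC))

        let frame = RTCVideoFrame(buffer: rtcPixelBuffer,
                                  rotation: ._0,
                                  timeStampNs: timestampNs)

        // RTCVideoSource conforms to RTCVideoCapturerDelegate, so my idea is
        // to hand it frames directly instead of letting RTCCameraVideoCapturer
        // drive the camera on its own.
        videoSource.capturer(capturer, didCapture: frame)
    }
}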
Thanks!