I’m using @mediapipe/tasks-vision to segment video frames, do some extra processing on them, and then enqueue the resulting chunks through a video transformer. The magic happens in the transform function (see the spec here: https://developer.mozilla.org/en-US/docs/Web/API/MediaStreamTrackProcessor).
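For context, the all-main-thread version I have in mind looks roughly like this (just a sketch; `videoTrack` is whatever camera track I already have, and the transform body is a placeholder for the segmentation/painting work):

```js
// Main thread: camera track -> TransformStream -> generator track.
const processor = new MediaStreamTrackProcessor({ track: videoTrack });
const generator = new MediaStreamTrackGenerator({ kind: "video" });

const transformer = new TransformStream({
  async transform(frame, controller) {
    // ...segment with MediaPipe, paint, build a new VideoFrame here...
    controller.enqueue(frame); // placeholder: plain pass-through
  },
});

processor.readable.pipeThrough(transformer).pipeTo(generator.writable);

// generator is itself a MediaStreamTrack, so consumers just see a normal stream.
const outputStream = new MediaStream([generator]);
```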
My question is: which parts of this can I move into a JS web worker, and how? I was thinking of running the transformer in a worker and then consuming the resulting stream on the main thread.
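Since ReadableStream and WritableStream are transferable, one option I’m considering is to keep only the track plumbing on the main thread and hand both ends of the pipe to the worker (again a sketch; the worker file name and `videoElement` are placeholders):

```js
// Main thread: create the endpoints, then transfer the streams to the worker.
const processor = new MediaStreamTrackProcessor({ track: videoTrack });
const generator = new MediaStreamTrackGenerator({ kind: "video" });

const worker = new Worker("segment-worker.js", { type: "module" });
worker.postMessage(
  { readable: processor.readable, writable: generator.writable },
  [processor.readable, generator.writable] // transfer, don't copy
);

// The main thread only consumes the output track, e.g. in a <video> element.
videoElement.srcObject = new MediaStream([generator]);
```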
I’m trying to get some ideas here on what a possible implementation could look like (not asking for finished code, just thinking conceptually).
My first thought was to offload just the segmenter, have it process the video frame, and post the result back to the main thread, but I would also like to offload the painting of the canvas and the enqueuing of the chunk. Maybe I can send the source video frame to the worker, let it do everything it needs to on an OffscreenCanvas, and send the finished chunk back to the main thread?
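Conceptually, with the streams transferred, the worker could own the whole transform: receive the readable/writable pair, run the MediaPipe ImageSegmenter there, paint onto an OffscreenCanvas, and enqueue a new VideoFrame straight into the transferred writable (so nothing even has to be posted back per frame). A rough sketch of what I imagine, assuming a module worker built with a bundler; the model path, segmenter options, and the mask-compositing step are placeholders, and I haven’t verified the exact tasks-vision behavior inside a worker:

```js
// segment-worker.js (sketch): owns segmentation, painting and enqueuing.
import { FilesetResolver, ImageSegmenter } from "@mediapipe/tasks-vision";

async function createSegmenter() {
  const vision = await FilesetResolver.forVisionTasks(
    "https://cdn.jsdelivr.net/npm/@mediapipe/tasks-vision/wasm"
  );
  return ImageSegmenter.createFromOptions(vision, {
    baseOptions: { modelAssetPath: "selfie_segmenter.tflite" }, // placeholder model
    runningMode: "VIDEO",
    outputConfidenceMasks: true,
  });
}

self.onmessage = async ({ data: { readable, writable } }) => {
  const segmenter = await createSegmenter();
  const canvas = new OffscreenCanvas(1, 1); // resized per frame below
  const ctx = canvas.getContext("2d");

  const transformer = new TransformStream({
    async transform(frame, controller) {
      canvas.width = frame.displayWidth;
      canvas.height = frame.displayHeight;

      // Paint the source frame, then segment what's on the canvas.
      ctx.drawImage(frame, 0, 0);
      const result = segmenter.segmentForVideo(canvas, performance.now());

      // ...composite result.confidenceMasks (or a category mask) onto the
      // canvas here; this is the "processing" part and entirely a placeholder...

      // Wrap the painted canvas in a new VideoFrame and push it downstream.
      const processed = new VideoFrame(canvas, { timestamp: frame.timestamp });
      frame.close();
      controller.enqueue(processed);
    },
  });

  await readable.pipeThrough(transformer).pipeTo(writable);
};
```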
Any ideas or food for thought would be appreciated!
I haven’t started on the task yet.