In my Chrome extension, I am using AudioContext and AudioNode to merge two streams into a single stereo stream, with one stream on the left channel and the other on the right. Before recording, I'd also like to gate (mute) the right channel whenever the left channel's volume is above a certain threshold. The real-world use case: the left stream is a direct capture of the audio playing in the current tab, and the right stream is the microphone input. If the user is not wearing headphones, the microphone picks up a slightly delayed bleed of the tab audio, and I want to mute that out. In my use case the recording will always be a two-way conversation, so there is practically no need for both people to speak at once; even if they do, I would still preferentially prioritize the left stream.
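For context, the merging step can be sketched like this (a minimal sketch, not my full code; `tabStream` and `micStream` are placeholder names for the MediaStreams obtained from tab capture and `getUserMedia` respectively):

```javascript
// Sketch: merge tab audio (left) and mic (right) into one stereo MediaStream.
// tabStream and micStream are assumed to be MediaStreams you already hold.
async function mergeStreams(tabStream, micStream) {
  const ctx = new AudioContext();
  const tabSource = ctx.createMediaStreamSource(tabStream);
  const micSource = ctx.createMediaStreamSource(micStream);

  // ChannelMergerNode with 2 inputs: input 0 -> output channel 0 (left),
  // input 1 -> output channel 1 (right).
  const merger = ctx.createChannelMerger(2);
  tabSource.connect(merger, 0, 0); // tab audio -> left channel
  micSource.connect(merger, 0, 1); // microphone -> right channel

  const dest = ctx.createMediaStreamDestination();
  merger.connect(dest);
  return dest.stream; // hand this to MediaRecorder
}
```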
Here’s my code to merge the audio streams. I believe an AudioWorklet can help me process the stream in real time before recording it, or perhaps some kind of custom node, but I'm failing to find enough instructive examples of these Audio APIs.
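One way an AudioWorklet could do this gating, sketched under some assumptions: the node receives the already-merged stereo signal (left = tab, right = mic), the processor name `gate-processor` and the 0.05 threshold are arbitrary choices of mine, and a hard mute is used (a real gate would want hysteresis or an attack/release ramp to avoid choppy transitions):

```javascript
// gate-processor.js — loaded via ctx.audioWorklet.addModule(...).

// Root-mean-square level of one 128-sample render quantum.
function rms(block) {
  let sum = 0;
  for (let i = 0; i < block.length; i++) sum += block[i] * block[i];
  return Math.sqrt(sum / block.length);
}

// AudioWorkletProcessor only exists inside the worklet scope; the guard
// lets the helper above be unit-tested outside the browser.
if (typeof AudioWorkletProcessor !== 'undefined') {
  class GateProcessor extends AudioWorkletProcessor {
    static get parameterDescriptors() {
      // Threshold is exposed as an AudioParam so it can be tuned live.
      return [{ name: 'threshold', defaultValue: 0.05, minValue: 0, maxValue: 1 }];
    }

    process(inputs, outputs, parameters) {
      const input = inputs[0];
      const output = outputs[0];
      if (input.length < 2) return true; // not yet connected as stereo

      const [left, right] = input;
      const threshold = parameters.threshold[0];

      output[0].set(left); // left (tab) always passes through
      if (rms(left) > threshold) {
        output[1].fill(0); // tab is loud: mute the mic channel
      } else {
        output[1].set(right); // tab is quiet: let the mic through
      }
      return true; // keep the processor alive
    }
  }
  registerProcessor('gate-processor', GateProcessor);
}
```

On the main thread you would then (again, a sketch) call `await ctx.audioWorklet.addModule('gate-processor.js')`, create the node with `new AudioWorkletNode(ctx, 'gate-processor', { outputChannelCount: [2] })`, and connect it between the ChannelMergerNode and the MediaStreamAudioDestinationNode, so the gating happens before the recorder ever sees the signal.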
Thanks for any direction.