I construct a pipeline in C++, receive an SDP offer and respond to it. It works alright with ‘just’ OPUS, but the audio is sometimes poor, so I’ve been looking into using RED; however, I’m having trouble getting it to work. I’ve found a way of enabling RED here (https://github.com/hissinger/gstreamer-webrtcbin-demo/blob/main/webrtc-sendrecv.c – lines 306-317), but that is for the case where I am making the offer, not answering one.
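As far as I can tell, the gist of those lines is to grab the transceiver and set its fec-type property before the offer is created, roughly like this (my paraphrase rather than a copy of that code, and the fec-percentage value is arbitrary):
// My (possibly wrong) reading of what the linked demo does before create-offer:
// fetch the transceiver and ask webrtcbin to apply ULPFEC + RED on it.
// Needs #include <gst/webrtc/webrtc.h> for the transceiver types.
GArray * transceivers = nullptr;
g_signal_emit_by_name(webrtc, "get-transceivers", &transceivers); // webrtc = the demo's webrtcbin element
if (transceivers && transceivers->len > 0)
{
    GstWebRTCRTPTransceiver * trans = g_array_index(transceivers, GstWebRTCRTPTransceiver *, 0);
    g_object_set(trans,
                 "fec-type", GST_WEBRTC_FEC_TYPE_ULP_RED,
                 "fec-percentage", 25, // arbitrary value, just for illustration
                 NULL);
}
if (transceivers)
    g_array_unref(transceivers);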
Code snippets
For the generic OPUS, I construct the pipeline in a mix of ways (hopefully the below is enough to give the idea):
GstElement * _audio_depay = gst_element_factory_make("rtpopusdepay", "webrtcaudiodepay");
GstElement * _audio_dec = gst_element_factory_make("opusdec", "audiodecode");
GstCaps * _audio_converter_caps = gst_caps_new_simple("audio/x-raw", "format", G_TYPE_STRING, "S16LE", "layout", G_TYPE_STRING, "interleaved", NULL);
std::stringstream ss;
ss << "webrtcbin name=webrtcbin "
<< "appsrc name=webrtcaudioappsrc ! audioconvert ! audioresample ! audiorate ! "
<< "opusenc inband-fec=true audio-type=restricted-lowdelay bitrate-type=cbr packet-loss-percentage=100 bandwidth=1102 dtx=true ! rtpopuspay dtx=true pt=111 ! "
<< "application/x-rtp,media=audio,encoding-name=OPUS,payload=111 ! webrtcbin. ";
GError * error = nullptr;
_pipeline = gst_parse_launch(ss.str().c_str(), &error);
GstElement * _webrtc = gst_bin_get_by_name(GST_BIN(_pipeline), "webrtcbin");
I then listen to the pad-added signal from the webrtcbin to link the elements I’ve constructed by hand, making a pipeline that looks like
webrtcbin -> rtpopusdepay -> opusdec -> audioconvert -> appsink
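The pad-added handler itself is nothing special; a minimal sketch of it (assuming a hypothetical owner class Receiver that holds _audio_depay and connects the signal with this as user data) looks like:
// Link webrtcbin's dynamically created src pad to the hand-built audio chain.
static void on_webrtc_pad_added(GstElement * webrtc, GstPad * new_pad, gpointer user_data)
{
    auto * self = static_cast<Receiver *>(user_data); // hypothetical owner class
    if (GST_PAD_DIRECTION(new_pad) != GST_PAD_SRC)
        return;
    GstPad * sink_pad = gst_element_get_static_pad(self->_audio_depay, "sink");
    if (!gst_pad_is_linked(sink_pad))
        gst_pad_link(new_pad, sink_pad);
    gst_object_unref(sink_pad);
}
// connected with:
// g_signal_connect(_webrtc, "pad-added", G_CALLBACK(on_webrtc_pad_added), this);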
Trying to use RED
The SDP offer I receive from my browser contains a RED entry (I’ve edited the below to save space):
m=audio 9 UDP/TLS/RTP/SAVPF 111 63 9 102 0 8 13 110 126
...
a=rtpmap:111 opus/48000/2
a=rtcp-fb:111 transport-cc
a=fmtp:111 minptime=10;useinbandfec=1
a=rtpmap:63 red/48000/2
a=fmtp:63 111/111
...
I’ve tried munging this offer to place 63 ahead of 111 in the m= line, and I then submit the munged offer to my webrtcbin to produce the answer.
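The munging itself is just a string edit on the m= line before the SDP gets parsed, something like this (simplified sketch; it assumes the m= line arrives exactly as shown above):
// Move RED (63) ahead of OPUS (111) in the audio m= line.
std::string munge_offer(std::string sdp_data)
{
    const std::string original = "m=audio 9 UDP/TLS/RTP/SAVPF 111 63";
    const std::string reordered = "m=audio 9 UDP/TLS/RTP/SAVPF 63 111";
    const auto pos = sdp_data.find(original);
    if (pos != std::string::npos)
        sdp_data.replace(pos, original.size(), reordered);
    return sdp_data;
}
The munged offer then goes to webrtcbin as the remote description: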
GstSDPMessage * sdp = nullptr;
gst_sdp_message_new_from_text(sdp_data.c_str(), &sdp); // sdp_data is the string sent from the browser after I've munged it
GstWebRTCSessionDescription * offer = nullptr;
offer = gst_webrtc_session_description_new(GST_WEBRTC_SDP_TYPE_OFFER, sdp);
auto promise = gst_promise_new();
g_signal_emit_by_name(_webrtc, "set-remote-description", offer, promise);
gst_promise_wait(promise);
gst_webrtc_session_description_free(offer);
gst_promise_unref(promise);
promise = gst_promise_new_with_change_func(on_answer, this, NULL);
g_signal_emit_by_name(_webrtc, "create-answer", NULL, promise);
However, looking at the answer I produce, it still defaults to using OPUS (111). Maybe webrtcbin uses the ordering of the rtpmap entries (though I don’t think it should, right?), but I think it’s more likely to be because the webrtcbin is linked to OPUS-based elements, and therefore the linking necessitates using OPUS.
I’ve tried manually adding elements to the pipeline, so that the pipeline looks like
appsrc -> opusenc -> rtpopuspay -> rtpulpfecenc -> rtpredenc -> webrtcbin
webrtcbin -> rtpreddec -> rtpstorage -> rtpssrcdemux -> rtpjitterbuffer -> rtpulpfecdec -> rtpopusdepay -> opusdec -> audioconvert -> appsink
(Trying to recreate the pipelines mentioned in https://gstreamer.freedesktop.org/documentation/rtp/rtpredenc.html?gi-language=c and https://gstreamer.freedesktop.org/documentation/rtp/rtpreddec.html?gi-language=c#rtpreddec)
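For the send side of that attempt, the pipeline string becomes roughly the following (a sketch rather than my exact code; 63 is the RED payload type from the browser’s offer, while the ULPFEC payload type of 122 and percentage of 25 are arbitrary illustrative values):
std::stringstream ss;
ss << "webrtcbin name=webrtcbin "
   << "appsrc name=webrtcaudioappsrc ! audioconvert ! audioresample ! audiorate ! "
   << "opusenc inband-fec=true audio-type=restricted-lowdelay bitrate-type=cbr packet-loss-percentage=100 bandwidth=1102 dtx=true ! "
   << "rtpopuspay dtx=true pt=111 ! "
   << "rtpulpfecenc pt=122 percentage=25 ! "         // illustrative ULPFEC payload type and percentage
   << "rtpredenc pt=63 allow-no-red-blocks=true ! "  // 63 = the RED payload type from the browser's offer
   << "webrtcbin. ";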
At the moment this produces a receive-only answer, with the following warning when I run it:
0:00:13.919634921 41106 0x7756b8004940 WARN webrtcbin gstwebrtcbin.c:4636:_create_answer_task:<webrtcbin> did not find compatible transceiver for offer caps application/x-rtp, media=(string)audio, payload=(int)63, clock-rate=(int)48000, encoding-name=(string)RED, encoding-params=(string)2, 111/111=(string)1; application/x-rtp, media=(string)audio, payload=(int)111, clock-rate=(int)48000, encoding-name=(string)OPUS, encoding-params=(string)2, minptime=(string)10, useinbandfec=(string)1, rtcp-fb-transport-cc=(boolean)true; application/x-rtp, media=(string)audio, payload=(int)9, clock-rate=(int)8000, encoding-name=(string)G722; application/x-rtp, media=(string)audio, payload=(int)0, clock-rate=(int)8000, encoding-name=(string)PCMU; application/x-rtp, media=(string)audio, payload=(int)8, clock-rate=(int)8000, encoding-name=(string)PCMA; application/x-rtp, media=(string)audio, payload=(int)13, clock-rate=(int)8000, encoding-name=(string)CN; application/x-rtp, media=(string)audio, payload=(int)110, clock-rate=(int)48000, encoding-name=(string)TELEPHONE-EVENT; application/x-rtp, media=(string)audio, payload=(int)126, clock-rate=(int)8000, encoding-name=(string)TELEPHONE-EVENT, will only receive
The frustrating thing is that when I build a dot file of the pipeline, it looks like the webrtcbin already contains RED encoders and decoders, so I would think the best solution is to somehow use those rather than repeating myself. How do I turn them on?