This is a rework of the audio join procedure, designed without the explicit
listen only separation in mind. It's meant to be used in conjunction with
the transparent listen only feature so that the distinction between
modes is seamless, with minimal server-side impact. An abridged list of
changes:
- Let the user pick no input device when joining with a microphone, while
allowing them to set an input device on the fly later on
- Give the user the option to join audio with no input device whenever
we fail to obtain input devices, with the option to try re-enabling
them on the fly later on
- Add the option to open the audio settings modal (echo test et al)
via the in-call device selection chevron
- Rework the SFU audio bridge and its services to support
adding/removing tracks on the fly without renegotiation (see the sketch
after this list)
- Rework the SFU audio bridge and its services to support a new peer
role called "passive-sendrecv". That role is used by duplex peers
that have no active input source on start, but might have one later
on.
- Remove stale PermissionsOverlay component from the audio modal
- Rework how permission errors are detected using the Permissions API
- Rework the local echo test so that it uses a separate media tag
rather than the remote one
- Add new, separate dialplans that put FreeSWITCH channels on mute/hold
based on UA strings. This is orchestrated server-side via
webrtc-sfu and akka-apps. The basic difference here is that channels
now join in their desired state rather than waiting for client-side
observers to sync the state up. It also mitigates transparent listen
only performance edge cases when multiple audio channels join at
the same time.
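As a rough illustration of the on-the-fly track handling mentioned in the list
above, an established sendrecv peer can have its microphone track attached or
detached via RTCRtpSender.replaceTrack, which doesn't require renegotiation.
This is a minimal sketch under that assumption; it is not the actual bridge
code and the function name is hypothetical:

```javascript
// Sketch only: attach or detach a microphone track on an already
// established, audio-only sendrecv peer without renegotiation. Assumes
// the peer was created with an audio transceiver (direction: 'sendrecv'),
// so an RTCRtpSender for audio already exists.
async function setInputDevice(peerConnection, deviceId) {
  const sender = peerConnection
    .getSenders()
    .find((s) => !s.track || s.track.kind === 'audio');

  if (!sender) throw new Error('No audio sender available');

  if (deviceId == null) {
    // "No input device": stop sending, but keep the connection up.
    await sender.replaceTrack(null);
    return null;
  }

  const stream = await navigator.mediaDevices.getUserMedia({
    audio: { deviceId: { exact: deviceId } },
  });
  const [track] = stream.getAudioTracks();
  // replaceTrack swaps the outbound source in place; no renegotiation needed.
  await sender.replaceTrack(track);
  return track;
}
```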
The old, decoupled listen only mode is still present in the code while we
validate this new approach. To test this, transparentListenOnly
must be enabled and listen only mode must be disabled on audio join so
that the user skips straight to microphone join.
There's a race condition that may cause a client crash when a
connectionstatechange callback is cleaned up on a peer that no longer has a
valid peer connection in our custom RTCPeerConnection wrapper.
Check for peerConnection availability in the WebRtcPeer wrapper before
trying to clean up its connectionstatechange callback.
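A minimal sketch of that guard; the wrapper shape and property names are
illustrative, not the actual WebRtcPeer code:

```javascript
// Sketch: guard the cleanup path so a missing peer connection does not
// crash the client. Property/method names are illustrative.
class WebRtcPeerSketch {
  constructor() {
    this.peerConnection = null;
    this._handleConnectionStateChange = null;
  }

  clearConnectionStateCallback() {
    // The race: disposal may run after the underlying RTCPeerConnection
    // was already torn down, so never assume it exists.
    if (this.peerConnection && this._handleConnectionStateChange) {
      this.peerConnection.removeEventListener(
        'connectionstatechange',
        this._handleConnectionStateChange,
      );
    }
    this._handleConnectionStateChange = null;
  }
}
```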
When a sendrecv peer acts as the answerer, gUM is only called _after_
the remote offer is received. This is fine, but error handling runs
differently in that scenario: eventual gUM errors are treated as
negotiation errors, leading to inconsistencies when surfacing the
error to end users.
If a peer is acting as the answerer and is a transceiver, acquire the local
streams _before_ the actual negotiation so that gUM errors are surfaced
correctly (and we spare unneeded negotiation steps).
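A rough sketch of that reordering in a generic answerer flow (the function
itself is hypothetical):

```javascript
// Sketch: answerer flow where gUM runs before negotiation so capture
// errors surface as device/permission errors, not negotiation errors.
async function answerAsTransceiver(peerConnection, remoteOffer, constraints) {
  // 1. Acquire local media first; if this fails, no negotiation work is
  //    wasted and the error is reported for what it is.
  const stream = await navigator.mediaDevices.getUserMedia(constraints);
  stream.getTracks().forEach((track) => peerConnection.addTrack(track, stream));

  // 2. Only then run the actual negotiation steps.
  await peerConnection.setRemoteDescription(remoteOffer);
  const answer = await peerConnection.createAnswer();
  await peerConnection.setLocalDescription(answer);

  return peerConnection.localDescription;
}
```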
This is an initial, experimental implementation of the feature proposed in
https://github.com/bigbluebutton/bigbluebutton/issues/14021.
The intention is to phase out the explicit listen only mode with two
overarching goals:
- Reduce UX friction and increase familiarity: the existence of a separate
listen only mode is a source of confusion for the majority of users
- Reduce average server-side CPU usage while also making it possible to
have full audio-only meetings.
The proof of concept works based on the assumption that a "many
concurrent active talkers" scenario is both rare and not useful. With
that in mind, it includes two server-side triggers (roughly sketched
after the list below):
- On microphone inactivity (currently a mute action sustained for
4 seconds, configurable): FreeSWITCH channels are held (which translates
to much lower CPU usage, virtually 0%). Receiving channels are switched,
server side, to a listening mode (SFU, mediasoup).
* This required an extension to mediasoup to allow re-assigning producers
to already established consumers. No renegotiation is done.
- On microphone activity (currently unmute action, immediate):
FreeSWITCH channels are unheld, listening mode is deactivated and the
mute state is updated accordingly (in this order).
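The two triggers can be pictured roughly like this; the channel API, the
orchestrator shape and the delay constant are hypothetical stand-ins, not the
actual bbb-webrtc-sfu/akka-apps code:

```javascript
// Sketch of the two triggers (hypothetical channel API). On sustained
// mute the FreeSWITCH channel is held and its receiving end switched to
// listening mode; on unmute everything is reverted before the mute state
// update, in that order.
const MUTE_HOLD_DELAY_MS = 4000; // configurable in the real implementation

const holdTimers = new Map();

function onUserMuted(channel) {
  const timer = setTimeout(async () => {
    await channel.hold();                  // held channel: virtually 0% CPU
    await channel.switchToListeningMode(); // producer reassignment, no renegotiation
  }, MUTE_HOLD_DELAY_MS);
  holdTimers.set(channel.id, timer);
}

async function onUserUnmuted(channel) {
  clearTimeout(holdTimers.get(channel.id));
  holdTimers.delete(channel.id);
  await channel.unhold();               // 1. unhold the FreeSWITCH channel
  await channel.deactivateListening();  // 2. leave listening mode
  await channel.updateMuteState(false); // 3. update the mute state last
}
```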
This is *off by default*. It needs to be enabled in two places:
- `/etc/bigbluebutton/bbb-webrtc-sfu/production.yml` ->
`transparentListenOnly: true`
- End users:
* Server wide: `/etc/bigbluebutton/bbb-html5.yml` ->
`public.media.transparentListenOnly: true`
* Per user: `userdata-bbb_transparent_listen_only=true`
There's an edge case in finicky networks where ALG-like firewalls
tamper with USE-CANDIDATE STUN packets and, consequently, bork ICE-lite
connectivity establishment. The odd part is that client-side gathering
seems to complete if intermediate STUN bindings work (before the final
USE-CANDIDATE), which may cause the peer not to generate relay
candidates, meaning connectivity fails.
This adds the `public.kurento.gatheringTimeout` option to forcefully extend
the candidate gathering window in peers that act as offerers. The
behavior is as follows: if the flag is set (in ms), the peer will wait
either for the gathering complete stage or, _at most_,
`public.kurento.gatheringTimeout` ms before proceeding with the calls chained
to setLocalDescription.
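A minimal sketch of that waiting behavior, assuming a plain RTCPeerConnection
and an offerer flow (the real code lives in the peer wrapper):

```javascript
// Sketch: wait for ICE gathering to complete OR for the configured
// timeout, whichever comes first, before using the local description.
function waitForGathering(peerConnection, gatheringTimeout) {
  if (peerConnection.iceGatheringState === 'complete') return Promise.resolve();

  return new Promise((resolve) => {
    let timeoutId;
    const onChange = () => {
      if (peerConnection.iceGatheringState !== 'complete') return;
      clearTimeout(timeoutId);
      peerConnection.removeEventListener('icegatheringstatechange', onChange);
      resolve();
    };
    peerConnection.addEventListener('icegatheringstatechange', onChange);
    // Cap the wait at the configured window (flag value in ms).
    timeoutId = setTimeout(() => {
      peerConnection.removeEventListener('icegatheringstatechange', onChange);
      resolve();
    }, gatheringTimeout);
  });
}

// Usage in an offerer:
// await peerConnection.setLocalDescription(await peerConnection.createOffer());
// await waitForGathering(peerConnection, gatheringTimeout);
// signalOffer(peerConnection.localDescription);
```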
This option is disabled by default and intentionally omitted from the
base settings.yml file so as not to encourage its use. Don't use it unless
you know what you're doing :).
Same rationale as in video-provider's commit
(34fa37ae4f092af4a5aef0cf01d96c033d97473c).
This commit does the following (a rough sketch follows the list):
- Implement actual heartbeat checks to trigger reconnects when
necessary
- Properly catch and log WebSocket.send errors
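A rough sketch of both points, assuming a browser WebSocket, a server that
echoes 'pong', and a hypothetical reconnect hook:

```javascript
// Sketch: ping-style heartbeat that triggers a reconnect when the server
// stops answering, plus defensive handling around WebSocket.send.
const HEARTBEAT_INTERVAL_MS = 10000; // assumed value

function safeSend(ws, payload) {
  try {
    ws.send(payload);
    return true;
  } catch (error) {
    console.error('WebSocket send failed', error);
    return false;
  }
}

function startHeartbeat(ws, onDead) {
  let alive = true;

  ws.addEventListener('message', ({ data }) => {
    if (data === 'pong') alive = true;
  });

  const interval = setInterval(() => {
    if (!alive) {
      clearInterval(interval);
      onDead(); // e.g. tear down and schedule a reconnect
      return;
    }
    alive = false;
    safeSend(ws, 'ping');
  }, HEARTBEAT_INTERVAL_MS);

  ws.addEventListener('close', () => clearInterval(interval));
}
```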
Other bridges may not need to force relay traffic on
Firefox, as @prlanzarin pointed out. So set the default to a
configuration that works out of the box, but leave other choices to the
operator.
The option is moved from the kurento namespace to media, next to the general
forceRelay option.
Firefox has a buggy ICE implementation and needs WebRTC media traffic to
be routed through a TURN server to work reliably with mediasoup.
Use the information fetched by the STUN API to determine if the operator
has configured a TURN server. If there is one, force Firefox to use it.
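The gist, sketched under the assumption that the ICE server list has already
been fetched (the actual detection/config plumbing in the client differs):

```javascript
// Sketch: force relay on Firefox only when a TURN server is actually
// configured, otherwise keep the default ICE transport policy.
function buildRTCConfiguration(iceServers, forceRelayOnFirefox) {
  const isFirefox = /firefox/i.test(navigator.userAgent);
  const hasTurnServer = iceServers.some((server) => {
    const urls = [].concat(server.urls || server.url || []);
    return urls.some((url) => url.startsWith('turn:') || url.startsWith('turns:'));
  });

  return {
    iceServers,
    iceTransportPolicy: (forceRelayOnFirefox && isFirefox && hasTurnServer)
      ? 'relay'
      : 'all',
  };
}
```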
Closes #16164
RTCRtpSender exposes DSCP marking via `networkPriority` in the encodings
configuration dictionaries. That should allow us to control
QoS priorities for different media streams, e.g. audio with higher network
priority than video. The only browser that implements it right
now is Chromium.
To use this, set the `public.app.media.networkPriorities` configuration in
settings.yml. Audio, camera and screenshare priorities can be controlled
separately. For further info on the possible values, see:
- https://www.w3.org/TR/webrtc-priority/
- https://datatracker.ietf.org/doc/html/rfc8837#section-5
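A minimal sketch of how such a priority could be applied to a sender,
configuration wiring aside (only Chromium honors it today):

```javascript
// Sketch: apply a DSCP-backed network priority to an RTCRtpSender via the
// encodings dictionaries.
async function applyNetworkPriority(sender, priority) {
  // priority: 'very-low' | 'low' | 'medium' | 'high'
  const parameters = sender.getParameters();
  if (!parameters.encodings || parameters.encodings.length === 0) {
    parameters.encodings = [{}];
  }
  parameters.encodings.forEach((encoding) => {
    encoding.networkPriority = priority;
  });
  await sender.setParameters(parameters);
}

// e.g. give audio higher priority than video:
// applyNetworkPriority(audioSender, 'high');
// applyNetworkPriority(videoSender, 'low');
```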
Use the built-in getLocalStream from the peer wrapper instead (which
relies on getSenders - the proper, spec-compliant way; sketched below).
Two different issues are addressed:
- RTCPeerConnection.getLocalStreams is part of the pre-1.0 WebRTC spec; it is
deprecated and not recommended.
- Fixed an issue where a switch from full audio to listen only could
cause the latter to be rejected with an error 1004 in rare scenarios.
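For reference, a getSenders-based replacement looks roughly like this (the
actual wrapper method may differ):

```javascript
// Sketch: rebuild the local stream from the spec-compliant getSenders()
// API instead of the deprecated getLocalStreams().
function getLocalStream(peerConnection) {
  const tracks = peerConnection
    .getSenders()
    .map((sender) => sender.track)
    .filter((track) => track != null);

  return tracks.length > 0 ? new MediaStream(tracks) : null;
}
```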
There could be a race condition where the local getDisplayMedia stream ends
(e.g. via Chrome's "Stop sharing" toast) while the broker hasn't finished starting.
That would lead to a scenario where the broker wouldn't emit an end event,
causing screen sharing to be flagged as started with a blank/invalid stream.
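One way to guard against that, sketched with a hypothetical broker API:

```javascript
// Sketch: detect a display stream that ended while the broker was still
// starting and end the session explicitly instead of leaving a blank share.
async function startScreenShare(broker, stream) {
  const [videoTrack] = stream.getVideoTracks();

  await broker.start(stream); // may take a while (signaling, negotiation...)

  // If the user hit "Stop sharing" mid-start, the track is already dead
  // and the broker will never emit its own end event for it.
  if (!videoTrack || videoTrack.readyState === 'ended') {
    broker.stop();
    throw new Error('Screen sharing stream ended before the broker started');
  }

  videoTrack.onended = () => broker.stop();
}
```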
- Remove the old listen only bridge (kurento.js), superseded by the equivalent
and equally stable (AS FAR AS LISTEN ONLY IS CONCERNED) sfu-audio-bridge
- Rename FullAudioBridge.js -> sfu-audio-bridge.js
* A more generic name that better represents the capabilities and
the nature of the bridge
* The bridge name identifier in configuration is still the same
('fullaudio')
- Remove the FreeSWITCH listen only fallback
- Temporarily disable the "trickle ICE" pair gathering feature used
in SIP.js (which was always experimental, nonstandard and disabled
by default)
- Update settings.yml keys where relevant
- forceRelayOnFirefox: whether TURN/relay usage should be forced to work
around Firefox's lack of support for regular nomination when dealing with
ICE-lite peers (e.g. mediasoup).
* See: https://bugzilla.mozilla.org/show_bug.cgi?id=1034964
- iOS endpoints are excluded from the trigger because _all_ iOS browsers
are either native WebKit or WKWebView based (so they shouldn't be affected)
This commit allows users to join/leave audio using the fullaudio bridge.
This is still under development; to use it now, skipCheck must be set to
false and defaultFullAudioBridge to fullaudio. This depends on the newest
version of bbb-webrtc-sfu.
ICE-lite servers (e.g. mediasoup) don't need candidates signaled out-of-band;
neither does KMS in certain scenarios.
Disabling their signaling saves us some ticks in bbb-webrtc-sfu and some
bandwidth all around.
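Client side, the effect is roughly just not forwarding gathered candidates
when the remote is known to be ICE-lite; the signaling helper below is
hypothetical:

```javascript
// Sketch: skip out-of-band candidate signaling when the remote end is an
// ICE-lite server that learns candidates from STUN checks anyway.
function setupCandidateSignaling(peerConnection, signalCandidate, remoteIsIceLite) {
  if (remoteIsIceLite) return; // nothing to signal, saves messages and bandwidth

  peerConnection.addEventListener('icecandidate', ({ candidate }) => {
    if (candidate) signalCandidate(candidate);
  });
}
```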
Done to avoid false positives where the stream would transition to the
unhealthy UI state (or flicker between states) because 'disconnected' also
triggered it, even though it is not fatal. 'disconnected' has to be used
alongside getStats heuristics, which I removed due to issues with them, so
I'm hardening the transition to fatal states only.
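In handler form, the hardening amounts to something like this (the state names
are the standard RTCPeerConnection ones; the UI hook is hypothetical):

```javascript
// Sketch: only treat fatal connection states as unhealthy;
// 'disconnected' is transient and would cause flicker/false positives.
function monitorConnectionHealth(peerConnection, onFatal) {
  peerConnection.addEventListener('connectionstatechange', () => {
    switch (peerConnection.connectionState) {
      case 'failed':
      case 'closed':
        onFatal(peerConnection.connectionState); // surface the unhealthy state
        break;
      case 'disconnected':
        // Transient: may recover on its own. Without reliable getStats
        // heuristics, do not flag the stream as unhealthy here.
        break;
      default:
        break;
    }
  });
}
```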