This is a rework of the audio join procedure, designed without the explicit
listen only separation in mind. It is supposed to be used in conjunction with
the transparent listen only feature so that the distinction between
modes is seamless with minimal server-side impact. An abridged list of
changes:
- Let the user pick no input device when joining microphone while
allowing them to set an input device on the fly later on
- Give the user the option to join audio with no input device whenever
we fail to obtain input devices, with the option to try re-enabling
them on the fly later on
- Add the option to open the audio settings modal (echo test et al)
via the in-call device selection chevron
- Rework the SFU audio bridge and its services to support
adding/removing tracks on the fly without renegotiation
- Rework the SFU audio bridge and its services to support a new peer
role called "passive-sendrecv". That role is used by duplex peers
that have no active input source on start, but might have one later
on.
- Remove stale PermissionsOverlay component from the audio modal
- Rework how permission errors are detected using the Permissions API
- Rework the local echo test so that it uses a separate media tag
rather than the remote
- Add new, separate dialplans that mute and put FreeSWITCH channels on
hold based on UA strings. This is orchestrated server-side via
webrtc-sfu and akka-apps. The basic difference here is that channels
now join in their desired state rather than waiting for client side
observers to sync the state up. It also mitigates transparent listen
only performance edge cases on multiple audio channels joining at
the same time.
The old, decoupled listen only mode is still present in code while we
validate this new approach. To test this, transparentListenOnly
must be enabled and listen only mode must be disabled on audio join so
that the user goes straight to microphone join.
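The on-the-fly track add/remove mentioned in the list above can be done
without renegotiation through the standard RTCRtpSender.replaceTrack API.
A minimal sketch of the idea (the helper name and transceiver lookup are
illustrative, not the actual bridge code):

```typescript
// Sketch: toggle the outbound audio track on an established peer connection
// without renegotiating, using RTCRtpSender.replaceTrack. Assumes the
// connection already has an audio transceiver.
async function setInputTrack(
  pc: RTCPeerConnection,
  track: MediaStreamTrack | null,
): Promise<void> {
  const transceiver = pc.getTransceivers()
    .find((t) => t.receiver.track?.kind === 'audio');

  if (!transceiver) throw new Error('No audio transceiver available');

  // replaceTrack(null) stops sending without tearing the sender down;
  // replaceTrack(track) re-enables sending later - no SDP exchange needed.
  await transceiver.sender.replaceTrack(track);
}
```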
* Refactor: Make bundle using webpack
* Fix: restore after-install code and a few settings
* Fix: build script folder permission
* Refactor: Remove support for async import on audio bridges
* Upgrade npm using nvm
* Avoid questions on npm ci execution
* Let npm ci install dev dependencies (as we need the build tools here)
* Fix: encoding
* Fix: old lock files
* Remove: bbb-config dependency on the bbb-html5 service; bbb-html5 isn't a service anymore
* Fix: TS errors
* Fix: eslint
* Fix: chat styles
* npm install with "lockfileVersion": 3 (newer npm)
* build: allow nodejs 22
* node 22; drop meteor from CI and bbb-conf
* TEMP: use bbb-install without mongo but with node 22 and newer image
* build: relax nodejs condition to not trip 22.6
* build: ensure dir /usr/share/bigbluebutton/nginx exists
* init sites-available/bbb; drop disable-transparent-
* nginx complaining of missing file and ;
* TMP: print status of services
* WIP: tweak nginx location to debug
* Fix: webcam widgets alignments
* akka-apps -- update location of settings.yml
* build: add locales path for nginx
* docs and config changes for removal of meteor
* Fix: build encoding and locales endpoint folder path
* build: set wss url for media
* Add: Enable minimizer and switch it to Terser
* Fix: TS errors
---------
Co-authored-by: Tiago Jacobs <tiago.jacobs@gmail.com>
Co-authored-by: Anton Georgiev <anto.georgiev@gmail.com>
Co-authored-by: Anton Georgiev <antobinary@users.noreply.github.com>
If the autoplay block is triggered in listen only, the connection timer
keeps ticking even if the user correctly accepts the audio play prompt.
That causes an audio re-connect once the timeout expires.
Clear the connection timer if the audio bridge starts with
NotAllowedError as a soft error. For connection purposes, the audio join
procedure worked. The autoplay block is a UI/UX level issue, not a WebRTC one.
This is an initial, experimental implementation of the feature proposed in
https://github.com/bigbluebutton/bigbluebutton/issues/14021.
The intention is to phase out the explicit listen only mode with two
overarching goals:
- Reduce UX friction and increase familiarity: the existence of a separate
listen only mode is a source of confusion for the majority of users
- Reduce average server-side CPU usage while also making it possible
to have full audio-only meetings.
The proof-of-concept works based on the assumption that a "many
concurrent active talkers" scenario is both rare and not useful. With
that in mind, this includes two server-side triggers:
- On microphone inactivity (currently a mute action sustained for
4 seconds, configurable): FreeSWITCH channels are held (which translates
to much lower CPU usage, virtually 0%). Receiving channels are switched,
server side, to a listening mode (SFU, mediasoup).
* This required an extension to mediasoup to allow re-assigning producers
to already established consumers. No re-negotiation is done.
- On microphone activity (currently unmute action, immediate):
FreeSWITCH channels are unheld, listening mode is deactivated and the
mute state is updated accordingly (in this order).
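A rough sketch of the inactivity trigger described above, assuming a
hypothetical holdChannel/unholdChannel orchestration layer (the real logic
lives in webrtc-sfu/akka-apps) and the configurable 4-second window:

```typescript
// Sketch: hold the FreeSWITCH channel only after a mute sustained for the
// configurable window; unhold immediately on unmute.
// holdChannel/unholdChannel are hypothetical stand-ins for the real
// orchestration calls.
declare function holdChannel(channelId: string): void;
declare function unholdChannel(channelId: string): void;

const HOLD_DELAY_MS = 4000; // configurable

const holdTimers = new Map<string, ReturnType<typeof setTimeout>>();

function onMuteStateChange(channelId: string, muted: boolean): void {
  const pending = holdTimers.get(channelId);
  if (pending !== undefined) {
    clearTimeout(pending);
    holdTimers.delete(channelId);
  }

  if (muted) {
    // Hold only if the mute is sustained for the whole window.
    const timer = setTimeout(() => {
      holdChannel(channelId); // held FS channel -> virtually 0% CPU
      holdTimers.delete(channelId);
    }, HOLD_DELAY_MS);
    holdTimers.set(channelId, timer);
  } else {
    // Unmute is immediate: unhold first, mute state is synced afterwards.
    unholdChannel(channelId);
  }
}
```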
This is *off by default*. It needs to be enabled in two places:
- `/etc/bigbluebutton/bbb-webrtc-sfu/production.yml` ->
`transparentListenOnly: true`
- End users:
* Server wide: `/etc/bigbluebutton/bbb-html5.yml` ->
`public.media.transparentListenOnly: true`
* Per user: `userdata-bbb_transparent_listen_only=true`
SFU based audio is missing connection timers, which means the join
procedure can go on indefinitely in a couple of scenarios.
Refactor the connection timers added for re-connections in the SFU audio
bridge and make them valid for the first try as well.
Make 1010 errors (connection timeout) retriable when retryThroughRelay
is enabled.
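A simplified sketch of how a join attempt can be bounded by such a timer so
the first try also fails fast with the 1010 (connection timeout) code; names
are illustrative:

```typescript
// Sketch: race the bridge's join promise against a connection timer so the
// first attempt (not only reconnections) fails fast with error code 1010.
const CONNECTION_TIMEOUT_CODE = 1010;

function joinWithTimeout<T>(joinPromise: Promise<T>, timeoutMs: number): Promise<T> {
  return new Promise<T>((resolve, reject) => {
    const timer = setTimeout(() => {
      reject({ errorCode: CONNECTION_TIMEOUT_CODE, errorMessage: 'Audio join timed out' });
    }, timeoutMs);

    joinPromise
      .then((result) => { clearTimeout(timer); resolve(result); })
      .catch((error) => { clearTimeout(timer); reject(error); });
  });
}
```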
1007 errors are still a large fraction of our overall audio join error
rate. This usually indicates some sort of firewall block or UDP issues
in carrier networks. I can't figure out why some scenarios won't trickle
down to relay candidates though - I'm leaning toward scenarios where STUN
packets with USE-CANDIDATE are being mangled/lost along the way or
something else that borks the (already fragile) conn checks for ICE-lite
implementations.
Add a new feature called retryThroughRelay which triggers a retry with
iceTransportPolicy=relay whenever audio fails to join with a 1007 error.
The goal is to force relay usage to try and bypass 1007 scenarios that
still happen.
Disabled by default.
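An illustrative sketch of the retry, assuming a join helper that accepts an
iceTransportPolicy option; only the flag name and the 1007 code come from the
text above, everything else is hypothetical:

```typescript
// Sketch: retry a failed audio join with iceTransportPolicy forced to
// 'relay' when the failure is an ICE negotiation error (1007) and
// retryThroughRelay is enabled.
const ICE_NEGOTIATION_FAILED = 1007;

async function joinAudioWithRelayFallback(
  join: (options: { iceTransportPolicy: RTCIceTransportPolicy }) => Promise<void>,
  retryThroughRelay: boolean,
): Promise<void> {
  try {
    await join({ iceTransportPolicy: 'all' });
  } catch (error: any) {
    if (retryThroughRelay && error?.errorCode === ICE_NEGOTIATION_FAILED) {
      // Force TURN relay usage to try and bypass mangled ICE checks.
      await join({ iceTransportPolicy: 'relay' });
    } else {
      throw error;
    }
  }
}
```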
The current Vosk CC provider does not support stereo mic streams
(pending investigation as to why).
This commit makes sure stereo is forcefully disabled via SDP munging
only when transcription is active and using Vosk. Having it disabled
on the server side (FreeSWITCH) is not enough because the stereo parameter
is client-mandated and replicated by FS in its answer. So we need to
make sure it's always disabled for the time being.
SFU audio does munging server side (and stereo is always off), so no changes
needed there.
The rest of the providers (except WebSpeech) need to be validated against
stereo audio as well.
This is also intended to be temporary - ideally this needs to be fixed in
mod_audio_fork/Vosk/wherever this is breaking.
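A minimal sketch of the kind of client-side SDP munging described above,
forcing stereo off on the Opus fmtp parameters (the conditions - transcription
active and using Vosk - are omitted):

```typescript
// Sketch: disable stereo on Opus by munging the client's SDP fmtp
// parameters. Only meant to run when transcription is active and using Vosk.
function disableStereo(sdp: string): string {
  return sdp
    .replace(/stereo=1/g, 'stereo=0')
    .replace(/sprop-stereo=1/g, 'sprop-stereo=0');
}
```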
Audio state callback and remote media setup both depend on FS's state
(which comes through Meteor) and the ICE state (local, peer connection). The
caveat: FS's state can arrive delayed in reconnection scenarios because
Meteor's websocket generally takes significantly longer to re-connect than
the peer connection, which means the ICE state gets completed way before FS
is flagged as ready.
The practical issue: while outbound audio (client -> FS) will work, inbound
audio (FS -> client) won't _just because it wasn't played_ (even though
data is coming through).
This commit decouples the remote media setup step from the state
callback by doing the following:
- Setup remote media when ICE state is completed
- Run the state callback only after FS is flagged as ready. This
should keep the UI state consistent across client and server.
Keep in mind the assumption that if FS is ready, ICE is completed by
consequence.
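A schematic sketch of the decoupling with illustrative handler names: remote
media is attached as soon as ICE completes, while the state callback waits for
FS to be flagged as ready:

```typescript
// Sketch: attach remote media as soon as ICE completes so inbound audio
// plays even when FS readiness (via Meteor) lags behind; run the state
// callback only once FS is flagged as ready.
declare const peerConnection: RTCPeerConnection;
declare function attachRemoteMedia(pc: RTCPeerConnection): void;
declare function runStateCallback(state: 'started'): void;

peerConnection.oniceconnectionstatechange = () => {
  const state = peerConnection.iceConnectionState;
  if (state === 'connected' || state === 'completed') {
    attachRemoteMedia(peerConnection);
  }
};

// FS ready implies ICE is already completed, so this only gates the UI state.
function onFreeswitchReady(): void {
  runStateCallback('started');
}
```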
There's an edge case in finicky networks where ALG-like firewalls
tamper with USE-CANDIDATE STUN packets and, consequently, bork ICE-lite
connectivity establishment. The odd part is that client-side gathering
seems to complete if intermediate STUN bindings work (before the final
USE-CANDIDATE), which may cause the peer not to generate relay
candidates == connectivity fails.
This adds the `public.kurento.gatheringTimeout` option to forcefully extend
the candidate gathering window in peers that act as offerers. The
behavior is as follows: if the flag is set (in ms), the peer will wait
for either the gathering completed stage or, _at most_,
public.kurento.gatheringTimeout ms before proceeding with calls chained
to setLocalDescription.
This option is disabled by default and intentionally omitted from the
base settings.yml file so as not to encourage its use. Don't use it unless
you know what you're doing :).
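For illustration only, a sketch of the offerer-side wait: proceed to the
calls chained to setLocalDescription either when gathering completes or after
the configured timeout, whichever comes first:

```typescript
// Sketch: wait for ICE gathering to complete, but never longer than the
// configured gatheringTimeout (ms), before proceeding with the calls
// chained to setLocalDescription.
function waitForGathering(pc: RTCPeerConnection, gatheringTimeout?: number): Promise<void> {
  if (pc.iceGatheringState === 'complete') return Promise.resolve();

  return new Promise((resolve) => {
    let timer: ReturnType<typeof setTimeout> | undefined;

    const onChange = () => {
      if (pc.iceGatheringState === 'complete') {
        if (timer !== undefined) clearTimeout(timer);
        pc.removeEventListener('icegatheringstatechange', onChange);
        resolve();
      }
    };

    pc.addEventListener('icegatheringstatechange', onChange);

    if (gatheringTimeout) {
      timer = setTimeout(() => {
        pc.removeEventListener('icegatheringstatechange', onChange);
        resolve(); // proceed with whatever candidates we have
      }, gatheringTimeout);
    }
  });
}
```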
Mostly benign, but exitAudio/forceExitAudio were throwing an unhandled
error when called on sessions with no active audio because the
underlying bridge methods did not check whether there was an active
session to stop beforehand.
There are some situations where previously set deviceIds (from
local/session storage) may become stale. This causes an unexpected
behavior where audio is temporarily borked until the user clears their
local storage.
This issue has been seen more recently on Safari endpoints when switching
back and forth between breakout rooms in environments running under iframes.
Also seen randomly on endpoints with virtual input devices.
This centralizes audio gUM calling into a single method that retries the
gUM procedure without pre-set deviceIds only if the initial call fails
with an OverconstrainedError - hopefully circumventing the issue.
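A simplified sketch of the centralized gUM call with the OverconstrainedError
fallback (constraint shapes are illustrative):

```typescript
// Sketch: call getUserMedia with the preferred (possibly stale) deviceId
// constraints and retry once without deviceIds if the browser rejects them
// with an OverconstrainedError.
async function getAudioStream(constraints: MediaStreamConstraints): Promise<MediaStream> {
  try {
    return await navigator.mediaDevices.getUserMedia(constraints);
  } catch (error: any) {
    if (error?.name === 'OverconstrainedError') {
      // Stale deviceId from local/session storage: drop device constraints
      // and fall back to the default input device.
      return navigator.mediaDevices.getUserMedia({ audio: true });
    }
    throw error;
  }
}
```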
There's no rollback procedure in case a device switch fails right now,
nor do the code entry points that call the switching procedures wait
for resolution or failure before marking the new device as chosen. That
may cause inconsistent states in a couple of ways:
- No rollback: switch fails, audio is still on but no actual
microphone input is being transmitted
- Not waiting for resolutions: inconsistent chosen devices on failures
Device switching errors are also not surfaced to the end user.
This commit:
- Adds device rollback and proper resolution/failure response
awaits to try and make the state a bit more consistent.
- Centralizes the input device switching code to be reused between
different bridges
- Centralizes device ID state management in audio-manager to try and
keep them a bit more consistent across the board
- Surfaces device switching failures to the end user
- Guarantees device IDs are set in session storage in all
appropriate scenarios
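An illustrative sketch of the rollback behavior described above;
liveChangeInputDevice and notifyError are hypothetical stand-ins for the
bridge-level switch call and the user-facing error notification:

```typescript
// Sketch: switch input devices, await the result, only mark the device as
// chosen on success, and roll back (surfacing the error) on failure.
declare function liveChangeInputDevice(deviceId: string): Promise<void>;
declare function notifyError(message: string): void;

let currentInputDeviceId = '';

async function changeInputDevice(deviceId: string): Promise<void> {
  const previousDeviceId = currentInputDeviceId;

  try {
    await liveChangeInputDevice(deviceId);
    currentInputDeviceId = deviceId;
    sessionStorage.setItem('audioInputDeviceId', deviceId);
  } catch (error) {
    // Rollback: surface the failure, then try to restore the previous device.
    notifyError('Input device change failed');
    if (previousDeviceId) {
      await liveChangeInputDevice(previousDeviceId).catch(() => {});
    }
    throw error;
  }
}
```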
RTCRtpSender exposes DSCP marking via `networkPriority` in the encodings
configuration dictionaries. That should allow us to control
QoS priorities for different media streams, e.g. audio with a higher network
priority than video. The only browser that implements that right
now is Chromium.
To use this, set the public.app.media.networkPriorities configuration in
settings.yml. Audio, camera and screenshare priorities can be controlled
separately. For further info on the possible values, see:
- https://www.w3.org/TR/webrtc-priority/
- https://datatracker.ietf.org/doc/html/rfc8837#section-5
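A short sketch of applying `networkPriority` through RTCRtpSender.setParameters
(the cast is there because the attribute comes from the webrtc-priority spec
and may be missing from older TypeScript DOM typings):

```typescript
// Sketch: mark the audio sender's encoding with a network (DSCP) priority.
// Only Chromium currently honors networkPriority.
async function setAudioNetworkPriority(
  sender: RTCRtpSender,
  priority: RTCPriorityType, // 'very-low' | 'low' | 'medium' | 'high'
): Promise<void> {
  const parameters = sender.getParameters();
  if (!parameters.encodings?.length) return; // nothing negotiated yet

  // Cast: networkPriority is defined by the webrtc-priority spec.
  (parameters.encodings[0] as any).networkPriority = priority;
  await sender.setParameters(parameters);
}
```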
Sometimes the handler that listens for state changes in callState is
not updated correctly.
In these rare cases, callState changes directly to in_conference instead of
taking the expected path: call_started -> in_echo_test -> in_conference.
There are scenarios where the full audio broker (SFU) stop procedure
may be called multiple times within a very short time span - e.g. a concurrent
stop + connection failure; a timeout in the transfer procedure + a
reconnect attempt, [...]. When that happens, calls to exitAudio may throw
errors if the broker was already released - and that's not the expected
behavior.
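A small sketch of the guard: make the broker stop path idempotent so
duplicate exitAudio calls become no-ops once the broker has been released
(names are illustrative):

```typescript
// Sketch: idempotent stop - duplicate/concurrent exitAudio calls become
// no-ops once the broker has already been released.
class AudioBroker {
  private released = false;

  stop(): void {
    if (this.released) return; // already released: nothing to do
    this.released = true;
    // ... tear down the peer connection, detach media, clear timers ...
  }
}
```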
FreeSWITCH has mDNS resolution capabilities as of 1.10.7. Having the filtering
configurable in the client allows us to field trial whether we should keep that
on or off. The default is still to filter them out because FreeSWITCH does not
resolve mDNS candidates by default (ice_resolve_candidate in switch.conf.xml).
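A sketch of the configurable filtering: drop `.local` (mDNS) host candidates
before forwarding them unless the client is configured to keep them; the
option name is illustrative:

```typescript
// Sketch: optionally drop mDNS (.local) host candidates before forwarding
// them to FreeSWITCH, which only resolves them if ice_resolve_candidate
// is enabled in switch.conf.xml.
function shouldForwardCandidate(
  candidate: RTCIceCandidate,
  filterMdnsCandidates: boolean,
): boolean {
  if (!filterMdnsCandidates) return true;
  return !candidate.candidate.includes('.local');
}
```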
- Remove the old listen only bridge (kurento.js), superseded by the equivalent
and equally stable (AS FAR AS LISTEN ONLY IS CONCERNED) sfu-audio-bridge
- Rename FullAudioBridge.js -> sfu-audio-bridge.js
* A more generic name that better represents the capabilities and
the nature of the bridge
* The bridge name identifier in configuration is still the same
('fullaudio')
- Remove the FreeSWITCH listen only fallback
- Temporarily disable the "trickle ICE" pair gathering feature used
in SIP.js (which was always experimental, nonstandard and disabled
by default)
- Updates to settings.yml keys in places where relevant
"default" is not an universally valid default value for deviceIds which was causing issues with Firefox and Safari in some specific scenarios where exact deviceId constraints were being used