Safari may enter a microphone permission check loop due to buggy behavior
in the Permissions API. When permission isn't permanently denied, gUM
requests fail with a NotAllowedError for a few seconds. During this time,
the permission state remains 'prompt' instead of transitioning to 'denied'
and back to 'prompt' after the timeout.
This leads to a loop when retrying while in the 'prompt' + blocked state:
1) the client checks permission via the API, 2) receives 'prompt' and so
tries gUM, 3) gUM fails, 4) the client returns to the modal and checks
permission again because the API still reports 'prompt'.
Additionally, the `isUsingAudio` flag incorrectly counts the local echo
test/audio settings modal as "using audio," which toggles the flag on/off,
triggering the useEffect that causes the loop more frequently.
To fix this, remove the unnecessary AudioModal permission check that
causes the loop. Also, exclude "isEchoTest" from the `isUsingAudio` flag.
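A minimal sketch of the `isUsingAudio` change, with illustrative names (the actual flag derivation in the client may differ):

```typescript
// Sketch only: derive `isUsingAudio` from connection state without counting
// the local echo test/audio settings modal as an active audio session.
interface AudioFlags {
  isConnected: boolean;
  isConnecting: boolean;
  isHangingUp: boolean;
  isEchoTest: boolean;
}

// Before, `isEchoTest` was OR'ed in, so opening/closing the echo test modal
// toggled the flag and re-ran the permission-check effect.
const isUsingAudio = (flags: AudioFlags): boolean =>
  flags.isConnected || flags.isConnecting || flags.isHangingUp;
```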
mediasoup workers are currently general-purpose and shared across all
stream types. This makes it difficult to assign different scheduling
priorities to audio workers or to shield audio from interference caused
by video streams, when our goal is to give audio higher priority at
every point in the system.
Set `mediasoup.dedicatedMediaTypeWorkers.audio` to `auto`. This will spin up
`ceil((min(nproc, 32) * 0.8) + max(0, nproc - 32))` mediasoup workers
dedicated to handling audio streams.
See https://github.com/bigbluebutton/bigbluebutton/issues/20966.
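For reference, a small sketch of the worker-count formula above (illustrative only; the actual computation lives in the SFU):

```typescript
import * as os from 'os';

// 80% of the first 32 cores, plus every core beyond the 32nd.
const dedicatedAudioWorkers = (nproc: number = os.cpus().length): number =>
  Math.ceil(Math.min(nproc, 32) * 0.8 + Math.max(0, nproc - 32));

// e.g. nproc = 8  -> ceil(6.4)       = 7 workers
//      nproc = 48 -> ceil(25.6 + 16) = 42 workers
```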
The order of the configured mediasoup listenIps influences the server's candidate priorities.
The impact is small, but to be friendly to RFC 8421 (and to Happy Eyeballs), the docs should suggest sorting IPv6 first.
Change docs so that the IPv6 config in mediasoup comes first.
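An illustrative ordering (addresses are documentation ranges, not real values):

```typescript
// IPv6 listed first so candidate priorities favor it, per RFC 8421.
const listenIps = [
  { ip: '::', announcedIp: '2001:db8::1' },       // IPv6 first
  { ip: '0.0.0.0', announcedIp: '203.0.113.10' }, // IPv4 second
];
```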
Firefox incorrectly displays placeholder audio device labels in the audio
settings/echo test modal when audio is disconnected. This issue arises
due to two quirks:
- Firefox does not support the 'microphone' query from the Permissions
API, causing a fallback gUM permission check.
- Firefox omits device labels from `enumerateDevices` if no streams
are active, even if gUM permission is granted. This behavior differs
from other browsers and causes our `enumerateDevices` handling to
assume that granted permission implies labels are present. That
assumption fails because we clear streams before resolving the fallback gUM check.
We now run an additional `enumerateDevices` call in `AudioSettings` when
a selected input device is defined. This ensures `enumerateDevices` is
re-run when a new stream is active, adding the correct device labels in
Firefox and improving device listings in all browsers. We've also
enhanced error handling in the enumeration process and fixed a false
positive in `hasMicrophonePermission`.
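A rough sketch of the extra enumeration step, with hypothetical wiring (the actual `AudioSettings` code differs):

```typescript
// Re-enumerate once an input device/stream is selected so Firefox returns
// labels, rather than assuming granted permission implies labels exist.
async function refreshInputDevices(selectedInputDeviceId?: string): Promise<MediaDeviceInfo[]> {
  if (!selectedInputDeviceId) return [];

  const devices = await navigator.mediaDevices.enumerateDevices();
  // With a live stream, Firefox populates labels and the selector can render
  // meaningful entries; other browsers also benefit from the fresher list.
  return devices.filter((d) => d.kind === 'audioinput');
}
```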
There's a regression in 3.0's I/O device selector where default output
devices are not marked as selected in the input-stream-live-selector
component unless the user explicitly selects them. This issue can also affect
input devices, although less commonly than output due to the system's ability
to infer the selected input device ID after the user joins audio.
When no currentDeviceId is set in the client, treat the first device
returned by enumerateDevices as the system default and hence as selected,
in accordance with the "Media Capture and Streams API", Section 9.2
(the enumerateDevices algorithm).
* test: update and improve Ask for feedback on logout test - add more steps, check for different buttons, check POST request after sending feedback
* test: add missing data-test prop for sendFeendbackButton
* test: fix sendFeedbackButton data-test
When `listenOnlyMode` is `false` and the audio dialog's "Cancel" action is
clicked, the modal incorrectly re-renders instead of closing. Additionally,
the "Cancel" action is mislabeled as "Back."
This fix ensures the audio dialog closes properly when there are no options
to select (i.e., `listenOnlyMode=false`). The `skipAudioOptions` method is
revised to consider `listenOnlyMode` and ignore the "content" state.
Ignoring the "content" state allows options to be skipped even if a subscreen
is rendered (e.g., returning from the AudioSettings modal). The check for
`content == null` combined with `skipAudioOptions` is only necessary when
rendering the main modal. The `content == null` check has been moved to
the relevant section.
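A simplified sketch of the revised check (the real conditions in AudioModal are more involved):

```typescript
// With listenOnlyMode=false there is only one join path, so the options
// screen can be skipped; the "content" subscreen state no longer factors in
// here and is checked only where the main modal is rendered.
const skipAudioOptions = (
  { listenOnlyMode, isConnecting }: { listenOnlyMode: boolean; isConnecting: boolean },
): boolean => !listenOnlyMode || isConnecting;
```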
When listen only mode is deactivated and a user joins audio, an incorrect
remount of AudioSettings can trigger a spurious mute toggle. This happens
because AudioManager clears the `isConnecting` flag before setting the
`isConnected` flag. This creates a brief period where audio is flagged as
"disconnected," leading to a remount and unmount cycle that causes unwanted
mute/unmute actions due to AudioSettings' logic of muting/unmuting
active devices.
Ensure the `isConnected` flag is set before clearing the
`isConnecting` flag, preventing audio from being incorrectly flagged as
disconnected.
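A sketch of the ordering fix with hypothetical setters:

```typescript
// Mark audio as connected *before* clearing the connecting flag so there is
// no window where both are false (which remounts AudioSettings).
function onAudioJoined(
  setConnected: (v: boolean) => void,
  setConnecting: (v: boolean) => void,
) {
  setConnected(true);   // set first
  setConnecting(false); // then clear
}
```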
v2.14.1
---
* fix(screenshare): log presenter/viewer stop on all scenarios
* refactor(screenshare): add presenter data to viewer logs
* refactor(video): add video negotiation and flowing logs
* build(mediasoup): 3.14.9
Prevents presentation state changes made while media is being shared from
updating the presentation's last-state value. Additionally, adds a missing
prop value for the generic content state.
Removes the duplicate presentation pile dispatch for external video, as
an identical dispatch runs when the external video component is mounted.
This duplication did not cause any noticeable issues for the user but
resulted in the external video being added to the pile twice.
The plugin loader startup logs aren't following the logger convention,
which makes them hard to work with when post-processing logs.
The appended error message is also not useful since we're logging a raw
Event object (which either serializes to {} or to noise like { isTrusted:
... }).
Make the plugin "loaded" and "error" logs adhere to logger conventions.
In the future, the error log could use some tuning - there's no useful
info about root cause here.
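A sketch of a convention-following log call; the `logCode`/`extraInfo` shape is assumed from the client's logger usage elsewhere, and the field names are illustrative:

```typescript
// Hypothetical shapes, for illustration only.
type LogMeta = { logCode: string; extraInfo?: Record<string, unknown> };
declare const logger: {
  info: (meta: LogMeta, msg: string) => void;
  error: (meta: LogMeta, msg: string) => void;
};

const pluginName = 'example-plugin';
const pluginUrl = 'https://example.org/plugin.js';

logger.info({ logCode: 'plugin_loaded', extraInfo: { pluginName, pluginUrl } },
  `Loaded plugin ${pluginName}`);
// Instead of dumping a raw Event object, log fields that actually serialize.
logger.error({ logCode: 'plugin_load_error', extraInfo: { pluginName, pluginUrl } },
  `Plugin ${pluginName} failed to load`);
```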
- Move askForConfirmationOnLeave into AudioContainer
- Get rid of the unstable useMuteMicrophone hook, which returns a new reference every time the user gets muted/unmuted. Use the stable useToggleVoice hook instead (see the sketch below).
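A sketch of the stability concern (hook bodies are illustrative, not the real implementations):

```typescript
import { useCallback } from 'react';

// A hook that closes over the current mute state returns a new function
// identity on every mute/unmute, so effects depending on it keep re-running.
const useMuteMicrophoneUnstable = (muted: boolean, setMuted: (m: boolean) => void) =>
  useCallback(() => setMuted(!muted), [muted, setMuted]);

// A toggle-style hook does not depend on the mute state, so its identity
// stays stable across mute/unmute cycles.
const useToggleVoiceStable = (toggle: (userId: string) => void, userId: string) =>
  useCallback(() => toggle(userId), [toggle, userId]);
```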
The isSharing var is content-type agnostic, so camera-as-content streams
also trip the actions-bar button's loading state.
Change the loading flag to track isScreenBroadcasting
(contentType=screen, local || global) and isScreenGloballyBroadcasting
(contentType=screen, global only). Fixes the camera as content false
positive as well as the loading state itself.
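A sketch of the scoped flags (state shape assumed for illustration):

```typescript
interface ScreenshareState { contentType: 'screen' | 'camera'; isGlobal: boolean }

// Only screen content counts, so camera-as-content no longer flips the
// actions-bar loading state.
const isScreenBroadcasting = (s: ScreenshareState | null): boolean =>
  !!s && s.contentType === 'screen';

const isScreenGloballyBroadcasting = (s: ScreenshareState | null): boolean =>
  !!s && s.contentType === 'screen' && s.isGlobal;
```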
When going from "no mic" -> mic via the unmute action, the client isn't
unmuting itself after confirming the change. This is caused by not waiting
for the liveChangeInputDevice method (which returns a Promise) to resolve
before unmounting the AudioSettings modal -- the one responsible for
triggering the unmute. Since the modal unmounts before the device is
changed, the unmute action is ignored because the device is still
"listen-only" (no mic).
Properly unmute audio when transitioning from "no mic" -> "mic" via the
unmute trigger by waiting for liveChangeInputDevice to resolve.
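A sketch of the confirmation flow with illustrative signatures:

```typescript
// Wait for the device switch to finish before unmounting the modal, so the
// pending unmute targets the new microphone instead of the listen-only
// placeholder device.
async function confirmAudioSettings(
  liveChangeInputDevice: (deviceId: string) => Promise<void>,
  deviceId: string,
  unmute: () => void,
  closeModal: () => void,
) {
  await liveChangeInputDevice(deviceId); // previously not awaited
  unmute();
  closeModal();
}
```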
Additionally, some general improvements to UI/UX:
- Display the AudioSettings modal title when gUM is in prompt mode
- Add specific subtitles to the AudioSettings modal to 1) warn that no
mic is selected and 2) give a hint that the user can test their devices
- Always honor settings.yml's "initialHearingState" state (whether
local echo feedback should be played by default in AudioSettings)
We are missing a way to select transcription languages in some
scenarios, e.g., listenOnlyMode=false. The audio settings UI also does
not handle item layout very well on smaller devices.
This commit does the following to improve those blind spots:
- Add the transcription language selector to the audio settings UI
whenever applicable
- Add proper styling to the transcription selector
- Handle small screens by switching the layout of elements to
portrait mode
- Arrange elements in a more familiar view: Mic ->
Activity Indicator; Speaker -> Speaker test. This is more in line
with how other platforms do audio configuration/pre-flight screens.
The current default setting of muteOnStart=false can lead to performance
issues in larger rooms, especially with the "transparent listen only
mode" and LiveKit, as proven by load testing. In contrast,
muteOnStart=true helps mitigate these issues but complicates the entry
process for smaller meetings, such as 1-on-1 sessions or small classes.
Additionally, the ability to override muteOnStart via the API can create
scalability issues if not managed properly.
Add a new akka-apps flag `voiceConf.muteOnStartThreshold` which acts as
a trigger that forces muteOnStart=true when the number of audio
participants reaches the configured threshold. 0 means no threshold
(disabled).
This trigger overrides any API parameter or static configurations
related to muteOnStart, as well as the client's meeting mute actions.
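Conceptually, the trigger boils down to something like the following sketch (the real logic lives in akka-apps):

```typescript
// 0 disables the threshold; otherwise force muteOnStart once the number of
// audio participants reaches it.
const shouldForceMuteOnStart = (
  audioParticipants: number,
  muteOnStartThreshold: number,
): boolean => muteOnStartThreshold > 0 && audioParticipants >= muteOnStartThreshold;
```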
Pending:
- Remove MeetingStatus.meetingMute state side effects from the client's
"Mute all" and "Mute all except presenter" actions. They should no
longer alter the meeting mute state once this becomes default (just
mute users instead).
FS has an intermittent issue where unmuting a HELD channel sometimes
takes significantly longer than usual (on the order of seconds).
`conference <XYZ> unmute <WVU>` simply gets stuck with no FS_API response,
which delays the unmute action whenever transparent listen only is
active.
Apparently, unholding the channel PRIOR TO unmuting works around the
issue - at least it could not be reproduced with the scenario at hand.
The unmute API already triggers an unhold in FS internally, which is why
this was not done beforehand. The aforementioned issue is far worse than
an extra "redundant" API call, though.
Always unhold audio channels manually _before_ unmuting.
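A sketch of the new ordering, assuming the hold is managed via `uuid_hold` and an ESL API wrapper:

```typescript
declare function eslApi(command: string): Promise<string>;

// Take the channel off hold before issuing the conference unmute, instead of
// relying on the unmute API to unhold internally.
async function unmuteHeldMember(channelUuid: string, conference: string, memberId: string) {
  await eslApi(`uuid_hold off ${channelUuid}`);                // unhold first
  await eslApi(`conference ${conference} unmute ${memberId}`); // then unmute
}
```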
v2.14.0
---
* feat(mediasoup): add least-loaded worker balancing strategy
* feat(mediasoup): worker transposition (off by default)
* feat(audio): dynamic global audio bridge mechanism
* feat: livekit module, initial implementation
* feat(audio): add signaling support for passive-sendrecv role
* feat(freeswitch): overridable UA string
* feat(audio): muteOnStart detection for conditional dialplans
* feat(audio): mute passive-sendrecv clients on start
* feat(audio): support for mute-and-hold on start
* fix(audio): ignore TLO-incapable clients in hold/unhold metrics
* fix(audio): mute/unmute stuck due to inconsistent hold status
* fix(audio): hold/unhold loop when there are multiple sessions per user
* fix(audio): muteOnStart sessions incorrectly muted on breakout transfers
* fix(audio): header-provided userName incorrectly decoded
* fix(audio): stuck unmute due to borked callerIdNum
* fix(audio): correctly decode user name space chars
* !build(npm): set min Node.js version to >=18.0.0
* build: nodemon@3.1.3
* build: ws@8.17.1
* build(mediasoup): 3.14.8
FreeSWITCH incorrectly generates callerNum headers in its ESL events
when certain special characters are present, e.g.:
w_etc_0-bbbID-User;Semi (notice the semicolon) will be generated by
FS as w_etc_0-bbbID-User (everything after the semicolon is ignored).
This breaks callerId comparison for session matching in a few places -
one of them is the unmute/unhold toggle control.
Compare callerNum as prefixes instead of exact strings to match those
scenarios.
This is a temporary fix; we should review callerNum generation in the
future (use Caller-Id-Name in FSESL), as well as stop relying on it
for session matching (use UUIDs and/or client session numbers instead).
Both of these are more involved changes, though.
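A sketch of the prefix-based match (helper name illustrative):

```typescript
// FS may truncate the callerNum at special characters such as ';', so the
// value from the ESL event is matched as a prefix of the stored callerNum
// rather than requiring exact equality.
const callerNumMatches = (eventCallerNum: string, storedCallerNum: string): boolean =>
  eventCallerNum.length > 0 && storedCallerNum.startsWith(eventCallerNum);
```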