- Move askForConfirmationOnLeave into AudioContainer
- Get rid of the unstable useMuteMicrophone hook, which returns a new reference every time the user gets muted/unmuted. Use the stable useToggleVoice hook instead.
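For context, a minimal sketch of the stability difference between the two hooks (simplified, hypothetical signatures; not the actual BBB hook implementations):

```typescript
import { useCallback, useState } from 'react';

// Hypothetical stand-in for the real mute mutation.
declare function muteUser(userId: string, muted: boolean): void;

// Unstable: the returned callback changes identity whenever `muted` flips,
// so consumers that depend on it re-run their effects on every mute/unmute.
function useMuteMicrophoneUnstable(userId: string) {
  const [muted, setMuted] = useState(false);
  return useCallback(() => {
    muteUser(userId, !muted);
    setMuted((prev) => !prev);
  }, [userId, muted]);
}

// Stable: no reactive dependency on the mute state, so the reference
// survives mute/unmute cycles (the useToggleVoice approach).
function useToggleVoiceStable() {
  return useCallback((userId: string, muted: boolean) => muteUser(userId, muted), []);
}
```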
The isSharing variable is content-type agnostic, so it is picking up camera
as content and flagging the actions-bar button's loading state.
Change the loading flag to track isScreenBroadcasting
(contentType=screen, local || global) and isScreenGloballyBroadcasting
(contentType=screen, global only). This fixes the camera-as-content false
positive as well as the loading state itself.
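A rough sketch of the intended split (the state shape here is an assumption, not the real store):

```typescript
type SharedContent = { contentType: 'screen' | 'camera'; isGlobal: boolean };

// Only screen content (local or global) should drive the loading state...
const isScreenBroadcasting = (items: SharedContent[]) =>
  items.some((item) => item.contentType === 'screen');

// ...and only globally broadcast screen content counts as "live for everyone",
// so a camera shared as content no longer trips the actions-bar button.
const isScreenGloballyBroadcasting = (items: SharedContent[]) =>
  items.some((item) => item.contentType === 'screen' && item.isGlobal);
```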
When going from "no mic" -> mic via the unmute action, the client isn't
unmuting itself after confirming the change. This is caused by not
waiting for the liveChangeInputDevice method (which returns a Promise) to
fully resolve before unmounting the AudioSettings modal -- the one
responsible for triggering the unmute. Since the modal unmounts before the
device change completes, the unmute action is ignored because the device
is still "listen-only" (no mic).
Properly unmute audio when transitioning from "no mic" -> "mic" via the
unmute trigger by waiting for liveChangeInputDevice to resolve.
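A minimal sketch of the fix; liveChangeInputDevice comes from the commit text, while the surrounding handler is assumed:

```typescript
async function confirmDeviceChangeAndUnmute(
  liveChangeInputDevice: (deviceId: string) => Promise<void>,
  unmute: () => void,
  deviceId: string,
): Promise<void> {
  // Wait for the device switch to fully resolve before the AudioSettings
  // modal unmounts; otherwise the unmute request still sees a "listen-only"
  // device and is ignored.
  await liveChangeInputDevice(deviceId);
  unmute();
}
```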
Additionally, some general improvements to UI/UX:
- Display the AudioSettings modal title when gUM is in prompt mode
- Add specific subtitles to the AudioSettings modal to 1) warn that no
mic is selected and 2) hint that the user can test their devices
- Always honor settings.yml's "initialHearingState" setting (whether
local echo feedback should be played by default in AudioSettings)
We are missing a way to select transcription languages in some
scenarios, e.g. listenOnlyMode=false. The audio settings UI is also not
handling the layout of its items very well on smaller devices.
This commit does the following to improve those blind spots:
- Add the transcription language selector to the audio settings UI
whenever applicable
- Add proper styling to the transcription selector
- Handle small screens by switching the layout of elements to a
portrait orientation
- Arrange elements into a more familiar view: Mic -> activity
indicator; Speaker -> speaker test. This is more in line
with how other platforms do audio configuration/pre-flight screens.
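Illustrative sketch only of the small-screen handling (the breakpoint and helper below are assumptions, not the real styles):

```typescript
const SMALL_SCREEN_MAX_WIDTH = 640; // assumed breakpoint

// On narrow viewports the settings items stack vertically (portrait
// orientation); otherwise they keep the side-by-side layout.
const getAudioSettingsDirection = (viewportWidth: number): 'row' | 'column' =>
  viewportWidth <= SMALL_SCREEN_MAX_WIDTH ? 'column' : 'row';
```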
This is a rework of the audio join procedure without the explicit listen
only separation in mind. It's supposed to be used in conjunction with
the transparent listen only feature so that the distinction between
modes is seamless with minimal server-side impact. An abridged list of
changes:
- Let the user pick no input device when joining microphone while
allowing them to set an input device on the fly later on
- Give the user the option to join audio with no input device whenever
we fail to obtain input devices, with the option to try re-enabling
them on the fly later on
- Add the option to open the audio settings modal (echo test et al)
via the in-call device selection chevron
- Rework the SFU audio bridge and its services to support
adding/removing tracks on the fly without renegotiation (see the
sketch after this list)
- Rework the SFU audio bridge and its services to support a new peer
role called "passive-sendrecv". That role is used by duplex peers
that have no active input source on start, but might have one later
on.
- Remove stale PermissionsOverlay component from the audio modal
- Rework how permission errors are detected using the Permissions API
(see the sketch below)
- Rework the local echo test so that it uses a separate media tag
rather than the remote
- Add new, separate dialplans that mute and hold FreeSWITCH channels
based on UA strings. This is orchestrated server-side via
webrtc-sfu and akka-apps. The basic difference here is that channels
now join in their desired state rather than waiting for client-side
observers to sync the state up. It also mitigates transparent listen
only performance edge cases when multiple audio channels join at
the same time.
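As referenced in the track add/remove bullet above, a hedged sketch of the renegotiation-free path using the standard WebRTC API (the actual SFU bridge wiring is more involved):

```typescript
async function setInputTrack(
  audioSender: RTCRtpSender,
  newTrack: MediaStreamTrack | null,
): Promise<void> {
  // replaceTrack swaps (or clears, when passed null) the outgoing track
  // without triggering renegotiation, which is what lets a
  // "passive-sendrecv" peer join with no input source and gain one later.
  await audioSender.replaceTrack(newTrack);
}
```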
The old, decoupled listen only mode is still present in code while we
validate this new approach. To test this, transparentListenOnly
must be enabled and listen only mode must be disabled on audio join so
that the user skips straight to microphone join.
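And the Permissions API check referenced in the list above, sketched with the standard browser API (support varies; 'microphone' is not accepted as a permission name everywhere, hence the fallback):

```typescript
async function getMicrophonePermissionState(): Promise<PermissionState | 'unknown'> {
  try {
    const status = await navigator.permissions.query({
      name: 'microphone' as PermissionName,
    });
    return status.state; // 'granted' | 'denied' | 'prompt'
  } catch {
    // Permissions API unavailable or name unsupported: fall back to
    // interpreting getUserMedia errors instead.
    return 'unknown';
  }
}
```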
There's a very odd getUserMedia call tucked into the base ErrorScreen.
There's no rationale in either the commit or PR that added it, but the
intention seems to be stopping audio on client crash.
Using getUserMedia like that will have no effect other than an odd
permission prompt on iframe-based environments or a webcam activation
flash after the client crashes.
Remove ErrorScreen's getUserMedia call as well as the HTMLMedia pause
call - both should be handled gracefully by AudioManager's forceExitAudio
triggered by the StopAudioTracks event (also dispatched by ErrorScreen). If there's an
edge case where it isn't properly stopped, we'll have to tackle it
there.
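A hypothetical sketch of that cleanup path; AudioManager, forceExitAudio and the StopAudioTracks event come from the text above, while the wiring itself is assumed:

```typescript
declare const AudioManager: { forceExitAudio(): void };

// AudioManager owns the teardown, so the ErrorScreen no longer needs its own
// getUserMedia/HTMLMedia calls.
window.addEventListener('StopAudioTracks', () => {
  AudioManager.forceExitAudio();
});

// On a client crash, the ErrorScreen only dispatches the event:
window.dispatchEvent(new Event('StopAudioTracks'));
```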
* Refactor: Make bundle using webpack
* Fix: restore after install codes and a few settings
* Fix: build script folder permission
* Refactor: Remove support for async import in audio bridges
* Upgrade npm using nvm
* Avoid questions on npm ci execution
* Let npm ci install dev dependencies (as we need the build tools here)
* Fix: encoding
* Fix: old lock files
* Remove: bbb-config dependency on the bbb-html5 service, as bbb-html5 isn't a service anymore
* Fix: TS errors
* Fix: eslint
* Fix: chat styles
* npm install with "lockfileVersion": 3 (newer npm)
* build: allow nodejs 22
* node 22; drop meteor from CI and bbb-conf
* TEMP: use bbb-install without mongo but with node 22 and newer image
* build: relax nodejs condition to not trip 22.6
* build: ensure dir /usr/share/bigbluebutton/nginx exists
* init sites-available/bbb; drop disable-transparent-
* nginx complaining of missing file and ;
* TMP: print status of services
* WIP: tweak nginx location to debug
* Fix: webcam widgets alignments
* akka-apps -- update location of settings.yml
* build: add locales path for nginx
* docs and config changes for removal of meteor
* Fix: build encoding and locales endpoint folder path
* build: set wss url for media
* Add: Enable minimizer and switch to Terser (see the sketch after this list)
* Fix: TS errors
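A sketch of the minimizer change mentioned in the list above (webpack.config.ts shape; the options shown are assumptions, not the project's exact config):

```typescript
import TerserPlugin from 'terser-webpack-plugin';
import type { Configuration } from 'webpack';

const config: Configuration = {
  mode: 'production',
  optimization: {
    minimize: true,
    // Replace the default minimizer with Terser.
    minimizer: [new TerserPlugin({ parallel: true })],
  },
};

export default config;
```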
---------
Co-authored-by: Tiago Jacobs <tiago.jacobs@gmail.com>
Co-authored-by: Anton Georgiev <anto.georgiev@gmail.com>
Co-authored-by: Anton Georgiev <antobinary@users.noreply.github.com>
* feat(plugins): add chat server command and chat message type `plugin`
This commit adds the required code for the plugins SDK's chat server
command `CHAT_SEND_MESSAGE`, which allows plugins to send chat
messages. Messages sent by plugins are identified by the message
type `plugin` and belong to the user (senderID) of the client that
sent them. Plugin messages are not displayed by the client itself because
these messages are meant to be custom-rendered by plugins, typically by
the plugin that sent them.
* feat(plugins): add message metadata
Plugin name and plugin custom metadata are stored in the message's metadata,
which plugins need in order to identify messages when applying custom rendering.
* feat(chat): removes specific code for plugin messages
Removes specific akka messages, handlers and routes for plugin messages
and adds a metadata parameter to `GroupChatMsgFromUser`.
* feat(chat): adds optional parameter to mutation
Adds an optional `metadata` parameter to the existing mutation
`chatSendMessage` and uses this mutation for the plugin chat server command.
* feat(chat): rendering of plugin messages
This commit implements the correct rendering of plugin messages, which
is:
- Plugin messages with metadata attribute `custom` set to true are not
rendered by the client, and are meant to be custom-rendered by
plugins.
- Plugin messages with metadata attribute `custom` set to false are
rendered by the client as being sent by the user that triggered them.
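A hedged sketch of that rendering rule (the message/metadata shapes below are assumptions):

```typescript
interface ChatMessage {
  messageType: string; // 'plugin' for plugin-sent messages
  messageMetadata?: { custom?: boolean; pluginName?: string };
}

// Custom plugin messages are left to the owning plugin to render;
// everything else is rendered by the client as a regular message.
const shouldClientRenderMessage = (message: ChatMessage): boolean => {
  if (message.messageType !== 'plugin') return true;
  return message.messageMetadata?.custom !== true;
};
```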
* Update sdk version to v0.0.56
* update sdk version to v0.0.57
Jitter evaluation, as an alert trigger, was changed in 3.0 to get the internal
average jitter used in the conn-status component data (which is total jitter
delay divided by jitterbuffer emit events). This was done accidentally and that
metric is _very_ different from the one used in 2.7 (point-in-time jitter from
remote-inbound-rtp/inbound-rtp, highest on the interval, gathered in
/utils/stats.js). The alert thresholds were preserved, which makes the alert
overly sensitive to jitter (and thus critical whenever the user is in audio).
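For reference, a sketch of the two metrics being conflated (field names follow the WebRTC stats spec; how the client aggregates them is an assumption):

```typescript
async function collectJitterMetrics(pc: RTCPeerConnection) {
  const report = await pc.getStats();
  // 3.0 conn-status style: cumulative jitter buffer delay / emitted count.
  let avgJitterBufferDelay = 0;
  // 2.7 alert style: instantaneous jitter, highest value on the interval.
  let pointInTimeJitter = 0;

  report.forEach((stat) => {
    if (stat.type === 'inbound-rtp' && stat.jitterBufferEmittedCount > 0) {
      avgJitterBufferDelay = stat.jitterBufferDelay / stat.jitterBufferEmittedCount;
    }
    if ((stat.type === 'remote-inbound-rtp' || stat.type === 'inbound-rtp') && stat.jitter) {
      pointInTimeJitter = Math.max(pointInTimeJitter, stat.jitter);
    }
  });

  return { avgJitterBufferDelay, pointInTimeJitter };
}
```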
Remove jitter as a connection status alert trigger, which fixes the
false positive. The implementation on <= 2.7 is also not ideal - if
anything, it generates false negatives. That's why I'm removing jitter for
the time being since it's ill-suited (at least in the way it's used)
for what we want to achieve.
* feat(graphql-server): add new view `v_chat_private_read_feedback`
Adds a view called `v_chat_private_read_feedback` to retrieve the last seen time
of the recipient of a private chat.
* refactor(chat): removes unused prop `lastSeenAt`
* feat(private-chat): message read confirmation feedback
Adds a message read confirmation feedback feature to private chats.
The feature uses the private chat recipient's `lastSeenAt` attribute to
check which messages were read. Read messages are shown in the chat with
a check icon next to them.
The feature is behind a flag in settings.yml, disabled by default:
- `public.chat.privateMessageReadFeedback.enabled`
* fix(chat): poll chart message
Fixes the poll chart message, which was not using the full chat width due to
previous changes in the chat messages' `flex-direction`.
* fix: adds missing initial value for `privateMessageReadFeedback`
* fix: linter errors
* fix(chat): add `recipientHasSeen` property to existing view
This commit changes the way messages read by the recipient are
tracked. The previous strategy required the client to calculate the read
messages, and as a consequence all messages of the given chat
were re-rendered every time the recipient's `lastSeenAt` time
changed. The current strategy calculates the read messages
on the server (based on the recipient's `lastSeenAt`) and exposes to the
client a boolean (`recipientHasSeen`) for each message that indicates whether
it has already been read.
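A minimal sketch of the client-side effect of the new boolean (component wiring is assumed):

```typescript
interface PrivateChatMessage {
  messageId: string;
  recipientHasSeen: boolean; // computed server-side from the recipient's lastSeenAt
}

// The client no longer diffs lastSeenAt against every message; it only reads
// the per-message flag, so a message re-renders only when its own flag flips.
const shouldShowReadCheckIcon = (message: PrivateChatMessage): boolean =>
  message.recipientHasSeen;
```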
* fix: typo in message description
* fix: typo in settings flag
* fix: vertically align icon
* feat(webcams): skip video preview if valid input devices stored
Additionally:
- refactor: re-use the existing VirtualBackground_* storage info instead
of creating a new one
- fix: store background choices per deviceId instead of globally (see the
sketch after this list)
- fix: guarantee background restore attempts are *critical* when
video-preview is supposed to be skipped. If the previous background
cannot be restored, the preview is shown so that the user's privacy
choice is preserved
- fix: cameras could not be shared if no previous device info was in
the user's session
- fix: uploaded background images were not correctly restored
- fix: do not spin up virtual bg workers for brightness if it has not
been altered by the user
- refactor: remove old video-provider background restore routine,
centralize it in video-preview
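As referenced in the per-device background bullet above, a rough sketch of storing the choice keyed by deviceId (the key naming mirrors the VirtualBackground_* convention, but the exact shape and storage are assumptions):

```typescript
interface StoredBackground {
  type: 'none' | 'blur' | 'image';
  name?: string;       // uploaded/selected image name, if any
  brightness?: number; // only persisted when altered by the user
}

const backgroundStorageKey = (deviceId: string) => `VirtualBackground_${deviceId}`;

const storeBackgroundChoice = (deviceId: string, bg: StoredBackground): void => {
  sessionStorage.setItem(backgroundStorageKey(deviceId), JSON.stringify(bg));
};

const restoreBackgroundChoice = (deviceId: string): StoredBackground | null => {
  const raw = sessionStorage.getItem(backgroundStorageKey(deviceId));
  return raw ? (JSON.parse(raw) as StoredBackground) : null;
};
```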
* fix(skip-video-preview): correct storage check and add playwright test and docs
---------
Co-authored-by: prlanzarin <4529051+prlanzarin@users.noreply.github.com>