* feat(screenshare): Option to show disabled screenshare button for non presenters
* Update bigbluebutton-html5/imports/ui/components/screenshare/service.js
---------
Co-authored-by: Ramón Souza <contato@ramonsouza.com>
Commit 26815f4679 was seemingly lost
during a merge in the 3.0.x-release branch. Nothing breaks, but we're
missing the log info originally added via that commit.
Restore the changes in 26815f4679:
- Add secondsToActivateAudio, inputDeviceId, outputDeviceId and isListenOnly
to audio_joined.extraInfo
- Add inputDeviceId, outputDeviceId and isListenOnly to
audio_failure.extraInfo
- Add a try-catch to the device enforcement procedure triggered by
onAudioJoin - it may throw and block the modal.
We currently use full renegotiation for audio, video, and screen sharing
reconnections, which involves re-creating transports and signaling channels
from scratch. While effective in some scenarios, this approach is slow and,
especially with outbound cameras and screen sharing, prone to failures.
To counter that, WebRTC provides a mechanism to restart ICE without needing
to re-create the peer connection. This allows us to avoid full renegotiation
and bypass some server-side signaling limitations. Implementing ICE restart
should make outbound camera/screen sharing reconnections more reliable and
faster.
This commit implements the ICE restart procedure for all WebRTC components'
*outbound* peers. It is based on bbb-webrtc-sfu >= v2.15.0-beta.0, which
added support for ICE restart requests. This feature is *off by default*.
To enable it, adjust the following flags:
- `/etc/bigbluebutton/bbb-webrtc-sfu/production.yml`: `allowIceRestart: true`
- `/etc/bigbluebutton/bbb-html5.yml`: `public.kurento.restartIce`
* Refer to the inline documentation; this can be enabled on the client side
per media type.
* Note: The default max retries for audio is lower than for cameras/screen
sharing (1 vs 3). This is because the full renegotiation process for audio
is more reliable, so ICE restart is attempted first, followed by full
renegotiation if necessary. This approach is less suitable for cameras/
screen sharing, where longer retry periods for ICE restart make sense
since full renegotiation there is much less reliable.
Endpoints that are inbound/`recvonly` only (client's perspective) do *not*
support ICE restart yet. There are two main reasons:
- Server-side changes are required to support `recvonly` endpoints,
particularly the proper handling of the server's `setup` role in its
SDPs during an ICE restart. These changes are too broad for now,
so they are deferred to future releases (SFU@v2.16).
- Full reconnections for `recvonly` endpoints are currently reliable,
unlike for `send*` endpoints. ICE restarts could still provide benefits
for `recvonly` endpoints, but we need the server updates first.
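For reference, a minimal sketch of how an outbound peer can trigger an ICE restart with the standard WebRTC API, assuming a hypothetical `sendOfferToSfu` signaling helper (this is not the actual bbb-webrtc-sfu integration, just an illustration of the mechanism):

```typescript
// Sketch: restart ICE on an existing outbound peer instead of tearing it down.
// `sendOfferToSfu` is a hypothetical signaling helper, not the real SFU API.
async function restartIceOnPeer(
  peerConnection: RTCPeerConnection,
  sendOfferToSfu: (sdp: string) => Promise<string>,
): Promise<void> {
  // Generate a new offer with fresh ICE credentials; the transports and the
  // peer connection itself are preserved (no full renegotiation).
  const offer = await peerConnection.createOffer({ iceRestart: true });
  await peerConnection.setLocalDescription(offer);

  // Hand the new offer to the server and apply its answer.
  const answerSdp = await sendOfferToSfu(offer.sdp ?? '');
  await peerConnection.setRemoteDescription({ type: 'answer', sdp: answerSdp });
}
```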
- We were sending one websocket message per removed shape; now a single
message carries all the removed shape IDs.
- The shape limit verification did not always work with rapid updates, and
if the db somehow ended up with more shapes than the limit, users could no
longer update or delete any shape.
- Unnecessary remove-shape messages were being sent to the server when
going over the limit.
When a shape is changed, the full shape object was being transmitted to the server again.
Do a diff so only what changed is sent (similar to how it worked in tldraw v1) to save upload bandwidth.
TODO:
Diffing of draw segments (an array) is still not working, so all segments are still sent every time.
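A rough sketch of the kind of shallow diff described above (a hypothetical helper, not the actual whiteboard code, which operates on tldraw shape records):

```typescript
// Shallow diff between the previous and updated shape records, so only
// changed top-level properties are sent to the server. Hypothetical helper.
type ShapeRecord = Record<string, unknown>;

function diffShape(prev: ShapeRecord, next: ShapeRecord): ShapeRecord {
  const changes: ShapeRecord = {};
  Object.keys(next).forEach((key) => {
    // Nested values (e.g. draw segments) would need a proper deep diff; this
    // sketch just re-sends a property whenever its serialized form differs.
    if (JSON.stringify(prev[key]) !== JSON.stringify(next[key])) {
      changes[key] = next[key];
    }
  });
  return changes;
}
```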
* Batch shapes and persist on idle or editing states
* add highlight.idle to condition
Co-authored-by: germanocaumo <germanocaumo@gmail.com>
---------
Co-authored-by: germanocaumo <germanocaumo@gmail.com>
The background shape can show white borders due to rounding errors in the tldraw canvas; adjust the background shape's size and position to avoid it.
Also disable the tl container outline that shows when it is focused.
* fix(dark-theme): adjust Dark Reader CSS selectors
Clean up inverted css selectors passed to Dark Reader and add new ones
for elements not correctly transformed to dark theme. These include the
tldraw color picker, text shape color, selected color indicator, tool
opacity slider, and camera dock background.
* Suggestions from review
Co-authored-by: germanocaumo <germanocaumo@gmail.com>
* Suggestions from review
Co-authored-by: germanocaumo <germanocaumo@gmail.com>
* Suggestions from review
Co-authored-by: germanocaumo <germanocaumo@gmail.com>
* changes requested in review
* changes requested in review
---------
Co-authored-by: germanocaumo <germanocaumo@gmail.com>
In BBB 3.0, a change was made to collect full WebRTC stats continuously.
This method gathers stats from *all* peers and *all* senders and receivers
every 2 seconds. Originally, it was intended to run only when the user opened
the connection status dialog, providing in-depth info in the UI and making it
available for copying.
This new behavior is not ideal. Running full stats collection every 2 seconds
in meetings with 20+ peers/transceivers wastes client resources since the
collected data is unused 99% of the time.
This commit reverts to the pre-3.0 behavior (≤2.7), where full stats collection
(`startNetworkMonitoring`) runs only when the connection status modal is open.
As a bonus, it fixes the packet loss status transition log to use the packet
loss percentage, which is the actual trigger metric.
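In outline, the intended gating looks roughly like this (a sketch with illustrative names; the real `startNetworkMonitoring` lives in the connection status service):

```typescript
// Start full getStats() collection only while the connection status modal is
// open, and stop it when it closes. Names and wiring are illustrative.
let statsInterval: ReturnType<typeof setInterval> | null = null;

function startNetworkMonitoring(
  peers: RTCPeerConnection[],
  onStats: (reports: RTCStatsReport[]) => void,
): void {
  if (statsInterval !== null) return;
  statsInterval = setInterval(async () => {
    const reports = await Promise.all(peers.map((peer) => peer.getStats()));
    onStats(reports); // feed the modal UI / copy-to-clipboard payload
  }, 2000);
}

function stopNetworkMonitoring(): void {
  if (statsInterval !== null) {
    clearInterval(statsInterval);
    statsInterval = null;
  }
}
```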
* restores react18 createRoot
* fix slideChange issue - new slide not loading
* fix skip video preview
* test: update screenshare function checks + close notifications
---------
Co-authored-by: Anton B <antonbsa.bck@gmail.com>
When `muteOnStart=true`, the initial local mute state in AudioManager is
desynced from the server. This issue stems from two recent changes:
- Decoupling voice activity updates from the main user_voice subscription,
which introduced an implicit muted state placeholder value
of true instead of false. See user_voice_activity's DB schema
propagation rules.
- Introduction of dialplan-level muteOnStart, muting channels on creation
rather than after.
Without properly updating AudioManager's `isMuted` placeholder, no
user_voice_activity update triggers *when joining audio* with
muteOnStart=true, causing two issues:
- Sender tracks are not locally muted on audio join.
- Opening the audio settings modal while muted will cause the
microphone to be incorrectly *unmuted* once it's closed (first try only).
This fix sets AudioManager's `isMuted` placeholder to true, matching the
server. Additionally:
- Enforce the local mute state before joining audio to ensure the desired
sender track state (see the sketch after this list). This should make
things a bit more future-proof.
- Track `user_voice_activity` before joining audio (rather than after)
to avoid race conditions.
- Clean up `AudioManager.init` (loadBridges no longer returns a promise etc).
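A simplified sketch of the local mute enforcement mentioned above (illustrative only; the real logic sits in AudioManager and the audio bridge):

```typescript
// Disable (or enable) all outbound audio tracks so the local sender state
// matches the server-side muteOnStart value. Illustrative sketch.
function enforceLocalMuteState(peer: RTCPeerConnection, isMuted: boolean): void {
  peer.getSenders().forEach((sender) => {
    if (sender.track && sender.track.kind === 'audio') {
      sender.track.enabled = !isMuted;
    }
  });
}
```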
* Translate en.json in ja
100% translated source file: 'en.json'
on 'ja'.
* Translate en.json in ja
100% translated source file: 'en.json'
on 'ja'.
* Translate en.json in ja
100% translated source file: 'en.json'
on 'ja'.
* Translate en.json in ja
100% translated source file: 'en.json'
on 'ja'.
* Translate en.json in ja
100% translated source file: 'en.json'
on 'ja'.
---------
Co-authored-by: transifex-integration[bot] <43880903+transifex-integration[bot]@users.noreply.github.com>
* bad set state (actionsBarContainer)
* bad set state (appContainer)
* isMobile should be ismobile warning
* bad setState (notes)
* bad setState (user-notes)
* bad setState (user-participants-title)
* bad setState (webCamContainer)
* bad setState (PresentationMenuContainer)
* fix webCams not working issue
* fix userList user counter not working issue
* fix TS lint
* fix TS lint
* fix TS lint
* Later changes
Currently, all error boundaries close audio and Apollo connections once
an error is caught. This is not the correct behavior as not all error
boundaries are critical, e.g.: the presentation crashing should _not_
break the whole client. It also deviates from how error boundaries
worked in 2.7.
Add a new prop to the ErrorBoundary/LocatedErrorBoundary components
called isCritical that flags an error boundary instance as critical. If
true, it'll close Apollo/audio. The default behavior is
isCritical=false, and the only critical error boundaries are the ones
located in the app's root (/client/main.tsx).
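In outline, the new prop gates the teardown roughly like this (a simplified sketch; prop and handler names other than `isCritical` are illustrative):

```typescript
// Sketch of an error boundary that only tears down audio and the Apollo
// connection when flagged as critical.
import React from 'react';

interface ErrorBoundaryProps {
  isCritical?: boolean;
  onCriticalError?: () => void; // e.g. close Apollo client + exit audio
  fallback?: React.ReactNode;
  children: React.ReactNode;
}

interface ErrorBoundaryState {
  hasError: boolean;
}

class ErrorBoundary extends React.Component<ErrorBoundaryProps, ErrorBoundaryState> {
  state: ErrorBoundaryState = { hasError: false };

  static getDerivedStateFromError(): ErrorBoundaryState {
    return { hasError: true };
  }

  componentDidCatch(): void {
    const { isCritical = false, onCriticalError } = this.props;
    // Only critical boundaries (e.g. the app root) take the whole client down.
    if (isCritical && onCriticalError) onCriticalError();
  }

  render(): React.ReactNode {
    const { hasError } = this.state;
    const { children, fallback = null } = this.props;
    return hasError ? fallback : children;
  }
}
```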
* fix first wheel zoom always going to top left corner
* focus button zoom on center of page after wheel zoom
* test: update zoom test to avoid snapshot miscomparisons when zooming in and out
* test: fix usage of hasText function params
---------
Co-authored-by: Anton B <antonbsa.bck@gmail.com>
* feat(screenshare): add support for troubleshooting links
Adds a settings option to specify a troubleshooting link for each
screenshare error code. When a troubleshooting link exists for a given
error, the toast notification about the error is displayed with a
'Learn more' button that, when clicked, leads the user to the external
link. When there is no link set for the specific error code, the button
is not displayed.
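Conceptually, the mapping works along these lines (the URLs, codes, and helper below are placeholders, not the actual settings schema):

```typescript
// Illustrative mapping of screenshare error codes to troubleshooting URLs.
// Real values would come from the client settings; these are placeholders.
const troubleshootingLinks: Record<number, string | undefined> = {
  1136: 'https://example.org/docs/screenshare-permission',
  1139: 'https://example.org/docs/screenshare-not-supported',
};

function buildErrorToast(errorCode: number): { message: string; helpLink?: string } {
  const helpLink = troubleshootingLinks[errorCode];
  const message = `Screen sharing failed (code ${errorCode})`;
  // The 'Learn more' button is only rendered when a link exists for the code.
  return helpLink ? { message, helpLink } : { message };
}
```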
* fix(screenshare): change toast type for error code 1136
Changed toast type from 'error' to 'warning' for error code 1136 when
sharing screen. This adjustment was made because error code 1136 is also
returned when the user cancels screen sharing during the tab selection
process. Displaying an error toast in this situation could cause
unnecessary alarm for users, as they were simply canceling an operation.
* fix(notification): help link button element
Uses the button element instead of a div to display the 'Learn more'
help link button.
---------
Co-authored-by: Carlos Henrique <carloshsc1998@gmail.com>
* feat(layout): add propagation toggle
Transforms the 'update everyone' button in the layout modal into a
toggle, so that presenters get immediate visual feedback on the current
layout propagation setting when the modal is opened.
* fix: update propagation button locale to 'update to everyone'
* test: update layout test
---------
Co-authored-by: Anton B <antonbsa.bck@gmail.com>
Commit 325887e325 split the local echo audio
element from the main audio element to allow concurrent playback without the
risk of interfering with one another.
This introduced a regression where local echo doesn't track output device
changes. The main audio element (i.e. the meeting's audio) is not affected by
this regression.
This commit ensures local echo reacts to output device changes as needed.
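The fix boils down to forwarding output device changes to the dedicated local echo element as well, along the lines of this sketch using the standard `setSinkId` API (element lookup and error handling simplified):

```typescript
// Point the dedicated local echo <audio> element at the newly selected
// output device. Illustrative sketch.
async function updateLocalEchoOutputDevice(
  localEchoElement: HTMLAudioElement,
  outputDeviceId: string,
): Promise<void> {
  if (typeof localEchoElement.setSinkId !== 'function') return; // unsupported browser
  try {
    await localEchoElement.setSinkId(outputDeviceId);
  } catch (error) {
    // e.g. the device was unplugged between enumeration and selection
    console.warn('local echo: failed to switch output device', error);
  }
}
```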
Additionally, the mobile app can use this feature to render the whiteboard inside an iframe with the same `userId`.
By setting the parameter `revokePreviousSession=true`, a new `sessionToken` will be generated, and the previous session will be revoked when the new device connects. This is useful for transferring a session to another device and automatically closing the previous session.
In BBB ≤ 2.7, a procedure monitored system audio device changes, updating
the device list and assigning a fallback device if the current one was removed.
This procedure was removed in 3.0 during the migration of the
input-stream-live-selector component to TypeScript (reasons unknown), causing
the device list to become outdated and leaving the user's client without audio
input if their current device is disconnected.
This commit restores the `devicechange` event handler in the input-stream-live-
selector, ensuring that the device list is updated properly and fallback devices
are assigned when necessary.
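A minimal version of the restored behavior looks like this (simplified sketch; the actual component also updates its own device list state):

```typescript
// Re-enumerate devices whenever the system device set changes and fall back
// to the first available input if the current one disappeared. Sketch only.
function watchAudioInputDevices(
  getCurrentInputId: () => string | null,
  onDevices: (devices: MediaDeviceInfo[]) => void,
  onFallback: (device: MediaDeviceInfo) => void,
): void {
  navigator.mediaDevices.addEventListener('devicechange', async () => {
    const devices = await navigator.mediaDevices.enumerateDevices();
    const inputs = devices.filter((d) => d.kind === 'audioinput');
    onDevices(inputs);

    const currentId = getCurrentInputId();
    const stillPresent = inputs.some((d) => d.deviceId === currentId);
    if (!stillPresent && inputs.length > 0) onFallback(inputs[0]);
  });
}
```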
Adjust an inline comment in connection status' service about packet loss metric
usage.
Now it correctly states that the absolute counter SHOULD NOT be used for
alert triggers.
In 3.0, the packet loss metric used to trigger connection status alerts was
changed to the one generated by the `startMonitoringNetwork` method used by the
connection status modal. Since packet loss thresholds were not adjusted (0.5,
0.1, 0.2), a single lost packet causes the status alert to be permanently
stuck on "critical". This is explained by how different those metrics
are:
- **Before (2.7):** A 5-probe wide calculation of inbound packet loss
fraction based on `packetsLost` and `packetsReceived` metrics.
- **Now (3.0):** An absolute counter of inbound lost packets.
This commit restores the previous packet loss metric used to trigger
connection status alerts, reverting to the original collection method via
`/utils/stats.js`. This resolves the issue, but further work is needed in
subsequent PRs:
- Unify the collection done in `/utils/stats.js` with the
`startMonitoringNetwork` method.
- Incorporate the remote-inbound `fractionsLost` metric to account for packet
loss on both legs of the network (in/out).
- Update the packet loss metric displayed in the connection status modal to
show a more meaningful value (e.g., packet loss percentage over a specific
probe interval). An absolute counter of lost packets isn't useful for end
users.
- Update the alert log to use the fraction or percentage above
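For context, the pre-3.0-style metric is essentially a loss fraction computed over a small probe window, roughly as in this sketch (field names follow the standard inbound-rtp stats; the structure is illustrative):

```typescript
// Packet loss fraction over the last N probes, based on deltas of the
// cumulative packetsLost/packetsReceived counters. Illustrative sketch.
interface InboundProbe {
  packetsLost: number;     // cumulative, from inbound-rtp stats
  packetsReceived: number; // cumulative, from inbound-rtp stats
}

function packetLossFraction(probes: InboundProbe[]): number {
  if (probes.length < 2) return 0;
  const first = probes[0];
  const last = probes[probes.length - 1];
  const lost = last.packetsLost - first.packetsLost;
  const received = last.packetsReceived - first.packetsReceived;
  const total = lost + received;
  return total > 0 ? lost / total : 0;
}

// e.g. keep the last 5 probes (one per stats poll) and compare the result
// against the alert thresholds instead of using the absolute counter.
```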
UI team suggested a few adjustments to the audio settings modal:
- Larger (24px/1.5rem) margin between content and headers
- Rephrasing of modal title, subtitle and volume indicator label
- Change the "audio feedback" button to an outline or link styled
button (there are currently two primary buttons and we want users to
focus on the "Join audio" one)
Implement the suggested changes. The approach for the audio feedback
button is link-styled.
* The prop presentationIsOpen is marked as required in Presentation(null)
* The prop isPresentationManagementDisabled is marked as required in actions-dropdown(null)
* The prop autoJoin is marked as required in wake-lock(null)
* All children must have key identifiers (userListParticipants)
* The prop presentationUploadExternalDescription is marked as required in presentation-uploadres
Safari may enter a microphone permission check loop due to buggy behavior
in the Permissions API. When permission isn't permanently denied, gUM
requests fail with a NotAllowedError for a few seconds. During this time,
the permission state remains 'prompt' instead of transitioning to 'denied'
and back to 'prompt' after the timeout.
This leads to an issue where, on retrying while in 'prompt' + blocked,
the client loops through gUM checks via: 1) checking permission in the API,
2) receiving 'prompt', so trying gUM, 3) gUM fails, 4) returning to the
modal and checking permission again because the API still says 'prompt'.
Additionally, the `isUsingAudio` flag incorrectly counts the local echo
test/audio settings modal as "using audio," which toggles the flag on/off,
triggering the useEffect that causes the loop more frequently.
To fix this, remove the unnecessary AudioModal permission check that
causes the loop. Also, exclude "isEchoTest" from the `isUsingAudio` flag.
Firefox incorrectly displays placeholder audio device labels in the audio
settings/echo test modal when audio is disconnected. This issue arises
due to two quirks:
- Firefox does not support the 'microphone' query from the Permissions
API, causing a fallback gUM permission check.
- Firefox omits device labels from `enumerateDevices` if no streams
are active, even if gUM permission is granted. This behavior differs
from other browsers and causes our `enumerateDevices` handling to
assume that granted permission implies labels are present. This
failed since we clear streams before resolving the fallback gUM.
We now run an additional `enumerateDevices` call in `AudioSettings` when
a selected input device is defined. This ensures `enumerateDevices` is
re-run when a new stream is active, adding the correct device labels in
Firefox and improving device listings in all browsers. We've also
enhanced error handling in the enumeration process and fixed a false
positive in `hasMicrophonePermission`.
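In essence, the extra enumeration happens while an input stream is live so Firefox exposes the labels. A simplified sketch of the idea (the real code reuses the stream AudioSettings already holds rather than opening a new one; names are illustrative):

```typescript
// Re-enumerate devices while a stream on the selected input is active so
// Firefox reports labels; other browsers simply get a refreshed list.
async function enumerateWithLabels(
  selectedInputDeviceId: string,
): Promise<MediaDeviceInfo[]> {
  let devices = await navigator.mediaDevices.enumerateDevices();
  const labelsMissing = devices
    .filter((d) => d.kind === 'audioinput')
    .some((d) => d.label === '');

  if (labelsMissing) {
    // Open a short-lived stream, enumerate again, then release the tracks.
    const stream = await navigator.mediaDevices.getUserMedia({
      audio: { deviceId: selectedInputDeviceId },
    });
    devices = await navigator.mediaDevices.enumerateDevices();
    stream.getTracks().forEach((track) => track.stop());
  }

  return devices;
}
```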
There's a regression in 3.0's I/O device selector where default output
devices are not marked as selected in the input-stream-live-selector
component unless the user explicitly selects them. This issue can also affect
input devices, although less commonly than output due to the system's ability
to infer the selected input device ID after the user joins audio.
When a device is the first in the list and no currentDeviceId is set in
the client, treat the first device returned by enumerateDevices as the
system default and hence selected, in accordance with the "Media Capture
and Streams API", Section 9.2, enumerateDevices algorithm.
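The selection rule reduces to something like this (illustrative sketch):

```typescript
// When no device ID is stored client-side, treat the first device returned
// by enumerateDevices as the system default and therefore selected.
function resolveSelectedDeviceId(
  devices: MediaDeviceInfo[],
  currentDeviceId: string | null,
): string | null {
  if (currentDeviceId) {
    const match = devices.find((d) => d.deviceId === currentDeviceId);
    if (match) return match.deviceId;
  }
  // Per the Media Capture and Streams spec, the default device is listed first.
  return devices.length > 0 ? devices[0].deviceId : null;
}
```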
* test: update and improve Ask for feedback on logout test - add more steps, check for different buttons, check POST request after sending feedback
* test: add missing data-test prop for sendFeendbackButton
* test: fix sendFeedbackButton data-test
When `listenOnlyMode` is `false` and the audio dialog's "Cancel" action is
clicked, the modal incorrectly re-renders instead of closing. Additionally,
the "Cancel" action is mislabeled as "Back."
This fix ensures the audio dialog closes properly when there are no options
to select (i.e., `listenOnlyMode=false`). The `skipAudioOptions` method is
revised to consider `listenOnlyMode` and ignore the "content" state.
Ignoring the "content" state allows options to be skipped even if a subscreen
is rendered (e.g., returning from the AudioSettings modal). The check for
`content == null` combined with `skipAudioOptions` is only necessary when
rendering the main modal. The `content == null` check has been moved to
the relevant section.
When listen only mode is deactivated and a user joins audio, an incorrect
remount of AudioSettings can trigger a spurious mute toggle. This happens
because AudioManager clears the `isConnecting` flag before setting the
`isConnected` flag. This creates a brief period where audio is flagged as
"disconnected," leading to a remount and unmount cycle that causes unwanted
mute/unmute actions due to AudioSettings' logic of muting/unmuting
active devices.
Ensure the `isConnected` flag is set before clearing the
`isConnecting` flag, preventing audio from being incorrectly flagged as
disconnected.
Prevents presentation state changes that happen while media is being shared
from updating the presentation's last-state value. Additionally, adds a
missing prop value for the generic content state.
Removes the duplicate presentation pile dispatch for external video, as
an identical dispatch runs when the external video component is mounted.
This duplication did not cause any noticeable issues for the user but
resulted in the external video being added to the pile twice.
The plugin loader startup logs aren't following the logger convention,
which makes them hard to work with when post-processing logs.
The appended error message is also not useful since we're logging a raw
Event object (which either outputs {} or noise like { isTrusted: ... }).
Make the plugin "loaded" and "error" logs adhere to logger conventions.
In the future, the error log could use some tuning - there's no useful
info about root cause here.
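Following the client logger convention already used elsewhere (a structured call with `logCode` and `extraInfo`, as referenced for audio_joined above), the plugin loader logs end up roughly like this sketch; the logCode values and import path are assumptions, not necessarily what the commit uses:

```typescript
// Plugin loader logs adhering to the logCode + extraInfo convention.
// logCode values here are illustrative placeholders.
import logger from '/imports/startup/client/logger';

function onPluginLoaded(pluginName: string, pluginUrl: string): void {
  logger.info({
    logCode: 'plugin_loader_loaded',
    extraInfo: { pluginName, pluginUrl },
  }, `Plugin ${pluginName} loaded`);
}

function onPluginLoadError(pluginName: string, pluginUrl: string): void {
  logger.error({
    logCode: 'plugin_loader_load_error',
    extraInfo: { pluginName, pluginUrl },
  }, `Failed to load plugin ${pluginName}`);
}
```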
- Move askForConfirmationOnLeave into AudioContainer
- Get rid of the unstable useMuteMicrophone hook, which returns a new reference every time the user gets muted/unmuted. Use the stable useToggleVoice hook instead.
The isSharing var is content-type agnostic, so it also picks up camera as
content when flagging the actions-bar button's loading state.
Change the loading flag to track isScreenBroadcasting
(contentType=screen, local || global) and isScreenGloballyBroadcasting
(contentType=screen, global only). Fixes the camera as content false
positive as well as the loading state itself.
When going from "no mic" -> mic via the unmute action, the client isn't
unmuting itself after confirming the change. This is caused by not
waiting the liveChangeInputDevice method (which is a Promise) to be
fully executed before unmounting the AudioSettings modal -- the one
responsible for triggering the unmute. Since it unmounts before the
device is changed, the unmute action will be ignored because the device
is still "listen-only" (no mic).
Properly unmute audio when transitioning from "no mic" -> "mic" via the
unmute trigger by waiting for liveChangeInputDevice to resolve.
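The gist of the fix, with illustrative handler names (a sketch, not the actual modal code):

```typescript
// Confirm handler in the AudioSettings modal: wait for the device change to
// complete before unmuting and closing the modal. Names are illustrative.
async function handleConfirm(
  liveChangeInputDevice: (deviceId: string) => Promise<void>,
  selectedDeviceId: string,
  unmute: () => void,
  closeModal: () => void,
): Promise<void> {
  // Previously the modal unmounted before this promise resolved, so the
  // unmute was ignored while the device was still "listen-only" (no mic).
  await liveChangeInputDevice(selectedDeviceId);
  unmute();
  closeModal();
}
```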
Additionally, some general improvements to UI/UX:
- Display the AudioSettings modal title when gUM is on prompt mode
- Add specific subtitles to the AudioSettings modal to 1) warn that no
mic is selected 2) Give a hint that the user can test their devices
- Always honor settings.yml's "initialHearingState" state (whether
local echo feedback should be played by default in AudioSettings)
We are missing a way to select transcription languages in some
scenarios, e.g.: listenOnlyMode=false. The audio settings UI is also not
handling item disposition very well on smaller devices.
This commit does the following to improve those blind spots:
- Add the transcription language selector to it whenever applicable
- Add proper styling to the transcription selector
- Handle small screens by switching the element layout to portrait mode
- Improve how elements are arranged into a more familiar view: Mic ->
Activity Indicator; Speaker -> Speaker test. This is more in line
with how other platforms do audio configuration/pre-flight screens.
This is a rework of the audio join procedure without the explicit listen
only separation in mind. It's supposed to be used in conjunction with
the transparent listen only feature so that the distinction between
modes is seamless with minimal server-side impact. An abridged list of
changes:
- Let the user pick no input device when joining microphone while
allowing them to set an input device on the fly later on
- Give the user the option to join audio with no input device whenever
we fail to obtain input devices, with the option to try re-enabling
them on the fly later on
- Add the option to open the audio settings modal (echo test et al)
via the in-call device selection chevron
- Rework the SFU audio bridge and its services to support
adding/removing tracks on the fly without renegotiation
- Rework the SFU audio bridge and its services to support a new peer
role called "passive-sendrecv". That role is used by dupled peers
that have no active input source on start, but might have one later
on.
- Remove stale PermissionsOverlay component from the audio modal
- Rework how permission errors are detected using the Permissions API
- Rework the local echo test so that it uses a separate media tag
rather than the remote
- Add new, separate dialplans that mute/hold FreeSWITCH channels on
hold based on UA strings. This is orchestrated server-side via
webrtc-sfu and akka-apps. The basic difference here is that channels
now join in their desired state rather than waiting for client side
observers to sync the state up. It also mitigates transparent listen
only performance edge cases on multiple audio channels joining at
the same time.
The old, decoupled listen only mode is still present in code while we
validate this new approach. To test this, transparentListenOnly
must be enabled and listen only mode must be disabled on audio join so
that the user skips straight through microphone join.
Support for bbb-html5's server logger (Winston) was removed in
50d445f026, but configuration for it
remained.
Remove the Winston-based server logger configuration from bbb-html5's
settings.yml. There's no direct alternative since this package no longer
has a server-side component; that role is now covered by
bbb-graphql-middleware, bbb-graphql-server and bbb-graphql-actions.
Support for clientLog's "server" target (Meteor) was removed in
50d445f026, but configuration and docs
entries for it remained.
There's no alternative for it, so I'm removing the leftover
configuration and doc entries.
Restore the `lint:file` npm script. Rationale outlined in
- 07960af
- https://github.com/bigbluebutton/bigbluebutton/pull/13870
> The lint:fix hook fixes/alters things. The lint run script runs eslint over the whole root directory.
> I just want to check linting offenses on a per-file/directory basis without having files messed with or needing a fancy IDE.
There's a very odd getUserMedia call tucked into the base ErrorScreen.
There's no rationale in either the commit or PR that added it, but the
intention seems to be stopping audio on client crash.
Using getUserMedia like that will have no effect other than an odd
permission prompt on iframe-based environments or a webcam activation
flash after the client crashes.
Remove ErrorScreen's getUserMedia call as well as the HTMLMedia pause
call - both should be handled gracefully by AudioManager's forceExitAudio
triggered by the StopAudioTracks event (also ErrorScreen). If there's an
edge case where it isn't properly stopped, we'll have to tackle it
there.
* Refactor: Make bundle using webpack
* Fix: restore after install codes and a few settings
* Fix: build script folder permission
* Refactor: Remove support to async import on audio bridges
* Upgrade npm using nvm
* Avoid questions on npm ci execution
* Let npm ci install dev dependencies (as we need the build tools here)
* Fix: encoding
* Fix: old lock files
* Remove: bbb-config dependency on the bbb-html5 service; bbb-html5 isn't a service anymore
* Fix: TS errors
* Fix: eslint
* Fix: chat styles
* npm install with "lockfileVersion": 3 (newer npm)
* build: allow nodejs 22
* node 22; drop meteor from CI and bbb-conf
* TEMP: use bbb-install without mongo but with node 22 and newer image
* build: relax nodejs condition to not trip 22.6
* build: ensure dir /usr/share/bigbluebutton/nginx exists
* init sites-available/bbb; drop disable-transparent-
* nginx complaining of missing file and ;
* TMP: print status of services
* WIP: tweak nginx location to debug
* Fix: webcam widgets alignments
* akka-apps -- update location of settings.yml
* build: add locales path for nginx
* docs and config changes for removal of meteor
* Fix: build encoding and locales endpoint folder path
* build: set wss url for media
* Add: Enable minimizer and modify to Terser
* Fix: TS errors
---------
Co-authored-by: Tiago Jacobs <tiago.jacobs@gmail.com>
Co-authored-by: Anton Georgiev <anto.georgiev@gmail.com>
Co-authored-by: Anton Georgiev <antobinary@users.noreply.github.com>
* feat(plugins): add chat server command and chat message type `plugin`
This commit adds the required code for the plugins SDK's chat server
command `CHAT_SEND_MESSAGE`, which allows plugins to send chat
messages. Messages sent by plugins are identified by the message
type `plugin` and belong to the user (senderID) of the client that
sent it. Plugin messages are not displayed by the client itself because
these messages are meant to be custom-rendered by plugins, typically by
the plugin that sent them.
* feat(plugins): add message metadata
The plugin name and plugin custom metadata are stored in the message's
metadata, which plugins need in order to identify messages when applying
custom rendering.
* feat(chat): removes specific code for plugin messages
Removes specific akka messages, handlers and routes for plugin messages
and adds metadata parameter in `GroupChatMsgFromUser`.
* feat(chat): adds optional parameter to mutation
Adds optional parameter `metadata` to the already existing mutation
`chatSendMessage` and use this mutation for plugin chat server command.
* feat(chat): rendering of plugin messages
This commit implements the correct rendering of plugin messages, which
is:
- Plugin messages with metadata attribute `custom` set to true are not
rendered by the client, and are meant to be custom-rendered by
plugins.
- Plugin messages with metadata attribute `custom` set to false are
rendered by the client as having been sent by the user that triggered
them.
* Update sdk version to v0.0.56
* update sdk version to v0.0.57