* fix(users-context): add missing logs
* fix(user-persistent-data): collection publication selector for viewers
Fixes the collection's selector when publishing it to viewers.
* fix(users-context): correctly add user persistent data
Changes the logic of the add_user_persistent_data action in the users
context so that the user information already in the context is merged
with the new data. Also, the logged-out status of users added via
user_persisted_data is no longer flipped.
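For illustration, a minimal sketch of the merge behavior described above
(names are illustrative, not the actual context/reducer code):

```javascript
// Merge incoming persistent data into the user entry already present in the
// context instead of replacing it, and keep an existing loggedOut flag
// untouched. Illustrative only.
function addUserPersistentData(users, userId, persistentData) {
  const existing = users[userId] || {};
  return {
    ...users,
    [userId]: {
      ...existing,
      ...persistentData,
      // do not flip the logged-out status of a user that is already tracked
      loggedOut: existing.loggedOut ?? persistentData.loggedOut,
    },
  };
}
```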
We should be able to capture WebRTC stats in some form for post-processing,
so that they help with debugging support requests (and other use cases, e.g.
improving field trial analysis on test servers).
Although much of the WebRTC stats information can be gathered via server-side
components, none of them produce logs as well structured for post-processing
as the client logs - so we're taking the client route for now.
Capture WebRTC stats information for audio and screen sharing via:
- Audio logCodes: new `stats` extraInfo field
  - `audio_joined`
  - `audio_failure`
  - `sfuaudio_error_retry_through_relay`
  - `sfuaudio_error_try_to_reconnect`
- Screen share logCodes: new `stats` extraInfo field
  - `screenshare_presenter_start_success`
  - `screenshare_viewer_start_success`
  - `screenshare_broker_failure`
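As an illustration of the new field, a minimal sketch of attaching stats to
one of these log entries; the logger import path and surrounding code are
assumptions, only the logCode and the `stats` extraInfo field come from this
change:

```javascript
import logger from '/imports/startup/client/logger'; // path is an assumption

async function logAudioJoined(peerConnection) {
  // Flatten the RTCStatsReport into a plain object so it can be serialized
  // into the log entry's extraInfo.
  const report = await peerConnection.getStats();
  const stats = {};
  report.forEach((entry) => { stats[entry.id] = entry; });

  logger.info({
    logCode: 'audio_joined',
    extraInfo: { stats },
  }, 'Audio successfully joined');
}
```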
Additionally, add an option to periodically capture WebRTC stats information
for all relevant peers. This is disabled by default since the log can be
verbose (and, consequently, network-taxing when using external
logging targets). It can be enabled via `public.stats.logMediaStats` in
settings.yml. The default interval is 30s. The periodic log format is as
follows:
- logCode: `mediaStats`
- extraInfo.stats: an aggregated stats object of all peers (equivalent
to the `Copy` function in the Connection Status modal).
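A minimal sketch of what the periodic capture could look like;
`getActivePeers()` and `aggregatePeerStats()` are hypothetical helpers
standing in for the client's actual aggregation (the one behind the
Connection Status modal's `Copy` action), and the flag is read from
settings.yml in practice:

```javascript
import logger from '/imports/startup/client/logger'; // path is an assumption

const LOG_MEDIA_STATS = true; // public.stats.logMediaStats (settings.yml)
const MEDIA_STATS_INTERVAL_MS = 30000; // default interval: 30s

if (LOG_MEDIA_STATS) {
  setInterval(async () => {
    // Build one aggregated object covering every relevant peer, keyed by a
    // peer identifier, equivalent to the Connection Status modal's Copy output.
    const stats = {};
    await Promise.all(getActivePeers().map(async ({ id, peerConnection }) => {
      stats[id] = await aggregatePeerStats(peerConnection);
    }));

    logger.info({ logCode: 'mediaStats', extraInfo: { stats } }, 'Media stats');
  }, MEDIA_STATS_INTERVAL_MS);
}
```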
We currently use full renegotiation for audio, video, and screen sharing
reconnections, which involves re-creating transports and signaling channels
from scratch. While effective in some scenarios, this approach is slow and,
especially with outbound cameras and screen sharing, prone to failures.
To counter that, WebRTC provides a mechanism to restart ICE without needing
to re-create the peer connection. This allows us to avoid full renegotiation
and bypass some server-side signaling limitations. Implementing ICE restart
should make outbound camera/screen sharing reconnections more reliable and
faster.
This commit implements the ICE restart procedure for all WebRTC components,
based on bbb-webrtc-sfu >= v2.15.0-beta.0, which added support for ICE restart
requests. This feature is off by default. To enable it, adjust the following
flags:
- `/etc/bigbluebutton/bbb-webrtc-sfu/production.yml`: `allowIceRestart: true`
- `/etc/bigbluebutton/bbb-html5.yml`: `public.kurento.restartIce`
* Refer to the inline documentation; this can be enabled on the client side
per media type.
* Note: The default max retries for audio is lower than for cameras/screen
sharing (1 vs. 3). Full renegotiation for audio is more reliable, so ICE
restart is attempted first and full renegotiation is used as a fallback.
For cameras and screen sharing, longer ICE restart retry periods make
sense because full renegotiation there is considerably less dependable.
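A minimal sketch of the restart-then-fallback flow on the client, assuming a
broker-like wrapper around RTCPeerConnection and a signaling helper; only the
standard WebRTC calls are real APIs, the rest are illustrative names:

```javascript
async function reconnect(broker, signaling, maxIceRestartRetries) {
  for (let attempt = 0; attempt < maxIceRestartRetries; attempt += 1) {
    try {
      // Standard WebRTC ICE restart: a new offer with fresh ICE credentials,
      // without tearing down the existing RTCPeerConnection.
      const offer = await broker.pc.createOffer({ iceRestart: true });
      await broker.pc.setLocalDescription(offer);
      // Hand the restart offer to the server (bbb-webrtc-sfu >= v2.15.0-beta.0
      // understands ICE restart requests); illustrative method name.
      await signaling.requestIceRestart(offer.sdp);
      return; // reconnected without full renegotiation
    } catch (error) {
      // retry ICE restart up to maxIceRestartRetries
      // (default: 1 for audio, 3 for cameras/screen sharing)
    }
  }
  // ICE restart did not recover the session: fall back to the old full
  // renegotiation path (re-create transports and signaling from scratch).
  await broker.fullRenegotiation();
}
```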
* docs: update test links on release notes and spec files
* docs: add tests for 'what's new on 2.7' features
* Update docs/docs/testing/release-testing.md
Co-authored-by: Anton Georgiev <antobinary@users.noreply.github.com>
* test: pass the bbb version in the doc links
---------
Co-authored-by: Anton Georgiev <antobinary@users.noreply.github.com>
FS has an intermittent issue where unmuting a HELD channel sometimes
takes significantly longer than usual (on the order of seconds).
`conference <XYZ> unmute <WVU>` simply gets stuck with no FS_API response,
which delays the unmute action whenever transparent listen only is
active.
Apparently, unholding the channel PRIOR TO unmuting works around the
issue - at least it could not be reproduced in the scenario at hand.
The unmute API already triggers an unhold internally in FS, which is
why this was not done before. The aforementioned issue is far worse
than an extra "redundant" API call, though.
Always unhold audio channels manually _before_ unmuting.
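For illustration, the new ordering expressed with a hypothetical ESL helper
(the actual change lives in the server-side voice integration; only the two
FreeSWITCH API commands are real):

```javascript
async function unmuteAudioChannel(esl, conferenceId, memberId, channelUuid) {
  // 1. Explicitly take the channel off hold first. FS would also unhold it as
  //    part of the unmute, but doing it beforehand avoids the stuck
  //    "conference <conf> unmute <member>" scenario described above.
  await esl.api(`uuid_hold off ${channelUuid}`);
  // 2. Then issue the regular conference unmute.
  await esl.api(`conference ${conferenceId} unmute ${memberId}`);
}
```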
Transparent listen only is currently only worthwhile for meetings where
the number of duplex audio channels exceeds a certain value (dependent
on system performance). That's because the global audio bridges created
for the mechanism also use significant CPU (roughly the same as an
unheld duplex channel), which means its cost is usually offset only once
there are enough potential channels to be held in a conference.
This commit adds a new optional feature that makes the mechanism more
dynamic: it is only triggered once at least
@voiceConf.transparentListenOnlyThreshold muted duplex audio channels
are present in a conference.
The default is 0 (always trigger transparent listen only if the general
mechanism is activated).
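A minimal sketch of the threshold check (names are illustrative):

```javascript
function shouldTriggerTransparentListenOnly(voiceUsers, threshold) {
  const mutedDuplexChannels = voiceUsers
    .filter((user) => !user.listenOnly && user.muted)
    .length;
  // threshold = 0 preserves the previous behavior: always trigger the
  // mechanism whenever it is enabled.
  return mutedDuplexChannels >= threshold;
}
```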
There's an issue where permission-less sessions of video-preview
fail to change video profiles. Whenever gUM is in prompt mode,
deviceIds are obfuscated, which means getInitialCamera needs to
infer the deviceId from the current media stream.
Since the virtual background worker is now called synchronously
(e28a595), the deviceId ends up being extracted from the virtual effect
MediaStream rather than the original stream, which causes
getInitialCamera to use the effect's deviceId instead of the original
stream's.
Guarantee that deviceId inference via MediaStreamTrack uses
BBBVideoStream's originalStream (so that virtual effect streams are
bypassed). Also remove the call to updateDeviceId in getInitialCamera,
which has been redundant since commit e28a595.
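A minimal sketch of the intended inference, assuming a BBBVideoStream-like
wrapper that keeps a reference to the unprocessed stream (property names are
assumptions; getSettings().deviceId is the standard API):

```javascript
function inferDeviceId(bbbVideoStream) {
  // Prefer the original (pre-virtual-background) stream so the deviceId
  // reflects the physical camera, not the canvas/effect track.
  const stream = bbbVideoStream.originalStream || bbbVideoStream.mediaStream;
  const [track] = stream.getVideoTracks();
  return track && track.getSettings().deviceId;
}
```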
A change in e28a595b52 introduced an issue where the "Share camera as
content" modal always has it's "share" action flagged as disabled. This
is due to a short-circuit introduced in the initial gUM procedure that
does not clear the "disabled" state before exiting.
Properly reset the "disabled" sharing state after the initial gUM in
video-preview when "Share camera as content" is used, thus fixing the
aforementioned issue.
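The shape of the fix, roughly (state setter and surrounding code are
illustrative, not the actual video-preview implementation):

```javascript
async function doInitialGUM(constraints, setSharingDisabled) {
  setSharingDisabled(true);
  try {
    // Includes the short-circuit path used by "Share camera as content".
    return await navigator.mediaDevices.getUserMedia(constraints);
  } finally {
    // Always clear the disabled flag, even on the early-return path that
    // previously skipped it.
    setSharingDisabled(false);
  }
}
```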
v2.14.1
---
* fix(screenshare): presenter/viewer stop logs on all scenarios
* refactor(screenshare): add presenter data to viewer logs
* refactor(video): add video negotiation and flowing logs
* build(mediasoup): 3.14.9