Changes enableWakeLock => wakeLock.enabled and wakeLockEnabled => wakeLock to
follow the pattern used by other features. Also makes the necessary code
changes to reference the new settings flags correctly.
Adds a wake lock feature, which is available only to mobile users on browsers
that support the Screen Wake Lock API.
When a user on a supported device joins the meeting, a toast is displayed
offering to activate the feature.
Adds a toggle under the settings menu to activate/deactivate it.
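A minimal sketch of how a client could request a screen wake lock; helper names and the local typings are illustrative, not the actual client code:

```typescript
// Minimal local typings in case the DOM lib in use predates the Wake Lock API.
interface WakeLockSentinelLike {
  release(): Promise<void>;
  addEventListener(type: 'release', listener: () => void): void;
}

let sentinel: WakeLockSentinelLike | null = null;

async function acquireWakeLock(): Promise<boolean> {
  const wakeLock = (navigator as Navigator & {
    wakeLock?: { request(type: 'screen'): Promise<WakeLockSentinelLike> };
  }).wakeLock;

  // Feature detection: only browsers exposing the API get the toast/toggle.
  if (!wakeLock) return false;

  try {
    sentinel = await wakeLock.request('screen');
    // The browser releases the lock when the tab goes to the background.
    sentinel.addEventListener('release', () => { sentinel = null; });
    return true;
  } catch {
    // e.g. NotAllowedError (low battery, permissions policy).
    return false;
  }
}

async function releaseWakeLock(): Promise<void> {
  await sentinel?.release();
  sentinel = null;
}
```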
Adds a music player for ambient sound which can be turned on and off using a
toggle located inside the timer panel. When the stopwatch is selected, this
toggle is automatically turned off.
The initial goal is for this to be the default in 2.7.
Set it as the default early in the cycle so folks can test it for longer.
If there's any deal-breaking issue with it as release nears, we can just flip back to the old (2.6, FS/SIP.js) default.
Enables the presenter to share a camera in the presentation area.
The shared camera automatically uses a pre-defined, fixed and hidden camera
profile defined in the settings.yml file.
It currently uses the screen share's backend.
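A minimal sketch of how a fixed camera profile could be applied when sharing a camera as content; the profile shape and helper name are assumptions, not the actual implementation:

```typescript
// Illustrative profile shape; the real entry lives in settings.yml.
interface CameraAsContentProfile {
  constraints: MediaTrackConstraints; // e.g. { width: 1280, height: 720, frameRate: 15 }
}

async function getCameraAsContentStream(
  profile: CameraAsContentProfile,
): Promise<MediaStream> {
  // The fixed, hidden profile is applied directly; no profile picker is shown.
  return navigator.mediaDevices.getUserMedia({
    video: profile.constraints,
    audio: false,
  });
}
```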
The interactions button is a new button that stays in the action bar.
It integrates several features in just one place: user-reaction, raise hand,
user status (away/not away), and toggling the question panel.
- Adds a user-reaction collection
- Adds an emoji picker for user reactions in the user list
- Adds options to enable/disable user reactions
- Adds a way to pass style to the emoji-picker component
There are still a bunch of edge cases and issues with reconnection
scenarios for video:
- Signaling socket refuses to reconnect once maxRetries is exhausted
- Race conditions on local stream attachment: local camera wouldn't be
correctly rendered _if_ the attached stream existed _without_ video
tracks yet
- Video tracks leak on local streams when replacing them (virtual bgs)
- Completely ignoring Meteor state when trying to reconnect cameras
- Streams aren't proactively stopped when the signaling socket dies
- Outbound request queues aren't isolated by stream nor are they
flushed when a newer peer with the same ID is created
- Server-originated negotiation errors won't trigger a local peer
cleanup - thus leaving dangling peers that take way too long to
reconnect
This commit fixes or improves all of the aforementioned issues, plus:
- Remove unused arguments in the peer (client->SFU) 'start' request
- Prevent crashes when trying to render video-list-items without user
data (which might happen on re-connections)
Reconnection timers are far too long for abrupt failures because we
wait for the original timeouts to elapse (30-60s) before trying
again - even if a connection already worked earlier in that session's
history. The ideal thing to have is another intermediate, smaller and
fixed reconnection timer for sessions that had a working screen share
at least once.
The UI is also not being updated to the reconnecting state on negotiation
failures.
* Add an intermediate reconnection timer for abrupt failures, set to 8s.
This should improve reconnection times (see the sketch after this list).
* Lower the default connection timer values (base 20s, down from 30s; max
25s, down from 60s)
* Set the screen share UI to reconnecting on abrupt failures as well - we
were only tracking ICE states prior to this, not negotiation errors
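A minimal sketch of the timer selection described above; the `hadWorkingShareSession` flag and the backoff curve are illustrative, while the constants mirror the values mentioned in this commit:

```typescript
const BASE_RECONNECTION_TIMEOUT = 20000; // lowered from 30s
const MAX_RECONNECTION_TIMEOUT = 25000;  // lowered from 60s
const ABRUPT_FAILURE_TIMEOUT = 8000;     // new intermediate timer

function getReconnectionTimeout(hadWorkingShareSession: boolean, attempt: number): number {
  // Sessions that already had a working screen share get the short, fixed timer
  // instead of waiting for the full negotiation timeout to elapse again.
  if (hadWorkingShareSession) return ABRUPT_FAILURE_TIMEOUT;

  // Otherwise back off from the (lowered) base value up to the (lowered) cap.
  return Math.min(BASE_RECONNECTION_TIMEOUT * 2 ** attempt, MAX_RECONNECTION_TIMEOUT);
}
```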
video-provider's current ping-pong is as good as nothing in 2.5+. Before
2.5 we were counting on Meteor's connection state (and consequently the
component's mount state) to act as a "heartbeat" as far as the socket is
concerned. The ping-pong served only to sustain traffic for finicky,
traffic-dependent firewalls.
Since 2.5, the component's state is _kind of_ detached from Meteor's -
which means it won't unmount when Meteor disconnects. That causes the
video-provider websocket to lose its borrowed heartbeat and leads to a
bunch of reconnection inconsistencies, the worst of them being a stuck,
useless signaling socket that will cause cameras not to work until a
client refresh.
This commit does the following:
- Implements actual heartbeat checks to trigger signaling socket
reconnects when necessary, all within the scope of video-provider
(sketched below)
- Removes borked, eons-old 'offline'/'online' event handlers: they were
causing unnecessary camera drops AND causing video-provider to
generate a stuck signaling socket
- Properly catches WebSocket.send errors
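A minimal sketch of the heartbeat idea, assuming a plain WebSocket and illustrative message shapes and intervals; the real video-provider code differs:

```typescript
const PING_INTERVAL = 15000;
const PONG_TIMEOUT = 10000;

// Starts a ping/pong heartbeat; calls onDead() when the socket stops answering
// or send() throws, so the caller can tear it down and reconnect.
function startHeartbeat(ws: WebSocket, onDead: () => void): () => void {
  let pongTimer: ReturnType<typeof setTimeout> | null = null;

  const pingTimer = setInterval(() => {
    try {
      ws.send(JSON.stringify({ id: 'ping' }));
    } catch (err) {
      // Catch WebSocket.send errors instead of letting them crash the provider.
      onDead();
      return;
    }
    if (pongTimer == null) {
      pongTimer = setTimeout(onDead, PONG_TIMEOUT);
    }
  }, PING_INTERVAL);

  ws.addEventListener('message', ({ data }) => {
    try {
      if (JSON.parse(data).id === 'pong' && pongTimer != null) {
        clearTimeout(pongTimer);
        pongTimer = null;
      }
    } catch { /* non-JSON messages are not heartbeat traffic */ }
  });

  // Returned cleanup function for socket close / component teardown.
  return () => {
    clearInterval(pingTimer);
    if (pongTimer != null) clearTimeout(pongTimer);
  };
}
```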
There may be other bridges that do not need to force relay traffic on
Firefox, as @prlanzarin pointed out. So set the default to a
configuration that works out of the box, but leave other choices to the
operator.
The option is moved from the kurento namespace to media, next to the
general forceRelay option.
Firefox has a buggy ICE implementation and needs WebRTC media traffic to
be routed through a TURN server to work reliably with mediasoup.
Use the information fetched by the STUN API to determine if the operator
has configured a TURN server. If there is one, force Firefox to use it.
Closes #16164
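A minimal sketch of the check, assuming the ICE server list comes from the existing STUN/TURN fetch and that Firefox detection happens elsewhere in the client:

```typescript
function hasTurnServer(iceServers: RTCIceServer[]): boolean {
  return iceServers.some((server) => {
    const urls = Array.isArray(server.urls) ? server.urls : [server.urls];
    return urls.some((url) => url.startsWith('turn:') || url.startsWith('turns:'));
  });
}

function buildPeerConfiguration(
  iceServers: RTCIceServer[],
  isFirefox: boolean, // browser detection helper assumed to exist
): RTCConfiguration {
  return {
    iceServers,
    // 'relay' forces media through the TURN server, which works around
    // Firefox ICE issues with mediasoup. Other browsers keep 'all'.
    iceTransportPolicy: isFirefox && hasTurnServer(iceServers) ? 'relay' : 'all',
  };
}
```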
When BBB is run as a single node or in a scaleout setup with a cluster
proxy in front (see https://docs.bigbluebutton.org/admin/clusterproxy.html) it
is useful to store client settings in browser localStorage instead of
sessionStorage. If localStorage is configured, then the client will keep settings
like notifications for users joining, chat, etc. across meetings.
It is not advisable to set the setting to `local` in a setup of multiple BBB
nodes without a cluster proxy in front of it, because this would lead to
unexpected behaviour for users. The browser would store settings for each server,
and for users it would look like BBB sometimes stores the settings and
sometimes not.
It adds the new setting
```yaml
public:
  app:
    userSettingsStorage: (session|local)
```
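A minimal sketch of how the flag could pick the backing store (the function name is illustrative):

```typescript
function getSettingsStorage(userSettingsStorage: 'session' | 'local'): Storage {
  // 'local' persists client settings across meetings (single node or
  // cluster-proxy setups); 'session' keeps the previous per-tab behaviour.
  return userSettingsStorage === 'local' ? window.localStorage : window.sessionStorage;
}
```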
RTCRtpSender exposes DSCP marking via `networkPriority` in the encodings
configuration dictionaries. That should allow us to control
QoS priorities for different media streams, e.g. audio with higher network
priority than video. The only browser that implements that right
now is Chromium.
To use this, set the public.app.media.networkPriorities configuration in
settings.yml. Audio, camera and screenshare priorities can be controlled
separately. For further info on the possible values, see:
- https://www.w3.org/TR/webrtc-priority/
- https://datatracker.ietf.org/doc/html/rfc8837#section-5
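A minimal sketch of applying a priority to an outbound sender, following the webrtc-priority draft; the helper name is illustrative, and only Chromium honours `networkPriority` today:

```typescript
async function applyNetworkPriority(
  sender: RTCRtpSender,
  priority: RTCPriorityType, // 'very-low' | 'low' | 'medium' | 'high'
): Promise<void> {
  const parameters = sender.getParameters();
  (parameters.encodings || []).forEach((encoding) => {
    // networkPriority is Chromium-only and may be missing from older DOM typings.
    (encoding as RTCRtpEncodingParameters & { networkPriority?: RTCPriorityType })
      .networkPriority = priority;
  });
  await sender.setParameters(parameters);
}
```

Audio senders would typically get a higher value than camera/screenshare senders, with the actual values taken from public.app.media.networkPriorities.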
Mobile endpoints are flaky with the Web Speech API:
- iOS versions that support it are borking our outbound audio when it's
enabled
- Android speech recognition has flaky locale detection and speech
transcription
Additionally: the support check is not checking Web Speech API
availability properly, so older devices (e.g. iOS 12) are flagged as
supported even though they aren't.
This commit adds a configuration flag (public.audioCaptions.mobile) to
control transcription availability on mobile. False by default.
Also extends the setSpeechVoices support check and
hasSpeechRecognitionSupport method to prevent false positives.
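A minimal sketch of a stricter support check, assuming an `isMobile` helper and the new public.audioCaptions.mobile flag are available (names are illustrative, not the actual client methods):

```typescript
function hasSpeechRecognitionSupport(isMobile: boolean, mobileEnabled: boolean): boolean {
  // Mobile availability is gated behind the new flag (false by default).
  if (isMobile && !mobileEnabled) return false;

  // Check both ends of the Web Speech API so devices that only expose part
  // of it (e.g. older iOS) are no longer flagged as supported.
  const SpeechRecognitionImpl =
    (window as any).SpeechRecognition || (window as any).webkitSpeechRecognition;
  return typeof SpeechRecognitionImpl === 'function'
    && typeof window.speechSynthesis !== 'undefined';
}
```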
Adds two new flags to the settings file which change the way the locale
flag is used:
- forceLocale: (true/false) => If true, enforces the transcription
language to be the locale content field and skips the language selector
in the audio modal.
- defaultSelectLocale: (true/false) => If true, the default selected
value in the dropdown language selector in the audio modal will be defined
by the locale content field.
In any case, if the locale flag holds an invalid value, it defaults to
disabled.
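A minimal sketch of the selection logic described above (types and names are illustrative):

```typescript
interface CaptionLocaleFlags {
  locale: string;              // content of the locale flag
  forceLocale: boolean;
  defaultSelectLocale: boolean;
}

function resolveCaptionLocale(
  flags: CaptionLocaleFlags,
  availableLocales: string[],
): { selectedLocale: string | null; showSelector: boolean } {
  // An invalid locale value always falls back to the disabled behaviour.
  if (!availableLocales.includes(flags.locale)) {
    return { selectedLocale: null, showSelector: true };
  }
  if (flags.forceLocale) {
    // Enforce the configured language and skip the selector in the audio modal.
    return { selectedLocale: flags.locale, showSelector: false };
  }
  if (flags.defaultSelectLocale) {
    // Pre-select the configured language but keep the selector visible.
    return { selectedLocale: flags.locale, showSelector: true };
  }
  return { selectedLocale: null, showSelector: true };
}
```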
Move the language collection to the HTML settings file. This data defines
the languages available for the speech API.
These language tags are used to filter SpeechSynthesis' API `getVoices`
result. Tags must use BCP 47 format.
https://developer.mozilla.org/en-US/docs/Web/API/SpeechSynthesisVoice/lang
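A minimal sketch of filtering the synthesis voices by the configured tags (the helper name is illustrative):

```typescript
function getSupportedVoices(languageTags: string[]): SpeechSynthesisVoice[] {
  // Voices report a BCP 47 tag in `voice.lang`; keep only the configured languages.
  return window.speechSynthesis.getVoices().filter((voice) =>
    languageTags.some((tag) => voice.lang.toLowerCase().startsWith(tag.toLowerCase())),
  );
}
```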
FreeSWITCH has mDNS resolution capabilities as of 1.10.7. Having the filtering
configurable in the client allows us to field trial whether we should keep that
on or off. The default is still to filter them out because FreeSWITCH does not
resolve mDNS candidates by default (ice_resolve_candidate in switch.conf.xml).
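A minimal sketch of the configurable filtering; the flag name and helper are illustrative:

```typescript
// Returns true when the candidate should still be signaled to FreeSWITCH.
function shouldSendCandidate(candidate: RTCIceCandidate, filterMdns: boolean): boolean {
  // mDNS candidates carry an obfuscated ".local" hostname instead of an IP.
  const isMdnsCandidate = candidate.candidate.includes('.local');
  return !(filterMdns && isMdnsCandidate);
}
```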
- Remove the old listen only bridge (kurento.js), superseded by the equivalent
and equally stable (AS FAR AS LISTEN ONLY IS CONCERNED) sfu-audio-bridge
- Rename FullAudioBridge.js -> sfu-audio-bridge.js
* A more generic name that better represents the capabilities and
the nature of the bridge
* The bridge name identifier in configuration is still the same
('fullaudio')
- Remove the FreeSWITCH listen only fallback
- Temporarily disable the "trickle ICE" pair gathering feature used
in SIP.js (which was always experimental, nonstandard and disabled
by default)
- Updates to settings.yml keys in places where relevant
mediasoup is the default media server in v2.5, so we don't need ICE candidates
to be signaled to bbb-webrtc-sfu. Disabling it saves resources (client and server).
public.media.showVolumeMeterInSettings => public.media.showVolumeMeter
public.media.simplifiedEchoTest => public.media.localEchoTest.enabled
Initial hearing state can be configured in public.media.localEchoTest.initialHearingState
New features:
- A simplified echo test mode that only does a local loopback (instead of
going to FS and back)
- A volume meter for microphone streams to the AudioSettings view
Those two features are experimental and disabled by default; see
public.app.media.simplifiedEchoTest and public.app.media.showVolumeMeter configs
Collateral changes:
- fix: localize fallback device strings in AudioSettings/DeviceSelector
- Refactor some media stream utils to be reusable across components
- Refactor AudioSettings to keep the number of gUM uses stable.
* TODO: need to pass streams through AudioManager to avoid the surplus gUM.
- fix(audio): drop ScriptProcessorNode usage (deprecated)
* Used in volume meter for tracking - use hark instead (see the sketch below)
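A minimal sketch of driving the volume meter with hark instead of ScriptProcessorNode; the dB-to-level mapping and callback shape are illustrative:

```typescript
import hark from 'hark';

// Emits a 0..1 level for the meter whenever hark reports a volume change.
function trackVolume(stream: MediaStream, onLevel: (level: number) => void): () => void {
  const speechEvents = hark(stream, { interval: 100 });

  // hark reports volume in dB (negative, 0 = loudest); map it to a 0..1 range.
  speechEvents.on('volume_change', (volumeDb: number) => {
    onLevel(Math.min(Math.max((volumeDb + 100) / 100, 0), 1));
  });

  // Stop the analyser when the AudioSettings view unmounts.
  return () => speechEvents.stop();
}
```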