* Send custom user data to akka apps
* Add user custom data to registered user
* Add user custom data to user join event
* Store user custom data in Redis
* Rename userCustomData to customParameters
* Rename XML tag to userdata
* Demo changes
* Revert "feat(captions): no longer writes in the pad"
This reverts commit a76de8c458.
* feat(transcription): Add config options for the transcription backend
* feat(transcription): Add autodetect option to cc chevron
* feat(transcription): Move transcription options into settings modal
* feat(transcription): Set transcription options via userdata
* fix(transcription): Correct userdata for settings transcription params
* feat(transcriptions): options to auto enable caption button
* feat(transcriptions): Option to hide old CC pad functionality
* fix(transcription): Fix PR comments
* fix(transcription): Refactor updateTranscript to prevent null user and make it more readable
* feat(transcription): bbb_transcription_provider can be set via userdata
* fix(transcription): Use base10 for parseInt
* fix(transcriptions): Fix CC language divider when using webspeech
* fix(transcriptions): Use a default pad in the settings instead of hardcoding 'en'
We still need to use a language pad such as 'en', but in the future we can better
separate these systems.
* fix(transcription): Add a special permission for automatic transcription updates to the pad and restore old per user updates permission
* feature(transcriptions): Include transcriptions submenu and locales
* chore: bump bbb-transcription-controller to v0.2.0
* fix(transcription): Add missing menu files
* fix(transcription): Fix transcription provider options in settings.yml
* fix: setting password for bbb-transcription-controller
* build: add gladia-proxy.log for transcription-controller
* fix(transcriptions): Remove transcript splitting and floor logic from akka apps
* fix(captions): Show long utterances as split captions, show multiple speaker captions
* chore: bump bbb-transcription-controller to 0.2.1
---------
Co-authored-by: Anton Georgiev <anto.georgiev@gmail.com>
If the autoplay block is triggered in listen only mode, the connection timer
keeps ticking even if the user correctly accepts the audio play prompt.
That causes an audio re-connect once the timeout expires.
Clear the connection timer if the audio bridge starts with
NotAllowedError as a soft error. For connection purposes, the audio join
procedure worked; the autoplay prompt is a UI/UX concern, not a WebRTC one.
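A minimal sketch of the idea, assuming a callback-based bridge start path (the handler and field names here are illustrative, not the actual bridge API):

```typescript
// Illustrative sketch only: the handler name and error payload shape are
// assumptions, not the actual SFU audio bridge API.
type BridgeError = { name?: string; soft?: boolean };

let connectionTimer: ReturnType<typeof setTimeout> | null = null;

function clearConnectionTimer(): void {
  if (connectionTimer !== null) {
    clearTimeout(connectionTimer);
    connectionTimer = null;
  }
}

function onBridgeStarted(error?: BridgeError): void {
  // An autoplay block (NotAllowedError) means the transport is up; only
  // playback is gated by the browser. Treat it as a soft error and stop the
  // timer so a later user gesture does not race against a bogus timeout.
  if (!error || (error.name === 'NotAllowedError' && error.soft)) {
    clearConnectionTimer();
    return;
  }
  // Hard errors keep the timer running so the usual timeout/retry path applies.
}
```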
This is an initial, experimental implementation of the feature proposed in
https://github.com/bigbluebutton/bigbluebutton/issues/14021.
The intention is to phase out the explicit listen only mode with two
overarching goals:
- Reduce UX friction and increase familiarity: the existence of a separate
listen only mode is a source of confusion for the majority of users.
- Reduce average server-side CPU usage while also making full audio-only
meetings possible.
The proof-of-concept is based on the assumption that a "many
concurrent active talkers" scenario is both rare and not useful. With
that in mind, this includes two server-side triggers (sketched after the list below):
- On microphone inactivity (currently a mute action sustained for
4 seconds, configurable): FreeSWITCH channels are held (which translates
to much lower CPU usage, virtually 0%). Receiving channels are switched,
server side, to a listening mode (SFU, mediasoup).
* This required an extension to mediasoup to allow re-assigning producers
to already established consumers. No re-negotiation is done.
- On microphone activity (currently an unmute action, immediate):
FreeSWITCH channels are unheld, listening mode is deactivated and the
mute state is updated accordingly (in this order).
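A rough sketch of those two triggers under the assumptions above (the helper functions and the 4-second constant are hypothetical stand-ins for the real FreeSWITCH hold/unhold and mediasoup calls):

```typescript
// Illustrative sketch of the mute/unmute triggers; the helpers below are
// hypothetical stand-ins for the actual server-side FreeSWITCH/mediasoup calls.
const MUTE_HOLD_DELAY_MS = 4000; // configurable inactivity window

const holdChannel = (userId: string): void => { /* hold the FreeSWITCH channel (~0% CPU) */ };
const unholdChannel = (userId: string): void => { /* resume the FreeSWITCH channel */ };
const setListeningMode = (userId: string, enabled: boolean): void => { /* switch receiving channels server side (SFU, mediasoup) */ };
const syncMuteState = (userId: string, muted: boolean): void => { /* propagate the mute state */ };

const holdTimers = new Map<string, ReturnType<typeof setTimeout>>();

function onMuteStateChange(userId: string, muted: boolean): void {
  const pending = holdTimers.get(userId);
  if (pending) {
    clearTimeout(pending);
    holdTimers.delete(userId);
  }

  if (muted) {
    // Inactivity trigger: hold only if the mute is sustained for the whole window.
    holdTimers.set(userId, setTimeout(() => {
      holdTimers.delete(userId);
      holdChannel(userId);
      setListeningMode(userId, true);
    }, MUTE_HOLD_DELAY_MS));
  } else {
    // Activity trigger: unhold, leave listening mode, then update the mute state
    // (in this order).
    unholdChannel(userId);
    setListeningMode(userId, false);
    syncMuteState(userId, false);
  }
}
```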
This is *off by default*. It needs to be enabled in two places:
- `/etc/bigbluebutton/bbb-webrtc-sfu/production.yml` ->
`transparentListenOnly: true`
- End users:
* Server wide: `/etc/bigbluebutton/bbb-html5.yml` ->
`public.media.transparentListenOnly: true`
* Per user: `userdata-bbb_transparent_listen_only=true`
The refactoring in 838accf015 replaced the wrong parseMessage function
in addBulkGroupChatMsgs.js.
The bug is only triggered when the option public.chat.bufferChatInsertsMs != 0.
SFU-based audio is missing connection timers, which means the join
procedure can go on indefinitely in a couple of scenarios.
Refactor the connection timers added for re-connections in the SFU audio
bridge and make them valid for the first try as well.
Make 1010 errors (connection timeout) retriable when retryThroughRelay
is enabled.
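A condensed sketch of that pattern, with the timer wrapping every join attempt including the first one (the timeout value and error class are illustrative, not the actual bridge code):

```typescript
// Sketch: every join attempt, first try included, races against a connection
// timer; names and values are illustrative.
const CONNECTION_TIMEOUT_MS = 15000;

class AudioJoinTimeoutError extends Error {
  errorCode = 1010; // connection timeout
}

async function joinWithTimeout(join: () => Promise<void>): Promise<void> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(
      () => reject(new AudioJoinTimeoutError('audio join timed out')),
      CONNECTION_TIMEOUT_MS,
    );
  });

  try {
    await Promise.race([join(), timeout]);
  } finally {
    clearTimeout(timer);
  }
}
```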
1007 errors are still a large fraction of our overall audio join error
rate. This usually indicates some sort of firewall block or UDP issues
on carrier networks. I can't figure out why some scenarios won't trickle
down to relay candidates, though - I'm leaning toward scenarios where STUN
packets with USE-CANDIDATE are being mangled/lost along the way, or
something else that borks the (already fragile) connectivity checks for
ICE-lite implementations.
Add a new feature called retryThroughRelay which triggers a retry with
iceTransportPolicy=relay whenever audio fails to join with a 1007 error.
The goal is to force relay usage to try and bypass the 1007 scenarios that
still happen.
Disabled by default.
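Conceptually, the fallback looks something like the sketch below (the join signature and flag wiring are assumptions; only the error codes and iceTransportPolicy=relay come from the description above):

```typescript
// Sketch of the relay fallback; joinAudio is a placeholder for the bridge's
// join routine and RETRY_THROUGH_RELAY mirrors the (disabled by default) flag.
const RETRY_THROUGH_RELAY = false;
const RETRIABLE_ERRORS = [1007 /* ICE negotiation failure */, 1010 /* connection timeout */];

type JoinOptions = { iceTransportPolicy?: RTCIceTransportPolicy };

async function joinAudio(options: JoinOptions = {}): Promise<void> {
  // Placeholder: establish the SFU audio connection with the given ICE policy.
}

async function joinAudioWithRelayFallback(): Promise<void> {
  try {
    await joinAudio();
  } catch (error: any) {
    if (RETRY_THROUGH_RELAY && RETRIABLE_ERRORS.includes(error?.errorCode)) {
      // Force TURN usage to route around mangled/blocked host and srflx checks.
      await joinAudio({ iceTransportPolicy: 'relay' });
      return;
    }
    throw error;
  }
}
```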