* Update en.json
* Update settings.yml
* Create transcriptionLocale.ts
* Update component.tsx
* Update component.tsx
* Revert IN -> ID
Because it will be fixed in the main repository
* let -> const message
British -> GB
* Refactor audio captions messages and locales to fix issues reported by TypeScript code validation
---------
Co-authored-by: Ramón Souza <contato@ramonsouza.com>
* Add: new connection close error messages
* Fix: TS type assertion
* Fix: Restore message description
* Add: Locale for server closed connection event
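As a point of reference, client-side error strings like these are declared with react-intl's `defineMessages`; a minimal sketch, assuming an illustrative message ID rather than the actual locale key:

```typescript
import { defineMessages } from 'react-intl';

// The ID and wording below are placeholders; the real key lives in en.json.
const intlMessages = defineMessages({
  serverClosedConnection: {
    id: 'app.error.connection.serverClosed',
    description: 'Message shown when the server closes the connection',
    defaultMessage: 'The server closed the connection',
  },
});
```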
Creates a new field called 'other' inside the Meeting object, which is
included in the activities JSON file. This field now includes two new
metadata attributes: 'learning-dashboard-learn-more-link' and 'learning-
dashboard-feedback-link'. These attributes define the URLs for the 'Learn more'
and 'Feedback' anchor tags in the learning dashboard page. The URLs may
vary depending on the customer/institution, hence the need for metadata.
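A minimal sketch of how such metadata could be consumed, assuming a hypothetical helper; only the two attribute names come from this change:

```typescript
// Hypothetical helper -- only the metadata keys are taken from the change.
type MeetingMetadata = Record<string, string>;

function getDashboardLinks(metadata: MeetingMetadata) {
  return {
    learnMoreLink: metadata['learning-dashboard-learn-more-link'],
    feedbackLink: metadata['learning-dashboard-feedback-link'],
  };
}
```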
feat(learning-dashboard): Add 'learn more' and 'feedback' phrases and respective links
Adds two new phrases to the learning dashboard page. These phrases have
embedded links that are defined by metadata: 'learning-dashboard-feedback-link'
and 'learning-dashboard-learn-more-link'.
chore: add pt_BR locales to the learning dashboard
Adds pt_BR locales for the new 'feedback' and 'learn more' phrases in the
learning dashboard page.
fix(learning-dashboard): akka error
* feat(screenshare): Option to show disabled screenshare button for non presenters
* Update bigbluebutton-html5/imports/ui/components/screenshare/service.js
---------
Co-authored-by: Ramón Souza <contato@ramonsouza.com>
* feat(screenshare): add support for troubleshooting links
Adds a settings option to specify a troubleshooting link for each screenshare
error code. When a troubleshooting link exists for the given error, the toast
notification about the error is displayed with a 'Learn more' button that,
when clicked, takes the user to the external link. When there is no link set
for the specific error code, the button is not displayed.
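A sketch of the lookup this describes, assuming a plain error-code-to-URL map in settings (the names below are illustrative):

```typescript
// Illustrative only: the map shape and names are assumptions; the behaviour
// (no link configured -> no 'Learn more' button) is what the change describes.
type TroubleshootingLinks = Record<number, string>;

function buildScreenshareErrorToast(errorCode: number, links: TroubleshootingLinks) {
  const helpLink = links[errorCode];
  return {
    message: `Screenshare failed (error ${errorCode})`,
    learnMore: helpLink
      ? { label: 'Learn more', onClick: () => window.open(helpLink, '_blank') }
      : undefined,
  };
}
```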
* fix(screenshare): change toast type for error code 1136
Changed toast type from 'error' to 'warning' for error code 1136 when
sharing screen. This adjustment was made because error code 1136 is also
returned when the user cancels screen sharing during the tab selection
process. Displaying an error toast in this situation could cause
unnecessary alarm for users, as they were simply canceling an operation.
* fix(notification): help link button element
Uses the button element instead of a div to display the 'Learn more'
help link button.
---------
Co-authored-by: Carlos Henrique <carloshsc1998@gmail.com>
* feat(layout): add propagation toggle
Transforms the 'update everyone' button in the layout modal into a
toggle, so that presenters get immediate visual feedback of the current
layout propagation setting when the modal is opened.
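A minimal sketch of the toggle idea (component and prop names are assumptions): the control is bound to the current propagation state rather than acting as a one-shot button.

```tsx
import React from 'react';

// Names are illustrative; only the behaviour mirrors the change description.
interface PushLayoutToggleProps {
  pushLayout: boolean;                        // current propagation setting
  onChange: (pushLayout: boolean) => void;
}

const PushLayoutToggle: React.FC<PushLayoutToggleProps> = ({ pushLayout, onChange }) => (
  <label>
    <input
      type="checkbox"
      checked={pushLayout}
      onChange={(e) => onChange(e.target.checked)}
    />
    Update to everyone
  </label>
);

export default PushLayoutToggle;
```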
* fix: update propagation button locale to 'update to everyone'
* test: update layout test
---------
Co-authored-by: Anton B <antonbsa.bck@gmail.com>
UI team suggested a few adjustments to the audio settings modal:
- Larger (24px/1.5rem) margin between content and headers
- Rephrasing of modal title, subtitle and volume indicator label
- Change the "audio feedback" button to an outline- or link-styled
button (there are currently two primary buttons and we want users to
focus on the "Join audio" one)
Implement the suggested changes. The approach for the audio feedback
button is link-styled.
When going from "no mic" -> mic via the unmute action, the client isn't
unmuting itself after confirming the change. This is caused by not
waiting for the liveChangeInputDevice method (which returns a Promise) to
resolve before unmounting the AudioSettings modal -- the one
responsible for triggering the unmute. Since it unmounts before the
device is changed, the unmute action will be ignored because the device
is still "listen-only" (no mic).
Properly unmute audio when transitioning from "no mic" -> "mic" via the
unmute trigger by waiting for liveChangeInputDevice to resolve.
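In outline, the fix is an ordering change: await the device switch before unmuting and unmounting. A sketch, where only `liveChangeInputDevice` is a name from this change and the rest is illustrative:

```typescript
// Sketch of the ordering fix; callbacks other than liveChangeInputDevice are
// illustrative placeholders.
async function confirmAudioSettings(
  liveChangeInputDevice: (deviceId: string) => Promise<void>,
  deviceId: string,
  unmute: () => void,
  closeModal: () => void,
) {
  await liveChangeInputDevice(deviceId); // previously not awaited
  unmute();      // now runs after the new input device is active
  closeModal();  // unmount only once the change has resolved
}
```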
Additionally, some general improvements to UI/UX:
- Display the AudioSettings modal title when gUM is on prompt mode
- Add specific subtitles to the AudioSettings modal to 1) warn that no
mic is selected 2) Give a hint that the user can test their devices
- Always honor settings.yml's "initialHearingState" state (whether
local echo feedback should be played by default in AudioSettings)
We are missing a way to select transcription languages in some
scenarios, e.g.: listenOnlyMode=false. The audio settings UI is also not
handling item disposition very well on smaller devices.
This commit does the following to improve those blind spots:
- Add the transcription language selector to it whenever applicable
- Add proper styling to the transcription selector
- Handle small screens by changing the disposition of elements to
portrait mode
- Improve how elements are disposed to a more familiar view: Mic ->
Activity Indicator; Speaker -> Speaker test. This is more in line
with how other platforms do audio configuration/pre-flight screens.
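A sketch of the portrait fallback, assuming styled-components and an arbitrary breakpoint:

```typescript
import styled from 'styled-components';

// Breakpoint and component name are assumptions; only the row -> column
// disposition change reflects the description above.
const SMALL_VIEWPORT = '40em';

export const AudioSettingsGrid = styled.div`
  display: flex;
  flex-direction: row; /* Mic -> activity indicator; Speaker -> speaker test */
  gap: 1.5rem;

  @media (max-width: ${SMALL_VIEWPORT}) {
    flex-direction: column; /* portrait disposition on small screens */
  }
`;
```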
This is a rework of the audio join procedure without the explicit listen
only separation in mind. It's supposed to be used in conjunction with
the transparent listen only feature so that the distinction between
modes is seamless with minimal server-side impact. An abridged list of
changes:
- Let the user pick no input device when joining microphone while
allowing them to set an input device on the fly later on
- Give the user the option to join audio with no input device whenever
we fail to obtain input devices, with the option to try re-enabling
them on the fly later on
- Add the option to open the audio settings modal (echo test et al)
via the in-call device selection chevron
- Rework the SFU audio bridge and its services to support
adding/removing tracks on the fly without renegotiation
- Rework the SFU audio bridge and its services to support a new peer
role called "passive-sendrecv". That role is used by duplex peers
that have no active input source on start, but might have one later
on.
- Remove stale PermissionsOverlay component from the audio modal
- Rework how permission errors are detected using the Permissions API
- Rework the local echo test so that it uses a separate media tag
rather than the remote
- Add new, separate dialplans that mute/hold FreeSWITCH channels on
hold based on UA strings. This is orchestrated server-side via
webrtc-sfu and akka-apps. The basic difference here is that channels
now join in their desired state rather than waiting for client side
observers to sync the state up. It also mitigates transparent listen
only performance edge cases on multiple audio channels joining at
the same time.
The old, decoupled listen only mode is still present in code while we
validate this new approach. To test this, transparentListenOnly
must be enabled and listen only mode must be disabled on audio join so
that the user skips straight through microphone join.
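The on-the-fly track handling leans on a standard WebRTC mechanism rather than renegotiation: reuse an existing audio sender and swap its track with `replaceTrack`. The sketch below illustrates that general technique only; it is not the actual SFU bridge code.

```typescript
// General technique: a peer can start "passive-sendrecv" (sendrecv transceiver,
// no capture track) and gain/lose a microphone later without renegotiation.
async function attachMicrophone(pc: RTCPeerConnection, deviceId?: string) {
  const stream = await navigator.mediaDevices.getUserMedia({
    audio: deviceId ? { deviceId: { exact: deviceId } } : true,
  });
  const [track] = stream.getAudioTracks();
  const sender = pc.getSenders().find((s) => !s.track || s.track.kind === 'audio');
  if (sender) await sender.replaceTrack(track);
}

async function detachMicrophone(pc: RTCPeerConnection) {
  const sender = pc.getSenders().find((s) => s.track?.kind === 'audio');
  if (sender?.track) sender.track.stop();
  if (sender) await sender.replaceTrack(null);
}
```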
* feat(graphql-server): add new view `v_chat_private_read_feedback`
Adds a view called `v_chat_private_read_feedback` to retrieve the last seen time
of the recipient of a private chat.
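A hedged example of how a client might read the new view through GraphQL; only the view name comes from this change, while the generated root field, arguments, and selected column are assumptions:

```typescript
import { gql } from '@apollo/client';

// Field and argument names are assumptions about the generated schema;
// only `v_chat_private_read_feedback` is taken from the change.
export const PRIVATE_CHAT_READ_FEEDBACK = gql`
  subscription ChatPrivateReadFeedback($chatId: String!) {
    v_chat_private_read_feedback(where: { chatId: { _eq: $chatId } }) {
      lastSeenAt
    }
  }
`;
```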
* refactor(chat): removes unused prop `lastSeenAt`
* feat(private-chat): message read confirmation feedback
Adds message read confirmation feedback feature to private chats.
This feature uses the private chat recipient's `lastSeenAt` attribute to
check which messages were read. Read messages are shown in the chat with
a check icon next to them.
Feature behind a flag in settings.yml, which is disabled by default:
- `public.chat.privateMessageReadFeedback.enabled`
* fix(chat): poll chart message
Fixes poll chart message which was not using the full chat width due to
previous changes in chat messages `flex-direction`.
* fix: adds missing initial value for `privateMessageReadFeedback`
* fix: linter errors
* fix(chat): add `recipientHasSeen` property to existing view
This commit changes the way messages read by the recipient are
tracked. The previous strategy required the client to calculate the read
messages, and as a consequence all messages of the given chat
were re-rendered every time the recipient's `lastSeenAt` time
changed. The current strategy calculates the read messages
on the server (based on the recipient's `lastSeenAt`) and exposes to the
client a boolean (`recipientHasSeen`) for each message that indicates
whether it has already been read.
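On the client side, each message can then render its read indicator from the flag alone; a sketch with illustrative component and icon names, where only `recipientHasSeen` comes from this change:

```tsx
import React from 'react';

// Component and icon class are assumptions; `recipientHasSeen` is the flag
// exposed by the view.
interface MessageItemProps {
  text: string;
  recipientHasSeen: boolean;
}

const MessageItem: React.FC<MessageItemProps> = ({ text, recipientHasSeen }) => (
  <span>
    {text}
    {recipientHasSeen && <i className="icon-check" aria-label="Read by recipient" />}
  </span>
);

export default MessageItem;
```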
* fix: typo in message description
* fix: typo in settings flag
* fix: vertically align icon
* feat(html5): initial implementation of Gladia transcriptions to BBB 3.0
* fix(transcription): Add missing locales and fix invalid cc menu key
* fix(bbb-transcription-controller): Bump transcription controller to fix some bugs
* fix: adjust yq syntax for setting the FreeSWITCH ESL password in transcription-controller
* fix(transcription): Use newer useSettings format from transcription options
* fix(captions): Correctly use captions settings
---------
Co-authored-by: João Victor <joaovictornunes973@gmail.com>
Co-authored-by: Anton Georgiev <anto.georgiev@gmail.com>
Co-authored-by: Ramón Souza <contato@ramonsouza.com>
The audio troubleshooting modal has very microphone-specific strings,
which might confuse users trying to join listen only.
Review the Help screen so that listen only scenarios are more generic.
As a bonus, review the unknownError locale with more actionable text.
- Adds a new Help view for unknown error codes
- Correctly detect NotAllowedError (permissions) - they are currently
being treated like unknown errors in the Help modal
- Rephrase NotAllowedError help text; make it more succinct and direct
- Rephrase the unknown error help text; make it more succinct and direct
- Add error code and message to that view
- Add public.media.audioTroubleshootingLinks to allow referencing KB
links on the Help modal
- See inline docs
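A sketch of how the Help modal might pick its view and knowledge-base link; only the setting name `public.media.audioTroubleshootingLinks` and the NotAllowedError/unknown-error split come from this change, everything else is an assumption:

```typescript
// Hedged sketch: the shape of the configured links and this helper are
// assumptions made for illustration.
type AudioTroubleshootingLinks = Record<string, string>;

function resolveAudioHelp(
  error: { name?: string; code?: number; message?: string },
  links: AudioTroubleshootingLinks,
) {
  const isPermissionError = error.name === 'NotAllowedError';
  const key = error.name ?? String(error.code ?? 'unknown');
  return {
    view: isPermissionError ? 'permissions-help' : 'unknown-error-help',
    kbLink: links[key],                                   // undefined -> no link shown
    details: [error.code, error.message].filter(Boolean).join(': '),
  };
}
```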