Only the smart layout takes screen sharing/external video states into
account when populating its initial state. The others don't, which causes
odd issues when switching back and forth between layout types because
their input states become inconsistent. For example, with screen sharing
active, transitioning from Smart -> Custom would mark it as inactive
(since it is absent from the initial state) and pollute the state for
subsequent layouts.
This commit guarantees those features are taken into account when
populating initial input states for Focus On*/Custom layouts.
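A minimal sketch of the idea, with illustrative names (the client's actual
input-state shape may differ): every layout seeds its initial input from the
current screen-share/external-video status instead of defaulting them to false.

    interface LayoutInput {
      screenShare: { hasScreenShare: boolean };
      externalVideo: { hasExternalVideo: boolean };
    }

    // Called whenever a layout (Smart, Focus On*, Custom) builds its initial
    // input state, so an active screen share survives a Smart -> Custom switch.
    function buildInitialInput(
      hasScreenShare: boolean,
      hasExternalVideo: boolean,
    ): LayoutInput {
      return {
        screenShare: { hasScreenShare },
        externalVideo: { hasExternalVideo },
      };
    }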
Mobile endpoints are flaky with the WebSpeechAPI:
- iOS versions that support it break our outbound audio when it's
enabled
- Android speech recognition has flaky locale detection and speech
transcription
Additionally, the support check does not verify WebSpeechAPI availability
properly, so older devices (e.g. iOS 12) are flagged as supported even
though they aren't.
This commit adds a configuration flag (public.audioCaptions.mobile) to
control transcription availability on mobile. False by default.
Also extends the setSpeechVoices support check and
hasSpeechRecognitionSupport method to prevent false positives.
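A hedged sketch of the stricter check; Settings and isMobile stand in for the
client's own helpers and are assumptions here, not the exact code:

    declare const Settings: { public?: { audioCaptions?: { mobile?: boolean } } };
    declare function isMobile(): boolean;

    function hasSpeechRecognitionSupport(): boolean {
      // public.audioCaptions.mobile gates transcription on mobile (false by default).
      const enabledOnMobile = Settings.public?.audioCaptions?.mobile ?? false;
      if (isMobile() && !enabledOnMobile) return false;

      // Probe the actual API objects so older devices (e.g. iOS 12) that lack
      // them are no longer reported as supported.
      const w = window as any;
      const hasRecognition = typeof w.SpeechRecognition !== 'undefined'
        || typeof w.webkitSpeechRecognition !== 'undefined';
      const hasSynthesis = typeof w.speechSynthesis !== 'undefined';

      return hasRecognition && hasSynthesis;
    }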
Adds two new flags to the settings file that change the way the locale
flag is used:
- forceLocale: (true/false) => If true, forces the transcription
language to the value of the locale content field and skips the language
selector in the audio modal.
- defaultSelectLocale: (true/false) => If true, the default selected
value in the audio modal's language-selector dropdown is defined by the
locale content field.
In either case, if the locale flag holds an invalid value, it defaults to
disabled.
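A rough sketch of how the two flags could interact (field names and the
return shape are illustrative, not the actual client code):

    interface CaptionsConfig {
      locale: string;              // BCP 47 tag from the content settings
      forceLocale: boolean;
      defaultSelectLocale: boolean;
    }

    function resolveSpeechLocale(cfg: CaptionsConfig, supported: string[]) {
      const localeIsValid = supported.includes(cfg.locale);

      if (cfg.forceLocale) {
        // Force the transcription language and hide the selector in the audio
        // modal; an invalid locale falls back to disabled ('').
        return { locale: localeIsValid ? cfg.locale : '', showSelector: false };
      }

      // Otherwise show the selector; pre-select the configured locale only when
      // defaultSelectLocale is set and the value is valid ('' means disabled).
      return {
        locale: cfg.defaultSelectLocale && localeIsValid ? cfg.locale : '',
        showSelector: true,
      };
    }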
Move the language collection to the HTML settings file. This data defines
the languages available for the speech API.
These language tags are used to filter SpeechSynthesis' API `getVoices`
result. Tags must use BCP 47 format.
https://developer.mozilla.org/en-US/docs/Web/API/SpeechSynthesisVoice/lang
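A short sketch of the filtering, with example BCP 47 tags standing in for the
actual list in the settings file:

    const configuredLanguages: string[] = ['en-US', 'es-ES', 'pt-BR']; // example tags only

    function getSupportedVoices(): SpeechSynthesisVoice[] {
      return window.speechSynthesis
        .getVoices()
        .filter((voice) => configuredLanguages.some(
          (tag) => voice.lang.toLowerCase() === tag.toLowerCase(),
        ));
    }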
Avoid enabling audio transcription if the browser's vendor does not provide
voice data.
This should prevent false positives for browsers such as Chromium and
Brave.
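The guard could look roughly like this (the helper name is illustrative; the
real check may also wait for the voiceschanged event before deciding):

    function hasSpeechVoices(): boolean {
      // Chromium builds and Brave ship the API but no voices, so an empty
      // getVoices() result means transcription should stay disabled.
      return window.speechSynthesis.getVoices().length > 0;
    }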
Parse the audio transcript before broadcasting its content back to the
client and the recording actor. Limit it to 8 words per line and a maximum
of 2 lines to avoid CPU-intensive operations on this recurring event.
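A rough sketch of the capping, assuming the trailing lines are the ones kept
(the commit only specifies the limits, so the function name and that choice
are assumptions):

    function formatTranscript(
      transcript: string,
      wordsPerLine = 8,
      maxLines = 2,
    ): string {
      const words = transcript.trim().split(/\s+/).filter(Boolean);
      const lines: string[] = [];

      // Chunk the transcript into lines of at most `wordsPerLine` words.
      for (let i = 0; i < words.length; i += wordsPerLine) {
        lines.push(words.slice(i, i + wordsPerLine).join(' '));
      }

      // Keep only the last `maxLines` lines before broadcasting.
      return lines.slice(-maxLines).join('\n');
    }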
Replace the Calibri font family with Verdana to improve character spacing,
add relative sizing to the text content, and add background padding.
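A minimal sketch of the styling, assuming styled-components and purely
illustrative values:

    import styled from 'styled-components';

    const CaptionText = styled.span`
      font-family: Verdana, Arial, sans-serif; /* replaces Calibri */
      font-size: 1.5rem;                       /* relative sizing */
      background: rgba(0, 0, 0, 0.8);          /* background behind the text */
      padding: 0.25rem 0.5rem;                 /* background padding */
      color: #fff;
    `;

    export default CaptionText;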
Add a server-side app for the audio captions feature and record proto-events
for this data.
As it is, it only behaves as a pass-through module; the idea is to house all
of the feature's business logic in this app.
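A language-agnostic sketch of the pass-through behaviour (written in
TypeScript here for brevity; names, shapes, and the actual server-side
implementation language are assumptions):

    interface TranscriptMessage {
      meetingId: string;
      userId: string;
      locale: string;
      transcript: string;
    }

    function handleTranscript(
      msg: TranscriptMessage,
      broadcast: (m: TranscriptMessage) => void,
      recordProtoEvent: (name: string, payload: TranscriptMessage) => void,
    ): void {
      // No business logic yet: record the proto-event and forward the message.
      recordProtoEvent('AudioCaptionTranscript', msg);
      broadcast(msg);
    }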