Smart layout (and others) presumes screen sharing will always use 100%
of the media area's width. That forces cameras to always be positioned on
top, which is not always the optimal position depending on the viewport
and the stream's aspect ratio/resolution - so space is wasted.
This commit uses the actual screen sharing video size as provided by
HTMLVideoElement's videoWidth/videoHeight properties. The calculation uses
the same logic as the one used for presentation/slides, which should keep
it familiar.
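A minimal sketch of that fitting calculation - helper and parameter names here are illustrative, not the actual layout code:

```typescript
// Fit the screen share into the media area while preserving the stream's
// intrinsic aspect ratio, the same way slide sizes are fitted.
function fitScreenShare(
  video: HTMLVideoElement,
  mediaWidth: number,
  mediaHeight: number,
): { width: number; height: number } {
  // videoWidth/videoHeight report the intrinsic size of the decoded stream;
  // they are 0 until metadata is available, so fall back to the full area.
  if (!video.videoWidth || !video.videoHeight) {
    return { width: mediaWidth, height: mediaHeight };
  }

  const videoRatio = video.videoWidth / video.videoHeight;
  const areaRatio = mediaWidth / mediaHeight;

  // Wider than the area: constrain by width; otherwise constrain by height.
  return videoRatio > areaRatio
    ? { width: mediaWidth, height: mediaWidth / videoRatio }
    : { width: mediaHeight * videoRatio, height: mediaHeight };
}
```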
There's also a handler for HTMLVideoElement's `resize` event in browsers
that support it, which enables handling of variable-sized screen sharing
streams. That handler is debounced at 500 ms to prevent excessive CPU use.
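A rough sketch of that listener, assuming a lodash-style `debounce` and illustrative function names:

```typescript
import { debounce } from 'lodash';

const RESIZE_DEBOUNCE_MS = 500;

// Browsers that do not emit `resize` on HTMLVideoElement simply never invoke
// the handler, so this degrades gracefully.
function observeScreenShareSize(
  video: HTMLVideoElement,
  onSizeChange: (width: number, height: number) => void,
): () => void {
  const handleResize = debounce(() => {
    onSizeChange(video.videoWidth, video.videoHeight);
  }, RESIZE_DEBOUNCE_MS);

  video.addEventListener('resize', handleResize);

  // Return a cleanup function so the listener can be removed on unmount.
  return () => {
    handleResize.cancel();
    video.removeEventListener('resize', handleResize);
  };
}
```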
Extra testing is needed with the widest possible range of
browsers/environments and feature combinations.
Only Smart layout takes the screen sharing/external video states into
account when populating its initial state. The others don't, which causes
some weird issues when switching back and forth between layout types
because their input states become inconsistent - e.g. with screen sharing
active, transitioning from Smart -> Custom would mark it as false (due to
its absence from the initial state) and pollute the state for subsequent
layouts.
This commit guarantees those features are taken into account when
populating initial input states for Focus On*/Custom layouts.
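Illustratively, the initial input is built from the live feature states rather than hardcoded defaults - the shape and field names below are assumptions, not the actual layout manager code:

```typescript
// Simplified stand-in for the layout input state relevant to this change.
interface LayoutInput {
  screenShare: { hasScreenShare: boolean };
  externalVideo: { hasExternalVideo: boolean };
}

// Seed the initial input from the current feature states so that switching
// layouts does not silently reset an active screen share/external video.
function buildInitialInput(
  screenShareActive: boolean,
  externalVideoActive: boolean,
): LayoutInput {
  return {
    screenShare: { hasScreenShare: screenShareActive },
    externalVideo: { hasExternalVideo: externalVideoActive },
  };
}
```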
Mobile endpoints are flaky with the WebSpeechAPI:
- iOS versions that support it are borking our outbound audio when it's
enabled
- Android speech recognition has flaky locale detection and speech
transcription
Additionally, the support check does not verify WebSpeechAPI
availability properly, so older devices (e.g. iOS 12) are flagged as
supported even though they aren't.
This commit adds a configuration flag (public.audioCaptions.mobile) to
control transcription availability on mobile. False by default.
It also extends the setSpeechVoices support check and the
hasSpeechRecognitionSupport method to prevent false positives.
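A hedged sketch of what the tightened check could look like - `isMobile` and the settings lookup are placeholders for whatever the client already provides; only the flag name comes from this commit:

```typescript
// Placeholders for existing client helpers (assumed, not actual names).
declare function isMobile(): boolean;
declare function getSetting(path: string): boolean;

function hasSpeechRecognitionSupport(): boolean {
  const SpeechRecognitionAPI = (window as any).SpeechRecognition
    || (window as any).webkitSpeechRecognition;

  // Require both recognition and synthesis, so devices that only expose part
  // of the WebSpeechAPI (e.g. iOS 12) stop being flagged as supported.
  if (typeof SpeechRecognitionAPI === 'undefined') return false;
  if (typeof window.speechSynthesis === 'undefined') return false;

  // Mobile availability is gated behind public.audioCaptions.mobile
  // (false by default).
  if (isMobile() && !getSetting('public.audioCaptions.mobile')) return false;

  return true;
}
```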
Adds two new flags to the settings file which change the way the locale
flag is used:
- forceLocale: (true/false) => If true, enforces the transcription
  language to be the locale content field and skips the language selector
  in the audio modal.
- defaultSelectLocale: (true/false) => If true, the default selected
  value in the dropdown language selector in the audio modal is defined
  by the locale content field.
In either case, if the locale flag holds an invalid value, it defaults to
disabled.
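An illustrative sketch of how the two flags could resolve the selected language - names beyond forceLocale/defaultSelectLocale are assumptions:

```typescript
interface CaptionsSettings {
  locale: string; // locale content field from the settings file
  forceLocale: boolean;
  defaultSelectLocale: boolean;
}

function resolveCaptionsLocale(
  settings: CaptionsSettings,
  availableLocales: string[],
): { selected: string | null; showSelector: boolean } {
  const localeIsValid = availableLocales.includes(settings.locale);

  // Invalid locale value: both flags behave as disabled.
  if (!localeIsValid) return { selected: null, showSelector: true };

  if (settings.forceLocale) {
    // Enforce the configured locale and skip the selector in the audio modal.
    return { selected: settings.locale, showSelector: false };
  }

  if (settings.defaultSelectLocale) {
    // Pre-select the configured locale, but keep the selector visible.
    return { selected: settings.locale, showSelector: true };
  }

  return { selected: null, showSelector: true };
}
```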
Move the language collection to the HTML settings file. This data defines
the languages available for the speech API.
These language tags are used to filter the SpeechSynthesis API's
`getVoices` result. Tags must use the BCP 47 format.
https://developer.mozilla.org/en-US/docs/Web/API/SpeechSynthesisVoice/lang
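A small sketch of that filtering step, assuming the tag list is read from the settings file:

```typescript
function getSupportedVoices(languageTags: string[]): SpeechSynthesisVoice[] {
  const voices = window.speechSynthesis.getVoices();

  // SpeechSynthesisVoice.lang is a BCP 47 tag (e.g. 'en-US'), so an exact or
  // prefix match keeps regional variants of a configured language.
  return voices.filter((voice) => languageTags.some(
    (tag) => voice.lang === tag || voice.lang.startsWith(`${tag}-`),
  ));
}
```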