Adds two new messages that clear the users' emoji statuses. These messages
enable this task to be done in a single call, instead of triggering one
method call per user.
'ClearAllUsersEmojiCmdMsg' is sent from Meteor to Akka and updates all the
emoji states in the users model.
'ClearedAllUsersEmojiEvtMsg' is sent from Akka to Meteor and triggers the
Mongo collection update.
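A rough sketch of the message pair, assuming typical Cmd/Evt payload shapes (the field names below are illustrative, not the exact payloads):

```typescript
// Illustrative message shapes (assumed fields, not the exact payloads).
interface ClearAllUsersEmojiCmdMsg {
  name: 'ClearAllUsersEmojiCmdMsg'; // Meteor -> Akka: clears emoji in the users model
  body: { meetingId: string; requestedBy: string };
}

interface ClearedAllUsersEmojiEvtMsg {
  name: 'ClearedAllUsersEmojiEvtMsg'; // Akka -> Meteor: triggers the Mongo collection update
  body: { meetingId: string };
}
```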
This commit makes the timer feature's messages be proxied by akka-apps
and adds a timer model that is updated based on these messages.
Moving the timer panel opening logic to the timer button component in
the navigation panel was a consequence of these changes.
Changes the timer auto-stop threshold from 90% to 100% of the user count,
prioritizing the timer alert sounding for all users over the timer state
being consistent.
Also puts the user count fetch back where it was, to avoid a race
condition where the number of users when setting the observer is
different from the number when the timer ends (i.e. users have joined or
left the meeting while the timer was running), causing the timer not to
stop or to stop prematurely.
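A minimal sketch of the resulting stop condition, assuming hypothetical names (the actual model code differs):

```typescript
// Hypothetical condition: stop only once 100% of the online users have
// notified that the timer ended, so the alert sounds for everyone.
function shouldStopTimer(onlineUserCount: number, endedNotifications: number): boolean {
  return onlineUserCount > 0 && endedNotifications >= onlineUserCount;
}
```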
There was a legacy attribute being used to find active users in the meeting.
This wrong attribute caused the returned number of users to be 0, which
made the timer stop prematurely and, possibly, not issue the timer beep.
Also adds a missing argument to updateTimer and moves the find call on
the Users collection to an external function, so it doesn't get executed
every time a user notifies that the timer has ended.
Filters the users collection by 'online' connection status and describes
when/how the server detects that the timer has ended in order to stop it
automatically.
Also fixes a corner case where clients didn't notify that the timer had
ended when the timer alarm was disabled.
When a user enters a meeting with music already playing, an event listener
is set up to play the music only after user interaction.
Also, to keep the timer more cohesive and reduce the complexity of the
condition for playing music, the timer automatically stops after 90% of
users notify that the timer has expired.
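A minimal sketch of that listener, assuming a plain HTMLAudioElement and click/keydown as the interaction events (illustrative, not the actual component code):

```typescript
// When autoplay is blocked for a user who joined with music already
// playing, retry playback on the first user interaction.
function playOnInteraction(audio: HTMLAudioElement): void {
  audio.play().catch(() => {
    // Autoplay was blocked: retry once after the first click/keydown.
    const resume = () => {
      audio.play().catch(() => {});
      document.removeEventListener('click', resume);
      document.removeEventListener('keydown', resume);
    };
    document.addEventListener('click', resume);
    document.addEventListener('keydown', resume);
  });
}
```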
Adds a music player for ambient sound that can be turned on and off using a
toggle located inside the timer panel. When the stopwatch is selected, this
toggle is automatically turned off.
Turns the screenshare component into a generic component, so that it can be
used both for the screenshare and camera as content features.
Also changes specific locales and icons for the camera as content feature.
The interactions button is a button that lives in the action bar. It
integrates several features in one place: user reaction, raise hand,
user status (away/not away), and toggling the question panel.
- Adds the user-reaction collection
- Adds an emoji picker for user reactions in the user list
- Adds options to enable/disable user reactions
- Adds a way to pass styles to the emoji-picker component
Fixes a case where the locale selector doesn't show up in Chrome when using
the 'webspeech' provider.
Also adds missing fields to the webspeech transcription messages, after the
addition of some new parameters to those messages in the open
transcription server.
The current Vosk CC provider does not support stereo mic streams
(pending investigation as to why).
This commit makes sure stereo is forcefully disabled via SDP munging,
but only when transcription is active and using Vosk. Having it disabled
on the server side (FreeSWITCH) is not enough because the stereo parameter
is client mandated and replicated by FS in its answer, so we need to
make sure it's always disabled for the time being.
SFU audio does munging server side (and stereo is always off), so no changes
needed there.
The rest of the providers (except WebSpeech) need to be validated against
stereo audio as well.
This is also intended to be temporary - ideally this needs to be fixed in
mod_audio_fork/Vosk/wherever this is breaking.
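A minimal sketch of the munging idea (illustrative; appending `stereo=0` when the parameter is absent entirely is omitted for brevity):

```typescript
// Force any client-mandated stereo=1 back to stereo=0 on the local SDP
// before signaling - applied only when Vosk transcription is active.
function forceMonoOpus(sdp: string): string {
  return sdp.replace(/stereo=1/g, 'stereo=0');
}
```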
Audio state callback and remote media setup both depend on FS's state
(comes through Meteor) and the ICE state (local, peer connection). The
caveat: FS's state can come delayed in reconnection scenarios because
Meteor's websocket generally takes significantly longer to re-connect than
the peer connection, which means the ICE state gets completed way before FS
is flagged as ready.
The practical issue: while outbound audio (client -> FS) will work, inbound
audio (FS -> client) won't _just because it wasn't played_ (even though
data is coming through).
This commit decouples the remote media setup step from the state callback:
- Set up remote media when the ICE state is completed
- Run the state callback only after FS is flagged as ready. This
should keep the UI state consistent across client and server.
Keep in mind the assumption that if FS is ready, ICE is completed by
consequence.
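A sketch of the described gating, with hypothetical helper names:

```typescript
// Illustrative wiring (hypothetical helpers, not the actual client code):
// remote media keys off ICE alone; the state callback waits for FS.
function wireAudioStates(
  pc: RTCPeerConnection,
  attachRemoteMedia: () => void,               // plays the inbound stream
  onFreeswitchReady: (cb: () => void) => void, // fires when FS flags the channel ready
  stateCallback: (state: string) => void,
): void {
  pc.addEventListener('iceconnectionstatechange', () => {
    // Don't wait for FS's state here: it may arrive late on reconnects
    // because Meteor's websocket re-connects slower than the peer connection.
    if (['connected', 'completed'].includes(pc.iceConnectionState)) attachRemoteMedia();
  });
  onFreeswitchReady(() => {
    // If FS is ready, ICE is assumed completed by consequence.
    stateCallback('started');
  });
}
```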
The voice user ejection callback tethered to Meteor's socket
disconnection seems broken (since its introduction). The VU selector
uses an invalid field (requesterUserId), so no VU is ever returned.
Since I'm unaware of the original goal behind this code and there are
already ejections in place in other components (akka-apps, for
instance), this is basically a revert of #9888.
There's an edge case in finicky networks where ALG-like firewalls
tamper with USE-CANDIDATE STUN packets and, consequently, bork ICE-lite
connectivity establishment. The odd part is that client-side gathering
seems to complete if intermediate STUN bindings work (before the final
USE-CANDIDATE), which may cause the peer not to generate relay
candidates, i.e. connectivity fails.
This adds the `public.kurento.gatheringTimeout` option to forcefully extend
the candidate gathering window in peers that act as offerers. The
behavior is as follows: if the flag is set (in ms), the peer will wait for
either the gathering-completed stage or, _at most_,
public.kurento.gatheringTimeout ms before proceeding with the calls chained
to setLocalDescription.
This option is disabled by default and intentionally omitted from the
base settings.yml file so as not to encourage its use. Don't use it unless
you know what you're doing :).
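A sketch of the offerer-side behavior under these assumptions (names are illustrative):

```typescript
// Wait for either the gathering-completed stage or, at most,
// gatheringTimeout ms, whichever comes first.
function waitForCandidates(pc: RTCPeerConnection, gatheringTimeout?: number): Promise<void> {
  const gatheringDone = new Promise<void>((resolve) => {
    if (pc.iceGatheringState === 'complete') return resolve();
    pc.addEventListener('icegatheringstatechange', () => {
      if (pc.iceGatheringState === 'complete') resolve();
    });
  });
  // Option disabled (the default): behave as before, wait for completion.
  if (!gatheringTimeout) return gatheringDone;
  const timeout = new Promise<void>((resolve) => setTimeout(resolve, gatheringTimeout));
  return Promise.race([gatheringDone, timeout]);
}
```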
Reconnection timers are far too long for abrupt failures because we
wait for the original timeouts to elapse (30-60s) before trying
again - even if a connection had already worked earlier in that session's
history. The ideal thing to have is another intermediate, smaller and
fixed reconnection timer for sessions that had a working screen share
at least once.
The UI is also not updated to the reconnecting state on negotiation
failures.
* Add an intermediate reconnection timer for abrupt failures, set to 8s.
This should improve reconnection times (see the sketch after this list).
* Lower default connection timer values (base 20s, down from 30s; max
25s, down from 60s)
* Set the screen share UI to reconnecting on abrupt failures as well - we
were only tracking ICE states prior to this, not negotiation errors
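A sketch of the resulting timer selection (the exact backoff is illustrative; only the 8s/20s/25s values come from this change):

```typescript
// Pick the reconnection delay: sessions that already had a working screen
// share get the short fixed timer instead of the min-max connection timers.
function getReconnectDelay(hadWorkingShare: boolean, attempt: number): number {
  const ABRUPT_FAILURE_MS = 8_000; // intermediate timer for abrupt failures
  const BASE_MS = 20_000;          // base timeout, down from 30s
  const MAX_MS = 25_000;           // max timeout, down from 60s
  if (hadWorkingShare) return ABRUPT_FAILURE_MS;
  return Math.min(BASE_MS * (attempt + 1), MAX_MS); // illustrative backoff
}
```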
The reconnect routine stops for viewers if a broker cannot
re-connect on the first try. That is wrong: viewers should keep trying to
reconnect as long as there's signaling data that mandates so.
The reconnect trigger is changed from the broker's started attribute to the
presence of a scheduled reconnection timeout - if there isn't one (none
scheduled), always re-schedule it.
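A minimal sketch of the changed trigger (hypothetical names):

```typescript
// Re-schedule a reconnect whenever no reconnection timeout is currently
// scheduled, instead of keying off the broker's `started` attribute.
let reconnectionTimeout: ReturnType<typeof setTimeout> | null = null;

function scheduleViewerReconnect(reconnect: () => void, delayMs: number): void {
  if (reconnectionTimeout !== null) return; // one pending attempt at a time
  reconnectionTimeout = setTimeout(() => {
    reconnectionTimeout = null; // allow the next attempt to be scheduled
    reconnect();
  }, delayMs);
}
```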
Mostly benign, but exitAudio/forceExitAudio was throwing an unhandled
error when called on sessions with no active audio because the
underlying bridge methods did not check whether there was an active
session to stop beforehand.
Several improvements to the tldraw whiteboard:
- Only send the shape diff on shape updates (greatly reduces message
traffic; see the sketch below)
- Shape permissions (don't allow others to select/edit shapes unless you are
the presenter or a moderator) - this required some changes in the akka model
- Tldraw state patch changes to improve stability with fast updates (fixes
several crashes)
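A rough sketch of the diff idea (illustrative; tldraw's actual record shapes differ):

```typescript
// Keep only the top-level properties that changed between two shape
// versions, so updates carry a diff instead of the whole shape.
function shapeDiff(
  prev: Record<string, unknown>,
  next: Record<string, unknown>,
): Record<string, unknown> {
  const diff: Record<string, unknown> = {};
  for (const key of Object.keys(next)) {
    if (JSON.stringify(prev[key]) !== JSON.stringify(next[key])) diff[key] = next[key];
  }
  return diff;
}
```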
We were calling upsert in the Annotations collection for the same annotation
in all frontend instances, which could lead to the same annotation being
inserted multiple times with different ids due to concurrency.
Added the html5InstanceId of the original request to the redis message so we
can use it to call upsert in only one instance.
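A minimal sketch of the guard (identifiers are illustrative):

```typescript
// Only the frontend instance that originated the request performs the
// upsert; every other instance drops the message.
function shouldUpsertAnnotation(msgHtml5InstanceId: string, thisInstanceId: string): boolean {
  return msgHtml5InstanceId === thisInstanceId;
}
```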
There are some situations where previously set deviceIds
(local/session storage) may become stale. This causes unexpected
behavior where audio is temporarily borked until the user clears their
local storage.
This issue has been seen more recently on Safari endpoints when switching
back and forth between breakout rooms in environments running under iframes.
It has also been seen randomly on endpoints with virtual input devices.
This centralizes audio gUM calling into a single method that retries the
gUM procedure without pre-set deviceIds, but only if the initial call fails
with an OverconstrainedError - hopefully circumventing the issue.
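A minimal sketch of the retry, assuming standard getUserMedia semantics (helper name is illustrative):

```typescript
// Retry gUM without pre-set deviceIds when the first attempt fails with an
// OverconstrainedError (e.g. a stale deviceId from local/session storage).
async function getAudioStream(constraints: MediaStreamConstraints): Promise<MediaStream> {
  try {
    return await navigator.mediaDevices.getUserMedia(constraints);
  } catch (error) {
    if ((error as Error).name === 'OverconstrainedError') {
      // Drop the (possibly stale) deviceId and fall back to the default device.
      return navigator.mediaDevices.getUserMedia({ audio: true });
    }
    throw error;
  }
}
```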
There's no rollback procedure in case a device switch fails right now,
nor do the code entry points that call the switching procedures wait
for resolution or failure before marking the new device as chosen. That
may cause inconsistent states in a couple of ways:
- No rollback: the switch fails, audio is still on, but no actual
microphone input is being transmitted
- Not waiting for resolution: inconsistent chosen devices on failures
Device switching errors are also not surfaced to the end user.
This commit:
- Adds device rollback and proper resolution/failure response
awaits to try and make the state a bit more consistent (see the sketch
below)
- Centralizes the input device switching code so it can be reused between
different bridges
- Centralizes device ID state management in audio-manager to try and
keep it a bit more consistent across the board
- Surfaces device switching failures to the end user
- Guarantees device IDs are set in session storage in all
appropriate scenarios
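A sketch of the rollback flow under these assumptions (all names hypothetical):

```typescript
interface AudioBridge {
  setInputDevice(deviceId: string): Promise<void>;
}

// Only mark the new device as chosen after the switch resolves; restore the
// previous device and surface the error on failure.
async function switchInputDevice(
  bridge: AudioBridge,
  previousDeviceId: string,
  newDeviceId: string,
  onFailure: (error: Error) => void, // surfaces the failure to the end user
): Promise<string> {
  try {
    await bridge.setInputDevice(newDeviceId); // wait for resolution before committing
    return newDeviceId; // only now is the new device marked as chosen
  } catch (error) {
    await bridge.setInputDevice(previousDeviceId); // rollback to the last working device
    onFailure(error as Error);
    return previousDeviceId;
  }
}
```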
- Returns to the ResizeAndMoveSlide event to do pan & zoom, respecting the
viewed width and height ratio
- Defaults the toolbar zoom to 100% like before, to be more consistent
- Fit to Width and Reset Zoom are back (Fit to Width still has some sync
problems)
- Fixes changing to the first page when the presenter reloads the page
RTCRtpSender exposes DSCP marking via `networkPriority` in the encodings
configuration dictionaries. That should allow us to control
QoS priorities for different media streams, e.g. audio with a higher network
priority than video. The only browser that implements this right
now is Chromium.
To use this, set the public.app.media.networkPriorities configuration in
settings.yml. Audio, camera and screenshare priorities can be controlled
separately. For further info on the possible values, see:
- https://www.w3.org/TR/webrtc-priority/
- https://datatracker.ietf.org/doc/html/rfc8837#section-5
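A sketch of applying a configured priority to a sender (Chromium-only; `networkPriority` comes from the webrtc-priority spec and may be absent from standard TS DOM typings, hence Object.assign instead of a direct property write):

```typescript
// Apply a configured network priority to an outbound RTP sender.
async function setNetworkPriority(
  sender: RTCRtpSender,
  priority: 'very-low' | 'low' | 'medium' | 'high',
): Promise<void> {
  const params = sender.getParameters();
  if (!params.encodings || params.encodings.length === 0) {
    params.encodings = [{}]; // defensive: audio senders normally have one encoding
  }
  Object.assign(params.encodings[0], { networkPriority: priority });
  await sender.setParameters(params);
}
```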
Move the language collection to the HTML settings file. This data defines
the languages available for the speech API.
These language tags are used to filter the SpeechSynthesis API's `getVoices`
result. Tags must use the BCP 47 format.
https://developer.mozilla.org/en-US/docs/Web/API/SpeechSynthesisVoice/lang
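A minimal sketch of the filtering (the tag list is an assumed example of what the settings file provides):

```typescript
// Keep only the SpeechSynthesis voices whose BCP 47 tag is in the
// configured language collection.
const configuredTags = ['en-US', 'pt-BR']; // assumed to come from the HTML settings file
const voices = window.speechSynthesis
  .getVoices()
  .filter((voice) => configuredTags.includes(voice.lang));
```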
Add a server-side app for the audio captions feature and record proto-events
for this data.
As it is, it only behaves as a pass-through module. The idea is to include
all the business intelligence in this app.
There's a VoiceUser cleanup procedure bound to the user's cleanup
routine on Meteor's server side. That cleanup is _silent_ and does not
use a dedicated modifier from voice-user et al., which is not
straightforward and might waste a few minutes of understanding what's
happening when debugging audio collections.
This commit centralizes that cleanup in a new clearVoiceUser modifier in
voice-user and logs when it runs.
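A sketch of such a modifier (collection and logger declared as stubs; names assumed from context):

```typescript
declare const VoiceUsers: { remove(selector: object): number };
declare const Logger: { info(message: string): void };

// Dedicated, logged cleanup instead of a silent inline removal.
function clearVoiceUser(meetingId: string, intId: string): void {
  const numberAffected = VoiceUsers.remove({ meetingId, intId });
  if (numberAffected) {
    Logger.info(`Cleared VoiceUser: meetingId=${meetingId} userId=${intId}`);
  }
}
```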
Sometimes the handler that listens for state changes in the callState is
not updated correctly.
In these rare cases, the callState changes directly to in_conference,
not taking the expected path: call_started -> in_echo_test -> in_conference.
Fixes a case where a wrong initial zoom was set right after a presentation
was uploaded.
Also fixes the viewer zoom not correctly adjusting to the area size when
zoomed out.
There are scenarios where the full audio broker (SFU) stop procedure
may be called multiple times within a very short time window - e.g. a
concurrent stop + connection failure, or a timeout in the transfer procedure
+ a reconnect attempt, [...]. When that happens, calls to exitAudio may throw
errors if the broker was already released - and that's not the expected
behavior.
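A minimal sketch of making the stop path idempotent (hypothetical class):

```typescript
// Repeated stop calls become no-ops once the broker is released, instead of
// throwing on an already-released broker.
class FullAudioBroker {
  private released = false;

  stop(): void {
    if (this.released) return;
    this.released = true;
    // ... actual teardown (peer connection close, signaling cleanup, etc.)
  }
}
```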
If a viewer session failed mid-call, it was being scheduled for a reconnect via
the min-max connection timers (30s-60s), which is terrible UX.
This commit makes screen sharing viewers try to reconnect immediately when
appropriate.
Outbound/presenter screen sharing reconnect was broken from inception, so
it's being removed until it's properly re-implemented.
This also fixes an issue where presenter disconnections would be silent for
the end user - now an error toast is shown and the error is properly logged.
Fixes an issue where subsequent failures might lead to wrong error codes
being reported.
Splits the screen sharing bridge stop method into a reconnect-safe version
and a public one - this should also address some quirks with inbound stream
reconnection.