When a user enters a meeting with music already playing, an event listener is
set up so the music starts only after a user interaction.
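A minimal sketch of that workaround, assuming an HTMLAudioElement holds the
ambient music (the function name is illustrative, not the actual client code):

```typescript
// Browsers block autoplay until the user interacts with the page, so defer
// playback to the first interaction if the initial play() is rejected.
function playMusicAfterInteraction(audio: HTMLAudioElement): void {
  audio.play().catch(() => {
    // Autoplay blocked: retry once on the first user interaction.
    const resume = () => { audio.play().catch(() => {}); };
    window.addEventListener('click', resume, { once: true });
    window.addEventListener('keydown', resume, { once: true });
  });
}
```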
Also, to keep the timer more cohesive and reduce the complexity of the
condition for playing music, the timer automatically stops once 90% of users
have notified that it has expired.
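A minimal sketch of that stop condition, assuming the count of expiry
notifications and the user count are both available (names and wiring here
are assumptions):

```typescript
// Stop the timer once at least 90% of users have notified that it expired.
const STOP_THRESHOLD = 0.9;

function shouldStopTimer(expiredNotifications: number, userCount: number): boolean {
  if (userCount === 0) return false;
  return expiredNotifications / userCount >= STOP_THRESHOLD;
}
```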
Adds a music player for ambient sound, which can be turned on and off using a
toggle located inside the timer panel. When the stopwatch is selected, this
toggle is automatically turned off.
Turns the screenshare component into a generic component, so that it can be
used both for the screenshare and camera-as-content features.
Also changes specific locales and icons for the camera-as-content feature.
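A rough sketch of the generalization, assuming a content-type prop selects
the per-feature locale keys and icons (all names below are illustrative):

```typescript
// One generic screen-content component parameterized by type, so the same
// logic serves both screenshare and camera-as-content.
type ContentType = 'screenshare' | 'camera';

// Per-type locale keys and icons (assumed keys, not the real ones).
const CONTENT_META: Record<ContentType, { labelKey: string; icon: string }> = {
  screenshare: { labelKey: 'app.screenshare.label', icon: 'desktop' },
  camera: { labelKey: 'app.cameraAsContent.label', icon: 'video' },
};

function contentMeta(type: ContentType): { labelKey: string; icon: string } {
  return CONTENT_META[type];
}
```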
The Interactions button is a button that stays in the action bar.
It integrates several features in just one place: user-reaction, raise hand,
user-status (away/not away), and toggling the question panel.
- add user-reaction collection
- add emoji picker for user reaction in the user list
- add options to enable/disable user-reaction
- add a way to pass style to the emoji-picker component (sketched below)
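A minimal sketch of the style pass-through, assuming a React-style picker
component (component and prop names are assumptions, not the actual client
code):

```typescript
import React from 'react';

// The picker accepts an optional `style` prop and forwards it to its wrapper,
// so callers (e.g. the user list) can position and size it.
interface EmojiPickerProps {
  onEmojiSelect: (emoji: string) => void;
  style?: React.CSSProperties;
}

const EmojiPicker: React.FC<EmojiPickerProps> = ({ onEmojiSelect, style }) => (
  <div style={style} role="listbox">
    {['👍', '❤️', '😄', '👏'].map((emoji) => (
      <button key={emoji} type="button" onClick={() => onEmojiSelect(emoji)}>
        {emoji}
      </button>
    ))}
  </div>
);

export default EmojiPicker;
```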
Fixes a case where the locale selector doesn't show up in Chrome when using
the 'webspeech' provider.
Also adds missing fields to the webspeech transcription messages, following
the addition of some new parameters to those messages in the open
transcription server.
The current Vosk CC provider does not support stereo mic streams
(pending investigation as to why).
This commit makes sure stereo is forcefully disabled via SDP munging
only when transcription is active and using Vosk. Having it disabled
on the server side (FreeSWITCH) is not enough because the stereo parameter
is client-mandated and replicated by FS in its answer. So we need to
make sure it's always disabled for the time being.
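A minimal sketch of the kind of munging involved, assuming we rewrite the
Opus fmtp line in the client's SDP before it is sent (the function name is
illustrative):

```typescript
// Force `stereo=0` on the Opus fmtp line so FS's answer also settles on mono.
// Only applied when transcription is active and the provider is Vosk.
function forceMonoOpus(sdp: string): string {
  // Find the Opus payload type from the rtpmap line, e.g. "a=rtpmap:111 opus/48000/2".
  const rtpmap = sdp.match(/a=rtpmap:(\d+) opus\/48000/i);
  if (!rtpmap) return sdp;
  const pt = rtpmap[1];

  const fmtpRegex = new RegExp(`a=fmtp:${pt} (.*)`, 'i');
  return sdp.replace(fmtpRegex, (line, params: string) => (
    /stereo=\d/.test(params)
      ? line.replace(/stereo=\d/, 'stereo=0')
      : `${line};stereo=0`
  ));
}
```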
SFU audio does munging server side (and stereo is always off), so no changes
needed there.
The rest of the providers (except WebSpeech) need to be validated against
stereo audio as well.
This is also intended to be temporary - ideally this needs to be fixed in
mod_audio_fork/Vosk/wherever this is breaking.
Audio state callback and remote media setup both depend on FS's state
(which comes through Meteor) and the ICE state (local peer connection). The
caveat: FS's state can arrive delayed in reconnection scenarios because
Meteor's websocket generally takes significantly longer to re-connect than
the peer connection, which means the ICE state gets completed way before FS
is flagged as ready.
The practical issue: while outbound audio (client -> FS) will work, inbound
audio (FS -> client) won't _just because it wasn't played_ (even though
data is coming through).
This commit decouples the remote media setup step from the state callback
through:
- Set up remote media as soon as the ICE state is completed
- Run the state callback only after FS is flagged as ready. This
should keep the UI state consistent across client and server.
Keep in mind the assumption that if FS is ready, ICE is completed as a
consequence.
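A rough sketch of the decoupled flow, assuming two independent signals - the
ICE state from the local peer connection and FS readiness arriving via Meteor
(class and handler names are illustrative, not the actual client code):

```typescript
// Attach remote media as soon as ICE completes; only fire the audio state
// callback once FS (via Meteor) is flagged as ready.
class AudioJoinFlow {
  private remoteMediaAttached = false;

  constructor(
    private pc: RTCPeerConnection,
    private attachRemoteMedia: (stream: MediaStream) => void,
    private onAudioReady: () => void,
  ) {
    pc.oniceconnectionstatechange = () => {
      if (['connected', 'completed'].includes(pc.iceConnectionState)) {
        this.setupRemoteMedia();
      }
    };
  }

  private setupRemoteMedia(): void {
    if (this.remoteMediaAttached) return;
    const [receiver] = this.pc.getReceivers();
    if (receiver?.track) {
      this.attachRemoteMedia(new MediaStream([receiver.track]));
      this.remoteMediaAttached = true;
    }
  }

  // Called when FS is flagged as ready (arrives through Meteor, possibly late).
  onFreeswitchReady(): void {
    // Assumption from above: if FS is ready, ICE is already completed.
    this.onAudioReady();
  }
}
```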
The voice user ejection callback tethered to Meteor's socket
disconnection seems broken (since its introduction). The VU selector
uses an invalid field (requesterUserId), so no VU is ever returned.
Since I'm unaware of the original goal behind this code and there are
already ejection mechanisms in place in other components (akka-apps,
for instance), this is basically a revert of #9888.
There's an edge case in finicky networks where ALG-like firewalls
tamper with USE-CANDIDATE STUN packets and, consequently, bork ICE-lite
connectivity establishment. The odd part is that client-side gathering
seems to complete if intermediate STUN bindings work (before the final
USE-CANDIDATE), which may cause the peer not to generate relay
candidates, meaning connectivity fails.
This adds the `public.kurento.gatheringTimeout` option to forcefully extend
the candidate gathering window in peers that act as offerers. The
behavior is as follows: if the flag is set (in ms), the peer will wait
either for the gathering-complete stage or, _at most_,
public.kurento.gatheringTimeout ms before proceeding with the calls chained
to setLocalDescription.
This option is disabled by default and intentionally omitted from the
base settings.yml file so as not to encourage its use. Don't use it unless
you know what you're doing :).
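For illustration only, a minimal sketch of the wait, assuming the offerer
races gathering completion against the configured timeout (helper name is
illustrative):

```typescript
// Resolve when ICE gathering completes or, at most, after `gatheringTimeout`
// ms (public.kurento.gatheringTimeout). If the option is unset, resolve only
// on completion, preserving the default behavior.
function waitForGathering(pc: RTCPeerConnection, gatheringTimeout?: number): Promise<void> {
  if (pc.iceGatheringState === 'complete') return Promise.resolve();

  return new Promise((resolve) => {
    const done = () => {
      pc.removeEventListener('icegatheringstatechange', onStateChange);
      resolve();
    };
    const onStateChange = () => {
      if (pc.iceGatheringState === 'complete') done();
    };

    pc.addEventListener('icegatheringstatechange', onStateChange);
    if (typeof gatheringTimeout === 'number') {
      setTimeout(done, gatheringTimeout);
    }
  });
}
```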
Reconnection timers are far too long for abrupt failures because we
wait for the original timeouts to elapse (30-60s) before trying
again - even if a connection already worked earlier in that session's
history. The ideal thing to have is another intermediate, smaller and
fixed reconnection timer for sessions that had a working screen share
at least once.
The UI is also not being updated to the reconnecting state on negotiation
failures.
* Add an intermediate reconnection timer for abrupt failures, set to 8s.
This should improve reconnection times (see the sketch after this list).
* Lower the default connection timer values (base 20s down from 30s, max
25s down from 60s).
* Set the screen share UI to reconnecting on abrupt failures as well - we
were only tracking ICE states prior to this, not negotiation errors.
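A minimal sketch of the timeout selection, assuming we track whether the
session ever had a working screen share (the ramp between base and max is an
assumption; the constants mirror the values above):

```typescript
const INTERMEDIATE_RECONNECT_TIMEOUT = 8000; // fixed timer for abrupt failures
const BASE_CONNECTION_TIMEOUT = 20000;       // down from 30s
const MAX_CONNECTION_TIMEOUT = 25000;        // down from 60s

// Sessions that already had a working screen share get the short, fixed
// intermediate timer; others keep ramping from base up to max per attempt.
function getReconnectionTimeout(hadWorkingScreenShare: boolean, attempt: number): number {
  if (hadWorkingScreenShare) return INTERMEDIATE_RECONNECT_TIMEOUT;
  const ramped = BASE_CONNECTION_TIMEOUT * Math.pow(1.25, attempt);
  return Math.min(ramped, MAX_CONNECTION_TIMEOUT);
}
```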
The reconnect routine is stopping for viewers if a broker cannot
re-connect on the first try. That is wrong: viewers should try to
reconnect as long as there's signaling data that mandates so.
The reconnect trigger is changed from the broker's started attribute to the
presence of a scheduled reconnection timeout - if there isn't one (nothing
scheduled), always re-schedule it.
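A minimal sketch of the new trigger, assuming the handler keeps the scheduled
timeout handle around (names are illustrative):

```typescript
// Re-schedule a viewer reconnect whenever there is no reconnection timeout
// already pending, instead of checking the broker's `started` attribute.
let reconnectionTimeout: ReturnType<typeof setTimeout> | null = null;

function scheduleViewerReconnect(reconnect: () => void, delayMs: number): void {
  if (reconnectionTimeout != null) return; // one pending attempt at a time

  reconnectionTimeout = setTimeout(() => {
    reconnectionTimeout = null;
    reconnect();
  }, delayMs);
}
```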