The refactoring in 838accf015 replaced the wrong parseMessage function
in addBulkGroupChatMsgs.js.
This bug is only triggered when the option public.chat.bufferChatInsertsMs != 0.
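As an illustration only, a minimal sketch of how the buffered path is
expected to reuse the same parseMessage as the unbuffered one. The
buffer/flush helpers, the parser stub and the insert callback below are
hypothetical, not the actual addBulkGroupChatMsgs.js code:

```javascript
// Hypothetical sketch -- names do not match the real addBulkGroupChatMsgs.js.
const BUFFER_CHAT_INSERTS_MS = 500; // stand-in for public.chat.bufferChatInsertsMs

// The same parser must be used by both the buffered and unbuffered paths;
// the regression came from swapping in a different parseMessage here.
const parseMessage = (msg) => ({ ...msg, text: String(msg.text).trim() });

let buffer = [];
let flushTimer = null;

function addBulkGroupChatMsgs(msgs, insertFn) {
  insertFn(msgs.map(parseMessage));
}

function queueChatInsert(msg, insertFn) {
  if (BUFFER_CHAT_INSERTS_MS === 0) {
    // Unbuffered path: the bug was not reproducible here.
    addBulkGroupChatMsgs([msg], insertFn);
    return;
  }
  buffer.push(msg);
  if (!flushTimer) {
    flushTimer = setTimeout(() => {
      addBulkGroupChatMsgs(buffer, insertFn);
      buffer = [];
      flushTimer = null;
    }, BUFFER_CHAT_INSERTS_MS);
  }
}
```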
Audio state callback and remote media setup both depend on FreeSWITCH's
(FS) state (which comes through Meteor) and the ICE state (local, peer
connection). The caveat: FS's state can arrive late in reconnection
scenarios because Meteor's websocket generally takes significantly
longer to reconnect than the peer connection, which means the ICE state
gets completed well before FS is flagged as ready.
The practical issue: while outbound audio (client -> FS) will work,
inbound audio (FS -> client) won't, _just because it wasn't played_
(even though data is coming through).
This commit decouples the remote media setup step from the audio state
callback:
- Set up remote media as soon as the ICE state is completed
- Run the state callback only after FS is flagged as ready. This
  should keep the UI states consistent across client and server.
Keep in mind the assumption that if FS is ready, ICE is completed as a
consequence.
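A minimal sketch of the decoupled flow, assuming illustrative helper
names (setupRemoteMedia, onFreeswitchReady and runAudioStateCallback are
placeholders, not the actual audio bridge API):

```javascript
// Placeholders for the real helpers; only the ordering matters here.
const setupRemoteMedia = (receivers) => { /* attach remote tracks to the audio element */ };
const runAudioStateCallback = (state) => { /* drive the audio UI state machine */ };
const onFreeswitchReady = (cb) => { /* invoked when Meteor flags FS as ready */ };

function wirePeer(peerConnection) {
  peerConnection.oniceconnectionstatechange = () => {
    const state = peerConnection.iceConnectionState;
    if (state === 'connected' || state === 'completed') {
      // Remote media is set up as soon as ICE completes, so inbound audio
      // plays even if the Meteor websocket is still reconnecting.
      setupRemoteMedia(peerConnection.getReceivers());
    }
  };

  onFreeswitchReady(() => {
    // The state callback only runs once FS is flagged as ready; by the
    // assumption above, ICE is already completed at this point.
    runAudioStateCallback('started');
  });
}
```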
There's an edge case in finicky networks where ALG-like firewalls
tamper with USE-CANDIDATE STUN packets and, consequently, bork ICE-lite
connectivity establishment. The odd part is that client-side gathering
seems to complete if intermediate STUN bindings work (before the final
USE-CANDIDATE), which may cause the peer not to generate relay
candidates, so connectivity fails.
This adds the `public.kurento.gatheringTimeout` option to forcefully
extend the candidate gathering window in peers that act as offerers. The
behavior is as follows: if the flag is set (in ms), the peer will wait
for either the gathering-completed stage or, _at most_,
public.kurento.gatheringTimeout ms before proceeding with calls chained
to setLocalDescription.
This option is disabled by default and intentionally omitted from the
base settings.yml file so as not to encourage its use. Don't use it
unless you know what you're doing :).
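A sketch of the extended gathering window on the offerer side, assuming
the option value is read into a gatheringTimeout variable; the helper
names and the surrounding offer flow are illustrative:

```javascript
function waitForIceGathering(peerConnection, gatheringTimeout) {
  // Disabled (unset/0) means: do not wait at all -- the default behavior.
  if (!gatheringTimeout) return Promise.resolve();
  return new Promise((resolve) => {
    if (peerConnection.iceGatheringState === 'complete') {
      resolve();
      return;
    }
    const timer = setTimeout(resolve, gatheringTimeout); // hard cap, in ms
    peerConnection.addEventListener('icegatheringstatechange', () => {
      if (peerConnection.iceGatheringState === 'complete') {
        clearTimeout(timer);
        resolve();
      }
    });
  });
}

async function createExtendedOffer(peerConnection, gatheringTimeout) {
  const offer = await peerConnection.createOffer();
  await peerConnection.setLocalDescription(offer);
  // Hold the steps chained to setLocalDescription (e.g. sending the offer)
  // until gathering completes or the timeout elapses, whichever comes first.
  await waitForIceGathering(peerConnection, gatheringTimeout);
  return peerConnection.localDescription;
}
```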
Reconnection timers are far too long for abrupt failures because we
wait for the original timeouts to elapse (30-60s) before trying again -
even if a connection already worked earlier in that session's history.
The ideal is an additional intermediate, smaller and fixed reconnection
timer for sessions that had a working screen share at least once.
The UI is also not being updated to the reconnecting state on
negotiation failures.
* Add an intermediate reconnection timer for abrupt failures, set to 8s.
  This should improve reconnection times.
* Lower the default connection timer values (base 20s, down from 30s;
  max 25s, down from 60s)
* Set the screen share UI to reconnecting on abrupt failures as well -
  we were only tracking ICE states prior to this, not negotiation errors
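A rough sketch of the timer selection and failure handling described
above; the constants mirror the values listed, but the state variable,
UI helper and scheduling callback are illustrative:

```javascript
const MEDIA_TIMEOUT_BASE = 20000;      // base connection timeout, down from 30s
const MEDIA_TIMEOUT_MAX = 25000;       // max connection timeout, down from 60s
const RECONNECT_TIMEOUT_ABRUPT = 8000; // intermediate timer for abrupt failures

let hadWorkingScreenShare = false; // flipped to true once media flows

function getNextReconnectionTimeout(attempt) {
  if (hadWorkingScreenShare) {
    // The session already had a working screen share at least once:
    // retry quickly instead of waiting out the full back-off window.
    return RECONNECT_TIMEOUT_ABRUPT;
  }
  // Otherwise, keep the regular capped back-off.
  return Math.min(MEDIA_TIMEOUT_BASE + attempt * 1000, MEDIA_TIMEOUT_MAX);
}

function onScreenshareFailure(setUIState, scheduleReconnect, attempt) {
  // Negotiation errors now also flip the UI to "reconnecting", not only
  // ICE state transitions.
  setUIState('reconnecting');
  scheduleReconnect(getNextReconnectionTimeout(attempt));
}
```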
The reconnect routine stops for viewers if a broker cannot re-connect
on the first try. That is wrong: viewers should keep trying to reconnect
as long as there's signaling data that mandates so.
The reconnect trigger is changed from the broker's `started` attribute
to the presence of a scheduled reconnection timeout - if there isn't one
(nothing scheduled), always re-schedule it.
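A sketch of the changed trigger, assuming a module-level handle for the
scheduled timeout; the handler and reconnect callback names are
illustrative:

```javascript
const VIEWER_RECONNECT_MS = 8000; // illustrative delay
let reconnectionTimeout = null;   // handle for the scheduled reconnection

function onViewerBrokerFailure(reconnectFn) {
  // Key the decision on whether a reconnection is already scheduled,
  // not on the broker's `started` attribute: if nothing is scheduled,
  // always re-schedule.
  if (reconnectionTimeout) return;
  reconnectionTimeout = setTimeout(() => {
    reconnectionTimeout = null;
    reconnectFn(); // tears down and recreates the viewer broker
  }, VIEWER_RECONNECT_MS);
}
```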