To better control which branches are actually used for building the
docs, this commit changes the script's logic to only add manually
specified versions to the docs.
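For illustration only (the names here are hypothetical, not the
script's actual code), the selection logic boils down to filtering
against an explicit allow-list instead of enumerating branches
automatically:

    // Hypothetical sketch: only versions explicitly listed make it
    // into the docs build; every other branch is ignored.
    const MANUAL_VERSIONS = ['2.4', '2.5', '2.6'];

    function selectDocVersions(availableBranches: string[]): string[] {
      return availableBranches.filter((b) => MANUAL_VERSIONS.includes(b));
    }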
There are still a bunch of edge cases and issues with reconnection
scenarios for video:
- Signaling socket refuses to reconnect once maxRetries expire
- Race conditions on local stream attachment: local camera wouldn't be
correctly rendered _if_ the attached stream existed _without_ video
tracks yet
- Video tracks leak on local streams when replacing them (virtual bgs)
- Completely ignoring Meteor state when trying to reconnect cameras
- Streams aren't proactively stopped when the signaling socket dies
- Outbound request queues aren't isolated by stream nor are they
flushed when a newer peer with the same ID is created
- Server-originated negotiation errors won't trigger a local peer
cleanup - thus leaving dangling peers that take way too long to
reconnect
This commit fixes or improves all of the aforementioned issues, and
additionally:
- Remove unused arguments in the peer (client->SFU) 'start' request
- Prevent crashes when trying to render video-list-items without user
data, which might happen on re-connections (see the sketch below)
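A hedged illustration of the render guard (component and prop names
are assumptions, not the actual BigBlueButton code):

    import React from 'react';

    // Sketch: skip rendering while user data is missing - e.g.
    // transiently during a re-connection - instead of crashing.
    type Props = { user?: { name: string } };

    const VideoListItem: React.FC<Props> = ({ user }) => {
      if (!user) return null; // no user data yet: render nothing
      return <div>{user.name}</div>;
    };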
Reconnection timers are far too long for abrupt failures because we
wait for the original timeouts to elapse (30-60s) before trying again,
even if a connection had already worked earlier in that session. The
ideal approach is an additional intermediate, smaller, fixed
reconnection timer for sessions that had a working screen share at
least once.
The UI is also not being updated to the reconnecting state on negotiation
failures.
* Add an intermediate reconnection timer for abrupt failures set to 8s.
This should improve reconnection times (sketched below).
* Lower default connection timer values (base 20s down from 30s, max
25s down from 60s)
* Set screen share UI to reconnecting on abrupt failures as well - we
were only tracking ICE states prior to this, not negotiation errors
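A sketch of the resulting timer selection, assuming a hypothetical
helper (the constants mirror the values above; the backoff shape is an
assumption):

    const INTERMEDIATE_RECONNECT_MS = 8000; // fixed timer for abrupt failures
    const BASE_RECONNECT_MS = 20000;        // lowered from 30s
    const MAX_RECONNECT_MS = 25000;         // lowered from 60s

    function nextReconnectDelay(hadWorkingShare: boolean, attempt: number): number {
      // sessions that already had a working screen share retry fast
      if (hadWorkingShare) return INTERMEDIATE_RECONNECT_MS;
      return Math.min(BASE_RECONNECT_MS * 2 ** attempt, MAX_RECONNECT_MS);
    }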
The media monitor responsible for triggering the reconnecting view in
the screen sharing component was maintaining the previous state (e.g.
flowing) in cases where the peer failed before media stopped flowing.
That triggered an error in the bps calculations which caused the
previous state to be preserved - e.g. stuck in flowing when it should
be not_flowing.
These changes make it so that if there's no peer to fetch stats from,
then the bps calculations correctly return 0 (which translates to
not_flowing).
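A minimal sketch of the idea, with the monitor's API assumed rather
than taken from the actual code:

    // With no peer to query, report 0 so downstream logic maps it to
    // not_flowing instead of keeping the stale flowing state.
    async function collectInboundBytes(peer?: RTCPeerConnection): Promise<number> {
      if (!peer) return 0; // no peer -> nothing is flowing
      const stats = await peer.getStats();
      let bytes = 0;
      stats.forEach((report: any) => {
        if (report.type === 'inbound-rtp') bytes += report.bytesReceived ?? 0;
      });
      // bps is derived from deltas of this counter over time
      return bytes;
    }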
The reconnect routine is stopping for viewers if a broker cannot
re-connect on the first try. That is wrong: viewers should keep trying
to reconnect as long as there's signaling data that mandates so.
The reconnect trigger is changed from the broker's started attribute
to the presence of a scheduled reconnection timeout - if there isn't
one (none scheduled), always re-schedule it.
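In sketch form (member and function names are assumptions):

    let reconnectionTimeout: ReturnType<typeof setTimeout> | null = null;

    function scheduleViewerReconnect(reconnect: () => void, delayMs: number): void {
      // The trigger is the absence of a scheduled timeout, not the
      // broker's `started` attribute: if nothing is scheduled,
      // always (re-)schedule.
      if (reconnectionTimeout != null) return;
      reconnectionTimeout = setTimeout(() => {
        reconnectionTimeout = null;
        reconnect(); // a failed attempt lands back here and re-schedules
      }, delayMs);
    }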
Same rationale as in video-provider's commit
(34fa37ae4f092af4a5aef0cf01d96c033d97473c).
This commit does the following:
- Implement actual heartbeat checks to trigger reconnects when
necessary
- Properly catch and log WebSocket.send errors
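A minimal sketch of the send wrapper (the helper name is made up):

    // Wrap the raw send so failures surface in the logs instead of
    // throwing into unrelated call sites.
    function safeSend(ws: WebSocket, payload: unknown): boolean {
      try {
        ws.send(JSON.stringify(payload));
        return true;
      } catch (error) {
        console.error('WebSocket.send failed', error);
        return false;
      }
    }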
video-provider's current ping-pong is as good as nothing in 2.5+. We
were counting on Meteor's connection state (and consequently the
component's mount state) before 2.5 to act as a "heartbeat" as far as
the socket is concerned. The ping-pong served only to sustain traffic
for finicky, traffic-dependent firewalls.
Since 2.5, the component's state is _kind of_ detached from Meteor's -
which means it won't unmount when Meteor disconnects. That causes the
video-provider websocket to lose its borrowed heartbeat and leads to a
bunch of reconnection inconsistencies, the worst of them being a stuck,
useless signaling socket that will cause cameras not to work until a
client refresh.
This commit does the following:
- Implement actual heartbeat checks to trigger signaling socket
reconnects when necessary, all within the scope of video-provider
- Remove borked, eons-old 'offline'/'online' event handlers: they were
causing unnecessary camera drops AND causing video-provider to
generate a stuck signaling socket
- Properly catch WebSocket.send errors
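A hedged sketch of the heartbeat idea; the interval, timeout and
message payloads are assumptions, not the actual video-provider
values:

    const HEARTBEAT_INTERVAL_MS = 10000;
    const HEARTBEAT_TIMEOUT_MS = 30000;

    function startHeartbeat(ws: WebSocket, reconnect: () => void): () => void {
      let lastPongAt = Date.now();
      ws.addEventListener('message', (ev: MessageEvent) => {
        if (ev.data === 'pong') lastPongAt = Date.now();
      });
      const timer = setInterval(() => {
        if (Date.now() - lastPongAt > HEARTBEAT_TIMEOUT_MS) {
          clearInterval(timer); // dead socket: stop the checks,
          ws.close();           // drop it and reconnect explicitly
          reconnect();
          return;
        }
        try {
          ws.send('ping'); // catch send errors instead of throwing
        } catch (error) {
          console.error('WebSocket.send failed', error);
        }
      }, HEARTBEAT_INTERVAL_MS);
      return () => clearInterval(timer); // teardown hook
    }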
The stream state change handler in video-list-item uses a component
state reference inside a DOM event callback - which means it always
presumes `isStreamHealthy` is false (its initial value). That prevents
the health state from actually transitioning when necessary (and
consequently from rendering the reconnecting view in video-list-item).
This commit removes the state-based transition check in the state
change handler and unifies the reconnecting view to use the username
placeholder (replacing the loading spinners).
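The commit's fix is to drop the check; for illustration, here is the
pitfall and a ref-based alternative (the event name and hook are
hypothetical):

    import { useEffect, useRef, useState } from 'react';

    function useStreamHealth(stream: MediaStream): boolean {
      const [isStreamHealthy, setStreamHealthy] = useState(false);
      const healthyRef = useRef(isStreamHealthy);
      healthyRef.current = isStreamHealthy; // keep the ref current

      useEffect(() => {
        const onStateChange = (ev: Event) => {
          const nowHealthy = (ev as CustomEvent<boolean>).detail;
          // Wrong: comparing against `isStreamHealthy` here reads the
          // value captured at mount time (always the initial `false`).
          // The ref always reads the current value.
          if (nowHealthy !== healthyRef.current) setStreamHealthy(nowHealthy);
        };
        stream.addEventListener('streamstatechange', onStateChange);
        return () => stream.removeEventListener('streamstatechange', onStateChange);
      }, [stream]);

      return isStreamHealthy;
    }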