I struggled for a while with a change I had made on the develop branch
that kept causing the bbb-learning-dashboard build to fail, citing
missing @mui/base dependencies.
Reverting to the JSON we use on BBB v3.0 seems to have stabilized
things, at least for the time being.
* docs: various docs updates to include what is new
in BBB 3.0.0-beta.1
- API changes
- dropped functionality
* docs: info on infinite whiteboard and seen chat
Add screenshots and descriptions of the infinite whiteboard and of
seen private chat messages to the 3.0 what's-new docs
* docs: Add description of 3.0 new audio features
- improved UX for transparent listen only
- push-to-talk
* Introduce Hasura override config and a password file
* Add message when setting a password for Hasura
* Add logs to inspect errors
* Fix config file name
* Test changing key file owner
* Test without override file
* Fix print status
* Store password as env var
* Apply changes suggested in PR review
* Introduce Gql-Middleware config as a YAML file
* Use path /usr/share/bbb-graphql-middleware/ instead of /usr/local/bigbluebutton/bbb-graphql-middleware
* Remove the /etc/default/bbb-graphql-middleware file
Adjust an inline comment in the connection status service about packet loss
metric usage.
It now correctly states that the absolute counter SHOULD NOT be used for
alert triggers.
In 3.0, the packet loss metric used to trigger connection status alerts was
changed to the one generated by the `startMonitoringNetwork` method used by the
connection status modal. Since the packet loss thresholds (0.5, 0.1, 0.2) were
not adjusted, a single lost packet causes the status alert to become
permanently stuck on "critical". This is explained by how different those
metrics are (a short sketch contrasting them follows the list):
- **Before (2.7):** A 5-probe wide calculation of inbound packet loss
fraction based on `packetsLost` and `packetsReceived` metrics.
- **Now (3.0):** An absolute counter of inbound lost packets.
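For illustration, here is a minimal JavaScript sketch contrasting the two
metrics. The function names and probe shape are hypothetical, not the actual
client code:

```javascript
// Hypothetical probe shape: each probe holds the cumulative WebRTC
// `inbound-rtp` counters `packetsLost` and `packetsReceived`.

// Before (2.7): inbound loss fraction over a 5-probe window.
function packetLossFraction(probes) {
  const first = probes[0];
  const last = probes[probes.length - 1];
  const lost = last.packetsLost - first.packetsLost;
  const received = last.packetsReceived - first.packetsReceived;
  const total = lost + received;
  return total > 0 ? lost / total : 0;
}

// Now (3.0): the cumulative counter itself. Once a single packet is
// lost, the value stays >= 1 forever, so any fractional threshold is
// permanently exceeded and the alert sticks on "critical".
function packetLossCounter(probes) {
  return probes[probes.length - 1].packetsLost;
}
```

With the windowed fraction, transient loss ages out of the window and the
alert can recover; the absolute counter can only grow.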
This commit restores the previous packet loss metric used to trigger
connection status alerts, reverting to the original collection method via
`/utils/stats.js`. This resolves the issue, but further work is needed in
subsequent PRs:
- Unify the collection done in `/utils/stats.js` with the
`startMonitoringNetwork` method.
- Incorporate the remote-inbound `fractionLost` metric to account for packet
  loss on both legs of the network (in/out); a rough sketch follows this list.
- Update the packet loss metric displayed in the connection status modal to
show a more meaningful value (e.g., packet loss percentage over a specific
probe interval). An absolute counter of lost packets isn't useful for end
users.
- Update the alert log to use the fraction or percentage described above.
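As a rough illustration of the direction for those follow-ups, here is a
JavaScript sketch that reads the standard WebRTC remote-inbound `fractionLost`
stat and reports it as a percentage. The function name and reporting choice
are hypothetical, not the actual BBB code:

```javascript
// Illustrative only; identifiers are hypothetical. `remote-inbound-rtp`
// stats carry `fractionLost`: the loss fraction on the outbound leg as
// computed by the remote peer from RTCP receiver reports.
async function getOutboundLossPercentage(peerConnection) {
  const report = await peerConnection.getStats();
  let worstFraction = 0;

  report.forEach((stat) => {
    if (stat.type === 'remote-inbound-rtp' && stat.fractionLost != null) {
      worstFraction = Math.max(worstFraction, stat.fractionLost);
    }
  });

  // Surface a percentage over the probe interval rather than an
  // absolute counter, which is more meaningful to end users.
  return Math.round(worstFraction * 100);
}
```

Because `fractionLost` is derived from RTCP receiver reports, it captures loss
on the outbound leg without any local bookkeeping, complementing the inbound
calculation done in `/utils/stats.js`.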