In some unusual cases, the recording start can be the last event in the
events file, or at least have the same timestamp as the last event.
Add some code to check the array bounds and break if needed, so we
don't check the timestamp on the (non-existent) event after the last
event.
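A minimal sketch of the bounds check (the event names and structure here
are made up for illustration, not the actual parser code):

    events = [
      { name: 'deskshare_started', timestamp: 100 },
      { name: 'record_status',     timestamp: 120 },  # recording start is the last event
    ]

    events.each_with_index do |event, i|
      # Break before indexing past the last event: if the recording start is
      # the last event, there is no following event to compare timestamps with.
      break if i + 1 >= events.length
      next_event = events[i + 1]
      puts "#{event[:name]} is followed by an event at #{next_event[:timestamp]}"
    end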
The previous code looked for stop events and tried to find their
associated start event. This obviously doesn't work if there was
no stop event. But if there was a start event, we need to show the
deskshare… so rework the code to find the matching stop for each
start instead, and use the end of the meeting if no matching stop was
found.
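Roughly the new matching logic, sketched with illustrative data
structures rather than the real playback code:

    meeting_end = 900_000
    starts = [{ timestamp: 10_000 }, { timestamp: 600_000 }]
    stops  = [{ timestamp: 300_000 }]   # no stop event for the second start

    segments = starts.map do |start|
      # Find the first stop at or after this start; fall back to the meeting end.
      stop = stops.find { |s| s[:timestamp] >= start[:timestamp] }
      { start: start[:timestamp], stop: stop ? stop[:timestamp] : meeting_end }
    end
    # segments => 10000..300000 and 600000..900000 (runs to the end of the meeting)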
This is just a bundle of a few things I've been fixing up in the past
while.
= Workaround for BBB 1.1 beta deskshare timestamp bug
This is unlikely to be used, but since I have the code for it, I might
as well merge it in.
= Rework video tiling code for ffmpeg
Render video using the 'hstack' and 'vstack' filters rather than the
'overlay' filter. This is somewhat faster, particularly with lots of
videos.
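For reference, the kind of filtergraph this ends up building for a 2x2
grid (the labels are illustrative; the real code computes the layout
from the videos present):

    # Two rows built with hstack, then stacked with vstack, instead of
    # chaining overlay filters onto a background.
    filter = [
      '[0:v][1:v]hstack=inputs=2[top]',
      '[2:v][3:v]hstack=inputs=2[bottom]',
      '[top][bottom]vstack=inputs=2[out]',
    ].join(';')
    puts filter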
= Etc.
- Remove usage of the streamio-ffmpeg gem.
The video rendering code already has code to read 'ffprobe' output
directly, so re-use that instead of this gem (which is kind of old and
has issues with newer ffmpeg versions); see the sketch after this list.
- Don't hardcode the deskshare video area size; pull it from the
properties file instead.
- Remove some code that worked around missing video end events.
In some cases this workaround could cause flickering or strange video
issues. It's no longer strictly needed; the new tiling code doesn't
break if the seekpoint is after the end of the video.
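The sketch mentioned above: reading stream info straight from ffprobe's
JSON output. The helper name is made up; the ffprobe flags are standard
ones.

    require 'json'
    require 'open3'

    # Probe a media file and return ffprobe's parsed JSON output as a Hash.
    def probe_media(path)
      out, status = Open3.capture2(
        'ffprobe', '-v', 'quiet', '-print_format', 'json',
        '-show_format', '-show_streams', path
      )
      raise "ffprobe failed for #{path}" unless status.success?
      JSON.parse(out)
    end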
The list of processing scripts is no longer hardcoded; it can now be
specified in the properties file instead. This makes it easier to include
new processing scripts and control dependencies between them.
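As a purely hypothetical illustration of the idea (the file name and
keys are placeholders, not the actual property names):

    require 'yaml'

    # The step list and its dependencies come from the properties file
    # rather than being hardcoded in the scripts.
    props = YAML.load_file('bigbluebutton.yml')
    (props['steps'] || {}).each do |step, deps|
      puts "#{step} runs after: #{Array(deps).join(', ')}"
    end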
With resque workers we don't need to use status files to trigger each
step of the recording process. So instead of using status files as
triggers, they are now used only for information, so that the workers
can know whether e.g. a processing script failed or succeeded.
For now the differences are that the archive worker runs inside resque
(it is a resque worker now) and there is a "rap-trigger" script that is
executed by systemd to detect recordings that need to be archived and
enqueue a resque job to archive them.
The process of scheduling jobs still needs to be reviewed, but for now
it will be done by rap-trigger.
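The general shape, with illustrative class and queue names
(Resque.enqueue, @queue and self.perform are the standard resque API;
everything else here is a placeholder):

    require 'resque'

    class ArchiveWorker
      @queue = :rap_archive   # illustrative queue name

      # Resque runs this in a worker process. The status file is written only
      # to record whether the step succeeded, not to trigger the next step.
      def self.perform(meeting_id)
        # ... run the archive step for meeting_id here ...
      end
    end

    # rap-trigger side: a new recording was detected, so enqueue an archive job.
    Resque.enqueue(ArchiveWorker, 'abc123-1530000000000')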
The current BBB client generates invalid indexes when characters are
inserted with an IME - the edit indexes are as if the preedit text was
being removed, but the preedit text was never sent in the first place.
For now, just don't crash if there's an edit that would remove text
which is past the end of known text. The result might be broken, but
it won't prevent the rest of the recording from working.
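A sketch of the kind of guard this amounts to (illustrative code, not
the actual caption processing):

    # Clamp the edit range to the text we actually have, instead of raising
    # when the client's indexes point past the end of the known text.
    def apply_edit(text, start_index, end_index, replacement)
      start_index = start_index.clamp(0, text.length)
      end_index   = end_index.clamp(start_index, text.length)
      text[0...start_index] + replacement + text[end_index..-1]
    end

    puts apply_edit('hello', 3, 99, '!')   # => "hel!" instead of an exception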
I've had a chance to see how this script behaves with actual live
caption tracks now, and there's room for improvement. In particular,
it often generates cues that overlap - the next one appears before the
previous one disappears. The browser player's cue positioning handles
this really poorly, and the result is nearly unreadable.
The solution is to merge any two cues that would overlap into a single
multiline cue that displays for the full time. This is a lot easier to
read.
Some extra code is added to de-overlap any remaining cues (e.g. if
there's a third cue that would also overlap). This will reduce the
time that the earlier cue gets shown below my preferred minimum, but
there's not really much we can do about that if people are talking/typing
quickly.
The code can easily be tweaked to set a different number of maximum
lines per cue if desired.
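The merging step, roughly (the cue structure is illustrative; the real
script operates on the caption cues it builds):

    # Assumes cues are sorted by start time. If the next cue starts before the
    # previous one ends, fold it into the previous cue as an extra line.
    cues = [
      { start: 0.0, end: 4.0, text: 'first line'  },
      { start: 2.5, end: 6.0, text: 'second line' },
    ]

    merged = []
    cues.each do |cue|
      last = merged.last
      if last && cue[:start] < last[:end]
        last[:end]  = [last[:end], cue[:end]].max
        last[:text] = "#{last[:text]}\n#{cue[:text]}"
      else
        merged << cue.dup
      end
    end
    # merged => a single two-line cue running from 0.0 to 6.0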
If processing didn't succeed, the directory where it is trying to write
the file might not exist, causing an exception in the worker that leaves
it stuck in a retry loop.
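One possible shape for the guard, as a sketch rather than necessarily
the exact fix applied:

    def write_result(path, contents)
      dir = File.dirname(path)
      # If processing failed, the output directory may never have been created;
      # skip the write instead of raising and sending the worker into retries.
      unless Dir.exist?(dir)
        warn "skipping write, directory missing: #{dir}"
        return
      end
      File.write(path, contents)
    end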
The purpose of this is to ensure that the recording processing will make
visible forward progress, by ensuring that different steps in the
recording pipeline don't block each other.
This fixes:
- recordings being processed prevent the archive step from running on new recordings
- recordings can't be published until all pending recording processing
completes
It does allow the recording archive, sanity, process, and publish steps to
run in parallel with each other, but at most one recording will be in each
step at a time. (I.e. while one recording is being processed, a different
recording can be published.) This may potentially increase CPU usage for
some users.
If you expect this to be a problem, you can set resource controls (see
`man systemd.resource-control`) on the bbb_record_core.slice systemd unit.
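For example, a drop-in along these lines (the path and value are
placeholders only) would cap the CPU available to the whole recording
pipeline:

    # /etc/systemd/system/bbb_record_core.slice.d/limits.conf
    [Slice]
    CPUQuota=200%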