Recording archive may fail when remuxing invalid files from KMS or the
new recorder - e.g. when the raw files are 0 bytes in size.
This commit handles the exception raised by EDL::encode so the archive
keeps going, logs the issue as a warning, and archives the problematic
file anyway.
EDL::encode now removes the temporary file when the ffmpeg command execution
fails - this should avoid leaving any stale files around in case of failure.
No specific check for the nature of the error is done - the idea is that
subsequent phases will discard or fix the files if necessary, as each
processing script requires, making the behavior (in this specific
scenario) similar to what it was before the archive remuxing was
introduced.
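
A rough sketch of the intended behaviour follows. The wrapper, helper
and logger calls are illustrative assumptions, not the actual archive
code:

    require 'fileutils'

    def archive_with_remux(raw_file, archive_dir)
      remuxed = remux_via_edl_encode(raw_file)   # hypothetical wrapper around EDL::encode
      FileUtils.cp(remuxed, archive_dir)
    rescue StandardError => e
      # Log the problem but keep archiving; later phases can discard/fix the file.
      BigBlueButton.logger.warn("Remux failed for #{raw_file}: #{e.message}")
      FileUtils.cp(raw_file, archive_dir)
    end

    # Inside the encode helper: remove the temporary output if ffmpeg fails,
    # so no stale files are left behind.
    def run_ffmpeg_or_cleanup(ffmpeg_cmd, tmp_output)
      ret = BigBlueButton.exec_ret(*ffmpeg_cmd)   # assumed command-runner helper
      raise "ffmpeg exited with status #{ret}" unless ret.zero?
    rescue StandardError
      FileUtils.rm_f(tmp_output)
      raise
    end
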
Kurento may *rarely* generate WebM/MKV files with corrupt or absent
SeekHead sectors. bbb-webrtc-recorder also doesn't generate SeekHead or
even the Cues sectors by default.
While those are *optional* fields per the spec, files need to be seekable
for our recording processing scripts to work.
This commit adds a remuxing step for Kurento and bbb-webrtc-recorder raw
files that is executed during the archive phase. It should re-include
any of the missing fields that make files seekable and restore the Cues
sector in WebM files.
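
A minimal sketch of such a remux pass, assuming a plain ffmpeg stream
copy is enough to get the container rewritten (the actual archive code
may use different tooling or options; the path is an example):

    require 'fileutils'

    # Stream-copy remux: no re-encode, but the muxer writes a fresh container,
    # including the SeekHead (and Cues for WebM) needed for seeking.
    src = '/var/kurento/recordings/cam-0.webm'   # example path
    tmp = "#{src}.remux"
    if system('ffmpeg', '-y', '-loglevel', 'warning', '-i', src, '-c', 'copy', '-f', 'webm', tmp)
      FileUtils.mv(tmp, src)
    else
      FileUtils.rm_f(tmp)   # failed remux: clean up and keep the original file
    end
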
The code was skipping the check for cursor x or y position < 0 when the
tldraw whiteboard was in use. That condition is still needed on the
tldraw whiteboard to indicate that the cursor should be hidden.
Only the check for cursor x or y position > 100 needs to be skipped when
the tldraw whiteboard is in use (since tldraw cursors are in the slide
coordinate space, they can go up to x=1440 or y=1080).
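
In rough pseudo-Ruby, the intended condition is (variable names are
illustrative):

    # Negative coordinates still mean "hide the cursor" on both whiteboards.
    cursor_hidden = cursor_x < 0 || cursor_y < 0
    # Only the upper bound is percentage-based; tldraw uses slide coordinates
    # (e.g. up to 1440x1080), so skip this check when tldraw is in use.
    cursor_hidden ||= cursor_x > 100 || cursor_y > 100 unless tldraw
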
The Fcntl::F_SETPIPE_SZ constant was added in Ruby 3.0, but Ubuntu 20.04
still uses Ruby 2.7. Add some error handling so processing doesn't fail
if the constant is not found.
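
The guard is essentially this (sketch; the surrounding pipe setup is
simplified and the size is an example):

    require 'fcntl'

    # Fcntl::F_SETPIPE_SZ only exists on Ruby >= 3.0; on 2.7 just skip the
    # pipe-buffer resize rather than crashing.
    if Fcntl.const_defined?(:F_SETPIPE_SZ)
      fifo_io.fcntl(Fcntl::F_SETPIPE_SZ, 1_048_576)   # example size: 1 MiB
    end
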
The encoding settings for intermediate files were using -preset veryfast
-crf 30, which resulted in very poor quality video.
After a bit of experimenting, I decided to change this to -preset
veryfast -crf 23. This results in files which are roughly twice the size
of before, but they look significantly better.
There are improvements possible at the same file size by switching to a
slower encoding preset. But in the case of -preset medium, for example,
when normalized to the same output file size, you end up using about 1.5
times as much CPU time to gain only a very small amount of video
quality.
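
For reference, the intermediate-file encode options change roughly like
this (simplified; the real ffmpeg command has more flags):

    # Before: ['-preset', 'veryfast', '-crf', '30']
    encode_options = ['-c:v', 'libx264', '-preset', 'veryfast', '-crf', '23']
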
A comparison was being done against the wrong variable, resulting in an
empty-string filename being added to the inactive videos list. This
caused a crash later in the code.
In a particular case where you have a large timestamp gap followed by a
frame which re-initializes the filters in the pre-processing ffmpeg
(e.g. due to a resolution change), the fps filter will keep generating
frames to fill this gap even if downstream filters aren't accepting more
frames. Add a trim filter which will eat the frames past the desired end
timestamp to prevent them from getting queued up.
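
A sketch of the added trim step (variable names and values are
illustrative):

    # fps fills timestamp gaps by repeating frames; a trailing trim caps the
    # stream at the cut's duration so those fill frames can't pile up in the
    # filter graph once downstream stops accepting input.
    cut_duration = (cut_end_ms - cut_start_ms) / 1000.0
    filter = "fps=#{framerate},trim=end=#{cut_duration}"
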
Additionally, in cases with unlucky timing on the filter
re-initialization, the pre-processing ffmpeg can end up generating some
output past the set end time. Since the compositing ffmpeg exits once it
has read enough input, this can cause the pre-processing ffmpeg to fail
with a "Broken pipe" error. To work around this problem, the processing
scripts themselves can open the pipe for reading to hold it open, and
then send a signal to the pre-processing ffmpeg to tell it to exit. This
results in ffmpeg exiting with the return code 255, which can be
distinguished from actual errors.
As a bonus, opening the fifo in the processing script allows increasing
the size of the pipe buffer, which should result in slightly better
performance.
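
A condensed sketch of the workaround from the processing script's side
(variable names and the exact signal used are assumptions):

    require 'fcntl'

    # Hold the read side of the fifo open so the pre-processing ffmpeg never
    # hits a closed pipe, and bump the pipe buffer while we're at it.
    fifo_io = File.open(fifo_path, IO::RDONLY | IO::NONBLOCK)
    fifo_io.fcntl(Fcntl::F_SETPIPE_SZ, 1_048_576) if Fcntl.const_defined?(:F_SETPIPE_SZ)

    # ... run the compositing ffmpeg; it exits once it has read enough ...

    # Ask the pre-processing ffmpeg to stop; it exits with 255 in that case,
    # which we treat as success (anything else is a real error).
    Process.kill('TERM', preprocess_pid)
    _, status = Process.waitpid2(preprocess_pid)
    raise 'pre-processing ffmpeg failed' unless [0, 255].include?(status.exitstatus)
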
The recording processing would crash if an area was present in the
layout, but was missing in the EDL entry being processed.
This doesn't happen in normal conditions, since most of the methods for
generating an EDL will result in areas being present, even if there are
no videos for that area.
But the EDL cleanup recently added can sometimes cause an EDL entry with
no areas to be processed, so add the code to handle this possibility.
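
The handling amounts to treating a missing area as having no videos
(sketch; the exact EDL entry layout is an assumption):

    # :areas may be absent from this EDL entry, or may not contain this
    # particular layout area; either way, composite the area as empty.
    videos = (entry[:areas] || {}).fetch(area_name, [])
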
The tpad filter doesn't actually extend the video backwards (i.e. it
does not create frames with timestamps earlier than the first video
timestamp). Instead, it *delays* the start of the video.
Using it incorrectly was causing audio/video desyncs in the desktop
sharing, and also occasional processing failure if it pushed back the
video enough that the compositing ffmpeg process didn't end up reading
to the end of the input video.
Use the fps filter's "start_time" option instead, which *does* extend
the video backwards to the configured start time by duplicating the
first frame.
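
So the deskshare chain roughly swaps tpad for the fps filter's
start_time option (variable names and values are illustrative):

    # start_time makes fps duplicate the first frame backwards to the cut's
    # start, rather than delaying the whole stream like tpad did.
    filter = "fps=#{framerate}:start_time=#{video_start_offset}"
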
The "EDL" provided to the recording video processing can sometimes
contain a series of cuts in very close succession - just milliseconds
between them - purely by chance (e.g. two webcams disconnect at almost
the same time). Right now this can result in segments failing to
process (in some rare cases), or in a segment being discarded if it
processes but is detected to be empty (no frames).
There are also some problems that can cause a too-short recording -
just milliseconds between start and stop, or between start and the
meeting end - which also currently fails to process. We've found
it's better for user feedback/support if the recording successfully
processes with no content in this case.
Add a cleanup function that goes through the EDL and consolidates cuts
which are too close together, and ensures that the final output meets
a minimum length.
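
A simplified sketch of that cleanup (thresholds, field names and the
merge behaviour are assumptions, not the exact implementation):

    MIN_CUT_GAP_MS = 250       # example: merge cuts closer together than this
    MIN_TOTAL_LEN_MS = 1_000   # example: minimum length of the final output

    def cleanup_edl(edl)
      cleaned = [edl.first]
      edl.drop(1).each do |entry|
        if entry[:timestamp] - cleaned.last[:timestamp] < MIN_CUT_GAP_MS
          # Cuts too close together: keep the earlier timestamp, but take the
          # later entry's contents, collapsing the two cuts into one.
          cleaned[-1] = entry.merge(timestamp: cleaned.last[:timestamp])
        else
          cleaned << entry
        end
      end
      # Make sure the last entry leaves at least the minimum total length.
      if cleaned.last[:timestamp] - cleaned.first[:timestamp] < MIN_TOTAL_LEN_MS
        cleaned[-1] = cleaned.last.merge(
          timestamp: cleaned.first[:timestamp] + MIN_TOTAL_LEN_MS
        )
      end
      cleaned
    end
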
There was a brief period during 2.5 development when recordings had the
typo "webacms" instead of "webcams" on one of the event names. I hit
this now and then, so just check for both names in the recording
processing.
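
Illustrative only - the event name below is made up; the real fix just
accepts both spellings when matching the event:

    # Accept both the correct spelling and the 2.5-era "webacms" typo.
    WEBCAMS_EVENT = /\AStartWeb(?:cams|acms)OnlyEvent\z/   # hypothetical name
    webcam_events = events.select { |e| WEBCAMS_EVENT.match?(e['eventname']) }
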
The code no longer retrieves chat or avatar color, so stop checking for
those in the tests.
Fix the get_chat_events code to include the sender name when the sender
id is not available (this only happens on *really* old recordings, but
it's a trivial fix).
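
Something along these lines (element names here are illustrative, not
necessarily the exact events.xml schema):

    # Prefer the sender id; on very old recordings it may be missing, so fall
    # back to the sender name instead.
    sender = event.at_xpath('senderId')&.text
    sender = event.at_xpath('sender')&.text if sender.nil? || sender.empty?
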
A few minor updates and fixes to the video recording format:
* The 'show_moderator_viewpoint' recording setting is now honoured.
* The desktop sharing video replaces the presentation area - it no
longer hides webcams (it now matches the live meeting).
* The 'playback_protocol' recording setting is now honoured (recording
links will correctly use https when that's configured).
When a deskshare stream with combined audio + video starts up, it can
happen that the audio starts before the video - so the first video frame
will be some amount of time after the file start.
If there's a recording processing cut in this gap, the processing can
crash because it can generate an output video with no video frames.
There are two parts to the fix:
* Trim input videos with the trim filter, configured to ensure at
least 1 output frame is generated, even if it would be after the
end timestamp.
* Use the tpad filter to pad the *start* of a video stream to make
sure there's something in the gap.
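
Sketch of the two-part filter change (option values and variable names
are illustrative):

    # trim is given both a time-based end and end_frame=1 so that at least one
    # frame is emitted even when the cut falls entirely in the audio-only gap;
    # tpad then clones that first frame backwards to cover the gap at the start.
    filter = "trim=end=#{cut_duration}:end_frame=1," \
             "tpad=start_duration=#{video_start_offset}:start_mode=clone"
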
In cases of extremely short (single frame) input videos, the fps filter
can sometimes generate 0-frame output videos, resulting in the tpad
filter having no input (this breaks it, causing a busy loop).
Move the tpad filter to before the fps filter to solve this problem.
This isn't perfect, since the tpad filter doesn't work well on variable-
framerate video (it generates extremely high framerate video with a lot
of frames that will be discarded), but this only happens between the
tpad and fps filters, and only at the end of an input video (usually
right before a cut) so this seems acceptable.
Since the tpad and fps filters are in the same process, these duplicate
frames don't actually require copying any data (the frame is
reference-counted), and still process reasonably quickly.
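
The resulting per-input ordering is roughly (illustrative):

    # tpad runs directly on the (possibly variable-framerate) decoded input;
    # fps afterwards normalizes the framerate, discarding any excess frames
    # tpad produced near the end of the input.
    filter = "trim=end=#{cut_duration}:end_frame=1," \
             "tpad=start_duration=#{video_start_offset}:start_mode=clone," \
             "fps=#{framerate}:start_time=#{start_time}"
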
Fixes #16407