This is a refinement of the fix introduced in 3c1a9cd7c4.
Further investigation reveals that the bad timestamps which result in
error messages or a hang occur when the `apad` filter does not receive
any input audio frames. This happens if the
seekpoint is after the end of audio. The existing seekpoint checks can't
cover this, because in a deskshare file the video stream can continue on
after the audio stream ends.
The `,asetpts=N` filter was added to deal with this problem, and it
works in most cases. But there is a case where it fails - when the
"mismatched length" audio stretching to work around broken recordings
from BigBlueButton 0.81/0.90 kicks in.
The issue there is that the `atrim=start=S` filter (used due to the
difficulty of calculating seeks when stretching) hits the invalid
timestamps and hangs.
I'm working around this issue with a "defense in depth" combination of
two changes:
* Move the `,asetpts=N` filter to be applied before the audio stretch
filters. This fixes the processing hang.
* Adjust the conditions on the "mismatched length" audio stretching so
that it only gets applied on audio files likely to be from extremely
old BigBlueButton versions - those with audio in wav files, or encoded
to vorbis.
BigBlueButton has been using audio recorded directly to opus by
FreeSWITCH for quite a while, and the handling for gaps or lost packets
is done in current BBB versions by a combination of the libopus decoder
and the use of ffmpeg's `aresample=async=1000` filter to dynamically
stretch, squish, or fill in audio so it becomes a continuous stream
that's locked to the file timestamps. Applying the "mismatched length"
processing on top of that is probably making audio sync issues worse.
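As a rough sketch of the reordering (filter names and parameters here are illustrative - in particular, the stretch filter shown is a stand-in for whatever the EDL code actually uses):

```ruby
# Illustrative values; the real script computes these from the EDL.
stretch_needed = true
stretch_rate   = 1.02   # hypothetical stretch factor
seek           = 12.345 # hypothetical seek point (seconds)

filters = []
filters << 'asetpts=N'  # normalize pts first, before any stretch filters
if stretch_needed       # "mismatched length" workaround (old wav/vorbis audio)
  filters << "atempo=#{stretch_rate},atrim=start=#{seek}"
end
filters << 'aresample=async=1000'  # fill gaps; lock audio to file timestamps
filters << 'apad'                  # pad out past the end of the audio
filter_chain = filters.join(',')
```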
The recording archive step may fail when remuxing invalid files from KMS
or the new recorder - e.g. when the raw files are 0 bytes in size.
This commit handles the exception raised by EDL::encode so the archive step
keeps going: it logs the issue as a warning and archives the problematic file anyway.
EDL::encode now removes the temporary file when the ffmpeg command execution
fails - this should avoid leaving any stale files around in case of failure.
No specific check for the nature of the error is done - the idea is that
subsequent phases will discard or fix the files if necessary, according
to each processing script's needs, making the behavior (in this
specific scenario) similar to what it was before archive remuxing was
introduced.
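A minimal sketch of the archive-side handling (the EDL.encode signature and logger calls are as I understand them; treat the details as assumptions):

```ruby
require 'fileutils'

begin
  # Remux the raw file; on failure EDL.encode now cleans up its temp file.
  BigBlueButton::EDL.encode(audio_file, video_file, format, output_basename)
rescue StandardError => e
  BigBlueButton.logger.warn("Remuxing failed: #{e.message}; archiving the file as-is")
  FileUtils.cp(raw_file, archive_dir)  # archive the problematic file anyway
end
```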
The Fcntl::F_SETPIPE_SZ constant was added in Ruby 3.0, but Ubuntu 20.04
still uses Ruby 2.7. Add some error handling so processing doesn't fail
if the constant is not found.
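A sketch of the guard (the pipe size is illustrative):

```ruby
require 'fcntl'

def try_grow_pipe(io, size)
  io.fcntl(Fcntl::F_SETPIPE_SZ, size)
rescue NameError
  # Fcntl::F_SETPIPE_SZ is only defined on Ruby >= 3.0; on Ruby 2.7
  # (Ubuntu 20.04) just keep the kernel's default pipe size.
end
```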
The encoding settings for intermediate files were using -preset veryfast
-crf 30, which resulted in very poor quality video.
After a bit of experimenting, I decided to change this to -preset
veryfast -crf 23. This results in files which are roughly twice the size
of before, but they look significantly better.
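In ffmpeg terms the change amounts to something like (the surrounding arguments are illustrative):

```ruby
# Before: ['-preset', 'veryfast', '-crf', '30']
ffmpeg_args = ['-c:v', 'libx264', '-preset', 'veryfast', '-crf', '23']
```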
There are improvements possible at the same file size by switching to a
slower encoding preset. But with -preset medium, for example, when
normalized to the same output file size, you end up using about 1.5
times as much CPU time to gain only a very small amount of video
quality.
A comparison was being done against the wrong variable, resulting in the
empty string filename being added to the inactive videos list. This
caused a crash later in the code.
In a particular case where you have a large timestamp gap followed by a
frame which re-initializes the filters in the pre-processing ffmpeg
(e.g. due to a resolution change), the fps filter will keep generating
frames to fill this gap even if downstream filters aren't accepting more
frames. Add a trim filter which will eat the frames past the desired end
timestamp to prevent them from getting queued up.
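The added filter is along these lines (a sketch; how the end timestamp is calculated depends on the seek handling):

```ruby
video_end = 67.89  # hypothetical desired end timestamp (seconds)
# trim runs after fps, dropping any fill frames past the desired end.
filter = "fps=24,trim=end=#{video_end}"
```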
Additionally, in cases with unlucky timing on the filter
re-initialization, the pre-processing ffmpeg can end up generating some
output past the set end time. Since the compositing ffmpeg exits once it
has read enough input, this can cause the pre-processing ffmpeg to fail
with a "Broken pipe" error. To work around this problem, the processing
scripts themselves can open the pipe for reading to hold it open, and
then send a signal to the pre-processing ffmpeg to tell it to exit. This
results in ffmpeg exiting with the return code 255, which can be
distinguished from actual errors.
As a bonus, opening the fifo in the processing script allows increasing
the size of the pipe buffer, which should result in slightly better
performance.
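Roughly, the script-side flow looks like this (a sketch; paths and variable names are illustrative):

```ruby
require 'fcntl'

fifo = '/tmp/preprocess0.fifo'  # hypothetical named pipe path
# Open the read side ourselves so the pipe stays open even after the
# compositing ffmpeg has read everything it needs and exited.
reader = File.open(fifo, File::RDONLY | File::NONBLOCK)
begin
  reader.fcntl(Fcntl::F_SETPIPE_SZ, 1_048_576)  # bigger buffer, fewer stalls
rescue NameError
  # Constant missing on Ruby 2.7; keep the default pipe size.
end

ffmpeg_pid = 12345  # hypothetical pid of the pre-processing ffmpeg
Process.kill('TERM', ffmpeg_pid)  # tell the pre-processing ffmpeg to exit
_pid, status = Process.waitpid2(ffmpeg_pid)
# 255 is the "we signalled it" case; other non-zero codes are real errors.
ok = [0, 255].include?(status.exitstatus)
```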
The recording processing would crash if an area was present in the
layout, but was missing in the EDL entry being processed.
This doesn't happen in normal conditions, since most of the methods for
generating an EDL will result in areas being present, even if there are
no videos for that area.
But the EDL cleanup recently added can sometimes cause an EDL entry with
no areas to be processed, so add the code to handle this possibility.
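The guard is essentially (the EDL entry structure shown here is an assumption):

```ruby
edl_entry   = { timestamp: 0, areas: {} }  # an entry with no areas at all
layout_area = :webcam                      # hypothetical area name

# Tolerate both a missing :areas hash and a missing individual area.
videos = (edl_entry[:areas] || {})[layout_area] || []
```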
The tpad filter doesn't actually extend the video backwards (i.e. it
does not create frames with timestamps earlier than the first video
timestamp). Instead, it *delays* the start of the video.
Using it incorrectly was causing audio/video desyncs in the desktop
sharing, and also occasional processing failure if it pushed back the
video enough that the compositing ffmpeg process didn't end up reading
to the end of the input video.
Use the fps filter's "start_time" option instead, which *does* extend
the video backwards to the configured start time by duplicating the
first frame.
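i.e. roughly (a sketch):

```ruby
start_ts = 12.345  # hypothetical segment start (seconds)
# fps duplicates the first frame back to start_time; tpad would only delay it.
filter = "fps=24:start_time=#{start_ts}"
```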
The "EDL" provided to the recording video processing can sometimes
contain a series of cuts in very close succession - just milliseconds
between them - purely by chance (e.g. two webcams disconnect at almost
the same time). Right now this can result in segments failing to
process (in some rare cases), or in a segment that processes but is
detected to be empty (no frames) and gets discarded.
There are also some problems that can occur and cause a too-short
recording - just milliseconds between start and stop, or between start
and the meeting end - which also currently fail to process. We've found
it's better for user feedback/support if the recording successfully
processes with no content in this case.
Add a cleanup function that goes through the EDL and consolidates cuts
which are too close together, and ensures that the final output meets
a minimum length.
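A minimal sketch of the cleanup pass (thresholds and the entry layout are assumptions, not the actual values):

```ruby
MIN_CUT_MS    = 500    # hypothetical: merge cuts closer together than this
MIN_LENGTH_MS = 1_000  # hypothetical minimum output length

def cleanup_edl(edl)
  cleaned = []
  edl.each do |entry|
    if !cleaned.empty? && entry[:timestamp] - cleaned.last[:timestamp] < MIN_CUT_MS
      # Too close to the previous cut: keep the earlier timestamp but
      # take the newer entry's contents.
      cleaned[-1] = entry.merge(timestamp: cleaned.last[:timestamp])
    else
      cleaned << entry
    end
  end
  # Ensure the final output meets the minimum length.
  if cleaned.length >= 2 &&
     cleaned.last[:timestamp] - cleaned.first[:timestamp] < MIN_LENGTH_MS
    cleaned.last[:timestamp] = cleaned.first[:timestamp] + MIN_LENGTH_MS
  end
  cleaned
end
```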
There was a brief period during 2.5 development when recordings had the
typo "webacms" instead of "webcams" on one of the event names. I hit
this now and then, so just check for both names in the recording
processing.
The code no longer retrieves chat or avatar color, so stop checking for
those in the tests.
Fix the get_chat_events code to include the sender name when the sender
id is not available (this only happens on *really* old recordings, but
it's a trivial fix).
A few minor updates and fixes to the video recording format:
* The 'show_moderator_viewpoint' recording setting is now honoured.
* The desktop sharing video replaces the presentation area - it no
longer hides webcams (it now matches the live meeting).
* The 'playback_protocol' recording setting is now honoured (recording
links will correctly use https when that's configured).
When a deskshare stream with combined audio + video starts up, it can
happen that the audio starts before the video - so the first video frame
will be some amount of time after the file start.
If there's a recording processing cut in this gap, the processing can
crash because it can generate an output video with no video frames.
There are two parts to the fix:
* Trim input videos with the trim filter, configured to ensure at
least 1 output frame is generated, even if it would be after the
end timestamp.
* Use the tpad filter to pad the *start* of a video stream to make
sure there's something in the gap.
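Sketched out (the timestamps and filter parameters are illustrative):

```ruby
seek_start = 10.0  # hypothetical cut start within the file (seconds)
seek_end   = 15.0  # hypothetical cut end
first_pts  = 12.0  # timestamp of the file's first video frame

# Never trim so tightly that zero frames remain, even if the frame we
# keep is after the end timestamp.
trim_end = [seek_end, first_pts + 0.001].max
filter   = "trim=start=#{seek_start}:end=#{trim_end}"
# Clone the first frame into the leading gap where audio exists but
# video doesn't yet.
filter << ",tpad=start_duration=#{first_pts - seek_start}:start_mode=clone"
```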
In cases of extremely short (single frame) input videos, the fps filter
can sometimes generate 0-frame output videos, resulting in the tpad
filter having no input (this breaks it, causing a busy loop).
Move the tpad filter to before the fps filter to solve this problem.
This isn't perfect, since the tpad filter doesn't work well on variable-
framerate video (it generates extremely high framerate video with a lot
of frames that will be discarded), but this only happens between the
tpad and fps filters, and only at the end of an input video (usually
right before a cut) so this seems acceptable.
Since the tpad and fps filter are in the same process, these duplicate
frames don't actually require copying any data (the frame is
reference-counted), and still process reasonably quickly.
Fixes #16407
The tpad filter is problematic on the variable-framerate webcam files,
and the result can end up being hangs (or, at least, very slow
processing) in the compositing.
Move the tpad filter to the compositing process where it can run after
the fps filter has converted the video to constant framerate. It still
needs to run before the start trimming, so switch to using the trim
filter rather than the fps filter's start_pts feature.
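After this change the split looks roughly like (a sketch with placeholder values):

```ruby
# Pre-processing ffmpeg (one per input): convert to constant framerate.
preprocess_filter = 'fps=24'
# Compositing ffmpeg: pad the now-CFR stream, then trim the start.
composite_filter  = 'tpad=start_duration=2.0:start_mode=clone,trim=start=12.345'
```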
Even with the filter changes made, there are still some cases where
filter reconfigurations can cause the filter chain to hang. To solve the
issue completely, I have split the pre-processing of video files out
into separate ffmpeg processes, so that the filter chain used for
compositing is never reconfigured.
Each input video now has a separate ffmpeg process run for it which
does the scaling, padding, and video extending steps. To avoid issues
with disk space usage and extra cpu usage or quality loss, the output
from these separate processes is sent to the compositing ffmpeg process
as uncompressed video in a pipe. To simplify the setup, named pipes
(special files) are used rather than setting up pipes in the ruby code
programmatically.
The extra ffmpeg processes are configured to log to files, and when
complete their log output is copied to the recording processing log.
Processes are joined to ensure zombie processes are not created, and
the return codes of all the processes are checked so errors can be
detected.
Due to the overhead of transferring video through pipes, this might
be a bit slower than the 2.4 recording processing - but on the other
hand, some of the video decoding and scaling happens in parallel, so it
might balance out.
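The per-input setup looks roughly like this (a sketch; file names, the filter, and the raw-video container choice are illustrative):

```ruby
require 'fileutils'

output_dir        = '/tmp/presentation'     # hypothetical working directory
preprocess_filter = 'scale=640:480,fps=24'  # hypothetical per-input chain
FileUtils.mkdir_p(output_dir)

fifo = File.join(output_dir, 'input0.fifo')
File.mkfifo(fifo)  # named pipe (special file), no ruby-side plumbing needed
log  = File.join(output_dir, 'input0.log')

# Per-input pre-processing: scale/pad/extend, then write uncompressed
# video into the pipe for the compositing ffmpeg to read.
pid = spawn('ffmpeg', '-i', 'webcam0.webm',
            '-filter:v', preprocess_filter,
            '-c:v', 'rawvideo', '-f', 'nut', fifo,
            err: [log, 'w'])

# ... run the compositing ffmpeg with the fifo among its inputs ...

Process.waitpid(pid)  # join, so no zombie processes are left behind
raise 'pre-processing ffmpeg failed' unless $?.success?
# On completion, copy the contents of `log` into the processing log.
```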
Because the input videos for BigBlueButton recording processing switch
resolution and aspect ratio, the filter chain gets re-initialized, and
any state in the filters is lost. This causes problems with the
following filters:
`color`: Timestamps restart from 0, rather than continuing at the point
where they left off.
`fps=start_time=12.345`: After reset, the fps filter thinks it's at the
start of the file again, so the next frame it sees gets duplicated
output for timestamps from the `start_time` until it catches back up.
`setpts=PTS-STARTPTS`: The 'STARTPTS' is re-read as the first pts the
filter sees after reinitialization, so the timestamp of the next frame
is reset to 0. (In practice, this didn't cause any problems, because the
duplicate frames created by the fps filter had the original start time.)
The end result of all of these issues is that a lot of duplicate frames
were created with invalid timestamps, which eventually get discarded
by ffmpeg. But a lot of time is wasted, causing recordings to sometimes
take hours to process when they should be ready in minutes.
The fixes are as follows:
* The `color` filters are used to generate the background and
substitutes for missing videos. Move them out to separate filter
chains by using the 'lavfi' input format, which lets you use a filter
as if it was an input file.
* Rather than use the `fps` filter's `start_time` feature, use the
`trim` filter to remove early frames.
* The actual start pts is already known by the script, so replace
`setpts=PTS-STARTPTS` with `setpts=PTS-12.345/TB`, substituting in the
absolute time.
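Concretely, the three fixes look something like this (a sketch; sizes and timestamps are placeholders):

```ruby
width, height, start_ts = 1920, 1080, 12.345  # illustrative values

# 1. Background comes from a lavfi input instead of a `color` filter chain:
bg_input = ['-f', 'lavfi', '-i', "color=c=white:s=#{width}x#{height}"]

# 2. + 3. trim early frames instead of fps=start_time, and subtract the
# known absolute start time instead of STARTPTS:
filter = "trim=start=#{start_ts},setpts=PTS-#{start_ts}/TB"
```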
When converting from using the 'movie' source filter to using separate
ffmpeg command line inputs, it wasn't taken into account that the
'movie' filter passes through timestamps from the source file, while the
ffmpeg input adjusts timestamps so that '0' is the selected seek point.
The easiest fix is to add an option to the ffmpeg command to disable the
timestamp adjustments.
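The commit doesn't name the option here, but `-copyts` - which tells ffmpeg to keep source timestamps rather than rebasing them to the seek point - is the kind of flag in question (that specific flag is my assumption):

```ruby
# Assumption: -copyts keeps the source timestamps instead of shifting
# them so the seek point becomes 0.
ffmpeg_cmd << '-copyts'
```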
Fixes #15644 (This needs to go into BigBlueButton 2.5 and 2.6)
Changed the names of tldraw record events to differentiate them from the old events.
Publish a tldraw.json file with all the shape information captured during the meeting, to be used in playback.
Adapted cursor.xml and panzoom.xml to store tldraw data.
Publish the slide SVGs to be used by playback's tldraw component (the PNGs have differing image sizes, which would mess up the coordinates).
Backwards compatible with old recordings.
Based on work done by Tiago Jacobs and Guilherme Pereira Leme to
investigate the behaviour of ffmpeg on videos with changing input size.
Rather than having the EDL code set fixed sizes for the scale and pad
filters, instead use ffmpeg's built in features to calculate the scale
and pad automatically, dynamically updating if the input video size
changes.
In the version of ffmpeg in Ubuntu 20.04, the 'movie' input filter is
buggy and doesn't correctly handle the video size changing. Instead of
using the movie filter, use separate inputs to the ffmpeg command (the
design used here is based on the code in audio.rb). The filter chain is
now stored into a file (using -filter_complex_script) to reduce problems
with the command line getting too long.
We are using a version of ffmpeg that's new enough to have the tpad
filter now too, so use it to extend the video if needed instead of an
extra concat.
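The dynamic scale and pad can be expressed with ffmpeg's expression support, along these lines (a sketch; the production filter chain may differ):

```ruby
width, height = 1920, 1080  # output area size (illustrative)

# Fit the input inside the area, keeping the aspect ratio, then center
# it; iw/ih and ow/oh are evaluated by ffmpeg itself, so a mid-stream
# input size change updates the result automatically.
filter = "scale=w=#{width}:h=#{height}:force_original_aspect_ratio=decrease," \
         "pad=w=#{width}:h=#{height}:x=(ow-iw)/2:y=(oh-ih)/2"

# Write the chain to a file to keep the command line short.
File.write('filter.script', filter)
ffmpeg_args = ['-filter_complex_script', 'filter.script']
```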
In some cases with incomplete/partially corrupt files, the input video
file can be shorter than the displayed time. If there is a badly timed
cut, this can result in a seek to a point at which ffmpeg is unable to
start.
Add a detection for this situation, and replace with a blank video.
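A sketch of the detection (the probe helper and the sizes are hypothetical):

```ruby
def probe_duration(file)  # hypothetical ffprobe wrapper
  `ffprobe -v error -show_entries format=duration -of csv=p=0 #{file}`.to_f
end

seek  = 95.0  # hypothetical cut start (seconds)
input = 'deskshare0.webm'

input_args =
  if probe_duration(input) < seek
    # The file ends before the seek point; substitute a blank video.
    ['-f', 'lavfi', '-i', 'color=c=black:s=1280x720:r=5']
  else
    ['-ss', seek.to_s, '-i', input]
  end
```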
Move all Etherpad access control from Meteor to a separate [Node application](https://github.com/bigbluebutton/bbb-pads).
This new app uses [Etherpad's API](https://etherpad.org/doc/v1.8.4/#index_overview)
to create groups and manage session tokens for the users accessing them. Each group
represents one distinct pad in the html5 client.
- Removed locked users' access to pads: replaced read-only pad access with a new pad content sharing routine
- Pad access is now controlled by [Etherpad's API](https://etherpad.org/doc/v1.8.4/#index_overview)
- Edits to closed caption content are now reflected in its live feedback
- Improved the live feedback in the closed captions' dictation mode
- Moved all Etherpad API control from Meteor to a separate [app](https://github.com/bigbluebutton/bbb-pads)
- Included access control both in akka-apps and bbb-pads
There's an occasional issue when processing the audio from a desktop
sharing file: the STARTPTS value is somehow invalid, and this results
in bad timestamps in the output stream. Depending on the
selected codec/container combination, this might result in errors such
as spam of the message:
Application provided invalid, non monotonically increasing dts to muxer in stream
or simply a hang during processing.
Since we're already using aresample in async mode to correct timestamp
gaps, we can use "asetpts=N" here to simply set the audio frame pts
values to the sample number directly, giving proper monotonic timestamps
with no gaps.
Move the handling of chat events into the shared library so it can be
used by multiple recording formats.
The anonymization of names is based on the external user id, if
available, so users have a consistent name through the meeting. Note
that no effort is made to edit chat messages - if someone is mentioned
by name in a chat message, that will still be visible.
Default settings for anonymization can be controlled in
bigbluebutton.yml, and per-meeting overrides can be done using meta
parameters on the create call.
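A sketch of how the consistent pseudonyms can be derived (the naming scheme shown is illustrative, not necessarily what the library uses):

```ruby
# Assign each user a stable pseudonym, keyed by the external user id
# when available so the name stays consistent through the meeting.
anon_names = Hash.new { |h, k| h[k] = "User #{h.size + 1}" }

def anonymous_name(anon_names, ext_user_id, internal_user_id)
  anon_names[ext_user_id || internal_user_id]
end
```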
Disabling audio or video processing isn't really something that's part
of the working format (and at some point we might want to combine
audio+video processing together).
Move the setting of the '-an' and '-vn' options to where they're
required - as a detail of the EDL processing for video-only and
audio-only components of the output.
Honestly, the main reason for this change is that when testing alternate
working formats, I had accidentally dropped the '-vn' option from the
FFMPEG_WF_ARGS variable without noticing.
Generate an external_videos.json file in the recording, containing an
array of the URLs and timestamps of the external videos played.
This file will be published along with the other presentation format files
and can be used for display in the playback.
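The generated file might look like this (the field names are assumptions, not confirmed):

```ruby
require 'json'

# Hypothetical data extracted from the events:
external_videos = [
  { timestamp: 103_500, external_video_url: 'https://example.com/video.mp4' },
]
File.write('external_videos.json', JSON.pretty_generate(external_videos))
```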