The rap scripts might load or run other scripts using relative paths from
the scripts directory, so restore that behaviour.
Bundler automatically looks up in parent dirs to find the Gemfile, so
loading gems will work correctly.
Several scripts run the bundler setup internally, so no explicit bundler
command is needed for them. Start the others via /usr/bin/bundle
(installed by ruby-bundler) to load the environment.
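For reference, the in-script setup those scripts perform boils down to a single require (a minimal sketch; Bundler walks up from the working directory to find the Gemfile):

```ruby
# Loads the gem environment from the nearest Gemfile in a parent directory;
# roughly what running under `bundle exec` provides for the other scripts.
require 'bundler/setup'
```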
Dependencies are now installed with Ubuntu's bundler version at package build
time rather than at install time. Gems are also vendored now, and no longer
pollute the operating system.
Move all of Etherpad's access control from Meteor to a separate [Node application](https://github.com/bigbluebutton/bbb-pads).
This new app uses [Etherpad's API](https://etherpad.org/doc/v1.8.4/#index_overview)
to create groups and manage session tokens for users to access them (see the
sketch after the list below). Each group represents one distinct pad in the
HTML5 client.
- Removed locked users' access to pads: the read-only pad access was replaced with a new pad content sharing routine
- Pad access is now controlled by [Etherpad's API](https://etherpad.org/doc/v1.8.4/#index_overview)
- Edited closed captions content is now reflected in the live feedback
- Improved the closed captions' dictation mode live feedback
- Moved all Etherpad API control from Meteor to a separate [app](https://github.com/bigbluebutton/bbb-pads)
- Added access control to both akka-apps and bbb-pads
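A minimal sketch of the group/session flow against Etherpad's HTTP API, assuming a local Etherpad with its API key in `ETHERPAD_APIKEY` (the endpoint names are from the API linked above; everything else is illustrative):

```ruby
require 'json'
require 'net/http'

# Calls one Etherpad HTTP API method and returns its 'data' payload.
def etherpad_api(method, params = {})
  params['apikey'] = ENV.fetch('ETHERPAD_APIKEY')
  uri = URI("http://127.0.0.1:9001/api/1/#{method}")
  uri.query = URI.encode_www_form(params)
  body = JSON.parse(Net::HTTP.get(uri))
  raise body['message'] unless body['code'].zero?
  body['data']
end

group   = etherpad_api('createGroup')                        # one group per pad
author  = etherpad_api('createAuthor', 'name' => 'alice')
session = etherpad_api('createSession',                      # token granting access
                       'groupID'    => group['groupID'],
                       'authorID'   => author['authorID'],
                       'validUntil' => Time.now.to_i + 3600)
```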
This script fails for older recordings with no fill attribute: in that case `event.at_xpath('fill')` is `nil`, and it fails once `.text` is called on `nil` (see the guard-clause sketch after the list below).
+ Use guard clauses where applicable
+ Use JSON parse over JSON load
+ Rescue StandardError instead of Exception
+ Formatting / stylistic choices
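A guard-clause sketch of the nil fix, assuming the event node shape implied above (method and field names illustrative):

```ruby
# Older recordings may lack a <fill> element entirely; bail out early
# instead of calling .text on nil.
def parse_fill(event)
  fill = event.at_xpath('fill')
  return false if fill.nil?   # guard clause for legacy events
  fill.text == 'true'
end
```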
Uses the archived raw files directly instead of copying them over.
Specifies absolute paths in XPath expressions for faster queries.
Linting changes (a combined sketch follows this list):
- Remove useless assignments to the `ext` and `metadata_with_playback` variables
- Use frozen string literals
- Use YAML safe_load, and JSON parse over load
- Rescue StandardError instead of Exception
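A compressed sketch of those lint fixes in one place (file names illustrative):

```ruby
# frozen_string_literal: true
require 'json'
require 'nokogiri'
require 'yaml'

begin
  props = YAML.safe_load(File.read('bigbluebutton.yml')) # safe_load: no arbitrary objects
  meta  = JSON.parse(File.read('metadata.json'))         # parse, not load
  doc   = Nokogiri::XML(File.read('events.xml'))
  # Absolute XPath avoids the whole-document scan the '//' axis triggers:
  events = doc.xpath('/recording/event')
rescue StandardError => e                                # never rescue Exception
  warn "processing failed: #{e.message}"
end
```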
There's an issue where sometimes, when processing the audio from a
desktop sharing file, the STARTPTS value is somehow invalid, and this
results in bad timestamps in the output stream. Depending on the
selected codec/container combination, this might result in errors such
as a flood of the message:
Application provided invalid, non monotonically increasing dts to muxer in stream
or simply a hang during processing.
Since we're already using aresample in async mode to correct timestamp
gaps, we can use "asetpts=N" here to simply set the audio frame pts
values to the sample number directly, giving proper monotonic timestamps
with no gaps.
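The relevant piece of the filter chain then looks something like this (the async value is illustrative); for audio, asetpts's `N` is the count of consumed samples, so the pts becomes the sample index:

```
aresample=async=1000,asetpts=N
```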
I was going to resort to some trickery to make the mediasoup raw files end up
in the same directory as the KMS ones to reduce changes, but it ended up being
too dirty. Instead, I am adding a third directory (/var/mediasoup), tracked by
the rap scripts, which is where the new raw files end up.
Move the handling of chat events into the shared library so it can be
used by multiple recording formats.
The anonymization of names is based on the external user id, if
available, so users keep a consistent name throughout the meeting (a
sketch follows). Note that no effort is made to edit chat messages: if
someone is mentioned by name in a chat message, that name will still be
visible.
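A minimal sketch of that keying scheme, purely illustrative (the scripts' actual naming and hashing may differ):

```ruby
require 'digest'

# Deriving the anonymous name from the external user id means the same
# person gets the same label in every message of the meeting.
def anonymized_name(ext_user_id)
  "User #{Digest::SHA1.hexdigest(ext_user_id)[0, 8]}"
end
```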
Default settings for anonymization can be controlled in
bigbluebutton.yml, and per-meeting overrides can be done using meta
parameters on the create call.
Disabling audio or video processing isn't really something that's part
of the working format (and at some point we might want to combine
audio+video processing together).
Move the setting of the '-an' and '-vn' options to where they're
required - as a detail of the EDL processing for video-only and
audio-only components of the output.
Honestly, the main reason for this change is that when testing alternate
working formats, I had accidentally dropped the '-vn' option from the
FFMPEG_WF_ARGS variable without noticing.
Generate an external_videos.json file in the recording with an array of
the played external videos' URLs and timestamps.
This file will be published along with the other presentation format files
and can be used for display in the playback.
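Its shape might be (field names assumed for illustration; the source doesn't spell out the schema):

```json
[
  { "external_video_url": "https://example.com/lecture.mp4", "timestamp": 103500 }
]
```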
Generate a poll JSON file in the recording with an array of published
polling data:
- type;
- question;
- answers;
- number of responders; and
- number of respondents
This file will be published along with the other presentation format files
and can be used to display the poll results in the playback.
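An illustrative entry, with an assumed schema:

```json
[
  {
    "type": "YN",
    "question": "Shall we take a break?",
    "answers": [
      { "key": "Yes", "num_votes": 7 },
      { "key": "No", "num_votes": 2 }
    ],
    "num_responders": 9,
    "num_respondents": 12
  }
]
```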
Previously, bbb-record --rebuild restarted recording processing from
scratch by creating the .../recording/<meeting_id>.done file. This
caused the recording to be reprocessed starting at the archive step.
However, re-running the archive step for an existing meeting is not
really supported! Ever since the segmented recording code was added, it
shouldn't /corrupt/ the recording files, but it's still not good.
And as a side-effect, re-running the archive step will re-create the
.norecord file for meetings without recording marks, meaning that you
cannot use bbb-record --rebuild to force a recording without marks to be
processed.
Switch bbb-record to restart recording processing at the sanity stage to
match the BBB 2.2 behaviour. Rather than having it insert tasks directly
into resque via redis-cli, it goes through a Ruby wrapper that performs
input validation and uses the Resque APIs.
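In outline, such a wrapper might look like this (queue name, worker name, and id pattern are all illustrative):

```ruby
require 'resque'

meeting_id = ARGV[0]
# Validate before touching the queue; internal meeting ids look like a
# sha1 hash joined to a timestamp.
abort "invalid meeting id: #{meeting_id.inspect}" unless meeting_id =~ /\A[0-9a-f]{40}-\d+\z/

# Enqueue through the Resque API instead of hand-writing redis-cli commands.
Resque::Job.create(:rap, 'BigBlueButton::Resque::SanityWorker',
                   'meeting_id' => meeting_id)
```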
In the case where a meeting had recording enabled (record=true on create
call) but the presenter did not start recording during the meeting,
recording processing needs to be stopped after the meeting data is
archived, but before the recording formats are processed.
In the current 2.3 code, processing is halted after the "sanity" step.
However, the 2.2 code stopped processing after the "archive" step
instead. The main difference is that the scripts in the "post_archive"
directory (which are actually post_sanity scripts) did not get run on
non-recorded meetings for 2.2. This behaviour should be preserved for
compatibility.
I have added a special exception that halts processing for a recording
job without causing the entire resque job to be marked as failed. It
only causes the `schedule_next_step` method to be skipped, so the
following jobs won't be run automatically. This fixes #11877
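The pattern is roughly the following (names illustrative; the real worker has more steps):

```ruby
# Raised by a step to stop the chain without failing the Resque job.
class HaltProcessingException < StandardError; end

class BaseWorker
  def run_step
    # ... the actual processing step; raises HaltProcessingException for
    # meetings without recording marks ...
  end

  def schedule_next_step
    # ... enqueue the next stage ...
  end

  def perform
    run_step
    schedule_next_step
  rescue HaltProcessingException
    # Swallowed: the Resque job finishes successfully, but
    # schedule_next_step was skipped, so later stages never run.
  end
end
```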
This function is useful anywhere you want the matched recording marks
with timestamps rebased so that 0 is the start of the meeting; I've used
it for chat analysis, for example.
There is no functional change here; it only exposes the extra function
for recording scripts or dropin/post scripts to use.
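A hypothetical call site, with invented names, to show the intent:

```ruby
# Hypothetical names throughout; the real exported function may differ.
marks = BigBlueButton::Events.get_matched_rec_events_rebased(events_xml)
# => e.g. [{ start_timestamp: 0, stop_timestamp: 120_000 }, ...]

# With marks rebased to the meeting start, offsets become trivial:
def in_recording?(marks, timestamp)
  marks.any? { |m| (m[:start_timestamp]..m[:stop_timestamp]).cover?(timestamp) }
end
```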
Since Meteor was split into multiple processes, and events started to be
filtered by instance, all of Etherpad's Redis events were being discarded.
Etherpad has a Redis publisher plugin that is unaware of BigBlueButton's
existence. All the communication between them is kept simple, with minimal
internal data exchange. The concept of distinct subscribers on Meteor's
side broke part of this simplicity, and now Etherpad has to know which
instance must receive its messages. To provide that information, I decided
to include Meteor's instance id as part of the pad's id. The ids look like:
- [instanceId]padId for the shared notes
- [instanceId]padId_cc_(locale) for the closed captions
With those changes, the pad id generation done in the recording scripts had
to be redone, because no instance id is available there. The pad id is now
recorded by akka-apps and queried while archiving the shared notes.
We still use the recording status files to externally monitor the
progress of the recordings. Let's keep these files for now until
we figure out a different way to track the status of the recording.
This incorporates only the audio desync related changes from #11626
* Add the aresample filter with async option to fill in timestamp gaps
* Use the libopus decoder for opus audio instead of ffmpeg's builtin
decoder
This gives the following advantages over the previous code:
* The ffmpeg input filters are loaded from a filter "script" file instead
  of being passed on the command line (see the invocation sketch after
  this list). This fixes some cases of recordings failing to process
  because the ffmpeg command line generated for the audio processing
  exceeded the maximum command line length. (Although that only really
  happens due to BBB bugs...)
* Use absolute positions when trimming audio segments for cuts.
Previously segments were trimmed to the length of the segment, and the
results were concatenated. There's some possibility of accumulated
errors in the segment lengths causing audio desync over time. The new
code incrementally concatenates the segments, and cuts each segment
end based on the absolute time since the start of the meeting, to
avoid error accumulation.
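The corresponding invocation might look like this (paths illustrative); ffmpeg's -filter_complex_script reads the filtergraph from a file, sidestepping the argv length limit:

```
ffmpeg -i audio.opus -filter_complex_script /tmp/audio_filters.txt output.flac
```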
use the libopus decoder and encoder; it's better than ffmpeg's built-in one / flac
don't mix screenshare audio with the mics: it was generating desync with bad audio segments. Encode it together with the video file instead (TODO: needs adjustments in playback)
When managing Etherpad's pads, Meteor makes API calls to initiate the closed
captions and shared notes modules. The pad id was being mapped to a shorter id
than the meeting id because of an Etherpad length limitation.
Changed it to something less guessable.
Collects the shared notes' raw HTML data and publishes it along with the other
recording files. The playback will fetch this file and include an option to
display its content over the chat.
The indexes returned in recording events from BBB refer to positions
within a UTF-16 encoded string. Rather than attempt to untangle this in
the server (which might have a performance cost), it's easier to switch
the caption processing code to operate in UTF-16 encoding as well to
make it work consistently.
The PyICU library provides a UnicodeString type which is a UTF-16 string
similar to Java's and JavaScript's, but which supports all the Python
indexing methods. It's fairly straightforward to swap it in for the types
used previously, and it works natively as an input to the ICU line break
iterator too.
Fixes #10531
The previous implementation of the BigBlueButton.execute method runs the
process with separate stdout and stderr streams. It first reads all of
the output from stdout, then reads all of the output from stderr.
This can cause a deadlock if the process writes a lot of data to stderr:
the IO buffer for stderr fills, blocking the process's progress, but
since the process hasn't closed stdout, the Ruby script is still waiting
on a read from stdout.
Switch to an execution method (using IO.popen) that allows combining
stdout and stderr into a single stream, eliminating the issue.
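As a sketch (command illustrative), the `err: [:child, :out]` spawn option merges the child's stderr into its stdout pipe, so a single read drains both and neither buffer can fill behind the other:

```ruby
# Run a command with stderr merged into stdout and capture all output.
output = IO.popen(['ffmpeg', '-version'], err: [:child, :out], &:read)
puts output unless $?.success?
```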
On my 2.3 alpha server, the method metadata_for(meeting_id) gives back {}
(an empty Hash), so the "return if meeting_metadata.nil?" guard never
triggers. Does @redis.hgetall give {} instead of nil, even though there is
a comment in node_modules/redis/lib/utils.js saying "hgetall converts its
replies to an Object. If the reply is empty, null is returned"? (That
comment describes the Node.js client; the Ruby redis gem's hgetall does
return an empty Hash for a missing key, so the guard also needs an
.empty? check.)
Update the list of invalid characters based on what the XML
specification permits and discourages.
Use the ruby string `scrub` method to remove invalid characters that
can't be expressed in the `tr` syntax, like unpaired surrogates and
UTF-8 prefix bytes.
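A sketch of the two-step cleanup (the exact ranges the commit uses may differ): scrub drops byte sequences invalid in the string's encoding, which tr cannot express, and tr strips control characters the XML specification forbids.

```ruby
def xml_clean(text)
  text.scrub('')                                        # invalid bytes, unpaired surrogates
      .tr("\u0000-\u0008\u000B\u000C\u000E-\u001F", '') # control chars invalid in XML 1.0
end
```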
This commit fixes an issue with reading and writing files.
File.open was used, which means a file remains open unless explicitly
closed or the program exits. This doesn't work on an NFS mount, as the
scripts try to "rm -rf" while the file is still open. This commit fixes
that by replacing all .opens with .reads
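For illustration (path made up): the one-shot form opens, reads, and closes in one call, so no handle lingers to pin NFS silly-renamed .nfs* files:

```ruby
# Before: handle stays open until GC or process exit.
# f = File.open('/var/bigbluebutton/recording/raw/meeting/events.xml')
# data = f.read

# After: read and close in a single call.
data = File.read('/var/bigbluebutton/recording/raw/meeting/events.xml')
```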
The "originalFilename" tag is not present in "SharePresentationEvent", and thus presentation_filename would never be "default.pdf". As a consequence, the thumbnails are always generated from the default.pdf.