The 'mkclean' tool reorganizes the encoded webm file to optimize it for
streaming. In particular, it moves the index to the start of the file.
This fixes streaming in Chrome, which otherwise had a very long delay
before playback started, since it would download the file until it
found the index before beginning playback.
This needs a new dependency added to the bbb-record-core package to
pull in the mkclean tool.
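As a rough sketch of how the remux step could be wrapped (the helper
name and the bare 'mkclean <src> <dst>' invocation are assumptions,
not the exact code):

    require 'fileutils'

    # Illustrative only: remux a freshly-encoded webm so the index ends
    # up at the start of the file, making it suitable for streaming.
    def optimize_webm_for_streaming(webm_file)
      temp_file = "#{webm_file}.orig"
      FileUtils.mv(webm_file, temp_file)
      if system('mkclean', temp_file, webm_file)
        FileUtils.rm_f(temp_file)
      else
        # Put the original file back if mkclean failed, so the recording
        # is still playable (just not stream-optimized).
        FileUtils.mv(temp_file, webm_file)
        raise "mkclean failed for #{webm_file}"
      end
    end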
I had it on DEBUG temporarily for testing. The old version used ERROR,
but that level printed virtually no output, which made diagnosing
issues difficult.
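For context, the change is just the level on the script's standard
Ruby Logger; a minimal sketch (the INFO value shown as the eventual
setting is an assumption):

    require 'logger'

    logger = Logger.new(STDOUT)
    # ERROR hid almost everything; DEBUG was only a temporary setting
    # for testing. Pick a level that still surfaces useful information.
    logger.level = Logger::INFO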
It now does much less directory reading, and performance should
scale far better with large numbers of recordings.
Semantics should be mostly unchanged, but there's greater use
of '.fail' files to mark errors, and '.done' files are now removed
after all of the following processing steps complete.
The rap worker no longer relies on processing scripts leaving
behind empty directories; those are now removed where appropriate.
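As a sketch of the '.done'/'.fail' marker convention described above
(the status directory and file naming are placeholders, not the
worker's actual layout):

    require 'fileutils'

    STATUS_DIR = '/var/bigbluebutton/recording/status'  # placeholder

    # A step records its outcome by touching a marker file; the worker
    # then only needs to check for two small files per step instead of
    # scanning recording directories.
    def mark_step(meeting_id, step, ok)
      suffix = ok ? 'done' : 'fail'
      FileUtils.touch(File.join(STATUS_DIR, "#{meeting_id}-#{step}.#{suffix}"))
    end

    def step_done?(meeting_id, step)
      File.exist?(File.join(STATUS_DIR, "#{meeting_id}-#{step}.done"))
    end

    def step_failed?(meeting_id, step)
      File.exist?(File.join(STATUS_DIR, "#{meeting_id}-#{step}.fail"))
    end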
There are many codes other than '404' which indicate that the media
file is not present or otherwise unusable. Instead of checking for
a particular failure mode, check for explicit success.
I ran into this on a server that had directory listing disabled and
returned a 403 (Forbidden) error for non-existent files.
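A minimal sketch of the "check for explicit success" approach (the
helper name and URL handling are illustrative):

    require 'net/http'
    require 'uri'

    # Treat anything other than a 2xx response as "media not usable",
    # rather than special-casing a 404.
    def media_available?(url)
      response = Net::HTTP.get_response(URI(url))
      response.is_a?(Net::HTTPSuccess)
    end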
We can only get here if all of the files for the presentation are
*completely* missing. One thing that can cause this is a presentation
filename that starts with a '.' character; the presentation files
aren't correctly archived in that case.
Firefox has a bug where it can't seek in audio-only webm files that
have no cues, and it seems like they have no intention of fixing
this... Serve up the ogg files first for Firefox.
Adding cues to audio-only webm files is *hard*; there are no standard
tools that support doing this cleanly.
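A sketch of what "ogg first" means when building the list of playback
sources (the structure is illustrative, not the playback page's actual
code):

    # List the audio formats in the order the player should try them;
    # putting ogg ahead of webm works around the Firefox seeking bug
    # with cue-less audio-only webm files.
    AUDIO_SOURCES = [
      { extension: 'ogg',  mime_type: 'audio/ogg' },
      { extension: 'webm', mime_type: 'audio/webm' }
    ]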
This shouldn't normally be hit... but if it ever is, the processing will
fail with an error, since the Logger class doesn't have a method named
'warning'.
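For reference, Ruby's standard Logger exposes #warn rather than
#warning; a minimal illustration (the log message is made up):

    require 'logger'

    logger = Logger.new(STDOUT)
    logger.warn('falling back to a blank page')  # correct method name
    # logger.warning('...')  # NoMethodError, aborts processing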
We now use ghostscript to output pngs directly from the original pdf,
rather than running convert on the split pages. This should make
corrupt or strange pdfs less likely to cause issues.
In addition, if a pdf page conversion fails (for any reason, including
the original pdf being missing...), the failure will be logged, a
blank page will be generated, and processing will continue.
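A rough sketch of the per-page conversion with the blank-page fallback
(the gs flags are standard Ghostscript options, but the resolution,
page size, and use of ImageMagick's convert for the placeholder are
assumptions):

    # Render a single pdf page to png; on any failure, log it and emit
    # a blank page so later steps still have an image to work with.
    def render_pdf_page(pdf, page_num, out_png, logger)
      ok = system('gs', '-q', '-dSAFER', '-dBATCH', '-dNOPAUSE',
                  '-sDEVICE=png16m', '-r150',
                  "-dFirstPage=#{page_num}", "-dLastPage=#{page_num}",
                  "-sOutputFile=#{out_png}", pdf)
      return if ok && File.exist?(out_png)
      logger.warn("Page #{page_num} of #{pdf} failed; using a blank page")
      system('convert', '-size', '1600x1200', 'xc:white', out_png)
    end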