output 2 more statistics: the mean absolute error and the standard deviation
of the absolute error. As a result, these functions now return 4D rather than
2D vectors.
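The two new statistics can be sketched as follows. This is a minimal stand-alone illustration, not dlib's implementation; `abs_error_stats` is a hypothetical helper name:

```cpp
#include <cassert>
#include <cmath>
#include <utility>
#include <vector>

// Hypothetical helper: given predictions and ground truth, compute the mean
// absolute error and the standard deviation of the absolute error.
std::pair<double,double> abs_error_stats(const std::vector<double>& pred,
                                         const std::vector<double>& truth)
{
    std::vector<double> abs_err;
    double mean = 0;
    for (std::size_t i = 0; i < pred.size(); ++i)
    {
        abs_err.push_back(std::abs(pred[i] - truth[i]));
        mean += abs_err.back();
    }
    mean /= abs_err.size();

    double var = 0;
    for (double e : abs_err)
        var += (e - mean)*(e - mean);
    var /= abs_err.size();

    return {mean, std::sqrt(var)};
}
```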
I also made test_regression_function() take a non-const reference to the
regression function so that DNN objects can be tested.
* Remove explicit specification of library path in dlib.cmake
Enables side-by-side multi-configuration builds on Windows
* Add dlib_LIBS
For backwards compatibility
0.5*MSE. The only thing this affects is the logging messages printed during
training, which were confusing since the reported loss was half the size you
would expect.
be smaller. Instead, they now behave like std::vector in that they just change
their nominal size but keep the same memory, only reallocating if they are
resized to something larger than their underlying memory block.
This change makes some uses of dlib faster, in particular, running networks on
a large set of images of differing sizes will now run faster since there won't
be any GPU reallocations, which are notoriously slow.
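The `std::vector`-style semantics described above can be demonstrated with `std::vector` itself; this is a sketch of the behavior, not dlib's tensor code:

```cpp
#include <vector>

// Shrinking changes the nominal size but keeps the same memory block;
// growing back within the retained capacity does not reallocate either.
bool shrink_keeps_memory()
{
    std::vector<float> buf(1000);       // allocate once
    const float* mem = buf.data();

    buf.resize(10);                     // shrink: only the nominal size changes
    bool kept = (buf.data() == mem) && (buf.capacity() >= 1000);

    buf.resize(1000);                   // grow back within capacity: no realloc
    return kept && (buf.data() == mem);
}
```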
when we switched everything to std::shared_ptr. It turns out std::shared_ptr
has some surprising limitations. This change fixes a bug where the program
would sometimes crash or hang during shutdown.
included in the edge graph. If it isn't, the output labels from
chinese_whispers would be missing faces in this degenerate case. This fixes a
bug where chinese_whispers(), when called from Python, would sometimes return
a labels array that did not include labels for all the inputs.
dimensions in the same format as the mmod_options object (i.e. two lengths
measured in pixels). This should make it much more straightforward to define
random_cropper strategies consistent with MMOD settings, since you can give
the mmod_options settings directly to the random_cropper and it will do the
right thing.
* Use banded Cholesky factorization if possible
Computation cost goes from O(n·n·n) to O(n·n·b), where b is the band size
* Tidy up whitespace
* Remove typo
* Escape from banded matrix detection correctly
* Use LAPACK banded Cholesky factorization where possible
* Add banded chol tests
* Add test for banded chol in column major layout
* Use row major layout for banded chol - more efficient as we will pass to LAPACK
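The savings come from skipping the zero region outside the band. A minimal stand-alone sketch of a banded Cholesky factorization (A = L·Lᵀ, with b sub-diagonals); this is an illustration of the idea, not dlib's or LAPACK's implementation:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// Banded Cholesky: A is a dense n-by-n row-major SPD matrix with bandwidth b.
// Only entries within the band are visited, so the inner sums run over at
// most b terms instead of up to n.
std::vector<double> banded_chol(const std::vector<double>& A, int n, int b)
{
    std::vector<double> L(n*n, 0.0);
    for (int j = 0; j < n; ++j)
    {
        // Rows below the band are structurally zero and are skipped.
        for (int i = j; i <= std::min(n-1, j+b); ++i)
        {
            double sum = A[i*n + j];
            for (int k = std::max(0, i-b); k < j; ++k)
                sum -= L[i*n + k]*L[j*n + k];
            L[i*n + j] = (i == j) ? std::sqrt(sum) : sum / L[j*n + j];
        }
    }
    return L;
}
```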
early iterations, since the model might produce a huge number of false alarms
while the detector is still bad. Processing all these detections can make
training run slowly until the model is good enough to avoid excessive false
alarms. This change caps the number of false alarms processed during those
early iterations and avoids the slowdown.
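One simple way to cap the processed false alarms is to keep only the highest-scoring ones. This is a hypothetical sketch of such a cap, not dlib's training code; `cap_false_alarms` and its policy are assumptions:

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <functional>
#include <vector>

// Keep at most max_kept false-alarm scores, preferring the highest-scoring
// ones (the most confidently wrong detections); the rest are dropped.
std::vector<double> cap_false_alarms(std::vector<double> scores,
                                     std::size_t max_kept)
{
    std::sort(scores.begin(), scores.end(), std::greater<double>());
    if (scores.size() > max_kept)
        scores.resize(max_kept);
    return scores;
}
```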