Commit Graph

6994 Commits

Author SHA1 Message Date
Davis King
8b5c04d075 Updated code to work with new regression test output. 2017-11-10 17:42:40 -05:00
Davis King
df0f62d470 merged 2017-11-10 16:57:35 -05:00
Davis King
6137540b27 Changed test_regression_function() and cross_validate_regression_trainer() to
output 2 more statistics, which are the mean absolute error and the standard
deviation of the absolute error.  This means these functions now return 4D
rather than 2D vectors.

I also made test_regression_function() take a non-const reference to the
regression function so that DNN objects can be tested.
2017-11-10 16:56:37 -05:00
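The two statistics added by this commit can be sketched in plain standalone C++ (a hypothetical illustration; dlib's actual test_regression_function() operates on dlib matrix types and differs in detail):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Sketch (not dlib's code): the two statistics the commit adds to the
// regression test output -- mean absolute error and the standard
// deviation of the absolute error.
struct abs_error_stats { double mean; double stddev; };

abs_error_stats compute_abs_error_stats(const std::vector<double>& predicted,
                                        const std::vector<double>& targets)
{
    double sum = 0, sum_sq = 0;
    for (std::size_t i = 0; i < predicted.size(); ++i)
    {
        const double err = std::abs(predicted[i] - targets[i]);
        sum += err;
        sum_sq += err*err;
    }
    const double n = static_cast<double>(predicted.size());
    const double mean = sum/n;
    // population standard deviation of the absolute errors
    const double stddev = std::sqrt(sum_sq/n - mean*mean);
    return {mean, stddev};
}
```

In dlib these would be appended to the previously returned statistics, giving the 4D result vector the commit message describes.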
Davis King
5a0c09c775 Fixed compiler warning 2017-11-10 16:52:20 -05:00
Sean Warren
fef4a3657b Remove explicit specification of library path in dlibConfig.cmake (#935)
* Remove explicit specification of library path in dlib.cmake
Enables side-by-side multi configuration build on windows

* Add dlib_LIBS

For backwards compatibility
2017-11-09 05:36:20 -05:00
OtacilioNeto
cc94179393 Fix issue https://github.com/davisking/dlib/issues/925 (#928)
* This fix, suggested by davisking, makes unit tests more reliable.  Fixes issue https://github.com/davisking/dlib/issues/925
2017-11-07 14:27:54 -05:00
Davis King
809f5683d1 updated docs 2017-11-06 07:37:29 -05:00
Davis King
14acae38f9 Made unit tests more reliable 2017-11-05 10:36:48 -05:00
Davis King
31280b5474 merged 2017-11-05 08:14:10 -05:00
Davis King
7474abd741 Changed the mean squared loss layers to return a loss that's the MSE, not
0.5*MSE.  The only thing this affects is the logging messages that print during
training, which were confusing since the reported loss was half the size you
would expect.
2017-11-05 08:13:26 -05:00
Davis King
c171802dac Added notes about not using Visual Studio 2017 since it doesn't support C++11. 2017-11-05 07:57:34 -05:00
Davis King
0c3fe57573 Don't use CUDA/DNN stuff in Visual Studio 2017. 2017-11-05 07:37:52 -05:00
Davis King
978da26ed0 Fixed grammar in comment 2017-11-05 07:37:29 -05:00
Gilles Rochefort
ca9fceb278 Remove unused variable (#919) 2017-11-03 21:40:25 -04:00
Davis King
69ea3bd400 Fixed timing print() so the output scales are set correctly. 2017-11-02 10:24:25 -04:00
Davis King
50de3da992 Updated comments to reflect recent API changes. 2017-11-02 05:43:15 -04:00
Davis King
895d7874d3 Changed the timing code to use the C++11 high resolution clock and
atomics.  This makes the timing code a lot more precise.
2017-11-01 16:30:42 -04:00
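The approach can be sketched as follows (an assumed design, not dlib's actual timing class): elapsed time is measured with std::chrono::high_resolution_clock and accumulated into an atomic counter so concurrent updates stay safe.

```cpp
#include <atomic>
#include <chrono>
#include <cstdint>

// Hypothetical sketch of a C++11 timer: high resolution clock for
// precision, an atomic total so stop() can be called from several
// threads without a data race on the accumulated time.
class timer_sketch
{
public:
    void start() { begin = std::chrono::high_resolution_clock::now(); }
    void stop()
    {
        const auto end = std::chrono::high_resolution_clock::now();
        total_ns += std::chrono::duration_cast<std::chrono::nanoseconds>(end - begin).count();
    }
    std::uint64_t elapsed_ns() const { return total_ns; }
private:
    std::chrono::high_resolution_clock::time_point begin;
    std::atomic<std::uint64_t> total_ns{0};
};
```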
Davis King
13442ec1b3 Fixed error in TIME_THIS(). It was still printing in seconds when it said
minutes in the output.
2017-11-01 16:25:32 -04:00
Davis King
858824e89d Upgraded the input layer so you can give input<std::array<matrix<T>,K>> types
as input layer specifications.  This will create input tensors with K
channels.
2017-10-31 17:01:47 -04:00
Davis King
6d5ad339c7 Made hamming_distance() a little more general. 2017-10-29 08:57:52 -04:00
Davis King
bc37789144 Made top level cmake file not build a shared library if part of a subproject. 2017-10-29 08:42:02 -04:00
Davis King
88d8b7c671 Made resizable_tensor objects not perform a reallocation if they are resized to
be smaller.  Instead, they now behave like std::vector in that they just change
their nominal size but keep the same memory, only reallocating if they are
resized to something larger than their underlying memory block.

This change makes some uses of dlib faster, in particular, running networks on
a large set of images of differing sizes will now run faster since there won't
be any GPU reallocations, which are notoriously slow.
2017-10-28 20:44:43 -04:00
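The std::vector-like policy described above can be sketched in a few lines (a hypothetical standalone class, not dlib's resizable_tensor code): shrinking only changes the nominal size, and a reallocation happens only when the request exceeds the current capacity.

```cpp
#include <cstddef>
#include <memory>

// Sketch of the resize policy: keep the old allocation on shrink,
// reallocate only on growth past the underlying memory block.
class resizable_buffer
{
public:
    void resize(std::size_t n)
    {
        if (n > capacity_)             // grow: reallocate
        {
            data_.reset(new float[n]);
            capacity_ = n;
        }
        size_ = n;                     // shrink: just change the nominal size
    }
    std::size_t size() const { return size_; }
    std::size_t capacity() const { return capacity_; }
private:
    std::unique_ptr<float[]> data_;
    std::size_t size_ = 0, capacity_ = 0;
};
```

With this policy, repeatedly resizing down and back up within the high-water mark touches no allocator at all, which is the source of the GPU speedup the commit describes.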
Davis King
1b2cdf3c5b Suppress compiler warning 2017-10-28 20:35:59 -04:00
Davis King
ed8974284d Always compile the C++11 related unit tests. 2017-10-28 20:35:25 -04:00
Davis King
5806275c90 Added set_num_outputs() to fc_ layer. 2017-10-28 18:45:48 -04:00
Davis King
10b4f82568 Improved loss_mmod_ warning message. 2017-10-28 12:25:32 -04:00
Davis King
f93ee49cc2 Made requires clause a little more sensible. 2017-10-28 12:23:47 -04:00
Davis King
12bc559d28 Fixed a bug in dlib's MS Windows GUI code that was introduced a little while back
when we switched everything to std::shared_ptr.  Turns out std::shared_ptr has
some surprising limitations.  This change fixes a bug where the program crashes or hangs
sometimes during program shutdown.
2017-10-28 10:48:54 -04:00
Davis King
a6592ef60c Fixed sqlite include path finding. 2017-10-28 08:50:42 -04:00
Davis King
82ecd447c8 Don't ever try to use the busted version of libjpeg in anaconda. 2017-10-28 00:06:42 -04:00
Davis King
f88fd99ed4 Make cmake output less confusing 2017-10-27 23:42:40 -04:00
Davis King
c29314f34a Clarified build instructions 2017-10-27 23:39:04 -04:00
Davis King
1e877b1917 Changed graph construction for chinese_whispers() so that each face is always
included in the edge graph.  If it isn't then the output labels from
chinese_whispers would be missing faces in this degenerate case.  This fixes a bug
where chinese_whispers(), when called from Python, would sometimes return a labels
array that didn't include labels for all the inputs.
2017-10-27 19:30:58 -04:00
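The fix can be sketched like this (hypothetical code, not dlib's implementation): when building the edge list handed to chinese_whispers, add a self-edge for every face so that a face matching nothing else still appears as a node and receives a cluster label.

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Sketch: guarantee every face index appears in the edge graph by
// appending a self-edge per face; isolated faces then get their own
// singleton cluster instead of being dropped from the output labels.
std::vector<std::pair<std::size_t, std::size_t>>
build_edges(std::size_t num_faces,
            const std::vector<std::pair<std::size_t, std::size_t>>& matches)
{
    std::vector<std::pair<std::size_t, std::size_t>> edges = matches;
    for (std::size_t i = 0; i < num_faces; ++i)
        edges.emplace_back(i, i);   // self-edge keeps node i in the graph
    return edges;
}
```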
Davis King
dc0245af05 Changed graph construction for chinese_whispers() so that each face is always
included in the edge graph.  If it isn't then the output labels from
chinese_whispers would be missing faces in this degenerate case.
2017-10-27 19:29:52 -04:00
Davis King
347257cbc0 Made random_cropper use cleaner and unbiased dlib::rand interface. 2017-10-27 07:27:57 -04:00
Davis King
faf8676a49 Cleanup 2017-10-27 05:58:34 -04:00
Davis King
e8d74b3015 Removed old cruft not needed anymore since we are doing this kind
of thing with cmake targets already in set_compiler_specific_options.cmake.
2017-10-27 05:50:15 -04:00
Sean Warren
16ad749c45 Win lapack (#913)
* Fall back on find_package for blas, lapack on Windows

* Remove debugging message
2017-10-27 05:45:53 -04:00
Davis King
aa93c8f861 Updated python code to use the new dlib::jitter_image() instead of hacking it
out of the random_cropper.
2017-10-25 05:42:31 -04:00
Davis King
e338bf02e0 Changed the random_cropper's set_min_object_size() routine to take min box
dimensions in the same format as the mmod_options object (i.e. two lengths
measured in pixels).  This should make defining random_cropping strategies that
are consistent with MMOD settings much more straightforward since you can just
take the mmod_options settings and give them to the random_cropper and it will
do the right thing.
2017-10-24 22:10:02 -04:00
Davis King
1c664eeac5 Made the metric learning example do image jittering. 2017-10-24 21:13:02 -04:00
Davis King
369f2b32e8 Cleaned up jitter_image() code and moved it into dlib proper. 2017-10-24 08:02:44 -04:00
Davis King
782f4f4825 merged 2017-10-23 21:02:57 -04:00
Sean Warren
45ce9f7b6d Use banded Cholesky factorization if possible (#857)
* Use banded Cholesky factorization if possible
Computation cost goes from O(n·n·n) to O(n·n·b), where b is the band size

* Tidy up whitespace

* Remove typo

* Escape from banded matrix detection correctly

* Use LAPACK banded Cholesky factorisation where possible

* Add banded chol tests

* Add test for banded chol in column major layout

* Use row major layout for banded chol - more efficient as we will pass to LAPACK
2017-10-23 21:00:49 -04:00
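The idea behind the patch can be sketched as a Cholesky factorization whose inner loops only visit entries inside the band, so the cubic cost is limited by the bandwidth (a simplified dense-storage illustration; the actual PR dispatches to LAPACK's banded routines and uses band storage):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

using mat = std::vector<std::vector<double>>;

// Sketch of a banded Cholesky factorization A = L*L^T for a symmetric
// positive definite matrix with bandwidth b: entries more than b below
// the diagonal are known to be zero, so the loops skip them.
mat banded_cholesky(const mat& A, std::size_t b)
{
    const std::size_t n = A.size();
    mat L(n, std::vector<double>(n, 0.0));
    for (std::size_t j = 0; j < n; ++j)
    {
        const std::size_t k0 = j > b ? j - b : 0;
        double d = A[j][j];
        for (std::size_t k = k0; k < j; ++k) d -= L[j][k]*L[j][k];
        L[j][j] = std::sqrt(d);
        // only rows within the band below the diagonal are nonzero
        for (std::size_t i = j + 1; i < n && i <= j + b; ++i)
        {
            double s = A[i][j];
            const std::size_t k1 = i > b ? i - b : 0;
            for (std::size_t k = (k1 > k0 ? k1 : k0); k < j; ++k)
                s -= L[i][k]*L[j][k];
            L[i][j] = s / L[j][j];
        }
    }
    return L;
}
```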
Davis King
04a8f0093d Make sure the test loss the trainer logs to the console never gets stuck at
infinity.
2017-10-22 16:06:11 -04:00
Gilles Rochefort
9bc7070a77 Add some operator() to cv_image for compatibility with mmod loss. (#900)
* Add some operator() to cv_image for compatibility with mmod.

* Update documentation
2017-10-21 10:16:36 -04:00
Davis King
261f12d4ea Updated spec 2017-10-20 21:45:14 -04:00
Gilles Rochefort
540f47409e Missing interfaces in add_prev for compatibility with mmod loss. (#901) 2017-10-20 21:44:00 -04:00
Davis King
6d343f93da Sometimes the loss_mmod_ layer could experience excessively long runtime during
early iterations since the model might produce a huge number of false alarms
while the detector is still bad.  Processing all these detections can cause it
to run slowly until the model is good enough to avoid really excessive amounts
of false alarms.  This change puts more of a limit on the number of false
alarms processed during those early iterations and avoids the slowdown.
2017-10-20 21:37:54 -04:00
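The mitigation can be sketched as follows (a hypothetical standalone version; the names and the scoring details are illustrative, not dlib's loss_mmod_ code): keep only the highest-scoring detections up to a cap before the expensive per-detection processing.

```cpp
#include <algorithm>
#include <cstddef>
#include <functional>
#include <vector>

// Sketch: cap the number of candidate detections processed per image so
// early training iterations with huge false-alarm counts stay fast.
std::vector<double> limit_false_alarms(std::vector<double> scores, std::size_t max_dets)
{
    // keep only the highest-scoring candidates
    std::sort(scores.begin(), scores.end(), std::greater<double>());
    if (scores.size() > max_dets)
        scores.resize(max_dets);
    return scores;
}
```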
Davis King
3e48a36ede The loss returned by compute_loss_value_and_gradient() wasn't quite right. It
doesn't affect normal use, but it's still wrong and this change fixes it.
2017-10-20 21:34:34 -04:00