Commit Graph

5816 Commits

Author SHA1 Message Date
Davis King
c118dcb676 Made resizable_tensor construction and assignment from matrices automatically
set the size of the tensor.
2016-06-10 11:00:53 -04:00
Davis E. King
087f9f1748 Merge pull request #133 from AbdealiJK/ajk/setup
setup.py: Provide instructions to install cmake
2016-06-09 20:55:52 -04:00
AbdealiJK
faff10f853 setup.py: Provide instructions to install cmake
If cmake_path is not found, either cmake is not installed
or cmake is not on the PATH. Hence, we also print instructions
on how to install cmake in that case.

2016-06-09 11:37:53 +05:30
Davis King
24efbdbc52 fixed warning and typo in comment 2016-06-07 15:37:46 -04:00
Davis King
ffb4434240 Fixed missing return statement 2016-06-07 12:08:40 -04:00
Davis King
c772e4ae8a Made CMake's search for cuDNN a little more broad 2016-06-07 09:01:21 -04:00
Davis King
0a80fdd15c Made tests less likely to false alarm. 2016-06-07 06:27:24 -04:00
Davis King
2412a2570e Made this bit of code not look crazy 2016-06-06 21:18:09 -04:00
Davis King
992dcd48a6 merged 2016-06-05 16:39:35 -04:00
Davis King
aa19ff7377 Fixed solvers so they apply the bias specific multipliers to the correct parts
of the parameter vectors.
2016-06-05 16:39:10 -04:00
Davis E. King
02f6da2851 Merge pull request #130 from AbdealiJK/ajk/jpeg
save_jpeg: Use TRUE instead of true
2016-06-05 14:15:51 -04:00
AbdealiJK
c102fb2690 save_jpeg: Use TRUE instead of true
In some versions of libjpeg, TRUE is an enum value, and so `true`
fails because it is not of the enum's type. Now all the
libjpeg calls use TRUE/FALSE.

Fixes https://github.com/davisking/dlib/issues/129
2016-06-05 23:40:10 +05:30
Davis King
3c002d1cff Made the steps-without-progress counter reset immediately upon changing the
learning rate.
2016-06-05 07:45:15 -04:00
Davis King
72a2e8e437 made tests more repeatable 2016-06-01 06:51:46 -04:00
Davis King
b230e7c33d fixed compile time error 2016-05-31 12:40:48 -04:00
Davis King
ba59ddc6b5 Added subprocess_stream so that complex things can be isolated from MATLAB's
shenanigans in a separate process.
2016-05-31 12:37:25 -04:00
Davis King
0d2bce15ff Made the mex wrapper trap all std::exception derived exceptions rather than
just dlib exceptions.
2016-05-31 12:27:59 -04:00
Davis King
623fba97fe updated ignore list 2016-05-31 12:27:31 -04:00
Davis King
738b4d36af Made imglab show the name of the current image in the title bar. 2016-05-31 06:45:02 -04:00
Davis King
6e0f13ba06 minor cleanup 2016-05-30 13:14:04 -04:00
Davis King
b4b9376aab updated docs 2016-05-30 13:04:23 -04:00
Davis King
f698b85d68 clarified spec 2016-05-30 11:39:16 -04:00
Davis King
f1eae955ac fixed typo 2016-05-30 09:24:19 -04:00
Davis King
20d10efc65 A little more cleanup in the spec 2016-05-30 09:17:46 -04:00
Davis King
abd0019df0 fixed typo 2016-05-30 08:54:02 -04:00
Davis King
771ca2e0f3 clarified spec 2016-05-30 08:50:49 -04:00
Davis King
53e9c15811 Clarified some parts of the example. 2016-05-30 08:50:28 -04:00
Davis E. King
8c550d4c85 Merge pull request #114 from e-fominov/dnn_group_layer
Concat layer
2016-05-30 08:16:53 -04:00
Davis King
cbd37d56a6 Cleaned up the contracts a little. 2016-05-30 07:35:25 -04:00
Davis E. King
7a31806baa Merge pull request #125 from e-fominov/dnn_trainer_get_step
Added getter for trainer::train_one_step_calls
2016-05-30 07:31:17 -04:00
Fm
f06b265b34 Added getter for trainer::train_one_step_calls 2016-05-30 09:25:23 +03:00
Fm
01b3b08be6 Replaced sizeof... with variadic templates 2016-05-29 17:21:42 +03:00
Fm
1974e68d31 Removed friend declaration of dnn_tester from core.h 2016-05-27 14:49:11 +03:00
Fm
d32bcdfa3d Changed concat syntax into concat1, concat2..., made dtest more readable 2016-05-27 09:56:00 +03:00
Fm
2f7d3578d2 Added layer access and printing examples to inception sample 2016-05-26 19:40:10 +03:00
Evgeniy Fominov
290b1cb15b Fixed dnn_tester in GPU mode for cpu_tensor test 2016-05-26 18:26:08 +03:00
Fm
a06e533271 fixed cuda::copy_tensor 2016-05-26 17:51:44 +03:00
Fm
1f0318e222 depth_group replaced with concat layer 2016-05-26 17:43:54 +03:00
Fm
93e786db6c Merge branch 'master' of https://github.com/davisking/dlib into dnn_group_layer 2016-05-26 17:15:56 +03:00
Davis King
911638638d Made add_prev output a tensor with dimensions that are the max of each of the
dimensions of its inputs rather than always outputting a tensor that has the
dimensions of its immediate predecessors.
2016-05-25 19:12:36 -04:00
Davis King
b9332698fe updated example 2016-05-23 22:01:47 -04:00
Davis King
e5ad959085 Added bias learning rate and weight decay multipliers to bn_ layers 2016-05-23 22:01:37 -04:00
Davis King
b6b8379819 Relaxed the requirements for calling find_min_box_constrained() and
find_max_box_constrained().  Now the bounds can be empty for some variables.
2016-05-23 20:25:43 -04:00
Davis King
974743767f Changed code to avoid recreating thread_local cuda context objects. 2016-05-23 19:57:53 -04:00
Davis King
e55afabd1a fixed broken tests 2016-05-23 06:54:55 -04:00
Davis King
1cbf940eb3 Fixed a bug I introduced a minute ago. 2016-05-22 16:30:09 -04:00
Davis King
f189612876 Fixed a bug in visit_layer_parameter_gradients() and visit_layer_parameters()
caused by num_computational_layers being wrong when tag layers were placed as
the first layer.  These visit functions being wrong also caused multi-GPU
support to not work on such networks.
2016-05-22 16:14:10 -04:00
Davis King
d019e9cd08 Changed the trainer threading code to use dlib::thread_pool instead of
std::async() since std::async creates new threads with each invocation, which
in turn causes objects with thread_local storage duration to be reconstructed
each time.  This is problematic because CUDA context objects for cublas and
cudnn get reconstructed over and over, slowing things down and generally using
more resources than should be used.
2016-05-22 15:49:40 -04:00
Davis King
5e70b7a2c6 Cleaned up code a little and made the example use a better version of the architecture. 2016-05-22 13:17:10 -04:00
Davis King
b73dacc163 Fixing tests 2016-05-22 10:30:15 -04:00