Davis King
373bc75f45
Added cpp11 tag to the docs and also updated them to include the new
running_gradient object.
2016-01-17 12:45:00 -05:00
Davis King
732ddefdd2
Removed link to dlib/all/source.cpp since it seems to confuse some users.
2016-01-17 12:17:58 -05:00
Davis King
eee0d295c3
Improved the error message about linking to libjpeg and libpng that you get
if you try to open a JPEG or PNG file.
2016-01-17 12:06:53 -05:00
Davis King
da6e48071c
Added some preprocessor checks to detect if the user is #including
dlib/all/source.cpp into their own .cpp files. If so, they will get a useful
error message.
2016-01-17 11:54:31 -05:00
Davis King
12d9d257f2
Put guards around some GCC-specific #pragma statements to avoid warnings in
Visual Studio.
2016-01-16 09:01:22 -05:00
Davis E. King
e1ff23fdb5
Merge pull request #68 from yukoba/patch-1
sys.stdout.encoding instead of latin-1 in setup.py
2016-01-14 18:56:27 -05:00
Davis E. King
80e6443d83
Merge pull request #69 from severin-lemaignan/auto-ptr-guards
Add pragma guards around deprecated auto_ptr to prevent GCC warnings
2016-01-14 18:47:23 -05:00
Séverin Lemaignan
d873810ee4
Add pragma guards around deprecated auto_ptr to prevent GCC warnings
Fixes #67
2016-01-14 13:18:15 +00:00
Yu Kobayashi
d35104ed3c
sys.stdout.encoding instead of latin-1 in setup.py
Please use sys.stdout.encoding instead of latin-1 in setup.py.
This is necessary for non-English OSes.
2016-01-14 11:07:18 +09:00
Davis King
55748d93c9
Made train_one_step() print status output in verbose mode.
2016-01-11 20:38:04 -05:00
Davis King
6bd5c2e395
Made cmake use the built-in FindX11 scripts by default on OS X.
2016-01-10 18:31:49 -05:00
Davis King
4b2178c6e6
Made trainer disk synchronization more reliable and efficient.
2016-01-09 11:57:04 -05:00
Davis King
08f965a32b
Clarified spec
2016-01-09 11:56:37 -05:00
Davis King
6841222120
Improved the dnn_trainer. In particular, it no longer makes a copy of the
network (which would needlessly double VRAM usage). I also added a
set_synchronization_file() method so you can tell it to automatically
synchronize itself to disk every so often during training. This makes resuming
an interrupted training session trivially easy.
2016-01-09 11:50:12 -05:00
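The periodic disk synchronization described above comes down to an elapsed-time check: serialize the trainer state only when enough time has passed since the last save. The stand-alone sketch below is a hypothetical illustration of that idea (the name sync_clock and the interval policy are made up here, not dlib's actual implementation):

```cpp
#include <cassert>

// Hypothetical sketch of interval-based checkpointing: report that it is
// time to serialize state to disk only when at least `interval` seconds
// have passed since the previous checkpoint.
struct sync_clock
{
    double last_sync = 0;   // time of the last checkpoint, in seconds
    double interval;        // minimum spacing between checkpoints

    explicit sync_clock(double interval_seconds) : interval(interval_seconds) {}

    // Returns true (and records the sync time) when enough time has elapsed.
    bool should_sync(double now)
    {
        if (now - last_sync >= interval)
        {
            last_sync = now;
            return true;   // the caller would serialize the trainer here
        }
        return false;
    }
};
```

A training loop would call should_sync() once per step and write its state to the synchronization file whenever it returns true; resuming then just means deserializing that file on startup.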
Davis King
4189386ddb
Increased the default sgd learning rate.
2016-01-09 09:39:07 -05:00
Davis King
9f92b082a3
Now training will automatically reduce the learning rate when it is clear that
the loss isn't being reduced. Also, there is a stopping condition now based on
how large the current learning rate is. That is, training stops when the learning
rate gets small enough and it is clear that no progress is being made.
2016-01-09 09:37:00 -05:00
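The schedule described above can be sketched as a small loop: shrink the learning rate whenever the loss fails to improve, and stop once the rate falls below a floor. This is only an illustration of the idea; the function name train_schedule, the shrink factor, and the thresholds are assumptions, not dlib's actual values.

```cpp
#include <cassert>
#include <vector>

// Illustrative sketch (not dlib's code): reduce the learning rate when the
// loss stops improving, and stop training once the rate becomes tiny.
double train_schedule(const std::vector<double>& losses,
                      double learn_rate = 0.1,
                      double shrink = 0.1,
                      double min_rate = 1e-4)
{
    double best = 1e300;
    for (double loss : losses)
    {
        if (loss < best)
            best = loss;          // still making progress
        else
            learn_rate *= shrink; // no improvement: reduce the step size
        if (learn_rate < min_rate)
            break;                // stopping condition: rate is small enough
    }
    return learn_rate;
}
```

Feeding in a loss history that plateaus drives the returned rate toward the floor, which is exactly the stopping signal described above.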
Davis King
6f63bc6279
saving comments
2016-01-09 08:16:33 -05:00
Davis King
537da11f38
merged
2016-01-08 18:25:13 -05:00
Davis King
63734971a0
Fixed a compile-time error I just introduced
2016-01-08 18:23:58 -05:00
Davis King
f47620c11f
merged
2016-01-08 18:16:34 -05:00
Davis King
ab2cd12915
Made running_gradient serializable.
2016-01-08 18:15:15 -05:00
Davis King
f4f8e4db72
merged
2016-01-08 07:48:41 -05:00
Davis King
5435c56d8c
merged
2016-01-07 18:25:45 -05:00
Davis King
d73f58ae1c
Added running_gradient
2016-01-07 18:24:49 -05:00
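The core idea behind a running gradient estimator can be sketched in a few lines: stream in scalar values and maintain the running sums needed for the least-squares slope of the sequence so far. The class below is a hypothetical stand-in (dlib's running_gradient also provides things like confidence estimates; the name slope_tracker is made up):

```cpp
#include <cassert>
#include <cmath>

// Illustrative sketch: feed in a sequence of scalars with add() and get
// back the least-squares slope of the values seen so far.  The sample
// index serves as the x coordinate.
class slope_tracker
{
    double n = 0, sx = 0, sy = 0, sxx = 0, sxy = 0;
public:
    void add(double y)
    {
        const double x = n;  // x coordinate is the running sample index
        ++n;
        sx += x;  sy += y;  sxx += x*x;  sxy += x*y;
    }
    // Standard closed-form least-squares slope; needs at least 2 samples.
    double gradient() const
    {
        return (n*sxy - sx*sy) / (n*sxx - sx*sx);
    }
};
```

Tracking the slope of a noisy loss curve this way is one natural building block for the "is the loss still decreasing?" test used by the trainer.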
Davis King
368d6d19ca
Added CPU version of pooling layer code.
2016-01-04 17:58:00 -05:00
Davis King
2639a5233e
Improved outputs from test_layer().
2016-01-04 17:55:59 -05:00
Davis King
fb49f0ceab
Fixed a bug where the trainer didn't initialize the solvers unless you
explicitly gave it a solver.
2016-01-03 12:03:00 -05:00
Davis King
cbdeb1608f
Made add() faster by calling my own version for the simple pointwise add case.
2016-01-03 11:44:54 -05:00
Davis King
30005b7ee3
Wrapped new dot() function into the tt namespace and gave it a CPU version.
2016-01-03 11:21:40 -05:00
Davis King
d248a22571
Added the launch_kernel() function, which launches a kernel by smartly picking
the number of threads and blocks rather than using the hard-coded numbers I had
in there. This makes some functions noticeably faster.
Also added a dot() function that is fully asynchronous.
2016-01-03 11:20:49 -05:00
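Picking a launch configuration from the problem size rather than hard coding it can be sketched as below. The specific caps (512 threads per block, 1024 blocks) are assumptions for illustration, not necessarily the values dlib uses:

```cpp
#include <cassert>
#include <algorithm>

// Illustrative sketch: derive a CUDA-style launch configuration from the
// number of elements.  Threads per block are capped at 512 and the grid at
// 1024 blocks; within those caps, enough blocks are requested to cover
// every element (each thread would then grid-stride over any remainder).
void pick_launch_config(long n, long& num_blocks, long& num_threads)
{
    num_threads = std::max(1L, std::min(n, 512L));
    num_blocks  = std::min((n + num_threads - 1) / num_threads, 1024L);
}
```

A kernel launcher would then issue `kernel<<<num_blocks, num_threads>>>(...)` instead of fixed grid dimensions, so small inputs don't waste idle blocks and large inputs still saturate the device.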
Davis King
6a64180200
Minor cleanup
2016-01-02 17:16:42 -05:00
Davis King
8466d33233
Made the tensor dot() function use cuBLAS.
2016-01-01 21:50:50 -05:00
Davis King
8424083e52
Fixed more serialization bugs
2015-12-31 22:58:20 -05:00
Davis King
0b235fe537
Added the repeat layer and generally optimized the code for very deep
networks. This mostly involved removing deep template recursions, since
those upset the compiler when you make really deep networks.
2015-12-31 22:23:14 -05:00
Davis King
7991275e4e
Fixed a bug in the max_pool serialization functions.
2015-12-31 22:19:13 -05:00
Davis King
c48a6af814
Added a way to get the final gradient with respect to the inputs. Also added a
method to more efficiently give the input gradient in some instances.
2015-12-30 20:39:12 -05:00
Davis King
3597df5eb6
Made add_layer hold subnetworks through a pointer so that most of a
network is allocated on the heap rather than resulting in really large
stack usage for large networks.
2015-12-30 20:32:26 -05:00
Davis King
72b250bb16
Clarified spec
2015-12-30 20:30:32 -05:00
Davis King
667b60dbb1
Added the add_prev_ layer
2015-12-24 11:30:16 -05:00
Davis King
fb2fa0f7ca
Added another add() function for adding tensors. This one lets you add
tensors with different sizes and it will zero pad them as needed.
2015-12-24 10:44:37 -05:00
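Zero-padded addition of mismatched sizes can be sketched in one dimension as below. This is only an illustrative stand-in (the real add() operates on 4-D tensors, and the name add_zero_padded is made up):

```cpp
#include <cassert>
#include <algorithm>
#include <vector>

// Illustrative 1-D sketch: add two sequences of different lengths by
// treating the missing trailing elements of the shorter one as zeros.
std::vector<float> add_zero_padded(const std::vector<float>& a,
                                   const std::vector<float>& b)
{
    std::vector<float> out(std::max(a.size(), b.size()), 0);
    for (size_t i = 0; i < a.size(); ++i) out[i] += a[i];
    for (size_t i = 0; i < b.size(); ++i) out[i] += b[i];
    return out;
}
```

Padding with zeros rather than rejecting the mismatch is what lets layers like add_prev_ combine outputs whose shapes differ slightly.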
Davis King
ca77640492
Added pack_idx() and unpack_idx().
2015-12-24 10:40:53 -05:00
Davis King
b66c5254ba
Made the tuple based layer constructors work with nested tuples so you can
define combination layers made out of other combination layers without being
hassled by the compiler.
2015-12-24 09:23:22 -05:00
Davis King
d2516bc2f7
Just renamed two functions to way better names.
2015-12-23 22:29:31 -05:00
Davis King
1f5aa6c1fa
Added an option to fc_ to enable or disable a bias term.
2015-12-23 22:25:17 -05:00
Davis King
8837698043
Added an avg_pool_ layer. Also fixed some errors in the layer specs.
2015-12-23 21:44:21 -05:00
Davis King
5875fa75ca
Change to suppress compiler warning.
2015-12-23 21:31:35 -05:00
Davis King
c627898eee
Fixed the tag and skip layers so they compile now that we have the
in-place/out-of-place logic present.
2015-12-23 20:58:31 -05:00
Davis King
7bb7f8a288
Clarified spec
2015-12-23 20:18:04 -05:00
Davis King
18695b7b4b
Made the default input layer automatically normalize unsigned char pixel values
to the range [0,1].
2015-12-23 08:23:46 -05:00
Davis King
09564840a1
Reverted cmake file back to its proper state. Oops.
2015-12-23 08:05:08 -05:00