Commit Graph

8086 Commits

Author SHA1 Message Date
Davis King
e58da5135c minor cleanup 2020-11-29 14:53:08 -05:00
Adrià Arrufat
d3b0213118
Add CIFAR-10 dataset loader (#2245)
* fix typos

* add cifar-10

* open files in binary mode

* print messages with file name only, like mnist loader

* some fixes

* add mnist.cpp to CMakeLists.txt

* fix test index

* do not use iterator in cast

* add cifar.cpp to all

* Add Davis' suggestions

* no need to use namespace std and clean up empty lines
2020-11-29 14:47:34 -05:00
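As background for the loader added above: CIFAR-10's binary files store one record per image, a single label byte followed by 3072 pixel bytes (the full red plane, then green, then blue, of a 32x32 image). A minimal parsing sketch — names are illustrative, not dlib's actual loader API:

```cpp
#include <cstddef>
#include <cstdint>
#include <stdexcept>
#include <vector>

// Parse one CIFAR-10 binary record. Each record is 3073 bytes:
// 1 label byte (class index 0-9) followed by 3072 planar RGB pixel bytes.
struct cifar_record
{
    std::uint8_t label;
    std::vector<std::uint8_t> pixels;  // 3072 bytes, planar RGB
};

cifar_record parse_cifar_record(const std::vector<std::uint8_t>& buf, std::size_t offset)
{
    const std::size_t record_size = 1 + 32*32*3;  // 3073 bytes per record
    if (offset + record_size > buf.size())
        throw std::out_of_range("truncated CIFAR-10 record");
    cifar_record rec;
    rec.label = buf[offset];
    rec.pixels.assign(buf.begin() + offset + 1, buf.begin() + offset + record_size);
    return rec;
}
```

Note that the files must be opened with std::ios::binary (the "open files in binary mode" fix above); otherwise newline translation on some platforms corrupts the pixel data.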
pfeatherstone
d9e58d66cf
Fixes bug when (de)serializing vector<complex<float>> (#2244)
* [SERIALIZATION] fixed bug when (de)serializing vector<complex<float>>. DLIB_DEFINE... macro uses the __out and __in variable names for the ostream and istream objects respectively to avoid member variable name conflicts.

* Refactoring objects in DLIB_DEFINE_DEFAULT_SERIALIZATION to avoid name conflicts with user types

* Refactoring objects in DLIB_DEFINE_DEFAULT_SERIALIZATION to avoid name conflicts with user types

* removed tabs

* removed more tabs

Co-authored-by: pf <pf@pf-ubuntu-dev>
2020-11-24 22:09:38 -05:00
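A minimal sketch of what round-tripping a std::vector<std::complex<float>> through a stream involves (illustrative, not dlib's implementation; the unusual parameter names out_ and in_ echo the point of the fix above, that generated serialization code must avoid colliding with user member names):

```cpp
#include <complex>
#include <cstdint>
#include <sstream>
#include <vector>

// Write the element count, then each complex value as two floats.
void serialize(const std::vector<std::complex<float>>& item, std::ostream& out_)
{
    const std::uint64_t n = item.size();
    out_.write(reinterpret_cast<const char*>(&n), sizeof(n));
    for (const auto& c : item)
    {
        const float re = c.real(), im = c.imag();
        out_.write(reinterpret_cast<const char*>(&re), sizeof(re));
        out_.write(reinterpret_cast<const char*>(&im), sizeof(im));
    }
}

void deserialize(std::vector<std::complex<float>>& item, std::istream& in_)
{
    std::uint64_t n = 0;
    in_.read(reinterpret_cast<char*>(&n), sizeof(n));
    item.clear();
    for (std::uint64_t i = 0; i < n; ++i)
    {
        float re = 0, im = 0;
        in_.read(reinterpret_cast<char*>(&re), sizeof(re));
        in_.read(reinterpret_cast<char*>(&im), sizeof(im));
        item.emplace_back(re, im);
    }
}

// Convenience: serialize then deserialize through an in-memory stream.
std::vector<std::complex<float>> roundtrip(const std::vector<std::complex<float>>& v)
{
    std::stringstream ss;
    serialize(v, ss);
    std::vector<std::complex<float>> out;
    deserialize(out, ss);
    return out;
}
```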
Adrià Arrufat
a7627cbd07
Rename function to disable_duplicative_biases (#2246)
* Rename function to disable_duplicative_biases

* rename also the functions in the tests... oops
2020-11-24 22:07:04 -05:00
Adrià Arrufat
b6bf8aefee
Add support for matrix serialization to python API (#2241)
* Add support for matrix serialization to python API

* add double to function names
2020-11-21 17:09:06 -05:00
Frankie Robertson
d7644ef2b7
Expose get_face_chip_details to Python (#2238) 2020-11-21 17:07:28 -05:00
Adrià Arrufat
96a75568be
fix unused parameter warning in visitor_net_to_xml (#2240) 2020-11-18 08:26:12 -05:00
Adrià Arrufat
820fd353d2
Make dnn_trainer print the minibatch size to ostream (#2236) 2020-11-16 22:17:29 -05:00
Adrià Arrufat
375f117222
Add custom ostream to console progress indicator (#2234) 2020-11-15 07:55:54 -05:00
Adrià Arrufat
2ef8e3ac14
Update to pybind11 v2.2.4 (closes #1806) (#2229)
* Update to PyBind11 v2.2.4

* re-add custom changes

* fix indentation

* remove blank line
2020-11-12 22:39:15 -05:00
Davis King
93b992d790 slightly improve tests 2020-11-08 10:25:01 -05:00
Davis King
3f163bd433 Fix pixels being rounded to int values in some cases (#2228) 2020-11-08 10:22:40 -05:00
Adrià Arrufat
83921b390e
Remove an unused variable and old commented code (#2217) 2020-10-21 09:15:08 -04:00
Adrià Arrufat
3c82c2259c
Add Layer Normalization (#2213)
* wip: layer normalization on cpu

* wip: add cuda implementation, not working yet

* wip: try to fix cuda implementation

* swap grid_stride_range and grid_stride_range_y: does not work yet

* fix CUDA implementation

* implement cuda gradient

* add documentation, move layer_norm, update bn_visitor

* add tests

* use stddev instead of variance in test (they are both 1, anyway)

* add test for means and invstds on CPU and CUDA

* rename visitor to disable_duplicative_bias

* handle more cases in the visitor_disable_input_bias

* Add tests for visitor_disable_input_bias
2020-10-20 07:56:55 -04:00
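The core of the layer added above, in rough CPU form (an illustrative sketch, not dlib's layer_norm): each sample is normalized by the mean and standard deviation computed over its own features, unlike batch norm, which averages across the batch.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Normalize one sample's feature vector to zero mean and unit variance.
// eps guards against division by zero for constant inputs.
std::vector<float> layer_norm(const std::vector<float>& x, float eps = 1e-5f)
{
    float mean = 0;
    for (float v : x) mean += v;
    mean /= x.size();

    float var = 0;
    for (float v : x) var += (v - mean) * (v - mean);
    var /= x.size();

    const float inv_std = 1.0f / std::sqrt(var + eps);
    std::vector<float> y(x.size());
    for (std::size_t i = 0; i < x.size(); ++i)
        y[i] = (x[i] - mean) * inv_std;
    return y;
}
```

The real layer also learns a per-feature gain and bias, omitted here for brevity.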
Adrià Arrufat
5074850356
fix backtracking when losses stay at inf (fixes #2206) (#2209)
* fix backtracking when losses stay at inf

* always backtrack when there is an inf value
2020-10-14 08:17:30 -04:00
Adrià Arrufat
a1f158379e
Do not use sqrt_2 in device code (fixes #2208) (#2210)
* do not use sqrt_2 in device code

* use CUDART_SQRT_2PI

* better sort includes
2020-10-10 08:42:10 -04:00
Adrià Arrufat
3ba004f875
Add GELU activation layer (#2204)
* Add GELU activation layer

* fix some copy-paste leftovers

* fix comment

* use exact faster implementation

* do not use cmath constants
2020-10-08 22:45:23 -04:00
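The two common GELU forms touched on in the commits above, as a hedged sketch: the exact definition uses erf, and a tanh-based approximation is often used when erf is expensive.

```cpp
#include <cmath>

// Exact GELU: 0.5 * x * (1 + erf(x / sqrt(2))).
float gelu_exact(float x)
{
    return 0.5f * x * (1.0f + std::erf(x / std::sqrt(2.0f)));
}

// Common tanh approximation (spelled-out constant instead of a cmath macro,
// mirroring the "do not use cmath constants" fix above).
float gelu_tanh_approx(float x)
{
    const float c = std::sqrt(2.0f / 3.14159265358979f);  // sqrt(2/pi)
    return 0.5f * x * (1.0f + std::tanh(c * (x + 0.044715f * x * x * x)));
}
```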
Davis King
f4f8bff95e fix cmake not finding openblas 2020-10-08 22:34:03 -04:00
Adrià Arrufat
c45d166a25
Test cuda losses (#2199)
* add cuda test for loss_binary_log_per_pixel and some needed refactoring

* add cuda test for loss_multiclass_log_per_pixel

* forgot to add cpu version in loss

* remove a line I added by mistake

* fix typos

* declare label_to_ignore as static

* use tensor_index function instead of index method

* test cuda and cpu gradients values

* use DLIB_TEST instead of DLIB_CASSERT
2020-10-05 21:20:37 -04:00
Adrià Arrufat
d78d273a45
Add loss multiclass log per pixel weighted cuda (#2194)
* add cuda implementation for loss_multiclass_log_per_pixel_weighted

* add test for cuda and cpu implementations

* fix comment

* move weighted label to its own file

* Update path in doc

Co-authored-by: Davis E. King <davis685@gmail.com>
2020-09-30 08:04:28 -04:00
pfeatherstone
4125a7bb1f
DLIB (de)serialization : enhanced STL container support (#2185)
* [DLIB]  STL containers

* [DLIB]  STL containers

* [DLIB] applied code corrections suggested by code review

* [DLIB] applied code corrections suggested by code review

* [DLIB] applied code corrections suggested by code review
2020-09-25 08:27:30 -04:00
aviezab
5408b17f74
Linux Distro Detection to fix issue number #2159 #154 (#2169)
Check if the BLAS found by pkg-config is valid before using it.
2020-09-25 07:48:48 -04:00
Davis King
0419b81689 Let python users give up to 35 parameters when using the global optimizer. 2020-09-25 07:41:57 -04:00
Sajied Shah Yousuf
e7c25c06df
Changed directory of license (#2189)
Add copy of license file to root to make github happy.
2020-09-24 19:21:34 -04:00
Davis King
20a1477209 update docs 2020-09-19 07:21:52 -04:00
pfeatherstone
ab346ddfa6
Extended proxy_(de)serialize objects to work with stringstream, ostringstream, istringstream and vector<char> (#2181)
* [DLIB] extended proxy objects to work with stringstream, istringstream, ostringstream and vector<char>

* [DLIB]  - use std::istream and std::ostream instead of std::istringstream, std::ostringstream and std::stringstream.
		- put back the filename member variable for better error messages

* [DLIB]  - review requirement

Co-authored-by: pf <pf@pf-ubuntu-dev>
2020-09-19 07:16:21 -04:00
Adrià Arrufat
fa818b9a96
use DLIB_CASSERT to avoid unused variable warning in release compilation (#2182) 2020-09-17 22:54:06 -04:00
pfeatherstone
d4fe74b5a8
vectorstream updates: added seekoff and seekpos (#2179)
* [DLIB] added seekpos and seekoff functions. These are necessary for functions in the iostream base class, e.g. seekg, to work properly. Note that in seekoff you do NOT want to check the validity of read_pos after it has been updated: dlib::vectorstream and std::iostream work together to set EOF and/or badbit. Something like seekg(10000) should not throw even if the underlying buffer has 2 bytes; you should check if EOF is set and possibly call clear(). We have removed seekg from dlib::vectorstream as it added confusion. Now std::iostream::seekg is called, which somewhere down the call stack will call seekpos and/or seekoff, so there is no diverging behaviour between calling seekg on dlib::vectorstream& or std::iostream& after a cast.

* [DLIB] vectorstream unit test is updated to run identical tests on dlib::vectorstream& and std::iostream&

* [DLIB] only support read pointers and delete copy and move semantics

* [DLIB] explicit tests for seekg() in different directions

* [DLIB]  - no need to delete the move constructor and move assign operator. This is implicitly done by deleting the copy constructor and copy assign operator.

* [DLIB]  - remove leftover comments. no need
		- use more idiomatic notation

Co-authored-by: pf <pf@pf-ubuntu-dev>
2020-09-16 20:37:36 -04:00
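To illustrate why seekoff/seekpos matter here: std::istream::seekg is implemented in terms of them, so a streambuf that doesn't override them cannot be repositioned. A read-only sketch over a std::vector<char> (this mirrors the idea above, not dlib::vectorstream itself; for simplicity it reports out-of-range seeks as failures, so the stream sets failbit rather than throwing):

```cpp
#include <cstddef>
#include <ios>
#include <istream>
#include <streambuf>
#include <vector>

class vector_streambuf : public std::streambuf
{
public:
    explicit vector_streambuf(std::vector<char>& buf) : buf_(buf)
    {
        // Expose the whole vector as the get area (read pointers only).
        setg(buf_.data(), buf_.data(), buf_.data() + buf_.size());
    }

protected:
    pos_type seekoff(off_type off, std::ios_base::seekdir dir,
                     std::ios_base::openmode) override
    {
        std::ptrdiff_t base = 0;
        if (dir == std::ios_base::cur)      base = gptr() - eback();
        else if (dir == std::ios_base::end) base = buf_.size();
        const std::ptrdiff_t pos = base + off;
        if (pos < 0 || pos > static_cast<std::ptrdiff_t>(buf_.size()))
            return pos_type(off_type(-1));  // istream sets failbit, no throw
        setg(buf_.data(), buf_.data() + pos, buf_.data() + buf_.size());
        return pos_type(off_type(pos));
    }

    pos_type seekpos(pos_type pos, std::ios_base::openmode which) override
    {
        return seekoff(off_type(pos), std::ios_base::beg, which);
    }

private:
    std::vector<char>& buf_;
};
```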
Davis King
cdeb2e067c add some docs 2020-09-12 21:52:21 -04:00
pfeatherstone
12a82f6542
Macro for generating default serialisation functions (#2177)
* [DLIB] macro for generating default serialisation functions

* [DLIB]  refactoring

* [DLIB]  refactoring
2020-09-12 21:18:46 -04:00
Adrià Arrufat
9d60949a3a
Add scale_prev layer (#2171)
* Add scale_prev layer

* remove comment and fix gradient

* add test for scale_ and scale_prev_ layers
2020-09-12 07:55:24 -04:00
Adrià Arrufat
77e6255fdd
Add error message for mismatched tensor sizes in dnn_trainer (#2165) 2020-09-08 07:16:15 -04:00
Davis King
40c3e48818 Simplified more uses of layer visiting and fixed constness bug
The const bug was introduced yesterday and caused some layer visiting to
not work on const networks.
2020-09-06 10:42:56 -04:00
Adrià Arrufat
5ec60a91c4
Show how to use the new visitors with lambdas (#2162) 2020-09-06 09:27:50 -04:00
Davis King
393db2490b switch this to C++11 code 2020-09-06 08:57:44 -04:00
Davis King
5bcbe617eb make type_safe_union movable and also support holding movable types in a natural way. 2020-09-06 08:53:54 -04:00
Davis King
afe19fcb8b Made the DNN layer visiting routines more convenient.
Now the user doesn't have to supply a visitor capable of visiting all
layers, but instead just the ones they are interested in.  Also added
visit_computational_layers() and visit_computational_layers_range()
since those capture a very common use case more concisely than
visit_layers().  That is, users generally want to mess with the
computational layers specifically as those are the stateful layers.
2020-09-05 18:33:04 -04:00
Davis King
7dcc7b4ebc Added call_if_valid() 2020-09-05 17:47:31 -04:00
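The idea behind call_if_valid(), sketched: invoke f(args...) when that call is well-formed, and do nothing otherwise. This is what lets the layer visitors above accept lambdas that only handle the layer types the user cares about. (A simplified illustration; dlib's version also forwards return values.)

```cpp
#include <utility>

// Preferred overload: participates only if f(args...) compiles (SFINAE on
// the trailing return type). The int/long dummy parameter ranks it first.
template <typename F, typename... Args>
auto call_if_valid_impl(int, F&& f, Args&&... args)
    -> decltype(std::forward<F>(f)(std::forward<Args>(args)...), void())
{
    std::forward<F>(f)(std::forward<Args>(args)...);
}

// Fallback: f isn't callable with these arguments, so silently do nothing.
template <typename F, typename... Args>
void call_if_valid_impl(long, F&&, Args&&...) {}

template <typename F, typename... Args>
void call_if_valid(F&& f, Args&&... args)
{
    call_if_valid_impl(0, std::forward<F>(f), std::forward<Args>(args)...);
}
```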
Adrià Arrufat
e7ec6b7777
Add visitor to remove bias from bn_ layer inputs (closes #2155) (#2156)
* add visitor to remove bias from bn_ inputs (closes #2155)

* remove unused parameter and make documentation more clear

* remove bias from bn_ layers too and use better name

* let the batch norm keep their bias, use even better name

* be more consistent with impl naming

* remove default constructor

* do not use method to prevent some errors

* add disable bias method to pertinent layers

* update dcgan example

- grammar
- print number of network parameters to be able to check bias is not allocated
- at the end, give feedback to the user about what the discriminator thinks about each generated sample

* fix fc_ logic

* add documentation

* add bias_is_disabled methods and update to_xml

* print use_bias=false when bias is disabled
2020-09-02 21:59:19 -04:00
Davis King
ed22f0400a Make dnn_trainer use a robust statistic to determine if the loss is exploding and whether it should backtrack.
Previously we used only the non-robust version, and so would mistakenly
not catch sequences of loss increase that begin with an extremely large
value and then settle down to still large but less extreme values.
2020-09-02 21:48:30 -04:00
Davis King
0bb6ce36d8 dnn_trainer prints the number of steps executed when printing to ostream 2020-09-02 21:47:58 -04:00
Davis King
76cc8e3b6b Add probability_values_are_increasing() and probability_values_are_increasing_robust() 2020-09-02 21:42:44 -04:00
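A toy sketch of the concept behind these functions (an illustration only, not dlib's actual estimator): gauge how likely a noisy sequence is trending upward from its adjacent pairs, with a "robust" variant that discards the single most extreme value first, so one huge spike at the start of a loss sequence cannot mask a run of increases.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Laplace-smoothed fraction of adjacent steps that increase.
double probability_increasing(const std::vector<double>& vals)
{
    if (vals.size() < 2) return 0.5;
    int ups = 0;
    for (std::size_t i = 1; i < vals.size(); ++i)
        if (vals[i] > vals[i-1]) ++ups;
    return (ups + 1.0) / (vals.size() - 1 + 2.0);
}

// Robust variant: drop the largest value, wherever it is, then test the rest.
double probability_increasing_robust(std::vector<double> vals)
{
    if (vals.size() < 3) return probability_increasing(vals);
    vals.erase(std::max_element(vals.begin(), vals.end()));
    return probability_increasing(vals);
}
```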
Davis King
c14ba4847e Rename POSIX macro to DLIB_POSIX to avoid name clashes with some libraries. 2020-09-01 09:30:52 -04:00
Davis King
4b92804dc2 Use the box with bounding box regression applied to do NMS in the loss. 2020-09-01 06:58:35 -04:00
Davis King
0e721e5cae Fix bug in bounding box regression loss. 2020-08-29 09:09:54 -04:00
Adrià Arrufat
c9809e067f
Add missing input/output mappings to mult_prev (#2154) 2020-08-28 23:04:24 -04:00
Davis King
b401185aa5 Fix a warning and add some more error handling. 2020-08-23 22:22:40 -04:00
Adrià Arrufat
dd06c1169b
loss multibinary log (#2141)
* add loss_multilabel_log

* add alias template for loss_multilabel_log

* add missing assert

* increment truth iterator

* rename loss to loss_multibinary_log

* rename loss to loss_multibinary_log

* explicitly capture dims in lambda
2020-08-23 22:15:16 -04:00
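A hedged sketch of the math behind such a multibinary (multi-label) log loss: each output is treated as an independent binary classifier with labels in {-1, +1}, and the per-label log losses are summed. Illustrative only, not dlib's code.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Sum of binary log losses: log(1 + exp(-label * output)) per label.
// A large output with the matching label sign contributes almost nothing;
// a confident wrong prediction contributes roughly |output|.
double multibinary_log_loss(const std::vector<double>& outputs,
                            const std::vector<double>& labels)  // each +1 or -1
{
    double loss = 0;
    for (std::size_t i = 0; i < outputs.size(); ++i)
        loss += std::log(1.0 + std::exp(-labels[i] * outputs[i]));
    return loss;
}
```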
Juha Reunanen
d7ca478b79
Problem: With certain batch size / device count combinations, batches were generated with size = 1, causing problems when using batch normalization. (#2152)
Solution: Divide the mini-batch more uniformly across the different devices.
2020-08-20 07:43:14 -04:00
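The fix described above can be sketched as follows (an illustration of the idea, not dlib's trainer code): split a mini-batch of n samples across k devices so the per-device sizes differ by at most one, instead of giving every device a fixed chunk and leaving a tiny remainder, which can produce a device batch of size 1 and break batch normalization.

```cpp
#include <vector>

// Return the per-device batch sizes for n samples on k devices,
// spreading the remainder one sample at a time across the first devices.
std::vector<int> split_minibatch(int n, int k)
{
    std::vector<int> sizes(k, n / k);
    for (int i = 0; i < n % k; ++i)
        ++sizes[i];
    return sizes;
}
```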
Davis King
bea99ceed0 switch to a name less likely to conflict with third party code 2020-08-19 19:48:14 -04:00