Commit Graph

7984 Commits

Author SHA1 Message Date
Adrià Arrufat
76ebab4b91 fix compilation with double features 2021-11-02 13:22:54 +01:00
Adrià Arrufat
9a50809ebf
Use double instead of float for extracted features
Co-authored-by: Davis E. King <davis@dlib.net>
2021-11-02 13:07:14 +01:00
Adrià Arrufat
29e5319ee9
Mention find_max_global()
Co-authored-by: Davis E. King <davis@dlib.net>
2021-11-02 13:06:45 +01:00
Adrià Arrufat
c8c810f221 Replace fc classifier with svm_multiclass_linear_trainer 2021-11-02 02:48:18 +09:00
Davis King
a41b3d7ce8 We have some excessive and duplicative tests in the travis-ci setup.
This is causing us to run out of travis-ci credits, making tests not run
at all.  I deleted the duplicative tests and then disabled two additional
ones by commenting them out that would be nice to run but I think are
not essential.  In particular, the OSX one eats up a ton of credits.  So
I disabled that.  Maybe we can turn it back on later if we end up well
under the credit budget (or switch to github actions which appears to
have higher limits)
2021-10-30 09:47:42 -04:00
Adrià Arrufat
2e8bac1915
Add dnn self supervised learning example (#2434)
* wip: loss goes down when training without a dnn_trainer

if I use a dnn_trainer, it segfaults (also with bigger batch sizes...)

* remove commented code

* fix gradient computation (hopefully)

* fix loss computation

* fix crash in input_rgb_image_pair::to_tensor

* fix alias tensor offset

* refactor loss and input layers and complete the example

* add more data augmentation

* add documentation

* add documentation

* small fix in the gradient computation and reuse terms

* fix warning in comment

* use tensor_tools instead of matrix to compute the gradients

* complete the example program

* add support for multi-gpu

* Update dlib/dnn/input_abstract.h

* Update dlib/dnn/input_abstract.h

* Update dlib/dnn/loss_abstract.h

* Update examples/dnn_self_supervised_learning_ex.cpp

* Update examples/dnn_self_supervised_learning_ex.cpp

* Update examples/dnn_self_supervised_learning_ex.cpp

* Update examples/dnn_self_supervised_learning_ex.cpp

* [TYPE_SAFE_UNION] upgrade (#2443)

* [TYPE_SAFE_UNION] upgrade

* MSVC doesn't like keyword not

* MSVC doesn't like keyword and

* added tests for emplace(), copy semantics, move semantics, swap, overloaded and apply_to_contents with non-void return types

* - didn't need is_void anymore
- added result_of_t
- didn't really need ostream_helper or istream_helper
- split apply_to_contents into apply_to_contents (return void) and visit (return anything so long as visitor is publicly accessible)

* - updated abstract file

* - added get_type_t
- removed deserialize_helper duplicate
- don't use std::decay_t, that's C++14

* - removed whitespace
- don't need a return-statement when calling apply_to_contents_impl()
- use unchecked_get() whenever possible to minimise explicit use of pointer casting. let's keep that to a minimum

* - added type_safe_union_size
- added type_safe_union_size_v if C++14 is available
- added tests for above

* - test type_safe_union_size_v

* testing nested unions with visitors.

* re-added comment

* added index() in abstract file

* - refactored reset() to clear()
- added comment about clear() in abstract file
- in deserialize(), only reset the object if necessary

* - removed unnecessary comment about exceptions
- removed unnecessary // -------------
- struct is_valid is not mentioned in abstract. Instead of requiring T to be a valid type, it is ensured!
- get_type and get_type_t are private. Client code shouldn't need this.
- shuffled some functions around
- type_safe_union_size and type_safe_union_size_v are removed. not needed
- reset() -> clear()
- bug fix in deserialize() index counts from 1, not 0
- improved the abstract file

* refactored index() to get_current_type_id() as per suggestion

* maybe slightly improved docs

* - HURRAY, don't need std::result_of or std::invoke_result for visit() to work. Just privately define your own type trait, in this case called return_type and return_type_t. it works!
- apply_to_contents() now always calls visit()

* example with private visitor using friendship with non-void return types.

* Fix up contracts

It can't be a post condition that T is a valid type, since the choice of T is up to the caller; it's not something these functions decide.  Making it a precondition.

* Update dlib/type_safe_union/type_safe_union_kernel_abstract.h

* Update dlib/type_safe_union/type_safe_union_kernel_abstract.h

* Update dlib/type_safe_union/type_safe_union_kernel_abstract.h

* - added more tests for copy constructors/assignments, move constructors/assignments, and converting constructors/assignments
- helper_copy -> helper_forward
- added validate_type<T> in a couple of places

* - helper_move only takes non-const lvalue references, so we are not using std::move with universal references!
- use enable_if<is_valid<T>> in favor of validate_type<T>()

* - use enable_if<is_valid<T>> in favor of validate_type<T>()

* - added is_valid_check<>. This wraps enable_if<is_valid<T>,bool> and makes the use of SFINAE more robust

Co-authored-by: pfeatherstone <peter@me>
Co-authored-by: pf <pf@me>
Co-authored-by: Davis E. King <davis685@gmail.com>

* Just minor cleanup of docs and renamed some stuff, tweaked formatting.

* fix spelling error

* fix most vexing parse error

Co-authored-by: Davis E. King <davis@dlib.net>
Co-authored-by: pfeatherstone <45853521+pfeatherstone@users.noreply.github.com>
Co-authored-by: pfeatherstone <peter@me>
Co-authored-by: pf <pf@me>
Co-authored-by: Davis E. King <davis685@gmail.com>
2021-10-29 22:26:38 -04:00
pfeatherstone
f323d1824c
simplification using C++11 types (#2446)
Co-authored-by: pfeatherstone <peter@me>
2021-10-29 07:43:15 -04:00
Davis King
d29a8fc0c3 Just minor cleanup of docs and renamed some stuff, tweaked formatting. 2021-10-28 08:36:32 -04:00
pfeatherstone
2b8f9e401a
[TYPE_SAFE_UNION] upgrade (#2443)

Co-authored-by: pfeatherstone <peter@me>
Co-authored-by: pf <pf@me>
Co-authored-by: Davis E. King <davis685@gmail.com>
2021-10-28 08:34:57 -04:00
Adrià Arrufat
bf4100069f
Fix dnn_trainer trying to decrease the learning rate (#2442) 2021-10-13 06:59:29 -04:00
Adrià Arrufat
adca7472df
Add support for fused convolutions (#2294)
* add helper methods to implement fused convolutions

* fix grammar

* add method to disable affine layer and updated serialization

* add documentation for .disable()

* add fuse_convolutions visitor and documentation

* update docs: net is not constant

* fix xml formatting and use std::boolalpha

* fix warning and updated net requirement for visitor

* fix segfault in fuse_convolutions visitor

* copy unconditionally

* make the visitor class a friend of the con_ class

* setup the biases alias tensor after enabling bias

* simplify visitor a bit

* fix comment

* setup the biases size, somehow this got lost

* copy the parameters before resizing

* remove enable_bias() method, since the visitor is now a friend

* Revert "remove enable_bias() method, since the visitor is now a friend"

This reverts commit 35b92b1631.

* update the visitor to remove the friend requirement

* improve behavior of enable_bias

* better describe the behavior of enable_bias

* wip: use cudnnConvolutionBiasActivationForward when activation has bias

* wip: fix cpu compilation

* WIP: not working fused ReLU

* WIP: forgot to disable ReLU in visitor (does not change the fact that it does not work)

* WIP: more general set of 4d tensor (still not working)

* fused convolutions seem to be working now, more testing needed

* move visitor to the bottom of the file

* fix CPU-side and code clean up

* Do not try to fuse the activation layers

Fusing the activation layers in one cuDNN call is only supported when using
the cuDNN ones (ReLU, Sigmoid, TanH...) which might lead to surprising
behavior. So, let's just fuse the batch norm and the convolution into one
cuDNN call using the IDENTITY activation function.

* Set the correct forward algorithm for the identity activation

Ref: https://docs.nvidia.com/deeplearning/cudnn/api/index.html#cudnnConvolutionBiasActivationForward

* move the affine alias template to its original position

* wip

* remove unused param in relu and simplify example (I will delete it before merge)

* simplify conv bias logic and fix deserialization issue

* fix enabling bias on convolutions

* remove test example

* fix typo

* update documentation

* update documentation

* remove ccache leftovers from CMakeLists.txt

* Re-add new line

* fix enable/disable bias on unallocated networks

* update comment to mention cudnnConvolutionBiasActivationForward

* fix typo

Co-authored-by: Davis E. King <davis@dlib.net>

* Apply documentation suggestions from code review

Co-authored-by: Davis E. King <davis@dlib.net>

* update affine docs to talk in terms of gamma and beta

* simplify tensor_conv interface

* fix tensor_conv operator() with biases

* add fuse_layers test

* add an example on how to use the fuse_layers function

* fix typo

Co-authored-by: Davis E. King <davis@dlib.net>
2021-10-11 10:48:56 -04:00
Adrià Arrufat
8a2c744207
Fix trainer with unsupervised loss (#2436)
* Don't try to use labels in unsupervised losses

I hope that is the right way of fixing this...

* fix it by duplicating most code in send_job (works on my machine)

I will probably need to find a way to reuse the code

* try to fix it reusing the code... not sure though

* Revert "try to fix it reusing the code... not sure though"

This reverts commit f308cac6df.

* check the type of the training label to fix the issue instead
2021-09-27 07:47:04 -04:00
Davis King
b9f04fdc45 Fix error in build-and-test.sh script 2021-09-25 10:58:04 -04:00
Davis King
cd6080ca83 fix spelling error in comment 2021-09-25 10:51:42 -04:00
Davis E. King
7b7de0f643
Oops, use correct URI for travis 2021-09-23 19:46:21 -04:00
Davis E. King
505e6ed5f0
update travis uri 2021-09-23 19:45:00 -04:00
Davis King
e1aa34477a Added mpc option to say you only care about the first time we get to the target 2021-09-23 12:25:58 -04:00
Jakub Mareda
960e8a014f
Missing include for dlib::loss_multiclass_log_per_pixel_ (#2432)
* Missing include for `dlib::loss_multiclass_log_per_pixel_::label_to_ignore`

I was trying to compile the examples and encountered this issue after moving `rgb_label_image_to_index_label_image` to a cpp file. Headers should include all symbols they mention.

* Update pascal_voc_2012.h

Should use the official entrypoint for including dnn stuff.

Co-authored-by: Davis E. King <davis685@gmail.com>
2021-09-15 08:27:24 -04:00
Adrià Arrufat
adea4e603a
Allow setting custom cuda compute capabilities (#2431)
* add more cuda capabilities

* Allow setting custom cuda capabilities

* improve default behavior

* rename to compute capabilities
2021-09-13 08:17:56 -04:00
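The custom capability list above is driven from the CMake command line; the cache variable name below is an assumption inferred from the PR title, so check dlib's CMakeLists.txt for the authoritative spelling:

```shell
# Hypothetical invocation: build CUDA kernels only for the listed compute
# capabilities instead of the defaults (the variable name is an assumption).
cmake .. -DDLIB_USE_CUDA_COMPUTE_CAPABILITIES="50;75;86"
cmake --build . --config Release
```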
Lê Xuân Tuấn Anh
4506979609
add intel mkl search path (#2428) 2021-09-10 08:03:31 -04:00
Adrià Arrufat
e0a5725f22
Handle tag layers as inputs to disable_duplicative_biases (#2416) 2021-08-19 08:02:10 -04:00
Adrià Arrufat
5bd837d13f
Show number of parameters if net is allocated (#2417) 2021-08-19 08:01:54 -04:00
Adrià Arrufat
4d7f88bbc3
Delete .hgtags (#2414) 2021-08-15 09:35:28 -04:00
Davis King
0a5d5a2c68 updated old message to not say we use mercurial still 2021-08-14 10:42:11 -04:00
Davis King
7750a6348f produce cleaner python source distribution tarballs 2021-08-14 10:03:38 -04:00
Adrià Arrufat
fe0957303f
Add progress information to console_progress_indicator (#2411)
* add progress information (current/total and percent)

* print a new line instead of overwriting with spaces

* check if target_val is an integer with std::trunc
2021-08-06 07:32:53 -04:00
Adrià Arrufat
74653b4f26
Add function to compute string dimensions in pixels (#2408)
* add function to compute string dimensions in pixels

* use custom struct as a return value, remove first and last params

* Update dlib/image_transforms/draw_abstract.h

Co-authored-by: Davis E. King <davis@dlib.net>
2021-08-05 19:24:12 -04:00
Adrià Arrufat
cd915b037d update pnglibconf.h 2021-08-05 19:21:13 -04:00
Adrià Arrufat
f019b7adcf Minor changes to avoid conflicts and warnings in visual studio. 2021-08-05 19:21:13 -04:00
Davis King
ca3a0fdd5e normalized line endings so visual studio won't complain. 2021-08-05 19:21:13 -04:00
Davis King
fdf6902ade Another minor thing to avoid warnings from visual studio. 2021-08-05 19:21:13 -04:00
Davis King
04816ec0fb Added missing #include (needed only to avoid gcc warnings) 2021-08-05 19:21:13 -04:00
Adrià Arrufat
11101b6f4b update libpng to version 1.6.37 2021-08-05 19:21:13 -04:00
Adrià Arrufat
23e506323a update zlib to version 1.2.11 2021-08-05 19:21:13 -04:00
Adrià Arrufat
bec25d8247
Fix running gradient crashing sometimes (#2401) 2021-08-04 06:59:42 -04:00
Adrià Arrufat
16500906b0
YOLO loss (#2376) 2021-07-29 20:05:54 -04:00
Adrià Arrufat
951fdd0092
return the projective transform in extract_image_4points (#2395) 2021-07-26 20:46:36 -04:00
Adrià Arrufat
b850f0e524
Add LayerNorm documentation (#2393) 2021-07-22 08:00:55 -04:00
Davis King
e64ea42f6f remove dead code 2021-07-15 22:29:27 -04:00
frostbane
7d8c6a1141
Fix cannot compile iso only code (#579) (#2384)
* Fix cannot compile iso only code (#579)

also fixing (#1742)

* Remove GUI dependency from fonts (#2273)
2021-06-30 06:43:43 -04:00
Adrià Arrufat
973de8ac73
Fix disable_duplicative_biases when the input is a skip layer (#2367)
* Fix disable_duplicative_biases when the input is a skip layer

* fix template parameters
2021-05-12 07:05:44 -04:00
Adrià Arrufat
4a51017c2e
Make Travis read the CXXFLAGS environment variable (#2366)
* try to make sure travis uses C++17

* fix unbound variable

* Update dlib/travis/build-and-test.sh

Co-authored-by: Davis E. King <davis@dlib.net>
2021-05-11 20:08:49 -04:00
Adrià Arrufat
b99bec580b
Fix serialize variant with C++17 (#2365)
* Fix serialize variant with C++17

* fix order of parameters
2021-05-11 08:00:02 -04:00
pfeatherstone
9697fa5de2
[SERIALIZATION] support for std::optional (#2364)
* added support for std::optional if using C++17

* oops, bug fix + check if item already holds a type

* oops, another bug fix

* remove warnings about unused parameters

Co-authored-by: pf <pf@me>
2021-05-11 07:56:34 -04:00
pfeatherstone
11212a94b4
[SERIALIZATION] added support for std::variant (#2362)
* [SERIALIZATION] added support for std::variant

* [SERIALIZATION] bug fix + added tests

* support immutable types

* put an immutable type in std::variant

Co-authored-by: pf <pf@me>
2021-05-10 09:04:29 -04:00
Davis King
a54507d81b suppress spurious warning 2021-05-01 17:32:53 -04:00
Davis King
273d59435f fix comment formatting 2021-05-01 17:04:59 -04:00
Davis King
cd17f324eb fix warnings about possible use of uninitialized values 2021-05-01 17:04:36 -04:00
Davis King
1de47514bd Make input_layer() work with networks that contain repeat layers.
Do this by just making all layers have a .input_layer() method, which in
that context can be implemented in a simple manner.
2021-05-01 14:46:47 -04:00
Davis King
ded68b9af7 Cleanup gcc version checking code a little.
Also fix this error from cmake 3.5.1:

```
CMake Error at CMakeLists.txt:62 (if):
  if given arguments:

    "CMAKE_COMPILER_IS_GNUCXX" "AND" "CMAKE_CXX_COMPILER_VERSION" "VERSION_LESS_EQUAL" "4.8.5"

  Unknown arguments specified
```
2021-04-28 08:05:22 -04:00
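VERSION_LESS_EQUAL was only added in CMake 3.7, so a CMake as old as the 3.5.1 in the error above parses it as unknown arguments; the portable fix spells the comparison with operators old CMake understands. An illustrative equivalent (see dlib's CMakeLists.txt for the real condition):

```cmake
# VERSION_LESS_EQUAL requires CMake >= 3.7. This equivalent check parses on
# CMake 3.5 as well:
if (CMAKE_COMPILER_IS_GNUCXX AND NOT CMAKE_CXX_COMPILER_VERSION VERSION_GREATER 4.8.5)
    message(FATAL_ERROR "dlib requires a gcc newer than 4.8.5")
endif()
```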