Commit Graph

606 Commits

Author SHA1 Message Date
pfeatherstone
dc94754607
Fix for #2729 (#2731)
* fixes #2729

* don't commit vscode stuff

* Update ffmpeg_utils.h

typo

---------

Co-authored-by: pf <pf@me>
2023-02-20 20:01:13 -05:00
Adrià Arrufat
1958da78da
Rename ffmpeg examples (#2727) 2023-02-10 22:03:25 -05:00
pfeatherstone
50b33753bb
Strip binaries in release mode (#2721)
* - use add_executable directly
- use target_compile_definitions()
- strip binaries in release mode

* Added a comment

---------

Co-authored-by: pf <pf@me>
Co-authored-by: Davis E. King <davis685@gmail.com>
2023-02-03 17:43:00 -05:00
pfeatherstone
9d3ba472dd
FFMPEG wrappers: dlib::ffmpeg::decoder and dlib::ffmpeg::demuxer (#2707)
* - added ffmpeg stuff to cmake

* - added observer_ptr

* ffmpeg utils

* WIP

* - added ffmpeg_decoder

* config file for test data

* another test file

* install ffmpeg

* added ffmpeg_demuxer

* install all ffmpeg libraries

* support older version of ffmpeg

* simplified loop

* - test converting to dlib object
- added docs
- support older ffmpeg

* added convert() overload

* added comment

* only register stuff when API not deprecated

* - fixed version issues
- fixed decoding

* added tests for ffmpeg_demuxer

* removed unused code

* test GIF

* added docs

* added audio test

* test for audio

* more tests

* review changes

* don't need observer_ptr

* made deps public. I could be wrong but just in case.

* - added some static asserts. Some areas of the code might do memcpy's on arrays of pixels. This requires the structures to be packed. Check this.
- added convert() functions
- changed default decoder options. By default, always decode to RGB and S16 audio
- added convenience constructor to demuxer

* - no longer need opencv

* oops. I let that slip

* - made a few functions public
- more precise requires clauses

* enhanced example

* - avoid FFMPEG_INITIALIZED being optimized away at link time
- added decoding example

* - avoid -Wunused-parameter error

* constexpr and noexcept correctness. This probably makes no difference to performance, BUT, it's what the core guidelines tell you to do. It does however demonstrate how complicated and unnecessarily verbose C++ is becoming. Sigh, maybe one day I'll make the switch to something that doesn't make my eyes twitch.

* - simplified metadata structure

* hopefully more educational

* added another example

* ditto

* typo

* screen grab example

* whoops

* avoid -Wunused-parameter errors

* ditto

* - added methods to av_dict
- print the demuxer format options that were not used
- enhanced webcam_face_pose_ex.cpp so you can set webcam options

* if height and width are specified, attempt to set video_size in format_options. Otherwise set the bilinear resizer.

* updated docs

* once again, the ffmpeg APIs do a lot for you. It's a matter of knowing which APIs to call.

* made header-only

* - some Werror thing

* don't use type_safe_union

* - templated sample type
- reverted deep copy of AVFrame for frame copy constructor

* - added is_pixel_type and is_pixel_check

* unit tests for pixel traits

* enhanced is_image_type type trait and added is_image_check

* added unit tests for is_image_type

* added pix_traits, improved convert() functions

* bug fix

* get rid of -Werror=unused-variable error

* added a type alias

* that's the last of the manual memcpys gone. We're using the ffmpeg API everywhere now for copying frames to buffers and back

* missing doc

* set framerate for webcam

* list input devices

* oops. I was trying to make ffmpeg 5 happy but I've given up on ffmpeg v5 compatibility in this PR. Future PR.

* enhanced the information provided by list_input_devices and list_output_devices

* removed vscode settings.json file

* - added a type trait for checking whether a type is complete. This is useful for writing type traits that check whether other types have type trait specializations, among other things; for example, std::unique_ptr uses something similar. (See the sketch after this entry.)

* Davis was keen to simply check pixel_traits is specialised. That's equivalent to checking pixel_traits<> is complete for some type

* code review

* just use the void_t in dlib/type_traits.h

* one liners

* just need is_image_check

* more tests for is_image_type

* I think this is correct

* removed printf

* better docs

* Keep opencv out of it

* keep old face pose example, then add new one which uses dlib's ffmpeg wrappers

* revert

* revert

* better docs

* better docs

---------

Co-authored-by: pf <pf@me>
2023-01-29 20:17:34 -05:00
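The completeness-check trait described in this entry is a well-known C++ idiom. A minimal sketch, using std::void_t for brevity where dlib uses its own void_t from dlib/type_traits.h (this is not dlib's exact code):

    #include <type_traits>

    // sizeof(T) only compiles for complete types, so SFINAE on it tells us
    // whether T has a definition, e.g. whether pixel_traits<T> is specialized.
    template <typename T, typename = void>
    struct is_complete : std::false_type {};

    template <typename T>
    struct is_complete<T, std::void_t<decltype(sizeof(T))>> : std::true_type {};

    struct never_defined;
    static_assert(!is_complete<never_defined>::value, "only declared, not defined");
    static_assert(is_complete<int>::value, "int is complete");
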
Davis King
a12824d425 update example to refer to newer dlib version 2022-12-15 22:15:18 -05:00
Adria Arrufat
e5b2cedff8
Improve the data augmentation in the SSL example (#2684)
I was using the data augmentation recommended for the ImageNet dataset, which is not well suited
to CIFAR-10. After switching to augmentation suited to CIFAR-10, the test accuracy increased by 1 point.
2022-11-09 22:07:00 -05:00
Adria Arrufat
bdb1089ae6
Fix computation of the Barlow Twins loss gradient (#2680) 2022-11-02 07:55:58 -04:00
Adria Arrufat
7f06f6e185
Fix empirical cross-correlation computation in the SSL example (#2679)
I was using the normalized features za as both matrices, instead of za and zb.
I noticed this because the empirical cross-correlation matrix was symmetric,
which it is not supposed to be (see the sketch after this entry). It does not
affect anything, as it was computed properly in the loss.
2022-10-31 19:52:24 -04:00
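A minimal sketch of the statistic in question, assuming za and zb are N x D dlib matrices of normalized features (names taken from the message above). The buggy version, trans(za)*za, is symmetric by construction, while trans(za)*zb generally is not:

    #include <dlib/matrix.h>

    // Empirical cross-correlation between two batches of normalized features.
    dlib::matrix<float> empirical_cross_correlation(
        const dlib::matrix<float>& za,  // N x D, features of the first view
        const dlib::matrix<float>& zb)  // N x D, features of the second view
    {
        return dlib::trans(za) * zb / za.nr();  // D x D
    }
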
Adrià Arrufat
bf273a8c2e
Add multiclass SVM trainer to svm/auto.h (#2642)
* Add multiclass SVM trainer to svm/auto.h

* Use a matrix<double> and add an overload for matrix<float>

* Replace typedef with using and use normalizer from normalized_function

* Remove extra ;

* use better names for decision function variables

* fix comments format and grammar

* remove unneeded includes

* Update dlib/svm/auto_abstract.h

* Update the assert to use 3 samples (as there is 3-fold CV)

* Remove unneeded captures in lambda

* Update dlib/svm/auto_abstract.h

* Update dlib/svm/auto_abstract.h

Co-authored-by: Davis E. King <davis685@gmail.com>
2022-08-17 19:29:04 -04:00
Adrià Arrufat
83ec371f12
Use only a fraction of labels for the multiclass SVM in SSL example (#2641)
* Use only a fraction of labels for the multiclass SVM in SSL example

This change makes the self-supervised example closer to reality:
usually, only a fraction of the dataset is labeled, but we can harness
the whole dataset by using a self-supervised method and then train the
classifier using the fraction of labeled data.

Using 10% of the labels results in a test accuracy of 87%, compared to
the 89% we got when training the multiclass SVM with all labels.

I just added an option to change the fraction of labeled data, so that
users can experiment with it.

* Update examples/dnn_self_supervised_learning_ex.cpp

Co-authored-by: Davis E. King <davis685@gmail.com>
2022-08-14 08:27:49 -04:00
Adrià Arrufat
69665eb0f7
Modernize rounding and cast statements (#2633)
* Use add_compile_definitions, enable -Wpedantic and use colors

* Use lround in rectangle and drectangle (see the sketch after this entry)

* Use round in bigint

* Use round in canvas_drawing

* Modernize image_transforms

* Modernize image_pyramid

* Fix error in image_pyramid

* Modernize matrix

* Fix error in image_pyramid again

* Modernize fhog test

* Modernize image_keypoint/surf

* Remove extra ;
2022-08-04 18:36:12 -04:00
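Representative of the rounding changes in this entry (illustrative, not the exact dlib lines):

    #include <cmath>

    double x = -2.2;

    // before: C-style cast plus a manual offset, which mis-rounds negatives
    long a = (long)(x + 0.5);   // (long)(-1.7) == -1

    // after: intent is explicit and negatives round correctly
    long b = std::lround(x);    // -2
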
Davis King
29288e5d89 Make C++14 and CMake 3.8.0 the new minimum required versions 2022-07-31 17:45:18 -04:00
Adrià Arrufat
ad06471a15
Fix typo in the self-supervised learning example (#2623) 2022-07-13 18:54:10 -04:00
Adrià Arrufat
a76f205bf6
Add webp support (#2565)
* Add BGR(A) to pixel_traits

* add support for reading webp

* Apply Davis' suggestions and fix formatting

* Fix signed/unsigned warning

* Update decoding paths

* update pixel traits documentation

* Add support for writing WebP images

* Simplify image_saver code

* WIP: add tests, PSNR is low but images look good

* Add lossless compression for quality > 100 (see the usage sketch after this entry)

* Fix build when WebP support is disabled

* Use C++ stream instead of C-style FILE

* Fix indentation

* Use reinterpret_cast instead of C-style cast

* Improve impl::impl_save_webp signature

* Remove empty line

* Use switch statement and clean up code

* Update Copyright and test libwebp on Linux

* Fix formatting in github workflow

* Fix operator== for bgr_alpha_pixel

* Show where the test fails

* Add libwebp to CI for the remaining Linux workflows

* Use filename consistently

* Improve message with wrong pixel type

* Fix tests for WebP images

* Prevent saving images which are too large and improve error messages

* Use max dimension from WebP header directly

* Update documentation, index and release notes

* Update dlib/image_saver/save_webp_abstract.h

Co-authored-by: Martin T. H. Sandsmark <martin.sandsmark@kde.org>
Co-authored-by: Davis E. King <davis685@gmail.com>
2022-04-19 07:52:12 -04:00
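A minimal usage sketch for the lossless-above-100 behavior mentioned in this entry, assuming dlib was built with WebP support (DLIB_WEBP_SUPPORT) and the save_webp(image, filename, quality) signature this PR adds:

    #include <dlib/image_io.h>

    int main()
    {
        dlib::array2d<dlib::rgb_pixel> img;
        dlib::load_image(img, "input.png");

        dlib::save_webp(img, "lossy.webp", 75);      // ordinary lossy encoding
        dlib::save_webp(img, "lossless.webp", 101);  // quality > 100 => lossless
    }
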
Juha Reunanen
0aa8b4cbfc
Treat warnings as errors (#2490) 2022-04-12 18:53:52 -04:00
Adrià Arrufat
50b78da53a
Fix Barlow Twins loss gradient (#2518)
* Fix Barlow Twins loss gradient (see the loss definition after this entry)

* Update reference test accuracy after fix

* Round the empirical cross-correlation matrix

Just a tiny modification that allows the values to actually reach 255 (perfect white).
2022-02-21 08:33:21 -05:00
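For reference, the loss whose gradient these commits fix is the standard Barlow Twins objective over the empirical cross-correlation matrix C (textbook form from the Barlow Twins paper, not copied from dlib's source):

    \mathcal{L}_{BT} = \sum_i (1 - C_{ii})^2 + \lambda \sum_i \sum_{j \neq i} C_{ij}^2

    \frac{\partial \mathcal{L}_{BT}}{\partial C_{ii}} = -2\,(1 - C_{ii}) \quad \text{(diagonal: pushed towards 1)}

    \frac{\partial \mathcal{L}_{BT}}{\partial C_{ij}} = 2\lambda\, C_{ij}, \; i \neq j \quad \text{(off-diagonal: pushed towards 0)}
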
Adrià Arrufat
e1ac0b43e4
normalize samples for SVM classifier (#2460) 2021-11-17 08:14:39 -05:00
Adrià Arrufat
5091e9c880
Replace sgd-based fc classifier with svm_multiclass_linear_trainer (#2452)
* Replace fc classifier with svm_multiclass_linear_trainer (see the usage sketch after this entry)

* Mention about find_max_global()

Co-authored-by: Davis E. King <davis@dlib.net>

* Use double instead of float for extracted features

Co-authored-by: Davis E. King <davis@dlib.net>

* fix compilation with double features

* Revert "fix compilation with double features"

This reverts commit 76ebab4b91.

* Revert "Use double instead of float for extracted features"

This reverts commit 9a50809ebf.

* Find best C using global optimization

Co-authored-by: Davis E. King <davis@dlib.net>
2021-11-06 18:33:31 -04:00
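A minimal sketch of the trainer this entry switches to, using the long-standing dlib API (illustrative, not the example's exact code):

    #include <dlib/svm.h>
    #include <iostream>
    #include <vector>

    int main()
    {
        using sample_type = dlib::matrix<double, 0, 1>;

        std::vector<sample_type> samples;
        std::vector<unsigned long> labels;

        // two toy samples, one per class, standing in for extracted features
        sample_type s(2);
        s = 1, 0;  samples.push_back(s);  labels.push_back(0);
        s = 0, 1;  samples.push_back(s);  labels.push_back(1);

        dlib::svm_multiclass_linear_trainer<
            dlib::linear_kernel<sample_type>, unsigned long> trainer;
        trainer.set_c(1.0);  // the last commit above tunes C with find_max_global()

        auto df = trainer.train(samples, labels);  // multiclass decision function
        std::cout << "predicted: " << df(samples[0]) << std::endl;
    }
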
Adrià Arrufat
2e8bac1915
Add dnn self supervised learning example (#2434)
* wip: loss goes down when training without a dnn_trainer

if I use a dnn_trainer, it segfaults (also with bigger batch sizes...)

* remove commented code

* fix gradient computation (hopefully)

* fix loss computation

* fix crash in input_rgb_image_pair::to_tensor

* fix alias tensor offset

* refactor loss and input layers and complete the example

* add more data augmentation

* add documentation

* add documentation

* small fix in the gradient computation and reuse terms

* fix warning in comment

* use tensor_tools instead of matrix to compute the gradients

* complete the example program

* add support for multi-gpu

* Update dlib/dnn/input_abstract.h

* Update dlib/dnn/input_abstract.h

* Update dlib/dnn/loss_abstract.h

* Update examples/dnn_self_supervised_learning_ex.cpp

* Update examples/dnn_self_supervised_learning_ex.cpp

* Update examples/dnn_self_supervised_learning_ex.cpp

* Update examples/dnn_self_supervised_learning_ex.cpp

* [TYPE_SAFE_UNION] upgrade (#2443)

* [TYPE_SAFE_UNION] upgrade

* MSVC doesn't like keyword not

* MSVC doesn't like keyword and

* added tests for emplace(), copy semantics, move semantics, swap, overloaded, and apply_to_contents with non-void return types

* - didn't need is_void anymore
- added result_of_t
- didn't really need ostream_helper or istream_helper
- split apply_to_contents into apply_to_contents (return void) and visit (return anything, so long as the visitor is publicly accessible; see the stand-in sketch after this entry)

* - updated abstract file

* - added get_type_t
- removed deserialize_helper duplicate
- don't use std::decay_t, that's C++14

* - removed white spaces
- don't need a return-statement when calling apply_to_contents_impl()
- use unchecked_get() whenever possible to minimise explicit use of pointer casting. Let's keep that to a minimum

* - added type_safe_union_size
- added type_safe_union_size_v if C++14 is available
- added tests for above

* - test type_safe_union_size_v

* testing nested unions with visitors.

* re-added comment

* added index() in abstract file

* - refactored reset() to clear()
- added comment about clear() in abstract file
- in deserialize(), only reset the object if necessary

* - removed unnecessary comment about exceptions
- removed unnecessary // -------------
- struct is_valid is not mentioned in abstract. Instead of requiring T to be a valid type, it is ensured!
- get_type and get_type_t are private. Client code shouldn't need this.
- shuffled some functions around
- type_safe_union_size and type_safe_union_size_v are removed. not needed
- reset() -> clear()
- bug fix in deserialize(): index counts from 1, not 0
- improved the abstract file

* refactored index() to get_current_type_id() as per suggestion

* maybe slightly improved docs

* - HURRAY, we don't need std::result_of or std::invoke_result for visit() to work. Just privately define your own type trait, in this case called return_type and return_type_t. It works!
- apply_to_contents() now always calls visit()

* example with private visitor using friendship with non-void return types.

* Fix up contracts

It can't be a postcondition that T is a valid type, since the choice of T is up to the caller; it's not something these functions decide. Making it a precondition.

* Update dlib/type_safe_union/type_safe_union_kernel_abstract.h

* Update dlib/type_safe_union/type_safe_union_kernel_abstract.h

* Update dlib/type_safe_union/type_safe_union_kernel_abstract.h

* - added more tests for copy constructors/assignments, move constructors/assignments, and converting constructors/assignments
- helper_copy -> helper_forward
- added validate_type<T> in a couple of places

* - helper_move only takes non-const lvalue references. So we are not using std::move with universal references!
- use enable_if<is_valid<T>> in favor of validate_type<T>()

* - use enable_if<is_valid<T>> in favor of validate_type<T>()

* - added is_valid_check<>. This wraps enable_if<is_valid<T>,bool> and makes the use of SFINAE more robust

Co-authored-by: pfeatherstone <peter@me>
Co-authored-by: pf <pf@me>
Co-authored-by: Davis E. King <davis685@gmail.com>

* Just minor cleanup of docs and renamed some stuff, tweaked formatting.

* fix spelling error

* fix most vexing parse error

Co-authored-by: Davis E. King <davis@dlib.net>
Co-authored-by: pfeatherstone <45853521+pfeatherstone@users.noreply.github.com>
Co-authored-by: pfeatherstone <peter@me>
Co-authored-by: pf <pf@me>
Co-authored-by: Davis E. King <davis685@gmail.com>
2021-10-29 22:26:38 -04:00
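The visit()-with-return-value design described in the type_safe_union bullets above mirrors the standard variant idiom; a stand-in sketch using std::variant rather than dlib's type_safe_union API:

    #include <cstddef>
    #include <iostream>
    #include <string>
    #include <type_traits>
    #include <variant>

    int main()
    {
        std::variant<int, std::string> u = std::string("hello");

        // the visitor returns a non-void value, handling each alternative
        auto len = std::visit([](const auto& v) -> std::size_t {
            if constexpr (std::is_same_v<std::decay_t<decltype(v)>, std::string>)
                return v.size();
            else
                return sizeof(v);
        }, u);

        std::cout << len << "\n";  // prints 5
    }
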
Adrià Arrufat
adca7472df
Add support for fused convolutions (#2294)
* add helper methods to implement fused convolutions

* fix grammar

* add method to disable affine layer and updated serialization

* add documentation for .disable()

* add fuse_convolutions visitor and documentation

* update docs: net is not constant

* fix xml formatting and use std::boolalpha

* fix warning and updated net requirement for visitor

* fix segfault in fuse_convolutions visitor

* copy unconditionally

* make the visitor class a friend of the con_ class

* setup the biases alias tensor after enabling bias

* simplify visitor a bit

* fix comment

* setup the biases size, somehow this got lost

* copy the parameters before resizing

* remove enable_bias() method, since the visitor is now a friend

* Revert "remove enable_bias() method, since the visitor is now a friend"

This reverts commit 35b92b1631.

* update the visitor to remove the friend requirement

* improve behavior of enable_bias

* better describe the behavior of enable_bias

* wip: use cudnnConvolutionBiasActivationForward when activation has bias

* wip: fix cpu compilation

* WIP: not working fused ReLU

* WIP: forgot to disable ReLU in visitor (does not change the fact that it does not work)

* WIP: more general set of 4D tensors (still not working)

* fused convolutions seem to be working now, more testing needed

* move visitor to the bottom of the file

* fix CPU-side and code clean up

* Do not try to fuse the activation layers

Fusing the activation layers in one cuDNN call is only supported when using
the cuDNN ones (ReLU, Sigmoid, TanH...), which might lead to surprising
behavior. So, let's just fuse the batch norm and the convolution into one
cuDNN call using the IDENTITY activation function (see the sketch after this entry).

* Set the correct forward algorithm for the identity activation

Ref: https://docs.nvidia.com/deeplearning/cudnn/api/index.html#cudnnConvolutionBiasActivationForward

* move the affine alias template to its original position

* wip

* remove unused param in relu and simplify example (I will delete it before merge)

* simplify conv bias logic and fix deserialization issue

* fix enabling bias on convolutions

* remove test example

* fix typo

* update documentation

* update documentation

* remove ccache leftovers from CMakeLists.txt

* Re-add new line

* fix enable/disable bias on unallocated networks

* update comment to mention cudnnConvolutionBiasActivationForward

* fix typo

Co-authored-by: Davis E. King <davis@dlib.net>

* Apply documentation suggestions from code review

Co-authored-by: Davis E. King <davis@dlib.net>

* update affine docs to talk in terms of gamma and beta

* simplify tensor_conv interface

* fix tensor_conv operator() with biases

* add fuse_layers test

* add an example on how to use the fuse_layers function

* fix typo

Co-authored-by: Davis E. King <davis@dlib.net>
2021-10-11 10:48:56 -04:00
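The fusion implemented in this entry comes down to folding the affine layer's per-channel scale (gamma) and shift (beta) into the convolution that feeds it; a minimal sketch of the arithmetic, independent of dlib's tensor types:

    #include <cstddef>
    #include <vector>

    // Fold y = gamma * conv(x, W, b) + beta into a single convolution.
    // One gamma/beta per output channel; W[c] holds channel c's kernel weights.
    void fuse_affine_into_conv(std::vector<std::vector<float>>& W,
                               std::vector<float>& b,
                               const std::vector<float>& gamma,
                               const std::vector<float>& beta)
    {
        for (std::size_t c = 0; c < W.size(); ++c)
        {
            for (auto& w : W[c]) w *= gamma[c];  // scale the kernel
            b[c] = gamma[c] * b[c] + beta[c];    // scale and shift the bias
        }
    }

With the affine folded away, the whole thing can run as one cuDNN call with the IDENTITY activation, as the commits above describe.
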
Jakub Mareda
960e8a014f
Missing include for dlib::loss_multiclass_log_per_pixel_ (#2432)
* Missing include for `dlib::loss_multiclass_log_per_pixel_::label_to_ignore`

I was trying to compile the examples and encountered this issue after moving `rgb_label_image_to_index_label_image` to a cpp file. Headers should include all symbols they mention.

* Update pascal_voc_2012.h

Should use the official entrypoint for including dnn stuff.

Co-authored-by: Davis E. King <davis685@gmail.com>
2021-09-15 08:27:24 -04:00
Adrià Arrufat
16500906b0
YOLO loss (#2376) 2021-07-29 20:05:54 -04:00
Abdolkarim Saeedi
7b5b375026
Update dnn_inception_ex.cpp (#2256)
Simple typo in the inception training
2020-12-09 07:37:45 -05:00
Adrià Arrufat
a7627cbd07
Rename function to disable_duplicative_biases (#2246)
* Rename function to disable_duplicative_biases

* rename also the functions in the tests... oops
2020-11-24 22:07:04 -05:00
Adrià Arrufat
3c82c2259c
Add Layer Normalization (#2213)
* wip: layer normalization on cpu (see the forward-pass sketch after this entry)

* wip: add cuda implementation, not working yet

* wip: try to fix cuda implementation

* swap grid_stride_range and grid_stride_range_y: does not work yet

* fix CUDA implementation

* implement cuda gradient

* add documentation, move layer_norm, update bn_visitor

* add tests

* use stddev instead of variance in test (they are both 1, anyway)

* add test for means and invstds on CPU and CUDA

* rename visitor to disable_duplicative_bias

* handle more cases in the visitor_disable_input_bias

* Add tests for visitor_disable_input_bias
2020-10-20 07:56:55 -04:00
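A minimal sketch of the CPU forward pass the first commits in this entry describe, with one sample flattened to n floats and a single gamma/beta pair for brevity:

    #include <cmath>
    #include <cstddef>

    // y = gamma * (x - mean) / sqrt(var + eps) + beta, with mean and var
    // computed over this one sample (batch norm averages over the batch instead).
    void layer_norm_forward(const float* x, float* y, std::size_t n,
                            float gamma, float beta, float eps = 1e-5f)
    {
        float mean = 0.f, var = 0.f;
        for (std::size_t i = 0; i < n; ++i) mean += x[i];
        mean /= n;
        for (std::size_t i = 0; i < n; ++i) var += (x[i] - mean) * (x[i] - mean);
        var /= n;
        const float invstd = 1.f / std::sqrt(var + eps);  // the "invstds" tested above
        for (std::size_t i = 0; i < n; ++i)
            y[i] = gamma * (x[i] - mean) * invstd + beta;
    }
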
Adrià Arrufat
5ec60a91c4
Show how to use the new visitors with lambdas (#2162) 2020-09-06 09:27:50 -04:00
Davis King
afe19fcb8b Made the DNN layer visiting routines more convenient.
Now the user doesn't have to supply a visitor capable of visiting all
layers, but instead just the ones they are interested in.  Also added
visit_computational_layers() and visit_computational_layers_range()
since those capture a very common use case more concisely than
visit_layers().  That is, users generally want to mess with the
computational layers specifically as those are the stateful layers.
2020-09-05 18:33:04 -04:00
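A sketch of what that change buys in practice: the visitor only has to handle the layer types it cares about, so a lambda taking dropout_& is invoked just on the dropout layers (assuming the selective-dispatch behavior described above):

    #include <dlib/dnn.h>

    int main()
    {
        using net_type = dlib::loss_multiclass_log<
            dlib::fc<10, dlib::dropout<dlib::relu<dlib::fc<32,
            dlib::input<dlib::matrix<float>>>>>>>;
        net_type net;

        // change the drop rate of every dropout layer; other layers are skipped
        dlib::visit_computational_layers(net, [](dlib::dropout_& l) {
            l = dlib::dropout_(0.2);
        });
    }
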
Adrià Arrufat
e7ec6b7777
Add visitor to remove bias from bn_ layer inputs (closes #2155) (#2156)
* add visitor to remove bias from bn_ inputs (closes #2155)

* remove unused parameter and make documentation more clear

* remove bias from bn_ layers too and use better name

* let the batch norm keep their bias, use even better name

* be more consistent with impl naming

* remove default constructor

* do not use method to prevent some errors

* add disable bias method to pertinent layers

* update dcgan example

- grammar
- print number of network parameters to be able to check bias is not allocated
- at the end, give feedback to the user about what the discriminator thinks about each generated sample

* fix fc_ logic

* add documentation

* add bias_is_disabled methods and update to_xml

* print use_bias=false when bias is disabled
2020-09-02 21:59:19 -04:00
Adrià Arrufat
64ba66e1c7
fix receptive field comment (#2070) 2020-04-27 06:02:26 -04:00
ncoder-1
8055b8d19a
Update dnn_introduction_ex.cpp (#2066)
Changed C-style cast to static_cast.
2020-04-22 07:37:58 -04:00
Davis King
fbb2db2188 fix example cmake script 2020-04-04 09:55:08 -04:00
Adrià Arrufat
5a715fe24d
Remove outdated comment from DCGAN example (#2048)
* Remove outdated comment

That comment was there from when I was using a dnn_trainer to train
the discriminator network.

* Fix case
2020-04-02 07:14:42 -04:00
Adrià Arrufat
e9c56fb21a
Fix warnings while running the tests (#2046)
* fix some warnings when running tests

* revert changes in CMakeLists.txt

* update example to make use of newly promoted method

* update tests to make use of newly promoted methods
2020-03-31 19:35:23 -04:00
Adrià Arrufat
57bb5eb58d
use running stats to track losses (#2041) 2020-03-30 20:20:50 -04:00
Davis King
0057461a62 Promote some of the sub-network methods into the add_loss_layer interface so users don't have to write .subnet() so often. 2020-03-29 12:17:56 -04:00
Adrià Arrufat
f42f100d0f
Add DCGAN example (#2035)
* wip: dcgan-example

* wip: dcgan-example

* update example to use leaky_relu and remove bias from net

* wip

* it works!

* add more comments

* add visualization code

* add example documentation

* rename example

* fix comment

* better comment format

* fix the noise generator seed

* add message to hit enter for image generation

* fix srand, too

* add std::vector overload to update_parameters

* improve training stability

* better naming of variables

make sure it is clear we update the generator with the discriminator's
gradient using fake samples and true labels (see the equations after this entry)

* fix comment: generator -> discriminator

* update leaky_relu docs to match the relu ones

* replace not with !

* add Davis' suggestions to make training more stable

* use tensor instead of resizable_tensor

* do not use dnn_trainer for discriminator
2020-03-29 11:07:38 -04:00
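The "fake samples and true labels" update called out in this entry is the standard non-saturating GAN trick; in equations (textbook formulation, not lifted from the example):

    \max_D \; \mathbb{E}_x[\log D(x)] + \mathbb{E}_z[\log(1 - D(G(z)))]

    \max_G \; \mathbb{E}_z[\log D(G(z))]

That is, the generator is trained by presenting the fakes G(z) to the discriminator with the "real" label and backpropagating the discriminator's gradient into the generator.
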
Adrià Arrufat
c832d3b2fc
simplify resnet definition by reusing struct template parameter (#2010)
* simplify definition by reusing struct template parameter

* put resnet into its own namespace

* fix infer names

* rename struct impl to def
2020-03-09 21:21:04 -04:00
Davis King
fc6992ac04 A little bit of cleanup 2020-02-07 08:12:18 -05:00
Adrià Arrufat
10d7f119ca
Add dnn_introduction3_ex (#1991)
* Add dnn_introduction3_ex
2020-02-07 07:59:36 -05:00
Juha Reunanen
bd6994cc66 Add new loss layer for binary loss per pixel (#1976)
* Add new loss layer for binary loss per pixel
2020-01-20 07:47:47 -05:00
Davis King
f2cd9e3b1d use a time-based execution limit in example 2019-11-28 10:48:02 -05:00
Juha Reunanen
d175c35074 Instance segmentation (#1918)
* Add instance segmentation example - first version of training code

* Add MMOD options; get rid of the cache approach, and instead load all MMOD rects upfront

* Improve console output

* Set filter count

* Minor tweaking

* Inference - first version, at least compiles!

* Ignore overlapped boxes

* Ignore even small instances

* Set overlaps_ignore

* Add TODO remarks

* Revert "Set overlaps_ignore"

This reverts commit 65adeff1f8.

* Set result size

* Set label image size

* Take ignore-color into account

* Fix the cropping rect's aspect ratio; also slightly expand the rect

* Draw the largest findings last

* Improve masking of the current instance

* Add some perturbation to the inputs

* Simplify ground-truth reading; fix random cropping

* Read even class labels

* Tweak default minibatch size

* Learn only one class

* Really train only instances of the selected class

* Remove outdated TODO remark

* Automatically skip images with no detections

* Print to console what was found

* Fix class index problem

* Fix indentation

* Allow to choose multiple classes

* Draw rect in the color of the corresponding class

* Write detector window classes to ostream; also group detection windows by class (when ostreaming)

* Train a separate instance segmentation network for each class label

* Use separate synchronization file for each seg net of each class

* Allow more overlap

* Fix sorting criterion

* Fix interpolating the predicted mask

* Improve bilinear interpolation: if output type is an integer, round instead of truncating (see the sketch after this entry)

* Add helpful comments

* Ignore large aspect ratios; refactor the code; tweak some network parameters

* Simplify the segmentation network structure; make the object detection network more complex in turn

* Problem: CUDA errors not reported properly to console
Solution: stop and join data loader threads even in case of exceptions

* Minor parameters tweaking

* Loss may have increased, even if prob_loss_increasing_thresh > prob_loss_increasing_thresh_max_value

* Add previous_loss_values_dump_amount to previous_loss_values.size() when deciding if loss has been increasing

* Improve behaviour when loss actually increased after disk sync

* Revert some of the earlier change

* Disregard dumped loss values only when deciding if learning rate should be shrunk, but *not* when deciding if loss has been going up since last disk sync

* Revert "Revert some of the earlier change"

This reverts commit 6c852124ef.

* Keep enough previous loss values, until the disk sync

* Fix maintaining the dumped (now "effectively disregarded") loss values count

* Detect cats instead of aeroplanes

* Add helpful logging

* Clarify the intention and the code

* Review fixes

* Add operator== for the other pixel types as well; remove the inline

* If available, use constexpr if

* Revert "If available, use constexpr if"

This reverts commit 503d4dd335.

* Simplify code as per review comments

* Keep estimating steps_without_progress, even if steps_since_last_learning_rate_shrink < iter_without_progress_thresh

* Clarify console output

* Revert "Keep estimating steps_without_progress, even if steps_since_last_learning_rate_shrink < iter_without_progress_thresh"

This reverts commit 9191ebc776.

* To keep the changes to a bare minimum, revert the steps_since_last_learning_rate_shrink change after all (at least for now)

* Even empty out some of the previous test loss values

* Minor review fixes

* Can't use C++14 features here

* Do not use the struct name as a variable name
2019-11-14 22:53:16 -05:00
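The interpolation fix above boils down to choosing the conversion based on the output type; a sketch in C++17 form (the original predates constexpr if, which one commit in this entry tries and then reverts):

    #include <cmath>
    #include <type_traits>

    template <typename T>
    T from_interpolated(double v)
    {
        if constexpr (std::is_integral<T>::value)
            return static_cast<T>(std::lround(v));  // round integer outputs
        else
            return static_cast<T>(v);               // keep float outputs exact
    }

    // from_interpolated<unsigned char>(254.7) == 255, not 254
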
Davis King
1b83016abd update docs 2019-10-24 20:15:34 -04:00
Davis King
39327e71b7 Added note about using cmake's new fetch content feature. 2019-10-24 07:50:30 -04:00
Davis King
fced3587f1 fixing grammar 2019-07-27 09:03:14 -04:00
Davis King
5d03b99a08 Changed to avoid compiler warning. 2019-03-03 20:12:43 -05:00
Juha Reunanen
f685cb4249 Add U-net style skip connections to the semantic-segmentation example (#1600)
* Add concat_prev layer, and U-net example for semantic segmentation

* Allow to supply mini-batch size as command-line parameter

* Decrease default mini-batch size from 30 to 24

* Resize t1, if needed

* Use DenseNet-style blocks instead of residual learning

* Increase default mini-batch size to 50

* Increase default mini-batch size from 50 to 60

* Resize even during the backward step, if needed

* Use resize_bilinear_gradient for the backward step

* Fix function call ambiguity problem

* Clear destination before adding gradient

* Works OK-ish

* Add more U-tags

* Tweak default mini-batch size

* Define a simpler network when using Microsoft Visual C++ compiler; clean up the DenseNet stuff (leaving it for a later PR)

* Decrease default mini-batch size from 24 to 23

* Define separate dnn filenames for MSVC++ and other compilers

* Add documentation for the resize_to_prev layer; move the implementation so that it comes after mult_prev

* Fix previous typo

* Minor formatting changes

* Reverse the ordering of levels

* Increase the learning-rate stopping criterion back to 1e-4 (was 1e-8)

* Use more U-tags even on Windows

* Minor formatting

* Latest MSVC 2017 builds fast, so there's no need to limit the depth any longer

* Tweak default mini-batch size again

* Even though latest MSVC can now build the extra layers, it does not mean we should add them!

* Fix naming
2019-01-06 09:11:39 -05:00
Juha Reunanen
cf5e25a95f Problem: integer overflow when calculating sizes (may happen e.g. with very large images) (#1148)
* Problem: integer overflow when calculating sizes (may happen e.g. with very large images)
Solution: change some types from (unsigned) long to size_t (see the illustration after this entry)

# Conflicts:
#	dlib/dnn/tensor.h

* Fix the fact that std::numeric_limits<unsigned long>::max() isn't always the same number

* Revert serialization changes

* Review fix: use long long instead of size_t

* From long to long long all the way

* Change more types to (hopefully) make the compiler happy

* Change many more types to size_t

* Change even more types to size_t

* Minor type changes
2018-03-01 07:27:29 -05:00
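The overflow fixed in this entry is easy to reproduce: on LLP64 platforms such as 64-bit Windows, (unsigned) long is 32 bits, so pixel-count arithmetic wraps for very large images. A minimal illustration of the bug and the widening fix:

    #include <iostream>

    int main()
    {
        unsigned long nr = 70000, nc = 70000;  // a very large image

        // 70000 * 70000 = 4.9e9, which wraps mod 2^32 when unsigned long is 32-bit
        unsigned long bad = nr * nc;

        // widen before multiplying, as the commits above do with long long / size_t
        long long good = static_cast<long long>(nr) * nc;

        std::cout << bad << " vs " << good << "\n";
    }
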
Davis King
e6fe1e0259 merged 2017-12-25 08:51:15 -05:00
Davis King
c9faacce29 Fixed typos 2017-12-25 08:50:34 -05:00