* wip: dcgan-example
* update example to use leaky_relu and remove bias from net
* wip
* it works!
* add more comments
* add visualization code
* add example documentation
* rename example
* fix comment
* better comment format
* fix the noise generator seed
* add message to hit enter for image generation
* fix srand, too
* add std::vector overload to update_parameters
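A minimal sketch of what such an overload can look like, assuming dlib's existing sstack-based update_parameters and the make_sstack helper; the exact signature here is an assumption, not the actual patch:

```cpp
// Hypothetical sketch: forward a std::vector of solvers to the existing
// sstack-based overload, so callers no longer have to build the stack
// themselves. make_sstack and the sstack overload are assumed to match
// dlib's DNN interface.
template <typename solver_type>
void update_parameters(std::vector<solver_type>& solvers, double learning_rate)
{
    update_parameters(make_sstack(solvers), learning_rate);
}
```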
* improve training stability
* better naming of variables
make sure it is clear that we update the generator with the discriminator's
gradient, using fake samples and true labels
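To illustrate that step, a rough sketch of the pattern, loosely modeled on dlib's DCGAN example; all names (discriminator, generator, fake_samples_tensor, noises_tensor, g_solvers, ...) are placeholders rather than the example's actual code:

```cpp
// Sketch: update the generator using the discriminator's gradient for
// fake samples that are deliberately labeled as real ("true" labels).
std::vector<float> real_labels(batch_size, 1.f);  // lie about the fakes
// Forward the fake samples through the discriminator with real labels.
discriminator.compute_loss(fake_samples_tensor, real_labels.begin());
// Backpropagate to get the gradient with respect to the fake samples.
discriminator.subnet().back_propagate_error(fake_samples_tensor);
const dlib::tensor& d_grad = discriminator.subnet().get_final_data_gradient();
// Push that gradient through the generator and update only its parameters;
// the discriminator itself is not updated in this step.
generator.subnet().back_propagate_error(noises_tensor, d_grad);
generator.subnet().update_parameters(dlib::make_sstack(g_solvers), learning_rate);
```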
* fix comment: generator -> discriminator
* update leaky_relu docs to match the relu ones
* replace not with !
* add Davis' suggestions to make training more stable
* use tensor instead of resizable_tensor
* do not use dnn_trainer for discriminator
* add leaky_relu activation layer
* add inplace case for leaky_relu and test_layer
* make clear that alpha is not learned by leaky_relu
* remove branch from cuda kernel
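For reference, the function these commits implement (a sketch, not dlib's actual kernel code): leaky_relu passes positive inputs through unchanged and scales negative ones by a fixed coefficient alpha that is never learned, and the conditional can be written branch-free, which is what the kernel change above exploits:

```cpp
#include <algorithm>

// f(x) = x          if x > 0
//      = alpha * x  otherwise   (alpha is a hyperparameter, not learned)
inline float leaky_relu(float x, float alpha)
{
    // Branch-free form equivalent to (x > 0 ? x : alpha * x); on the GPU
    // this avoids a divergent branch inside the kernel.
    return std::max(x, 0.f) + alpha * std::min(x, 0.f);
}

// df/dx = 1 for x > 0, alpha otherwise; recoverable from the sign of the
// input alone, which makes the in-place case straightforward.
inline float leaky_relu_gradient(float x, float alpha)
{
    return x > 0.f ? 1.f : alpha;
}
```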
* Problem: The CUDA runtime allocates resources for each thread, and apparently those resources are not freed when the corresponding threads terminate. Therefore, each instantiation of dnn_trainer leaks a bit of GPU memory.
Solution: Add the possibility to pass thread pools in from outside. This way, subsequent dnn_trainer instances can reuse the same threads, and there is no memory leak.
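A minimal sketch of the usage pattern this enables; the exact dnn_trainer overload and the thread-pool type are assumptions here, only the idea of sharing one set of worker threads across trainer instances is taken from the commit:

```cpp
// Hypothetical sketch: construct the worker threads once and hand them to
// every dnn_trainer, so destroying a trainer no longer strands the CUDA
// resources tied to its threads.
auto pools = std::make_shared<trainer_thread_pools>();  // assumed type

for (int round = 0; round < num_rounds; ++round)
{
    net_type net;
    dlib::dnn_trainer<net_type> trainer(net, dlib::sgd(), {0}, pools);  // overload assumed
    trainer.train_one_step(mini_batch_samples, mini_batch_labels);
    // The trainer is destroyed here, but the threads in 'pools' live on
    // and are reused by the next instance.
}
```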
* Add helpful comments
* add loss_multiclass_log_weighted
* fix class name in loss_abstract
* add loss_multiclass_log_weighted test
* rename test function to match class name
* fix typo
* reuse the weighted label struct across weighted losses
* do not break compatibility with loss_multiclass_log_per_pixel_weighted
* actually test the loss and fix docs
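For clarity, the semantics of the new loss (an illustrative computation only, not dlib's implementation; the struct mirrors the weighted label idea mentioned above):

```cpp
#include <cstddef>
#include <vector>

// Each training label carries a per-sample weight in addition to the class.
struct weighted_label
{
    unsigned long label  = 0;    // class index
    float         weight = 1.f;  // importance of this sample
};

// loss = -(1/N) * sum_i  w_i * log p_i(y_i)
double weighted_multiclass_log_loss(
    const std::vector<std::vector<double>>& log_probs,  // per-sample log-softmax
    const std::vector<weighted_label>& labels)
{
    double loss = 0;
    for (std::size_t i = 0; i < labels.size(); ++i)
        loss -= labels[i].weight * log_probs[i][labels[i].label];
    return loss / labels.size();
}
```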
* fix build with gcc 9
* Prevent a compiler warning due to using int instead of a size type
* Convert the status type to long to prevent compiler warnings
* The returned number of items read from a buffer is specified as type "streamsize"
Co-authored-by: Hartwig <git@skywind.eu>
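A small example of the type in question, using only standard iostream calls: the standard library reports how many characters were read as std::streamsize, so storing it in an int invites exactly the narrowing/sign-compare warnings the commits above address.

```cpp
#include <fstream>
#include <ios>

// Read up to max_bytes characters and return how many were actually read,
// using std::streamsize end to end instead of int.
std::streamsize read_chunk(std::ifstream& in, char* buffer, std::streamsize max_bytes)
{
    in.read(buffer, max_bytes);
    return in.gcount();  // std::streamsize by definition
}
```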
* imglab: add support for using chinese whispers for more automatic clustering
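The clustering step builds on dlib's existing chinese_whispers API; a rough sketch of how it is typically driven (the feature vectors and the distance threshold are placeholders, not imglab's actual code):

```cpp
#include <dlib/clustering.h>
#include <dlib/matrix.h>
#include <vector>

std::vector<unsigned long> cluster_samples(
    const std::vector<dlib::matrix<float,0,1>>& feats)
{
    // Connect every pair of samples that is close enough in feature space.
    std::vector<dlib::sample_pair> edges;
    for (size_t i = 0; i < feats.size(); ++i)
        for (size_t j = i + 1; j < feats.size(); ++j)
            if (dlib::length(feats[i] - feats[j]) < 0.6)  // placeholder threshold
                edges.push_back(dlib::sample_pair(i, j));

    // chinese_whispers assigns one cluster label per input sample.
    std::vector<unsigned long> labels;
    dlib::chinese_whispers(edges, labels);
    return labels;
}
```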
* widgets: refactor out zooming from wheel handling
* imglab: add keyboard shortcuts for zooming (tools/imglab/src/metadata_editor.cpp)
cuda_data_ptr<T>. Also moved some memcpy() functions to namespace scope
so that calls like dlib::cuda::memcpy() can reference them. It was
slightly annoying before.
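A minimal usage sketch of the namespace-scope form; the header path and the exact overload set of dlib::cuda::memcpy for host containers are assumptions here:

```cpp
#include <dlib/cuda/cuda_data_ptr.h>  // path assumed
#include <vector>

void roundtrip()
{
    std::vector<float> host(1024, 0.f);
    dlib::cuda::cuda_data_ptr<float> device(host.size());

    // Fully qualified calls now resolve at namespace scope.
    dlib::cuda::memcpy(device, host);  // host -> device (overload assumed)
    dlib::cuda::memcpy(host, device);  // device -> host (overload assumed)
}
```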
* Adding Mish activation function
* Bug fixed
* Added test for Mish
* Removed unwanted comments
* Simplified calculation and removed comments
* Kernel added and gradient computation simplified
* Gradient simplified
* Corrected gradient calculations
* Compute the output directly when the input is greater than 8
* Minor correction
* Remove unnecessary pgrad for Mish
* Removed CUDNN calls
* Add standalone CUDA implementation of the Mish activation function
* Fix in-place gradient in the CUDA version; refactor a little
* Swap delta and omega
* Need to have src (=x) (and not dest) available for Mish
* Add test case that makes sure that cuda::mish and cpu::mish return the same results
* Minor tweaking to keep the previous behaviour
Co-authored-by: Juha Reunanen <juha.reunanen@tomaattinen.com>
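Putting the pieces above together, a reference formulation of Mish (a sketch, not dlib's kernel code): mish(x) = x * tanh(softplus(x)), where softplus(x) = log(1 + exp(x)). For x > 8, softplus(x) is essentially x and its tanh is 1 to float precision, which is the shortcut behind "compute the output directly when the input is greater than 8"; and the gradient needs the original input x, which is why src rather than dest must be kept around.

```cpp
#include <cmath>

inline float mish(float x)
{
    if (x > 8.f)
        return x;  // tanh(softplus(x)) == 1 to float precision here
    const float sp = std::log1p(std::exp(x));
    return x * std::tanh(sp);
}

// d/dx mish(x) = tanh(sp) + x * sigmoid(x) * (1 - tanh(sp)^2), sp = softplus(x).
// Computed from the input x (src), not from the output (dest).
inline float mish_gradient(float x)
{
    const float sp      = std::log1p(std::exp(x));
    const float tanh_sp = std::tanh(sp);
    const float sigmoid = 1.f / (1.f + std::exp(-x));
    return tanh_sp + x * sigmoid * (1.f - tanh_sp * tanh_sp);
}
```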