* Corrected interpolate_bilinear for non-RGB images so they no longer collapse into grayscale (#2089)
* interpolate_bilinear now uses pixel_to_vector for shorter code.
* pixels now have operator!=.
* Explicitly use float arithmetic in interpolation
* Using C++11 static_assert() in interpolation.
* Corrected documentation for interpolate_bilinear and interpolate_quadratic
* Corrected formatting near interpolate_bilinear
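A minimal sketch of the fixed behavior, assuming dlib's resize_image/interpolate_bilinear interface (the image sizes and pixel values here are made up for illustration):

```cpp
#include <dlib/image_transforms.h>
#include <dlib/array2d.h>
#include <dlib/pixel.h>

int main()
{
    using namespace dlib;
    // A four-channel (non-RGB) image: before the fix, bilinear interpolation
    // could collapse such pixel types through a grayscale conversion.
    array2d<rgb_alpha_pixel> src(32, 32), dst(64, 64);
    assign_all_pixels(src, rgb_alpha_pixel(255, 0, 0, 128));
    resize_image(src, dst, interpolate_bilinear());  // all channels preserved
}
```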
* wip: attempt to use CUDA for the multi-channel MSE loss
* wip: maybe this is a step in the right direction
* Try to fix dereferencing the truth data (#1)
* Fix memory layout
* fix loss scaling and update tests
* rename temp1 to temp
* re-add lambda captures for output_width and output_height
clangd was complaining about these and suggested removing them
in the first place:
```
Lambda capture 'output_height' is not required to be captured for this use (fix available)
Lambda capture 'output_width' is not required to be captured for this use (fix available)
```
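This is the C++ subtlety behind that hint: a lambda may read a const integer initialized with a constant expression without capturing it, because that read is not an odr-use. A minimal sketch, with the surrounding code hypothetical (only the variable names come from the commit):

```cpp
#include <cstddef>
#include <vector>

void example()
{
    // Const integers initialized with constant expressions: reading their
    // values inside the lambda is not an odr-use, so the captures are
    // technically unnecessary -- hence clangd's hint.  They were re-added
    // anyway for explicitness.
    const long output_width = 64;
    const long output_height = 64;
    auto fill = [output_width, output_height](std::vector<float>& v)
    {
        v.assign(static_cast<std::size_t>(output_width * output_height), 0.0f);
    };
    std::vector<float> buf;
    fill(buf);
}
```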
* add a weighted_loss typedef to loss_multiclass_log_weighted_ for consistency
* update docs for weighted losses
* refactor the multi-channel loss and add CPU-CUDA tests
* make operator() const
* make error relative to the loss value
Co-authored-by: Juha Reunanen <juha.reunanen@tomaattinen.com>
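A hypothetical sketch of the "error relative to the loss value" idea from the commit above: scale the CPU/CUDA comparison tolerance by the loss magnitude so large losses are not failed on an absolute epsilon alone (the function and its names are illustrative, not the test's actual code):

```cpp
#include <algorithm>
#include <cmath>

// Compare CPU and CUDA loss values with a tolerance relative to the loss
// magnitude; the max with 1.0 keeps the check sane for near-zero losses.
bool losses_match(double loss_cpu, double loss_cuda, double tol = 1e-6)
{
    const double scale = std::max({1.0, std::abs(loss_cpu), std::abs(loss_cuda)});
    return std::abs(loss_cpu - loss_cuda) / scale < tol;
}
```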
* fix some warnings when running tests
* revert changes in CMakeLists.txt
* update example to make use of newly promoted method
* update tests to make use of newly promoted methods
* wip: dcgan-example
* update example to use leaky_relu and remove bias from net
* wip
* it works!
* add more comments
* add visualization code
* add example documentation
* rename example
* fix comment
* better comment format
* fix the noise generator seed
* add message to hit enter for image generation
* fix srand, too
* add std::vector overload to update_parameters
* improve training stability
* better naming of variables
make sure it is clear we update the generator with the discriminator's
gradient, using fake samples and true labels (sketched below)
* fix comment: generator -> discriminator
* update leaky_relu docs to match the relu ones
* replace `not` with `!`
* add Davis' suggestions to make training more stable
* use tensor instead of resizable_tensor
* do not use dnn_trainer for discriminator
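Assuming dlib's net interface and the promoted/overloaded methods mentioned above (compute_loss, back_propagate_error, get_final_data_gradient, and the std::vector overload of update_parameters), a minimal sketch of that generator update; the tiny stand-in nets, tensor names, and solvers are placeholders, not the example's real code:

```cpp
#include <dlib/dnn.h>
#include <vector>

using namespace dlib;

// Tiny stand-in nets; the real example is much deeper.
using gen_type = fc<16, input<matrix<float>>>;                  // generator (no loss layer)
using dis_type = loss_binary_log<fc<1, input<matrix<float>>>>;  // discriminator

void generator_update_step(
    gen_type& generator,
    dis_type& discriminator,
    const resizable_tensor& noises_tensor,
    std::vector<adam>& g_solvers,
    double learning_rate
)
{
    // Forward the noise through the generator to get fake samples.
    const tensor& fake_samples = generator.forward(noises_tensor);

    // Label the fakes as *real* (1): computing the loss fills in the
    // gradient at the discriminator's output.
    const std::vector<float> real_labels(fake_samples.num_samples(), 1.f);
    discriminator.compute_loss(fake_samples, real_labels.begin());

    // Back-propagate through the discriminator, feed its input gradient into
    // the generator, and update only the generator's parameters.
    discriminator.back_propagate_error(fake_samples);
    generator.back_propagate_error(noises_tensor, discriminator.get_final_data_gradient());
    generator.update_parameters(g_solvers, learning_rate);
}
```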
* add leaky_relu activation layer
* add inplace case for leaky_relu and test_layer
* make clear that alpha is not learned by leaky_relu
* remove branch from cuda kernel
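For context, a sketch of how the layer is used and of the branchless elementwise form the kernel commit refers to; the network shape and the 0.01 alpha are this sketch's choices, not values from the commits:

```cpp
#include <dlib/dnn.h>
#include <algorithm>

using namespace dlib;

// leaky_relu is used like any other activation layer; its alpha is a fixed
// hyperparameter, not a learned weight.
using net_type = loss_multiclass_log<fc<10, leaky_relu<fc<32, input<matrix<float>>>>>>;

// Branchless form of the activation: f(x) = max(x, 0) + alpha * min(x, 0),
// which avoids divergent branches when evaluated on the GPU.
inline float leaky(float x, float alpha = 0.01f)
{
    return std::max(x, 0.0f) + alpha * std::min(x, 0.0f);
}
```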
* Problem: The CUDA runtime allocates resources for each thread, and apparently those resources are not freed when the corresponding threads terminate. Therefore, each instantiation of dnn_trainer leaks a bit of GPU memory.
Solution: Add possibility to pass thread pools from outside. This way, subsequent dnn_trainer instances can use the same threads, and there's no memory leak.
* Add helpful comments
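A sketch of the general pattern only, not dnn_trainer's actual constructor signature: keep one long-lived dlib::thread_pool and hand it to every consumer, so per-thread CUDA runtime state is allocated once instead of leaking with every short-lived thread:

```cpp
#include <dlib/threads.h>

void run_round(dlib::thread_pool& pool)
{
    pool.add_task_by_value([]{ /* device work happens on a pooled thread */ });
    pool.wait_for_all_tasks();
}

int main()
{
    dlib::thread_pool pool(2);   // created once, reused by every round
    for (int i = 0; i < 10; ++i)
        run_round(pool);
}
```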
* add loss_multiclass_log_weighted
* fix class name in loss_abstract
* add loss_multiclass_log_weighted test
* rename test function to match class name
* fix typo
* reuse the weighted label struct across weighted losses
* do not break compatibility with loss_multiclass_log_per_pixel_weighted
* actually test the loss and fix docs
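A sketch of how the shared struct is meant to be used, assuming the weighted_label<label_type> interface described in dlib's loss docs (the network, classes, and weights here are illustrative):

```cpp
#include <dlib/dnn.h>
#include <vector>

using namespace dlib;

// loss_multiclass_log_weighted takes weighted_label<unsigned long> training
// labels: the same struct reused by the per-pixel weighted loss.
using net_type = loss_multiclass_log_weighted<fc<2, input<matrix<float>>>>;

void make_labels()
{
    std::vector<weighted_label<unsigned long>> labels;
    labels.emplace_back(0, 1.0f);  // common class, normal weight
    labels.emplace_back(1, 4.0f);  // rare class, counted 4x in the loss
}
```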
* fix build with gcc 9
* Prevent a compiler warning caused by using int instead of a size type
* Convert the status type to long to prevent compiler warnings
* The number of items read from a buffer is reported as type `streamsize`
Co-authored-by: Hartwig <git@skywind.eu>
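A small sketch of that last point (the function is hypothetical, but the gcount() return type is standard C++):

```cpp
#include <istream>

// The count of characters read from a stream is a std::streamsize, so
// storing or returning it as int would narrow and trigger the warning.
std::streamsize read_block(std::istream& in, char* buf, std::streamsize n)
{
    in.read(buf, n);
    return in.gcount();  // std::streamsize, not int
}
```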
* imglab: add support for using chinese whispers for more automatic clustering
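For reference, a minimal sketch of the dlib primitive this feature builds on: chinese_whispers clusters a graph given as sample_pair edges (the edges below are made up):

```cpp
#include <dlib/clustering.h>
#include <vector>

void cluster_example()
{
    using namespace dlib;
    // Edges connect items considered similar; chinese_whispers assigns a
    // cluster label to every node appearing in the edge list.
    std::vector<sample_pair> edges;
    edges.push_back(sample_pair(0, 1));
    edges.push_back(sample_pair(1, 2));
    edges.push_back(sample_pair(3, 4));

    std::vector<unsigned long> labels;
    const unsigned long num_clusters = chinese_whispers(edges, labels);
    // labels[i] is node i's cluster; here {0,1,2} and {3,4} form
    // num_clusters == 2 clusters.
}
```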
* widgets: refactor out zooming from wheel handling
* imglab: add keyboard shortcuts for zooming (tools/imglab/src/metadata_editor.cpp)