* [FFT] added fft, ifft, fft_inplace and ifft_inplace overloads for std::vector
* [FFT] - static_assert that T is a floating-point type. There are static asserts in mkl_fft and kiss_fft, but it doesn't hurt to add them in the matrix API too so users get helpful errors higher up in the API.
* [FFT] - added documentation for std::vector overloads in matrix_fft_abstract.h file
Co-authored-by: pf <pf@pf-ubuntu-dev>
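A minimal usage sketch of the new overloads (only the function names come from the commit; the call patterns are assumed to mirror the existing matrix-based fft/ifft API):
```cpp
#include <complex>
#include <vector>
#include <dlib/matrix.h>  // matrix_fft.h comes in through dlib/matrix.h

int main()
{
    // T must be a floating point type, otherwise the static_assert mentioned
    // above fires at compile time.
    std::vector<std::complex<double>> signal = {{1,0}, {0,1}, {-1,0}, {0,-1}};

    // Out-of-place transforms (assumed to return a new std::vector).
    auto spectrum  = dlib::fft(signal);
    auto roundtrip = dlib::ifft(spectrum);

    // In-place variants added in the same commit.
    dlib::fft_inplace(signal);
    dlib::ifft_inplace(signal);
}
```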
* Early termination for find_{min,max}_global
This patch adds a callback to allow the user to request cancellation of a
search using find_{min,max}_global. This enables users to cancel
searches when they are no longer relevant, or when the user has some
special knowledge of the solution that they can use to stop the search
early.
closes #2250
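A hedged sketch of what requesting cancellation could look like from user code; the parameter name (stop_condition), its position in the call, and the bool(double) callback signature are assumptions here, not quoted from the patch:
```cpp
#include <dlib/global_optimization.h>

int main()
{
    // Toy objective with its maximum at the origin.
    auto objective = [](double x, double y) { return -(x*x + y*y); };

    // Assumed form of the early-termination callback: it is shown the best
    // objective value found so far and returns true to cancel the search.
    auto stop_condition = [](double best_y_so_far) { return best_y_so_far > -1e-6; };

    auto result = dlib::find_max_global(
        objective,
        {-10, -10},                    // lower bounds
        { 10,  10},                    // upper bounds
        dlib::max_function_calls(200),
        stop_condition);
}
```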
* Moved default stopping condition into find_max_global.h since that's the code it relates to and did some minor cleanup.
Co-authored-by: Davis King <davis@dlib.net>
This also makes it so the num and max_runtime arguments can now appear
in any order.
This does include a minor backwards compatibility break: someone passing
in initial function evaluations by directly supplying an initializer list
like {function_evaluation({1.1, 0.9}, rosen({1.1, 0.9}))} may now have to
write std::vector<function_evaluation>{function_evaluation({1.1, 0.9},
rosen({1.1, 0.9}))} instead, or put it in a named variable first. This is
due to C++ not supporting direct use of initializer lists with variadic
templates in this context. In any case, I doubt many users do this, and it
is not hard for those that do to update as described above (see the sketch below).
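For reference, the workaround reads like this. A toy 2-D Rosenbrock-style rosen is defined inline, and the position of initial_evals in the call is an assumption (argument order is flexible, as noted above):
```cpp
#include <dlib/global_optimization.h>
#include <cmath>
#include <vector>

int main()
{
    auto rosen = [](const dlib::matrix<double,0,1>& x) {
        return 100*std::pow(x(1) - x(0)*x(0), 2) + std::pow(1 - x(0), 2);
    };

    // A bare initializer list argument no longer deduces through the variadic
    // template overloads, so name the vector type explicitly (or use a variable):
    std::vector<dlib::function_evaluation> initial_evals{
        dlib::function_evaluation({1.1, 0.9}, rosen({1.1, 0.9}))
    };

    auto result = dlib::find_min_global(
        rosen,
        {-10, -10},                    // lower bounds
        { 10,  10},                    // upper bounds
        dlib::max_function_calls(100),
        initial_evals);
}
```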
* fix find_min_global finding wrong answers
Previously, find_min_global would produce wrong output when passed
a collection of initial evaluations because the solver expected the
y-values to be multiplied by -1. This fix does that when minimizing.
closes #2283
* fixed tabbing
Co-authored-by: Davis King <davis@dlib.net>
* [FFT] added kissfft wrappers, moved kiss and mkl wrappers into separate files, call the right functions in matrix_fft.h
Co-authored-by: pf <pf@pf-ubuntu-dev>
Co-authored-by: Davis King <davis@dlib.net>
* Fixed CUDA detection in CMake: sometimes CMake reports that cuda is
available even though it knows it didn't find cublas, which is part of the
standard CUDA install. So we need to add a check to see if CMake *really
for realz* found CUDA.
* Added exponential distribution
* tab problem removed?
* forgot std::
* Also added Weibull distribution. Very useful indeed.
* Simple Weibull distribution unit test
* don't forget std::
* sorry, typo
* [RAND] - seed the random number generators for consistency and no nasty surprises
- added parameter for tolerance
- added unit test for exponential distribution
* [RAND] print the spinner more often
* [RAND] up the tolerance for kurtosis a bit
* [RAND] refactored parameters to reflect the documentation on Wikipedia.
* [RAND] added documentation to _abstract
* [RAND] I switched the order of the arguments to get_random_weibull and didn't update the unit tests. Oops.
Co-authored-by: pf <pf@pf-ubuntu-dev>
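A hedged usage sketch of the new distributions; the name get_random_exponential and the exact parameter order of get_random_weibull (scale lambda, shape k, location gamma, per the Wikipedia-style parameters mentioned above) are assumptions rather than quotes from the final API:
```cpp
#include <dlib/rand.h>
#include <iostream>

int main()
{
    dlib::rand rnd;
    rnd.set_seed("23");  // seeded, as the unit tests now do, for reproducibility

    // Exponential distribution with rate parameter lambda.
    const double lambda = 1.5;
    const double e = rnd.get_random_exponential(lambda);

    // Weibull distribution; the argument order here is an assumption.
    const double w = rnd.get_random_weibull(/*lambda=*/1.0, /*k=*/2.0, /*gamma=*/0.0);

    std::cout << e << " " << w << "\n";
}
```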
* fix typos
* add cifar-10
* open files in binary mode
* print messages with file name only, like mnist loader
* some fixes
* add mnist.cpp to CMakeLists.txt
* fix test index
* do not use iterator in cast
* add cifar.cpp to all
* Add Davis' suggestions
* no need to use namespace std and clean up empty lines
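A sketch of loading the new dataset, assuming the loader mirrors load_mnist_dataset; the function name load_cifar_10_dataset and its argument list are inferred from the commits above, not quoted from the header:
```cpp
#include <dlib/data_io.h>
#include <vector>

int main()
{
    std::vector<dlib::matrix<dlib::rgb_pixel>> training_images, testing_images;
    std::vector<unsigned long> training_labels, testing_labels;

    // Assumed to read the CIFAR-10 binary batch files (opened in binary mode,
    // as fixed above) from the given folder.
    dlib::load_cifar_10_dataset("cifar-10-batches-bin",
                                training_images, training_labels,
                                testing_images,  testing_labels);
}
```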
* [SERIALIZATION] fixed a bug when (de)serializing vector<complex<float>>. The DLIB_DEFINE_DEFAULT_SERIALIZATION macro uses __out and __in variable names for the ostream and istream objects respectively to avoid member variable name conflicts.
* Refactoring objects in DLIB_DEFINE_DEFAULT_SERIALIZATION to avoid name conflicts with user types
* removed tabs
* removed more tabs
Co-authored-by: pf <pf@pf-ubuntu-dev>
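For context, the macro in question generates serialize()/deserialize() for a user-defined type. A minimal sketch of the case that used to break, a std::vector<std::complex<float>> member (the struct and its members are purely illustrative):
```cpp
#include <dlib/serialize.h>
#include <complex>
#include <sstream>
#include <vector>

struct spectrum_frame
{
    std::vector<std::complex<float>> bins;
    double sample_rate = 0;

    // Generates the serialize()/deserialize() friends for the listed members.
    // The refactor above renames the macro's internal stream variables so they
    // cannot collide with member names in user types.
    DLIB_DEFINE_DEFAULT_SERIALIZATION(spectrum_frame, bins, sample_rate);
};

int main()
{
    spectrum_frame a;
    a.bins = {{1.0f, -1.0f}, {0.5f, 2.0f}};
    a.sample_rate = 48000;

    std::stringstream ss;
    dlib::serialize(a, ss);    // round-trip through a stream
    spectrum_frame b;
    dlib::deserialize(b, ss);
}
```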
* wip: layer normalization on cpu
* wip: add cuda implementation, not working yet
* wip: try to fix cuda implementation
* swap grid_stride_range and grid_stride_range_y: does not work yet
* fix CUDA implementation
* implement cuda gradient
* add documentation, move layer_norm, update bn_visitor
* add tests
* use stddev instead of variance in test (they are both 1, anyway)
* add test for means and invstds on CPU and CUDA
* rename visitor to disable_duplicative_bias
* handle more cases in the visitor_disable_input_bias
* Add tests for visitor_disable_input_bias
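A small sketch of how the new layer and visitor fit together; layer_norm and disable_duplicative_bias are the names used in the commits above, while the toy network itself is only illustrative:
```cpp
#include <dlib/dnn.h>

using namespace dlib;

// layer_norm normalizes each sample's activations, so the bias of the
// convolution feeding it becomes redundant.
using net_type = loss_multiclass_log<
    fc<10,
    relu<layer_norm<con<16, 3, 3, 1, 1,
    input<matrix<unsigned char>>>>>>>;

int main()
{
    net_type net;

    // New visitor: walks the network and disables bias terms that are made
    // redundant by a following normalization layer (batch norm or layer norm).
    disable_duplicative_bias(net);
}
```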
* add cuda test for loss_binary_log_per_pixel and some needed refactoring
* add cuda test for loss_multiclass_log_per_pixel
* forgot to add cpu version in loss
* remove a line I added by mistake
* fix typos
* declare label_to_ignore as static
* use tensor_index function instead of index method
* test cuda and cpu gradients values
* use DLIB_TEST instead of DLIB_CASSERT
* add cuda implementation for loss_multiclass_log_per_pixel_weighted
* add test for cuda and cpu implementations
* fix comment
* move weighted label to its own file
* Update path in doc
Co-authored-by: Davis E. King <davis685@gmail.com>
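For reference, a sketch of the weighted per-pixel loss whose CUDA path was added above; the (label, weight) constructor of weighted_label shown here should be treated as an assumption, and the tiny network is illustrative only:
```cpp
#include <dlib/dnn.h>

using namespace dlib;

// Segmentation-style toy net: the final con layer emits one channel per class.
using seg_net = loss_multiclass_log_per_pixel_weighted<
    con<3, 1, 1, 1, 1,
    relu<con<8, 3, 3, 1, 1,
    input<matrix<rgb_pixel>>>>>>;

int main()
{
    // One weighted label per pixel: rare or important classes can be given a
    // larger weight so they contribute more to the loss.
    matrix<weighted_label<uint16_t>> label(32, 32);
    for (long r = 0; r < label.nr(); ++r)
        for (long c = 0; c < label.nc(); ++c)
            label(r, c) = weighted_label<uint16_t>(0, 1.0f);
}
```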