particular, rather than just dumping exactly the last 400 loss values, it
now dumps 400 plus 10% of the loss buffer. This way, the size of the dump
scales with the steps-without-progress threshold. This is better because
when the user sets the steps-without-progress threshold to a larger value,
more loss values probably need to be examined to determine that training
should stop, so dumping more in that case ought to be better.
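A minimal sketch of that sizing rule (the buffer name, type, and function name here are illustrative, not necessarily dlib's actual members):

    #include <algorithm>
    #include <cstddef>
    #include <deque>

    // Dump the last 400 losses plus 10% of the buffer; since the buffer
    // holds roughly steps-without-progress-threshold values, the dump
    // grows with that threshold.
    std::size_t num_losses_to_dump(const std::deque<double>& previous_loss_values)
    {
        const std::size_t extra = previous_loss_values.size() / 10;
        return std::min(previous_loss_values.size(), std::size_t(400) + extra);
    }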
* Problem: log loss may become infinite if g[idx] goes to zero
Solution: clamp the input of the log function to at least 1e-6
* Parameterize the safe_log epsilon limit, and make the default value 1e-10
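A hedged sketch of the resulting helper (dlib's actual safe_log may differ in details):

    #include <algorithm>
    #include <cmath>

    // Clamp the argument so std::log never receives zero; epsilon is now
    // a parameter with a default of 1e-10.
    inline double safe_log(double x, double epsilon = 1e-10)
    {
        return std::log(std::max(x, epsilon));
    }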
specification, and this means "make the filter cover the whole input image
dimension". So it's just an easy way to size a filter exactly so that it
produces one output along that dimension.
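For illustration, assuming dlib's con<num_filters,nr,nc,stride_y,stride_x,SUBNET> layer syntax, the 0-size convention would be used like this (sketch, not code from the repo):

    #include <dlib/dnn.h>

    // A 0 in the filter-rows slot makes the filter span the full image
    // height, so this layer yields exactly one output row.
    using full_height_filter = dlib::con<16, 0, 3, 1, 1,
                                         dlib::input<dlib::matrix<float>>>;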
* generic_image all the way
Tried to hunt down and correct the functions that were using a
non-generic_image approach to dlib's generic images.
* generic image fix fix
Had to change a couple of const_image_view objects to non-const versions so that array access is possible in the rest of the code
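A sketch of the generic-image style these commits move toward, using dlib's image_view (the function name is illustrative):

    #include <dlib/image_processing/generic_image.h>
    #include <dlib/pixel.h>

    // Works with any type implementing dlib's generic image interface.
    // A non-const image_view (rather than const_image_view) is what makes
    // the view[r][c] writes below legal.
    template <typename image_type>
    void zero_image(image_type& img)
    {
        dlib::image_view<image_type> view(img);
        for (long r = 0; r < view.nr(); ++r)
            for (long c = 0; c < view.nc(); ++c)
                dlib::assign_pixel(view[r][c], 0);
    }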
* same
* back to sanity
* Add example of semantic segmentation using the PASCAL VOC2012 dataset
* Add note about Debug Information Format when using MSVC
* Make the upsampling layers residual as well
* Fix declaration order
* Use a wider net
* trainer.set_iterations_without_progress_threshold(5000); // (was 20000)
* Add residual_up
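A hedged sketch of what a residual upsampling block can look like in dlib's DNN DSL; this follows the residual pattern from dlib's examples but is not necessarily the exact residual_up added here:

    #include <dlib/dnn.h>

    // Main path: 3x3 conv + BN + relu, then 2x upsampling via cont
    // (transposed convolution). Skip path: the input upsampled 2x.
    // add_prev2 sums the two, making the upsampling block residual.
    template <int N, template <typename> class BN, typename SUBNET>
    using residual_up_sketch =
        dlib::add_prev2<
        dlib::cont<N,2,2,2,2,
        dlib::relu<BN<
        dlib::con<N,3,3,1,1,
        dlib::skip1<
        dlib::tag2<
        dlib::cont<N,2,2,2,2,
        dlib::tag1<SUBNET>>>>>>>>>;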
* Process entire directories of images (just easier to use)
* Simplify the network structure so that builds on Visual Studio finish faster, or at all
* Remove the training example from CMakeLists, because it's too much for the 32-bit MSVC++ compiler to handle
* Remove the probably-now-unnecessary set_dnn_prefer_smallest_algorithms call
* Review fix: remove the batch normalization layer from right before the loss
* Review fix: point out that only the Visual C++ compiler has problems.
Also expand the instructions on how to run MSBuild.exe to circumvent the problems.
* Review fix: use dlib::match_endings
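For reference, a typical use of dlib::match_endings when collecting image files (the directory path and function name are illustrative):

    #include <dlib/dir_nav.h>
    #include <string>
    #include <vector>

    // Collect all files under dir whose names end in one of the given
    // extensions.
    std::vector<dlib::file> find_images(const std::string& dir)
    {
        return dlib::get_files_in_directory_tree(
            dlib::directory(dir),
            dlib::match_endings(".jpeg .jpg .png"));
    }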
* Review fix: use dlib::join_rows. Also add some comments, and instructions on where to download the pre-trained net.
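dlib::join_rows concatenates two matrices side by side; a tiny usage sketch:

    #include <dlib/matrix.h>

    // join_rows puts b to the right of a; both must have the same number
    // of rows.
    dlib::matrix<double> side_by_side()
    {
        dlib::matrix<double> a(2,2), b(2,3);
        a = 1; b = 2;                          // fill with constants
        return dlib::join_rows(a, b);          // result is 2x5
    }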
* Review fix: make formatting comply with dlib style conventions.
* Review fix: output training parameters.
* Review fix: remove #ifndef __INTELLISENSE__
* Review fix: use std::string instead of char*
* Review fix: update interpolation_abstract.h to say that extract_image_chips can now take the interpolation method as a parameter
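A hedged sketch of the updated call; the exact parameter position is an assumption based on the interpolation_abstract.h note above:

    #include <dlib/image_transforms.h>
    #include <vector>

    // img and chip_locations come from the caller; chips receives the
    // extracted regions, resampled with bilinear interpolation.
    void get_chips(const dlib::matrix<dlib::rgb_pixel>& img,
                   const std::vector<dlib::chip_details>& chip_locations,
                   dlib::array<dlib::matrix<dlib::rgb_pixel>>& chips)
    {
        dlib::extract_image_chips(img, chip_locations, chips,
                                  dlib::interpolate_bilinear());
    }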
* Fix whitespace formatting
* Add more comments
* Fix finding image files for inference
* Resize inference test output to the size of the input; add clarifying remarks
* Resize net output even in calculate_accuracy
* In the end, crop the net output instead of resizing it by interpolation
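A sketch of that cropping approach, assuming the net output is a dlib::matrix of label indices (names and types here are illustrative):

    #include <dlib/matrix.h>
    #include <cstdint>

    // Take the top-left input-sized region of the (possibly larger) net
    // output instead of interpolating it down to the input size.
    dlib::matrix<std::uint16_t> crop_to_input(
        const dlib::matrix<std::uint16_t>& net_output,
        long input_nr, long input_nc)
    {
        return dlib::subm(net_output, 0, 0, input_nr, input_nc);
    }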
* For clarity, add an empty line in the console output
* Determine lapack fortran linking convention in CMake
Looks for a LAPACK function both with and without a trailing underscore. This allows use of CLAPACK on Windows, where functions are decorated, while fortran_id.h otherwise assumes they are not.
* Use enable_preprocessor_switch for LAPACK decoration detection
* Add lapack decoration defines to config.h.in
* Use correct variable for lapack_libraries
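On the C++ side, the detected convention plugs into a fortran_id.h-style macro; a hedged sketch (the macro and define names here are illustrative):

    // Set from the CMake check described above.
    #ifdef LAPACK_DECORATED_WITH_UNDERSCORE
    #  define DLIB_FORTRAN_ID(id) id##_   // dgesv -> dgesv_
    #else
    #  define DLIB_FORTRAN_ID(id) id      // dgesv -> dgesv (e.g. CLAPACK on Windows)
    #endif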
general templates in dlib::relational_operators. I did this because the
templates in dlib::relational_operators sometimes cause clashes with other
code in irritating ways.
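To illustrate the kind of clash meant here, a catch-all template like the following, once visible via a using-directive or lookup, can ambiguate or hijack operators that unrelated types define for themselves (sketch, not dlib's exact code):

    namespace relational_sketch {
        // Matches *any* pair of types, deriving != from ==; this is the
        // sort of overly general template that collides with other code.
        template <typename A, typename B>
        bool operator!=(const A& a, const B& b) { return !(a == b); }
    }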