specification and this means "make the filter cover the whole input image
dimension". So it's just an easy way to make a filter sized exactly so that it
will have one output along that dimension.
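For illustration, a minimal sketch of the idea, assuming dlib's con layer and that 0 is the special filter-size value described above (names here are illustrative):

    // nr==0: the filters span every input row, so this layer produces
    // exactly one output along the row dimension.
    using whole_row_filter = dlib::con<16, 0, 3, 1, 1,
                             dlib::input<dlib::matrix<float>>>;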
* generic_image all the way
Tried to hunt down and correct the functions that were not using dlib's
generic image interface.
* Another generic image fix
Had to change a couple of const_image_view objects to non-const image_view versions so that array access is possible in the rest of the code.
* same
* back to sanity
* Add example of semantic segmentation using the PASCAL VOC2012 dataset
* Add note about Debug Information Format when using MSVC
* Make the upsampling layers residual as well
* Fix declaration order
* Use a wider net
* Lower the iterations-without-progress threshold: trainer.set_iterations_without_progress_threshold(5000); // (was 20000)
* Add residual_up
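A hedged sketch of what a residual upsampling block can look like in dlib's network DSL (a simplified illustration, not necessarily the example's exact definition): the main path upsamples with a transposed convolution (cont), and the skip path is upsampled the same way, so the two can be summed.

    template <int N, template <typename> class BN, typename SUBNET>
    using residual_up =
        dlib::add_prev2<                // sum the main and skip paths
        dlib::cont<N,2,2,2,2,           // skip path: 2x2 stride-2 transposed conv (exact 2x)
        dlib::skip1<
        dlib::tag2<
        dlib::relu<BN<
        dlib::con<N,3,3,1,1,            // main path: refine after upsampling
        dlib::cont<N,2,2,2,2,
        dlib::tag1<SUBNET>>>>>>>>>>;

Instantiated e.g. as residual_up<64, dlib::bn_con, SUBNET> for training, or with dlib::affine in place of bn_con for inference.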
* Process entire directories of images (just easier to use)
* Simplify network structure so that builds finish even on Visual Studio (faster, or at all)
* Remove the training example from CMakeLists, because it's too much for the 32-bit MSVC++ compiler to handle
* Remove the probably-now-unnecessary set_dnn_prefer_smallest_algorithms call
* Review fix: remove the batch normalization layer from right before the loss
* Review fix: point out that only the Visual C++ compiler has problems.
Also expand the instructions on how to run MSBuild.exe to circumvent the problems.
* Review fix: use dlib::match_endings
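A minimal usage sketch (directory name illustrative):

    #include <dlib/dir_nav.h>
    #include <vector>
    // Find all image files under a directory tree by extension.
    const std::vector<dlib::file> files =
        dlib::get_files_in_directory_tree("images", dlib::match_endings(".jpeg .jpg .png"));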
* Review fix: use dlib::join_rows. Also add some comments, and instructions where to download the pre-trained net from.
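A minimal join_rows sketch: it concatenates two matrices side by side, which is presumably how the example places images next to each other.

    #include <dlib/matrix.h>
    const dlib::matrix<double> a = dlib::ones_matrix<double>(2, 2);
    const dlib::matrix<double> b = dlib::uniform_matrix<double>(2, 3, 5.0);
    const dlib::matrix<double> ab = dlib::join_rows(a, b);  // 2 rows, 5 columns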
* Review fix: make formatting comply with dlib style conventions.
* Review fix: output training parameters.
* Review fix: remove #ifndef __INTELLISENSE__
* Review fix: use std::string instead of char*
* Review fix: update interpolation_abstract.h to say that extract_image_chips can now take the interpolation method as a parameter
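A hedged sketch of the updated call, assuming an already-loaded image img and a rectangle rect (both placeholders):

    #include <dlib/image_transforms.h>
    // Extract a 64x64 chip using nearest-neighbor interpolation.
    dlib::matrix<dlib::rgb_pixel> chip;
    dlib::extract_image_chip(img,
                             dlib::chip_details(rect, dlib::chip_dims(64, 64)),
                             chip,
                             dlib::interpolate_nearest_neighbor());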
* Fix whitespace formatting
* Add more comments
* Fix finding image files for inference
* Resize inference test output to the size of the input; add clarifying remarks
* Resize net output even in calculate_accuracy
* After all, crop the net output instead of resizing it by interpolation
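A minimal sketch of the cropping approach (sizes and names illustrative):

    #include <dlib/matrix.h>
    #include <cstdint>
    // The net's output can be larger than the input; take the input-sized
    // sub-matrix instead of interpolating.
    dlib::matrix<uint16_t> net_output(230, 310);
    const long input_nr = 224, input_nc = 300;
    const dlib::matrix<uint16_t> cropped =
        dlib::subm(net_output, 0, 0, input_nr, input_nc);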
* For clarity, add an empty line in the console output
* Determine lapack fortran linking convention in CMake
Looks for a LAPACK function with and without a trailing underscore. This allows use of CLAPACK on Windows, where functions are decorated but fortran_id.h otherwise assumes they are not.
* Use enable_preprocessor_switch for LAPACK decoration detection
* Add lapack decoration defines to config.h.in
* Use correct variable for lapack_libraries
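A sketch of the idea behind the decoration switch, with hypothetical macro names (dlib's actual defines may differ):

    // CMake checks whether the linked LAPACK exports, e.g., dgesvd or
    // dgesvd_, and a define in config.h then selects the matching spelling.
    #ifdef LAPACK_DECORATION_TRAILING_UNDERSCORE
    #define DLIB_FORTRAN_ID(id) id##_
    #else
    #define DLIB_FORTRAN_ID(id) id
    #endif

    // Resolves to dgesvd or dgesvd_ (real argument list omitted).
    extern "C" void DLIB_FORTRAN_ID(dgesvd)();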
general templates in dlib::relational_operators. I did this because the
templates in dlib::relational_operators sometimes cause clashes with other
code in irritating ways.
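A sketch of the kind of catch-all template that was removed (paraphrased, not the exact code):

    namespace dlib { namespace relational_operators {
        // Matches almost any pair of types, defining > in terms of <.
        template <typename T, typename U>
        bool operator> (const T& a, const U& b) { return b < a; }
    }}
    // After "using namespace dlib::relational_operators;", this overload is
    // in scope for nearly every comparison, where it can collide with other
    // code's own operators.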
output 2 more statistics, which are the mean absolute error and the standard
deviation of the absolute error. This means these functions now return 4D
rather than 2D vectors.
I also made test_regression_function() take a non-const reference to the
regression function so that DNN objects can be tested.
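A small, hedged usage sketch of the expanded interface (the order of the first two statistics follows dlib's documentation; the trivial regressor and data are placeholders):

    #include <dlib/svm.h>
    #include <vector>

    int main()
    {
        auto f = [](const double& x) { return 2 * x; };  // stand-in regressor
        const std::vector<double> samples = {1.0, 2.0, 3.0};
        const std::vector<double> targets = {2.1, 3.9, 6.2};

        // Takes f by non-const reference; now returns 4 statistics:
        const dlib::matrix<double,1,4> r =
            dlib::test_regression_function(f, samples, targets);
        // r(0): mean squared error      r(1): correlation
        // r(2): mean absolute error     r(3): std. deviation of the absolute error
    }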
* Remove explicit specification of library path in dlib.cmake
Enables side-by-side multi-configuration builds on Windows
* Add dlib_LIBS
For backwards compatibility
0.5*MSE. The only thing this affects is the logging messages printed during
training, which were confusing since the reported loss was half the size you
would expect.
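An illustrative sketch of the reporting change (not dlib's actual code):

    // The internal objective is 0.5*MSE, but the value the trainer logs is
    // now scaled so it reads as the plain mean squared error.
    double reported_loss(double sum_squared_error, long n)
    {
        const double mse = sum_squared_error / n;
        return mse;  // previously this printed as 0.5 * mse
    }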