* Add per-pixel mean square loss
* Add documentation of loss_mean_squared_per_pixel_
* Add test case for per-pixel mean square loss: a simple autoencoder
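As a rough illustration of the new loss (a sketch, not the changelog's actual test: the layer sizes, strides, and training settings here are assumptions), a tiny autoencoder trained with loss_mean_squared_per_pixel might look like this:

```cpp
#include <dlib/dnn.h>
using namespace dlib;

// Encoder: 4 filters of 5x5, stride 1; decoder: a transposed conv back to 1 channel.
// With stride 1, dlib's default padding keeps the spatial size, so the decoder
// output matches the input image and the loss can compare them pixel by pixel.
using autoencoder = loss_mean_squared_per_pixel<
                    cont<1,5,5,1,1,
                    relu<con<4,5,5,1,1,
                    input<matrix<float>>>>>>;

int main()
{
    std::vector<matrix<float>> images;   // fill with training images
    autoencoder net;
    dnn_trainer<autoencoder> trainer(net);
    trainer.set_learning_rate(0.01);
    // In an autoencoder the targets are the inputs themselves.
    trainer.train(images, images);
}
```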
* Review fix: reorder params of function tensor_index, so that the order corresponds to the convention used in the rest of the dlib code base
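For reference, a hedged sketch of the convention being adopted: dlib tensors are laid out sample-major, then k, then row, then column, so a tensor_index helper with the reordered parameters would compute the flat index roughly as follows (the exact signature in the code base may differ).

```cpp
#include <dlib/dnn.h>

// Flat index into a dlib tensor's host buffer, with arguments ordered
// (sample, k, row, column) to match the rest of the dlib code base.
inline size_t tensor_index(const dlib::tensor& t, long sample, long k, long row, long column)
{
    return ((sample*t.k() + k)*t.nr() + row)*t.nc() + column;
}

// Usage: float value = t.host()[tensor_index(t, n, k, r, c)];
```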
* Review fix: add breaks as intended, and change the rest of the test accordingly
* Again a case where the tests already pass locally for me, but not on AppVeyor/Travis - this commit is a blind attempt to fix the problem
(and it also fixes a compiler warning)
* Remove linking to libpython on Linux
* Add libpython-free building on OSX
* Add automatic discovery of the Python include dir back in
* Make the Python libs not required for building on manylinux
* Testing on a given video, e.g. cv::VideoCapture cap("Sample.avi"), may break when the video runs out of frames before the user closes the main window.
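A minimal sketch of the kind of guard this refers to (variable names are placeholders): end the capture loop as soon as the video runs out of frames instead of handing an empty frame to the rest of the program.

```cpp
#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture cap("Sample.avi");
    cv::Mat frame;
    // read() returns false once no more frames are available, so the loop
    // ends cleanly instead of processing an empty frame.
    while (cap.read(frame) && !frame.empty())
    {
        // ... process the frame ...
    }
}
```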
* Add new loss for weighted pixel inputs (may be useful e.g. to emphasize rare classes)
* Deduplicate method loss_multiclass_log_per_pixel_(weighted_)::to_label
* Add a simple test case for weighted inputs
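As an illustration of how per-pixel weights might be supplied (a sketch assuming the weighted label type is exposed as loss_multiclass_log_per_pixel_weighted_::weighted_label with label and weight members; the 10x weight on class 1 is just an example of emphasizing a rare class):

```cpp
#include <dlib/dnn.h>
using namespace dlib;

using weighted_label = loss_multiclass_log_per_pixel_weighted_::weighted_label;

// Turn a plain label image into a weighted one, boosting an assumed rare class.
matrix<weighted_label> make_weighted_labels(const matrix<uint16_t>& labels)
{
    matrix<weighted_label> weighted(labels.nr(), labels.nc());
    for (long r = 0; r < labels.nr(); ++r)
    {
        for (long c = 0; c < labels.nc(); ++c)
        {
            weighted_label wl;
            wl.label  = labels(r,c);
            wl.weight = (labels(r,c) == 1) ? 10.f : 1.f;  // emphasize class 1
            weighted(r,c) = wl;
        }
    }
    return weighted;
}
```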
(also, fix a typo in test_tensor_resize_bilienar's name)
* Add loss_multiclass_log_per_pixel_weighted_ to loss_abstract.h
* Decrease the amount of weighting
* There's no need to train for a very long time
* Added a check to see if __ARM_NEON__ is defined. Now we can use the following command from the build directory: cmake --build . --config Release
* Rename use_arm_neon.cmake to check_if_neon_available.cmake for clarity, minor tidying up of the script, and simplifying the try_compile() code for ARM NEON.
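A sketch of the sort of translation unit such a try_compile() check might build; the exact source used by check_if_neon_available.cmake may differ.

```cpp
// Fails to compile unless the compiler targets ARM NEON.
#ifndef __ARM_NEON__
#error "ARM NEON is not available"
#endif

#include <arm_neon.h>

int main()
{
    float32x4_t v = vdupq_n_f32(1.0f);   // trivially exercise a NEON intrinsic
    (void)v;
    return 0;
}
```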
* Problem: Visual Studio's vcpkgsrv.exe constantly uses a single CPU core,
apparently never finishing whatever it's trying to do. Moreover,
this issue prevents some operations like switching from Debug to
Release (and vice versa) in the IDE. (Your mileage may vary.)
Workaround: Keep manually killing the vcpkgsrv.exe process.
Solution: Disable IntelliSense for some files. Which files? Unfortunately
this seems to be a trial-and-error process.
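The mechanism hinted at here is the __INTELLISENSE__ macro, which Visual Studio's IntelliSense compiler defines; wrapping template-heavy code in a guard like the following sketch (contents are placeholders) hides it from IntelliSense while the real compiler still sees it:

```cpp
#ifndef __INTELLISENSE__

// ... template-heavy ResNet definitions and DNN unit tests go here;
//     only IntelliSense skips them, the actual build still compiles them ...

#endif // __INTELLISENSE__
```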
* Disable IntelliSense for the ResNet declarations
* Disable IntelliSense for even more stuff
* Disable IntelliSense for all DNN unit tests
* #288 - add new layer loss_multiclass_log_matrixoutput for semantic-segmentation purposes
* In semantic segmentation, add capability to ignore individual pixels when computing gradients
* In semantic segmentation, 65535 classes ought to be enough for anybody
* Divide matrix output loss by matrix dimensions too, in order to make losses related to differently sized matrices more comparable
- note that this affects the required learning rate as well!
* Review fix: avoid matrix copy
* Review fix: rename to loss_multiclass_log_per_pixel
* Review fix: just use uint16_t as the label type
* Add more tests: check that network params and outputs are correct
* Improve error message when output and truth matrix dimensions do not match
* Add test case verifying that a single call of loss_multiclass_log_per_pixel equals multiple corresponding calls of loss_multiclass_log
* Fix test failure by training longer
* Remove the test case that fails on Travis for some reason, even though it works on AppVeyor and locally
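To illustrate the per-pixel ignore capability added in this group (a sketch using the layer's final name and uint16_t labels; the label values are placeholders): pixels set to loss_multiclass_log_per_pixel_::label_to_ignore contribute neither to the loss nor to the gradient.

```cpp
#include <dlib/dnn.h>
using namespace dlib;

// Build a truth image where only the pixels we actually know are labeled.
matrix<uint16_t> make_partial_labels(long nr, long nc)
{
    const uint16_t ignore = loss_multiclass_log_per_pixel_::label_to_ignore;
    matrix<uint16_t> labels = uniform_matrix<uint16_t>(nr, nc, ignore);  // ignored everywhere...
    labels(0,0) = 1;   // ...except the pixels whose class is known
    return labels;
}
```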
easily pass arguments to any optional parameters of a loss layer's to_label()
routine. For instance, it makes it more convenient to set loss_mmod_'s
adjust_threshold parameter.
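Assuming this fragment refers to add_loss_layer's process() routine, which forwards trailing arguments to the loss layer's to_label(), a sketch of the convenience in question (the network type and the -0.5 threshold are placeholders):

```cpp
#include <dlib/dnn.h>

// Run an mmod detector with a lowered detection threshold: extra arguments to
// process() are forwarded to the loss layer's to_label(), so here -0.5 ends up
// as loss_mmod_'s adjust_threshold.
template <typename net_type, typename image_type>
std::vector<dlib::mmod_rect> detect(net_type& net, const image_type& img)
{
    return net.process(img, -0.5);
}
```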