* #288 - add new layer loss_multiclass_log_matrixoutput for semantic-segmentation purposes
* In semantic segmentation, add capability to ignore individual pixels when computing gradients (see the sketch after this list)
* In semantic segmentation, 65535 classes ought to be enough for anybody
* Divide matrix output loss by matrix dimensions too, in order to make losses related to differently sized matrices more comparable
- note that this affects the required learning rate as well!
* Review fix: avoid matrix copy
* Review fix: rename to loss_multiclass_log_per_pixel
* Review fix: just use uint16_t as the label type
* Add more tests: check that network params and outputs are correct
* Improve error message when output and truth matrix dimensions do not match
* Add test case verifying that a single call of loss_multiclass_log_per_pixel equals multiple corresponding calls of loss_multiclass_log
* Fix test failure by training longer
* Remove the test case that fails on Travis for some reason, even though it works on AppVeyor and locally
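Taken together, the commits above add what became dlib's loss_multiclass_log_per_pixel layer. Below is a minimal sketch of the resulting usage — the one-layer network is made up purely for illustration, but the loss layer, its matrix<uint16_t> labels, and the label_to_ignore constant are the real interface:

    #include <dlib/dnn.h>

    using namespace dlib;

    // A deliberately tiny per-pixel classifier (real segmentation nets are
    // much deeper): 3 classes via a 1x1 convolution, so the output tensor
    // has the same rows/cols as the input image.
    using net_type = loss_multiclass_log_per_pixel<
                     con<3,1,1,1,1,
                     input<matrix<rgb_pixel>>>>;

    int main()
    {
        net_type net;

        matrix<rgb_pixel> img(100,100);
        assign_all_pixels(img, rgb_pixel(0,0,0));

        // Labels are uint16_t, one per pixel.  A pixel marked with
        // label_to_ignore contributes neither loss nor gradient, which is
        // the "ignore individual pixels" capability added above.
        matrix<uint16_t> label(100,100);
        label = 1;
        label(0,0) = loss_multiclass_log_per_pixel_::label_to_ignore;

        dnn_trainer<net_type> trainer(net);
        trainer.train_one_step({img}, {label});
    }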
* Add process() and process_batch() to add_loss_layer. These routines let you
  easily pass arguments to any optional parameters of a loss layer's to_label()
  routine. For instance, it makes it more convenient to set loss_mmod_'s
  adjust_threshold parameter. (A sketch follows below.)
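For illustration, here is roughly how process() can forward a detection threshold to loss_mmod_'s to_label() — the detector architecture and the "detector.dat" file below are invented for brevity; the point is only the extra argument to process():

    #include <dlib/dnn.h>
    #include <dlib/image_io.h>

    using namespace dlib;

    // An intentionally tiny MMOD detector type, just to have something
    // concrete to call process() on.
    using net_type = loss_mmod<con<1,9,9,1,1,
                     input_rgb_image_pyramid<pyramid_down<6>>>>;

    int main()
    {
        net_type net;
        deserialize("detector.dat") >> net;  // assume a trained detector

        matrix<rgb_pixel> img;
        load_image(img, "image.jpg");

        // net(img) would use loss_mmod_'s default adjust_threshold of 0.
        // process() forwards the extra argument to to_label(), so this
        // lowers the detection threshold and yields more detections.
        std::vector<mmod_rect> dets = net.process(img, -0.5);
    }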
* Document a Visual Studio problem where the IntelliSense helper process
  (vcpkgsrv.exe) can keep eating CPU and memory, apparently never finishing
  whatever it's trying to do. Moreover, this issue prevents some operations,
  like switching from Debug to Release (and vice versa) in the IDE. (Your
  mileage may vary.)
  - Workaround: keep manually killing the vcpkgsrv.exe process.
  - Solution: disable IntelliSense for some files. Which files? Unfortunately,
    this seems to be a trial-and-error process.
* Add an option to imglab that controls whether the displayed image's
  histogram is equalized or unmodified. This way, if you are looking at
  particularly dark or badly contrasted images you can toggle this mode and
  maybe get a better view of what you are labeling.
* Change the random_cropper so that instead of talking about min and max
  object height, it's now min and max object size. This way, if you have
  objects that are short and wide (i.e. objects where the relevant dimension
  is width rather than height) you will get sensible behavior out of the
  random cropper. (See the sketch below.)
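A sketch of the resulting interface, assuming the set_min_object_size(long_dim, short_dim) and set_max_object_size(fraction) setters that current dlib exposes (the pixel values here are made up):

    #include <dlib/image_transforms.h>

    using namespace dlib;

    int main()
    {
        random_cropper cropper;
        cropper.set_chip_dims(200, 200);

        // The minimum is now given for both dimensions: the object's longer
        // side must be at least 70 pixels and its shorter side at least 30,
        // so short-and-wide boxes are treated the same as tall-and-narrow.
        cropper.set_min_object_size(70, 30);

        // The maximum is a fraction of the crop dimensions.
        cropper.set_max_object_size(0.7);
    }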
* Change the network-to-python conversion tool so that, instead of making one
  big python file with everything in it, it now makes a python file as before
  plus an additional binary file with all the weights in it. This way, if you
  are working with a network with a very large number of weights you won't
  end up with a crazy large python script.