Also changed the default outer border padding from 0 to 11. This affects even
previously trained models, so any model that doesn't explicitly set the outer
padding to something else will now have a padding of 11. This should be a more
reasonable value for most networks.
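For example (a minimal sketch; it assumes this entry refers to the pyramid
input layer, input_rgb_image_pyramid, and its pyramid_outer_padding() /
set_pyramid_outer_padding() accessors):

    #include <dlib/dnn.h>
    #include <dlib/image_transforms.h>
    #include <iostream>

    int main()
    {
        using namespace dlib;

        // Assumption: the outer border padding discussed above is the one
        // exposed by the pyramid input layer.
        input_rgb_image_pyramid<pyramid_down<6>> inp;

        std::cout << "default outer padding: "
                  << inp.pyramid_outer_padding() << "\n";  // now 11 by default

        // Models that want the old behavior must set it back explicitly.
        inp.set_pyramid_outer_padding(0);
    }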
better results when run on float and double images. There was needless
rounding to integers happening in the bilinear interpolation. Now, if you work
with a float image, the entire process runs without any integer rounding.
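A minimal sketch of what this enables, assuming it covers the generic bilinear
interpolation used by routines such as resize_image():

    #include <dlib/image_transforms.h>
    #include <dlib/matrix.h>

    int main()
    {
        using namespace dlib;

        // A small float image with fractional pixel values.
        matrix<float> img = ones_matrix<float>(4,4)*0.25f;

        // Upsample with the default bilinear interpolation.  The interpolated
        // values stay floating point instead of being rounded to integers.
        matrix<float> bigger(8,8);
        resize_image(img, bigger);
    }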
* Make noncopyable constructor and destructor default
C++11 provides the functionality.
Defining empty functions causes all classes derived from noncopyable
to be non-trivially constructible and non-trivially destructible.
For example, matrix with compile-time layout by definition does not
require an explicit destructor and should be trivially destructible;
however, deriving from noncopyable makes it non-trivially
destructible. This also affects vector<T, 2> and vector<T, 3>.
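For illustration (a schematic sketch of the pattern, not a verbatim copy of
dlib's header; the derived type below is made up):

    #include <type_traits>

    // Defaulted (rather than empty user-provided) constructor and destructor
    // keep classes derived from noncopyable trivially destructible.
    class noncopyable
    {
    protected:
        noncopyable() = default;
        ~noncopyable() = default;
        noncopyable(const noncopyable&) = delete;
        noncopyable& operator=(const noncopyable&) = delete;
    };

    struct small_vec : noncopyable
    {
        float data[3];
    };

    // Holds with "= default"; would fail with empty {} bodies.
    static_assert(std::is_trivially_destructible<small_vec>::value,
                  "derived types stay trivially destructible");

    int main() {}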
* Delete array2d copy constructor and assignment operators
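In practice that means copies must now be explicit, e.g. (a small sketch;
assign_image() is just one way to copy the pixel data):

    #include <dlib/array2d.h>
    #include <dlib/image_transforms.h>

    int main()
    {
        using namespace dlib;

        array2d<unsigned char> a(4,4);
        assign_all_pixels(a, 255);

        // array2d<unsigned char> b(a);  // no longer compiles: copy constructor deleted
        // b = a;                        // no longer compiles: copy assignment deleted

        // Copies have to be made explicitly, for example with assign_image().
        array2d<unsigned char> b;
        assign_image(b, a);
    }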
The PNG_LIBRARY variable set by libpng's FindPNG.cmake does not contain the
zlib dependency. This causes the CHECK_FUNCTION_EXISTS(png_create_read_struct
LIBPNG_IS_GOOD) check to fail with linker errors and makes dlib fall back to
its internal copy of libpng. Updated the build to use libpng's PNG_LIBRARIES
variable instead, which also adds both the PNG and zlib libraries to
dlib_needed_libraries.
when it determines that there have been a lot of steps without progress and
shrinks the learning rate. Instead, it removes only the oldest 100. The
problem with the old way of removing all the loss values in the history was
that, if you set the steps-without-progress threshold to a very high number,
you would often observe that the last few learning rates were obviously not
making progress. However, since all the previous loss values had been
forgotten, the trainer needed to fully repopulate its loss history from
scratch before it would figure this out. The new behavior keeps the trainer
from wasting time running these excess mini-batches at learning rates that
are obviously not useful.
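For reference, the threshold in question is the one configured on the trainer,
e.g. (a minimal sketch; the network definition is just a placeholder):

    #include <dlib/dnn.h>

    using namespace dlib;

    // Placeholder network, just enough to construct a trainer.
    using net_type = loss_multiclass_log<fc<10, relu<fc<32, input<matrix<float>>>>>>;

    int main()
    {
        net_type net;
        dnn_trainer<net_type> trainer(net);

        // How many mini-batch steps without progress the trainer tolerates
        // before shrinking the learning rate.  With the change above, setting
        // this very high no longer forces the trainer to rebuild its whole
        // loss history after each learning rate shrink.
        trainer.set_iterations_without_progress_threshold(50000);
        trainer.set_learning_rate(0.1);
    }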
across scales regardless of the input image size. Previously, if you gave it
really large or really small images, it was biased towards producing only
large patches or only small patches, respectively.
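A minimal sketch, assuming this entry refers to dlib's random_cropper (usage
pattern as in the dnn_mmod example programs):

    #include <dlib/image_processing.h>
    #include <dlib/image_transforms.h>

    int main()
    {
        using namespace dlib;

        // Training images of any size, plus their object boxes.  A single
        // dummy image stands in for a real dataset here.
        std::vector<matrix<rgb_pixel>> images;
        std::vector<std::vector<mmod_rect>> boxes;
        images.push_back(matrix<rgb_pixel>(400,400));
        boxes.push_back({mmod_rect(rectangle(10,10,60,60))});

        random_cropper cropper;
        cropper.set_chip_dims(200, 200);

        // Sample a mini-batch of crops.  The crops now cover the configured
        // range of scales regardless of how big or small the inputs are.
        std::vector<matrix<rgb_pixel>> crops;
        std::vector<std::vector<mmod_rect>> crop_boxes;
        cropper(32, images, boxes, crops, crop_boxes);
    }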
concat layer's backward() method. It was assigning the gradient to the
previous layers instead of adding it, as required by the layer interface
specification. This change also noticeably speeds up concat layers, since
only one CUDA kernel launch now happens per concat operation rather than one
kernel launch for each sample in a tensor.
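For clarity, the rule at issue looks roughly like this (a simplified sketch of
a custom layer's backward(), not dlib's actual concat implementation):

    #include <dlib/dnn.h>

    // backward() must accumulate into the previous layer's gradient, because
    // several layers may feed gradients into the same subnetwork.
    template <typename SUBNET>
    void example_backward(const dlib::tensor& gradient_input, SUBNET& sub)
    {
        dlib::tensor& prev_grad = sub.get_gradient_input();

        // Wrong: overwriting prev_grad with gradient_input discards whatever
        // gradient other layers have already contributed.

        // Right: add this layer's contribution to the existing gradient,
        // i.e. prev_grad = 1*prev_grad + 1*gradient_input.
        dlib::tt::add(1, prev_grad, 1, gradient_input);
    }

    int main() {}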