This allows us to greatly simplify the self-supervised learning example:
- the computation in user code was a bit too distracting
- avoids duplicated computation/allocation of this matrix
- avoids edge case where net outputs are zero due to trainer synchronization
* typo
* - added compile-time information to the audio object. Not convinced this is actually needed. I'm perfectly happy just using the ffmpeg::frame object; I'm pretty sure I'm the only user who cares about audio.
- created resizing_args and resampling_args
* smaller videos for unit tests
* shorter videos for unit tests
* - decoder and demuxer: you now resize or resample at the time of read. Therefore you don't set resizing or resampling parameters in the constructor; you pass them to read()
- added templated read() function
- simplified load_frame()
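A hypothetical sketch of the new call shape (the exact dlib::ffmpeg signatures may differ; the argument types below just mirror the description above):
```cpp
#include <dlib/media.h>

void dump_frames()
{
    dlib::ffmpeg::demuxer cap("video.mp4");
    dlib::ffmpeg::frame   f;

    // Resizing/resampling is now requested per read() call rather than being
    // fixed in the constructor.
    while (cap.read(f, dlib::ffmpeg::resizing_args{}, dlib::ffmpeg::resampling_args{}))
    {
        // use f ...
    }
}
```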
* inherit from resizing_args and resampling_args
* reorganised the tests to segregate decoding, demuxing, encoding and muxing as much as possible
* much more basic example
* demuxing examples split
* examples
* fixing examples
* wip
* Fix load_frame()
* added frame-specific tests
* - makes sense to have a set_params() method rather than constructing a new object and moving it. The old approach works and does exactly the same thing as calling set_params() now, but it can look a bit weird.
* notes on defaults and good pairings
* Update ffmpeg_demuxer.h
Watch out for `DLIB_ASSERT` statements. Maybe one of the unit tests should build with asserts enabled.
* Update ffmpeg_details.h
* Update ffmpeg_muxer.h
* WIP
* WIP
* - simplified details::resizer
- added frame::set_params()
- added frame::clear()
- forward packet directly into correct queue
* pick best codec if not specified
* added image data
* warn when we're choosing an appropriate codec
* test load_frame()
* - for some reason, you sometimes get warning messages about too many b-frames. Resetting pict_type suppresses this.
- you can move freshly decoded frames directly out.
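For reference, the pict_type reset mentioned above is the standard FFmpeg way of letting the encoder pick the picture type itself; a minimal sketch (the surrounding function is illustrative):
```cpp
extern "C" {
#include <libavutil/frame.h>
}

void recycle_decoded_frame(AVFrame* frame)
{
    // Clearing the picture type before re-encoding lets the encoder choose
    // I/P/B frames itself and silences the "too many b-frames" warnings.
    frame->pict_type = AV_PICTURE_TYPE_NONE;
}
```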
* callback passed to push()
* I think it's prettier this way
* WIP
* full callback API for decoder
* updated tests
* updated example
* check the template parameter is callable and has exactly one argument before getting its first argument
* Potential bug fix
* - write out the enable_if's explicitly. It's fine. I think it's clear what's going on if someone cares
- guard push() with a boolean which asserts when recursion is detected
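A minimal sketch of that recursion guard (member names are illustrative, and dlib would use DLIB_ASSERT rather than plain assert):
```cpp
#include <cassert>
#include <functional>

class decoder_sketch
{
public:
    void push(int packet, const std::function<void(int)>& callback)
    {
        // The callback passed to push() must not call push() again on the
        // same object; the boolean makes that precondition checkable.
        assert(!pushing && "recursion detected inside push() callback");
        pushing = true;
        callback(packet);   // hand the decoded result to user code
        pushing = false;
    }

private:
    bool pushing = false;
};
```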
* pre-conditions on callbacks: no recursion
---------
Co-authored-by: pf <pf@me>
Co-authored-by: Your name <you@example.com>
* muxing
* Add HSV support (#2758)
* Add HSV support
* Add tests
* Update dlib/pixel.h
Co-authored-by: Adrià Arrufat <1671644+arrufat@users.noreply.github.com>
* Add HSV struct and make more things const
---------
Co-authored-by: Davis E. King <davis685@gmail.com>
* Fix imglab changing the current dir too soon (#2761)
* A bit of cleanup
---------
Co-authored-by: pf <pf@me>
Co-authored-by: Adrià Arrufat <1671644+arrufat@users.noreply.github.com>
Co-authored-by: Davis E. King <davis685@gmail.com>
Co-authored-by: Davis King <davis@dlib.net>
* docs
* callbacks for encoder
* shorter video
* shorter video
* added is_byte type trait
* leave muxer for next PR
* added overloads for set_layout() and get_layout() in details namespace
* unit test
* example
* build
* overloads for ffmpeg < 5
* Update examples/ffmpeg_video_encoding_ex.cpp
Co-authored-by: Adrià Arrufat <1671644+arrufat@users.noreply.github.com>
* Update dlib/media/ffmpeg_abstract.h
Co-authored-by: Davis E. King <davis685@gmail.com>
* Update dlib/media/ffmpeg_abstract.h
Co-authored-by: Davis E. King <davis685@gmail.com>
* Update dlib/media/ffmpeg_abstract.h
Co-authored-by: Davis E. King <davis685@gmail.com>
* Update dlib/media/ffmpeg_abstract.h
Co-authored-by: Davis E. King <davis685@gmail.com>
* Update dlib/media/ffmpeg_abstract.h
Co-authored-by: Davis E. King <davis685@gmail.com>
* as per suggestion
* remove requires clause
* Update examples/ffmpeg_video_encoding_ex.cpp
Co-authored-by: Davis E. King <davis685@gmail.com>
* Update dlib/media/ffmpeg_abstract.h
Co-authored-by: Davis E. King <davis685@gmail.com>
* Update dlib/media/ffmpeg_abstract.h
Co-authored-by: Davis E. King <davis685@gmail.com>
* Update dlib/media/ffmpeg_abstract.h
Co-authored-by: Davis E. King <davis685@gmail.com>
* Update dlib/media/ffmpeg_muxer.h
Co-authored-by: Davis E. King <davis685@gmail.com>
* use dlib::logger
* oops
* Update dlib/media/ffmpeg_muxer.h
Co-authored-by: Davis E. King <davis685@gmail.com>
* Update dlib/media/ffmpeg_demuxer.h
* Update dlib/media/ffmpeg_demuxer.h
* Update dlib/media/ffmpeg_abstract.h
---------
Co-authored-by: pf <pf@me>
Co-authored-by: Davis E. King <davis685@gmail.com>
Co-authored-by: Adrià Arrufat <1671644+arrufat@users.noreply.github.com>
* - enhanced list_muxers()
- added fail() error handling helper function
- moved framerate setting to decoder_image_args
* docs
* oops
* - don't use std::endl, use `\n` instead
- use fail(). See, on average, it removes lines of code
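A minimal sketch of the kind of fail() helper meant here (the real one in the dlib::ffmpeg details may differ, e.g. it may log through dlib::logger):
```cpp
#include <iostream>

// Print everything passed in, then return false, so call sites collapse to
// one line, e.g.  if (ret < 0) return fail("av_read_frame() error ", ret);
template <typename... Args>
bool fail(Args&&... args)
{
    (std::cout << ... << args) << '\n';   // C++17 fold expression
    return false;
}
```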
* convenient constructor for demuxer
* ffmpeg5 support
* added docs for == -1
* oops
* grouping audio channel compatibility stuff together
* more compatibility stuff
* more channel abstractions
* build with ffmpeg 5
* install assembler
* cache the installation
* cmake doesn't like using ~ in filepath
* at some point this will work
* I think I need to change the key
* test FFmpeg-n5.1.3_try3 cache
* bug fix
* Update build_cpp.yml
Giving this another go
* Update build_cpp.yml
Disable building documentation and CLI tools
* Update CMakeLists.txt
Fix cmake script when using 3.8.0 and expecting imported targets to work when there are link flags included
* - use environment variables
- on ubuntu 18 gcc7, use ffmpeg 3.2.18
* correct way of dereferencing variables ?
* can't get variables to work
* Revert "can't get variables to work"
This reverts commit 5eef96a43e.
* Revert "correct way of dereferencing variables ?"
This reverts commit e8ff95f5c6.
* Revert "- use environment variables"
This reverts commit a6938333d5.
* using ffmpeg 3.2.18 with ubuntu18 gcc7
* Update build_cpp.yml
Disable ubuntu18 job for now. Hopefully no more cancelled jobs, then I can re-enable it
* Re-enabled ubuntu18 job. Hopefully this time it won't get cancelled
* Fixed bad indentation
* Can go in details namespace
* Update dlib/CMakeLists.txt
Co-authored-by: Davis E. King <davis685@gmail.com>
* use details namespace
* remove declaration. It's in details now
* don't need get_channels_from_layout()
---------
Co-authored-by: pf <pf@me>
Co-authored-by: Davis E. King <davis685@gmail.com>
* - use add_executable directly
- use target_compile_definitions()
- strip binaries in release mode
* Added a comment
---------
Co-authored-by: pf <pf@me>
Co-authored-by: Davis E. King <davis685@gmail.com>
* - added ffmpeg stuff to cmake
* - added observer_ptr
* ffmpeg utils
* WIP
* - added ffmpeg_decoder
* config file for test data
* another test file
* install ffmpeg
* added ffmpeg_demuxer
* install all ffmpeg libraries
* support older version of ffmpeg
* simplified loop
* - test converting to dlib object
- added docs
- support older ffmpeg
* added convert() overload
* added comment
* only register stuff when API not deprecated
* - fixed version issues
- fixed decoding
* added tests for ffmpeg_demuxer
* removed unused code
* test GIF
* added docs
* added audio test
* test for audio
* more tests
* review changes
* don't need observer_ptr
* made deps public. I could be wrong but just in case.
* - added some static asserts. Some areas of the code might do memcpy's on arrays of pixels, which requires the pixel structs to be packed. Check this (see the sketch after this list).
- added convert() functions
- changed default decoder options. By default, always decode to RGB and S16 audio
- added convenience constructor to demuxer
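A minimal sketch of the packing checks meant in the first point above (these sizes hold for dlib's plain unsigned-char pixel structs):
```cpp
#include <dlib/pixel.h>

// If pixels are memcpy'd to/from ffmpeg buffers, the pixel structs must be
// tightly packed: exactly one byte per channel, no padding.
static_assert(sizeof(dlib::rgb_pixel)       == 3, "rgb_pixel must be packed");
static_assert(sizeof(dlib::bgr_pixel)       == 3, "bgr_pixel must be packed");
static_assert(sizeof(dlib::rgb_alpha_pixel) == 4, "rgb_alpha_pixel must be packed");
```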
* - no longer need opencv
* oops. I let that slip
* - made a few functions public
- more precise requires clauses
* enhanced example
* - avoid FFMPEG_INITIALIZED being optimized away at link time
- added decoding example
* - avoid -Wunused-parameter error
* constexpr and noexcept correctness. This probably makes no difference to performance, BUT it's what the core guidelines tell you to do. It does, however, demonstrate how complicated and unnecessarily verbose C++ is becoming. Sigh, maybe one day I'll make the switch to something that doesn't make my eyes twitch.
* - simplified metadata structure
* hopefully more educational
* added another example
* ditto
* typo
* screen grab example
* whoops
* avoid -Wunused-parameter errors
* ditto
* - added methods to av_dict
- print the demuxer format options that were not used
- enhanced webcam_face_pose_ex.cpp so you can set webcam options
* if height and width are specified, attempt to set video_size in format_options; otherwise set the bilinear resizer.
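The idea, as a hedged sketch ("video_size" is a real FFmpeg format/device option; the surrounding function is illustrative):
```cpp
#include <string>

extern "C" {
#include <libavutil/dict.h>
}

// Ask the device/demuxer for the requested size directly via "video_size";
// only fall back to a software (bilinear) resize if this isn't honoured.
void request_size(AVDictionary*& format_options, int width, int height)
{
    const std::string size = std::to_string(width) + "x" + std::to_string(height);
    av_dict_set(&format_options, "video_size", size.c_str(), 0);
}
```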
* updated docs
* once again, the ffmpeg APIs do a lot for you. It's a matter of knowing which APIs to call.
* made header-only
* - some Werror thing
* don't use type_safe_union
* - templated sample type
- reverted deep copy of AVFrame for frame copy constructor
* - added is_pixel_type and is_pixel_check
* unit tests for pixel traits
* enhanced is_image_type type trait and added is_image_check
* added unit tests for is_image_type
* added pix_traits, improved convert() functions
* bug fix
* get rid of -Werror=unused-variable error
* added a type alias
* that's the last of the manual memcpys gone. We're using the ffmpeg API everywhere now for copying frames to buffers and back
* missing doc
* set framerate for webcam
* list input devices
* oops. I was trying to make ffmpeg 5 happy but I've given up on ffmpeg v5 compatibility in this PR. Future PR.
* enhanced the information provided by list_input_devices and list_output_devices
* removed vscode settings.json file
* - added a type trait for checking whether a type is complete. This is useful for writing type traits that check whether other types have type-trait specializations, but also for other things. For example, std::unique_ptr uses something similar to this.
* Davis was keen to simply check that pixel_traits is specialised. That's equivalent to checking that pixel_traits<> is complete for some type.
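A minimal sketch of the completeness check (the classic sizeof/void_t trick; dlib's actual trait name, and the void_t it uses, may differ):
```cpp
#include <type_traits>

template <typename T, typename = void>
struct is_complete_type : std::false_type {};

// sizeof(T) is only well-formed for complete types, so this partial
// specialization is picked exactly when T is complete.
template <typename T>
struct is_complete_type<T, std::void_t<decltype(sizeof(T))>> : std::true_type {};

// Checking that pixel_traits<P> is specialized for some pixel type P then
// reduces to is_complete_type<dlib::pixel_traits<P>>::value.
```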
* code review
* just use the void_t in dlib/type_traits.h
* one liners
* just need is_image_check
* more tests for is_image_type
* I think this is correct
* removed printf
* better docs
* Keep opencv out of it
* keep the old face pose example, then add a new one which uses dlib's ffmpeg wrappers
* revert
* revert
* better docs
* better docs
---------
Co-authored-by: pf <pf@me>
I was using the data augmentation recommended for the ImageNet dataset, which is not well suited
for CIFAR-10.
After switching to augmentation suited to CIFAR-10, the test accuracy increased by 1 point.
I was using the normalized features za as both matrices, instead of za and zb.
I noticed this because the empirical cross-correlation matrix was symmetrical,
which it is not supposed to be. It does not affect anything, as the matrix was
computed properly in the loss.
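For clarity, a minimal sketch of the intended computation (variable names assumed): za and zb are the normalized embeddings of the two augmented views, one row per sample.
```cpp
#include <dlib/matrix.h>

dlib::matrix<float> empirical_cross_correlation(
    const dlib::matrix<float>& za,
    const dlib::matrix<float>& zb
)
{
    // Mixing both views gives a generally asymmetric matrix; using za twice
    // by mistake yields a symmetric one, which is how the bug was spotted.
    return dlib::trans(za) * zb / static_cast<float>(za.nr());
}
```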
* Add multiclass SVM trainer to svm/auto.h
* Use a matrix<double> and add an overload for matrix<float>
* Replace typedef with using and use normalizer from normalized_function
* Remove extra ;
* use better names for decision function variables
* fix comments format and grammar
* remove unneeded includes
* Update dlib/svm/auto_abstract.h
* Update the assert to use 3 samples (as there is 3-fold CV)
* Remove unneeded captures in lambda
* Update dlib/svm/auto_abstract.h
* Update dlib/svm/auto_abstract.h
Co-authored-by: Davis E. King <davis685@gmail.com>
* Use only a fraction of labels for the multiclass SVM in SSL example
This change makes the self-supervised example closer to reality:
usually, only a fraction of the dataset is labeled, but we can harness
the whole dataset by using a self-supervised method and then train the
classifier using the fraction of labeled data.
Using 10% of the labels results in a test accuracy of 87%, compared to
the 89% we got when training the multiclass SVM with all labels.
I just added an option to change the fraction of labeled data, so that
users can experiment with it.
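A minimal sketch of what "using a fraction of the labels" means here (names and the selection scheme are illustrative, not the example's exact code):
```cpp
#include <random>
#include <vector>

#include <dlib/matrix.h>
#include <dlib/pixel.h>

// Keep roughly `fraction` of the labeled samples for the SVM; the
// self-supervised network still trains on every image.
void sample_labeled_subset(
    const std::vector<dlib::matrix<dlib::rgb_pixel>>& images,
    const std::vector<unsigned long>& labels,
    const double fraction,
    std::vector<dlib::matrix<dlib::rgb_pixel>>& sub_images,
    std::vector<unsigned long>& sub_labels
)
{
    std::mt19937 rng{0};
    std::bernoulli_distribution keep(fraction);
    for (size_t i = 0; i < images.size(); ++i)
    {
        if (keep(rng))
        {
            sub_images.push_back(images[i]);
            sub_labels.push_back(labels[i]);
        }
    }
}
```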
* Update examples/dnn_self_supervised_learning_ex.cpp
Co-authored-by: Davis E. King <davis685@gmail.com>
* Use add_compile_definitions, enable -Wpedantic and use colors
* Use lround in rectangle and drectangle
* Use round in bigint
* Use round in canvas_drawing
* Modernize image_transforms
* Modernize image_pyramid
* Fix error in image_pyramid
* Modernize matrix
* Fix error in image_pyramid again
* Modernize fhog test
* Modernize image_keypoint/surf
* Remove extra ;
* Add BGR(A) to pixel_traits
* add support for reading webp
* Apply Davis' suggestions and fix formatting
* Fix signed/unsigned warning
* Update decoding paths
* update pixel traits documentation
* Add support for writing WebP images
* Simplify image_saver code
* WIP: add tests, PSNR is low but images look good
* Add lossless compression for quality > 100
* Fix build when WebP support is disabled
* Use C++ stream instead of C-style FILE
* Fix indentation
* Use reinterpret_cast instead of C-style cast
* Improve impl::impl_save_webp signature
* Remove empty line
* Use switch statement and clean up code
* Update Copyright and test libwebp on Linux
* Fix formatting in github workflow
* Fix operator== for bgr_alpha_pixel
* Show where the test fails
* Add libwebp to CI for the remaining Linux workflows
* Use filename consistently
* Improve message with wrong pixel type
* Fix tests for WebP images
* Prevent saving images which are too large and improve error messages
* Use max dimension from WebP header directly
* Update documentation, index and release notes
* Update dlib/image_saver/save_webp_abstract.h
Co-authored-by: Martin T. H. Sandsmark <martin.sandsmark@kde.org>
Co-authored-by: Davis E. King <davis685@gmail.com>
* Fix Barlow Twins loss gradient
* Update reference test accuracy after fix
* Round the empirical cross-correlation matrix
Just a tiny modification that allows the values to actually reach 255 (perfect white).
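The arithmetic behind that: truncation never reaches 255, rounding does.
```cpp
#include <cmath>

// Map a correlation value c in [0, 1] to an 8-bit gray level.
const double c = 0.999;
const auto truncated = static_cast<unsigned char>(c * 255);              // 254
const auto rounded   = static_cast<unsigned char>(std::lround(c * 255)); // 255
```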
* Replace fc classifier with svm_multiclass_linear_trainer
* Mention about find_max_global()
Co-authored-by: Davis E. King <davis@dlib.net>
* Use double instead of float for extracted features
Co-authored-by: Davis E. King <davis@dlib.net>
* fix compilation with double features
* Revert "fix compilation with double features"
This reverts commit 76ebab4b91.
* Revert "Use double instead of float for extracted features"
This reverts commit 9a50809ebf.
* Find best C using global optimization
Co-authored-by: Davis E. King <davis@dlib.net>
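A hedged sketch of how find_max_global() picks C (the objective and bounds are placeholders, not the example's exact code):
```cpp
#include <cmath>

#include <dlib/global_optimization.h>

// Placeholder objective: in the real example this runs cross-validation for a
// given C and returns the accuracy.
double cross_validation_accuracy(double /*c*/) { return 0; }

double find_best_c()
{
    const auto result = dlib::find_max_global(
        [](double log10_c) { return cross_validation_accuracy(std::pow(10.0, log10_c)); },
        -3.0,                             // lower bound on log10(C)
        +3.0,                             // upper bound on log10(C)
        dlib::max_function_calls(30));    // optimization budget
    return std::pow(10.0, result.x(0));
}
```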
* wip: loss goes down when training without a dnn_trainer
if I use a dnn_trainer, it segfaults (also with bigger batch sizes...)
* remove commented code
* fix gradient computation (hopefully)
* fix loss computation
* fix crash in input_rgb_image_pair::to_tensor
* fix alias tensor offset
* refactor loss and input layers and complete the example
* add more data augmentation
* add documentation
* add documentation
* small fix in the gradient computation and reuse terms
* fix warning in comment
* use tensor_tools instead of matrix to compute the gradients
* complete the example program
* add support for multi-GPU
* Update dlib/dnn/input_abstract.h
* Update dlib/dnn/input_abstract.h
* Update dlib/dnn/loss_abstract.h
* Update examples/dnn_self_supervised_learning_ex.cpp
* Update examples/dnn_self_supervised_learning_ex.cpp
* Update examples/dnn_self_supervised_learning_ex.cpp
* Update examples/dnn_self_supervised_learning_ex.cpp
* [TYPE_SAFE_UNION] upgrade (#2443)
* [TYPE_SAFE_UNION] upgrade
* MSVC doesn't like keyword not
* MSVC doesn't like keyword and
* added tests for emplace(), copy semantics, move semantics, swap, overloaded and apply_to_contents with non-void return types
* - didn't need is_void anymore
- added result_of_t
- didn't really need ostream_helper or istream_helper
- split apply_to_contents into apply_to_contents (return void) and visit (return anything so long as visitor is publicly accessible)
* - updated abstract file
* - added get_type_t
- removed deserialize_helper duplicate
- don't use std::decay_t, that's C++14
* - removed white spaces
- don't need a return-statement when calling apply_to_contents_impl()
- use unchecked_get() whenever possible to minimise explicit use of pointer casting. Let's keep that to a minimum
* - added type_safe_union_size
- added type_safe_union_size_v if C++14 is available
- added tests for above
* - test type_safe_union_size_v
* testing nested unions with visitors.
* re-added comment
* added index() in abstract file
* - refactored reset() to clear()
- added comment about clear() in abstract file
- in deserialize(), only reset the object if necessary
* - removed unnecessary comment about exceptions
- removed unnecessary // -------------
- struct is_valid is not mentioned in the abstract. Instead of requiring T to be a valid type, it is ensured!
- get_type and get_type_t are private. Client code shouldn't need this.
- shuffled some functions around
- type_safe_union_size and type_safe_union_size_v are removed; not needed
- reset() -> clear()
- bug fix in deserialize(): index counts from 1, not 0
- improved the abstract file
* refactored index() to get_current_type_id() as per suggestion
* maybe slightly improved docs
* - HURRAY, we don't need std::result_of or std::invoke_result for visit() to work. Just privately define your own type trait, in this case called return_type and return_type_t (sketch below). It works!
- apply_to_contents() now always calls visit()
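A minimal sketch of the private trait just described (names match the description; dlib's actual implementation may differ slightly):
```cpp
#include <utility>

// The return type of invoking F with Args... ; no std::result_of or
// std::invoke_result needed.
template <typename F, typename... Args>
struct return_type
{
    using type = decltype(std::declval<F>()(std::declval<Args>()...));
};

template <typename F, typename... Args>
using return_type_t = typename return_type<F, Args...>::type;
```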
* example with private visitor using friendship with non-void return types.
* Fix up contracts
It can't be a postcondition that T is a valid type, since the choice of T is up to the caller; it's not something these functions decide. Making it a precondition.
* Update dlib/type_safe_union/type_safe_union_kernel_abstract.h
* Update dlib/type_safe_union/type_safe_union_kernel_abstract.h
* Update dlib/type_safe_union/type_safe_union_kernel_abstract.h
* - added more tests for copy constructors/assignments, move constructors/assignments, and converting constructors/assignments
- helper_copy -> helper_forward
- added validate_type<T> in a couple of places
* - helper_move only takes non-const lvalue references. So we are not using std::move with universal references!
- use enable_if<is_valid<T>> in favor of validate_type<T>()
* - use enable_if<is_valid<T>> in favor of validate_type<T>()
* - added is_valid_check<>. This wraps enable_if<is_valid<T>,bool> and makes the use of SFINAE more robust
Co-authored-by: pfeatherstone <peter@me>
Co-authored-by: pf <pf@me>
Co-authored-by: Davis E. King <davis685@gmail.com>
* Just minor cleanup of docs and renamed some stuff, tweaked formatting.
* fix spelling error
* fix most vexing parse error
Co-authored-by: Davis E. King <davis@dlib.net>
Co-authored-by: pfeatherstone <45853521+pfeatherstone@users.noreply.github.com>
Co-authored-by: pfeatherstone <peter@me>
Co-authored-by: pf <pf@me>
Co-authored-by: Davis E. King <davis685@gmail.com>
* add helper methods to implement fused convolutions
* fix grammar
* add method to disable affine layer and updated serialization
* add documentation for .disable()
* add fuse_convolutions visitor and documentation
* update docs: net is not constant
* fix xml formatting and use std::boolalpha
* fix warning and updated net requirement for visitor
* fix segfault in fuse_convolutions visitor
* copy unconditionally
* make the visitor class a friend of the con_ class
* setup the biases alias tensor after enabling bias
* simplify visitor a bit
* fix comment
* setup the biases size, somehow this got lost
* copy the parameters before resizing
* remove enable_bias() method, since the visitor is now a friend
* Revert "remove enable_bias() method, since the visitor is now a friend"
This reverts commit 35b92b1631.
* update the visitor to remove the friend requirement
* improve behavior of enable_bias
* better describe the behavior of enable_bias
* wip: use cudnnConvolutionBiasActivationForward when the activation has a bias
* wip: fix cpu compilation
* WIP: not working fused ReLU
* WIP: forgot to disable ReLU in visitor (does not change the fact that it does not work)
* WIP: more general set of 4d tensor (still not working)
* fused convolutions seem to be working now, more testing needed
* move visitor to the bottom of the file
* fix CPU-side and code clean up
* Do not try to fuse the activation layers
Fusing the activation layers in one cuDNN call is only supported for the activations
cuDNN provides (ReLU, Sigmoid, TanH, ...), which might lead to surprising
behavior. So, let's just fuse the batch norm and the convolution into one
cuDNN call using the IDENTITY activation function.
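For intuition, the algebra being fused, per output channel (this is a sketch of the math, not dlib's actual tensor code): an affine layer computes y = gamma * conv(x) + beta, so scaling the filters by gamma and folding beta into the bias lets a single conv+bias call with the IDENTITY activation reproduce conv followed by affine.
```cpp
#include <cstddef>
#include <vector>

// Fold a per-output-channel affine (gamma, beta) into the preceding convolution.
void fold_affine_into_conv(
    std::vector<float>& weights,      // num_outputs * weights_per_filter
    std::vector<float>& bias,         // num_outputs
    const std::vector<float>& gamma,  // num_outputs
    const std::vector<float>& beta    // num_outputs
)
{
    const std::size_t num_outputs = bias.size();
    const std::size_t per_filter  = weights.size() / num_outputs;
    for (std::size_t k = 0; k < num_outputs; ++k)
    {
        for (std::size_t i = 0; i < per_filter; ++i)
            weights[k * per_filter + i] *= gamma[k];
        bias[k] = gamma[k] * bias[k] + beta[k];
    }
}
```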
* Set the correct forward algorithm for the identity activation
Ref: https://docs.nvidia.com/deeplearning/cudnn/api/index.html#cudnnConvolutionBiasActivationForward
* move the affine alias template to its original position
* wip
* remove unused param in relu and simplify example (I will delete it before merge)
* simplify conv bias logic and fix deserialization issue
* fix enabling bias on convolutions
* remove test example
* fix typo
* update documentation
* update documentation
* remove ccache leftovers from CMakeLists.txt
* Re-add new line
* fix enable/disable bias on unallocated networks
* update comment to mention cudnnConvolutionBiasActivationForward
* fix typo
Co-authored-by: Davis E. King <davis@dlib.net>
* Apply documentation suggestions from code review
Co-authored-by: Davis E. King <davis@dlib.net>
* update affine docs to talk in terms of gamma and beta
* simplify tensor_conv interface
* fix tensor_conv operator() with biases
* add fuse_layers test
* add an example on how to use the fuse_layers function
* fix typo
Co-authored-by: Davis E. King <davis@dlib.net>
* Missing include for `dlib::loss_multiclass_log_per_pixel_::label_to_ignore`
I was trying to compile the examples and encountered this issue after moving `rgb_label_image_to_index_label_image` to a cpp file. Headers should include the headers for all symbols they mention.
* Update pascal_voc_2012.h
Should use the official entrypoint for including dnn stuff.
Co-authored-by: Davis E. King <davis685@gmail.com>
* wip: layer normalization on cpu
* wip: add cuda implementation, not working yet
* wip: try to fix cuda implementation
* swap grid_stride_range and grid_stride_range_y: does not work yet
* fix CUDA implementation
* implement cuda gradient
* add documentation, move layer_norm, update bn_visitor
* add tests
* use stddev instead of variance in test (they are both 1, anyway)
* add test for means and invstds on CPU and CUDA
* rename visitor to disable_duplicative_bias
* handle more cases in the visitor_disable_input_bias
* Add tests for visitor_disable_input_bias
Now the user doesn't have to supply a visitor capable of visiting all
layers, but instead just the ones they are interested in. Also added
visit_computational_layers() and visit_computational_layers_range()
since those capture a very common use case more concisely than
visit_layers(). That is, users generally want to mess with the
computational layers specifically as those are the stateful layers.
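A hedged usage sketch of the new visitor (the network here is a toy one just to keep the snippet self-contained):
```cpp
#include <iostream>

#include <dlib/dnn.h>

int main()
{
    using net_type = dlib::loss_multiclass_log<dlib::fc<10,
                     dlib::relu<dlib::fc<32,
                     dlib::input<dlib::matrix<float>>>>>>;
    net_type net;

    // The lambda only needs to handle the computational layer types it cares
    // about; here it just sums learnable parameters across all of them.
    // (Parameters are allocated lazily, so an untrained net reports 0.)
    size_t total_params = 0;
    dlib::visit_computational_layers(net, [&](auto& layer)
    {
        total_params += layer.get_layer_params().size();
    });
    std::cout << "parameters: " << total_params << '\n';
}
```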
* add visitor to remove bias from bn_ inputs (closes #2155)
* remove unused parameter and make documentation more clear
* remove bias from bn_ layers too and use better name
* let the batch norm layers keep their bias, use an even better name
* be more consistent with impl naming
* remove default constructor
* do not use method to prevent some errors
* add disable bias method to pertinent layers
* update dcgan example
- grammar
- print the number of network parameters, to be able to check that bias is not allocated
- at the end, give feedback to the user about what the discriminator thinks of each generated sample
* fix fc_ logic
* add documentation
* add bias_is_disabled methods and update to_xml
* print use_bias=false when bias is disabled
* fix some warnings when running tests
* revert changes in CMakeLists.txt
* update example to make use of newly promoted method
* update tests to make use of newly promoted methods