mirror of
https://github.com/davisking/dlib.git
synced 2024-11-01 10:14:53 +08:00
Improved citations
This commit is contained in:
parent 205b26f831
commit 8de1a1ed6a
@@ -8,13 +8,23 @@
-This face detector is made using the classic Histogram of Oriented
+The face detector we use is made using the classic Histogram of Oriented
 Gradients (HOG) feature combined with a linear classifier, an image pyramid,
 and sliding window detection scheme. The pose estimator was created by
 using dlib's implementation of the paper:
-One Millisecond Face Alignment with an Ensemble of Regression Trees by
-Vahid Kazemi and Josephine Sullivan, CVPR 2014
-and was trained on the iBUG 300-W face landmark dataset.
+One Millisecond Face Alignment with an Ensemble of Regression Trees by
+Vahid Kazemi and Josephine Sullivan, CVPR 2014
+and was trained on the iBUG 300-W face landmark dataset (see
+https://ibug.doc.ic.ac.uk/resources/facial-point-annotations/):
+C. Sagonas, E. Antonakos, G. Tzimiropoulos, S. Zafeiriou, M. Pantic.
+300 faces In-the-wild challenge: Database and results.
+Image and Vision Computing (IMAVIS), Special Issue on Facial Landmark Localisation "In-The-Wild". 2016.
+You can get the trained model file from:
+http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2.
+Note that the license for the iBUG 300-W dataset excludes commercial use.
+So you should contact Imperial College London to find out if it's OK for
+you to use this model file in a commercial product.
+
 
 Also, note that you can train your own models using dlib's machine learning
 tools. See train_shape_predictor_ex.cpp to see an example.
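The docstring in this diff describes dlib's detection scheme: a linear classifier scored over a sliding window at every level of an image pyramid. Below is a minimal pure-Python sketch of that scheme, not dlib's actual code: it uses raw pixel values in place of real HOG features, and the window size, weights, and threshold in the usage example are made-up placeholders.

```python
def downscale(img):
    """Halve a 2D image (list of lists of floats) by 2x2 average pooling."""
    h, w = len(img) // 2 * 2, len(img[0]) // 2 * 2
    return [[(img[y][x] + img[y][x + 1] + img[y + 1][x] + img[y + 1][x + 1]) / 4.0
             for x in range(0, w, 2)]
            for y in range(0, h, 2)]

def image_pyramid(img, min_size=4):
    """Yield successively halved copies of img until it gets too small."""
    while len(img) >= min_size and len(img[0]) >= min_size:
        yield img
        img = downscale(img)

def score_window(img, y, x, weights):
    """Linear classifier: dot product of an n-by-n window with the weights."""
    n = len(weights)
    return sum(img[y + i][x + j] * weights[i][j]
               for i in range(n) for j in range(n))

def detect(img, weights, threshold):
    """Slide the window over every pyramid level; return detections as
    (pyramid_level, y, x, score) tuples whose score clears the threshold."""
    hits = []
    n = len(weights)
    for level, scaled in enumerate(image_pyramid(img)):
        for y in range(len(scaled) - n + 1):
            for x in range(len(scaled[0]) - n + 1):
                s = score_window(scaled, y, x, weights)
                if s >= threshold:
                    hits.append((level, y, x, s))
    return hits

# Illustrative usage: an 8x8 image with a bright 2x2 patch at (2, 2),
# an all-ones 2x2 weight window, and an arbitrary threshold of 3.5.
img = [[0.0] * 8 for _ in range(8)]
for y in (2, 3):
    for x in (2, 3):
        img[y][x] = 1.0
weights = [[1.0, 1.0], [1.0, 1.0]]
print(detect(img, weights, 3.5))  # the patch is found only at pyramid level 0
```

The pyramid is what lets a single fixed-size window find faces at multiple scales: dlib additionally runs a real HOG descriptor and a trained linear SVM inside each window, which this sketch replaces with raw pixels and hand-set weights.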