diff --git a/examples/svm_ex.cpp b/examples/svm_ex.cpp
index d6fbe4fec..1a8d9d737 100644
--- a/examples/svm_ex.cpp
+++ b/examples/svm_ex.cpp
@@ -220,15 +220,15 @@ int main()
     // Lastly, note that the decision functions we trained above involved well over 100
-    // support vectors.  Support vector machines in general tend to find decision functions
-    // that involve a lot of support vectors.  This is significant because the more
-    // support vectors in a decision function, the longer it takes to classify new examples.
+    // basis vectors.  Support vector machines in general tend to find decision functions
+    // that involve a lot of basis vectors.  This is significant because the more
+    // basis vectors in a decision function, the longer it takes to classify new examples.
     // So dlib provides the ability to find an approximation to the normal output of a
-    // support vector machine using fewer support vectors.
+    // trainer using fewer basis vectors.
 
     // Here we determine the cross validation accuracy when we approximate the output
-    // using only 10 support vectors.  To do this we use the reduced2() function.  It
-    // takes a trainer object and the number of support vectors to use and returns
+    // using only 10 basis vectors.  To do this we use the reduced2() function.  It
+    // takes a trainer object and the number of basis vectors to use and returns
     // a new trainer object that applies the necessary post processing during the creation
     // of decision function objects.
     cout << "\ncross validation accuracy with only 10 support vectors: "
          << cross_validate_trainer(reduced2(trainer,10), samples, labels, 3);
 
@@ -239,7 +239,7 @@ int main()
          << cross_validate_trainer(trainer, samples, labels, 3);
 
     // When you run this program you should see that, for this problem, you can reduce
-    // the number of support vectors down to 10 without hurting the cross validation
+    // the number of basis vectors down to 10 without hurting the cross validation
     // accuracy.
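
For reference, here is a minimal standalone sketch (not part of the patch) of the usage the rewritten comments describe: wrapping a trainer with reduced2() so the decision function it trains is post-processed down to a fixed number of basis vectors. The kernel choice, the gamma/nu values, and the synthetic circle data are assumptions borrowed from the surrounding svm_ex.cpp example rather than anything this diff changes.

#include <iostream>
#include <cmath>
#include <vector>
#include <dlib/svm.h>

using namespace std;
using namespace dlib;

int main()
{
    typedef matrix<double, 2, 1> sample_type;
    typedef radial_basis_kernel<sample_type> kernel_type;

    // Synthetic data in the style of svm_ex.cpp: points labeled +1 inside a
    // circle of radius 10 and -1 outside it.
    std::vector<sample_type> samples;
    std::vector<double> labels;
    for (double r = -20; r <= 20; r += 0.8)
    {
        for (double c = -20; c <= 20; c += 0.8)
        {
            sample_type samp;
            samp(0) = r;
            samp(1) = c;
            samples.push_back(samp);
            labels.push_back(sqrt(r*r + c*c) <= 10 ? +1 : -1);
        }
    }

    // These gamma and nu values are assumptions; svm_ex.cpp picks good ones
    // via cross validation.
    svm_nu_trainer<kernel_type> trainer;
    trainer.set_kernel(kernel_type(0.15625));
    trainer.set_nu(0.15625);

    // reduced2() takes a trainer and a basis-vector budget and returns a new
    // trainer; the decision function it produces is approximated using at
    // most 10 basis vectors.
    decision_function<kernel_type> df = reduced2(trainer, 10).train(samples, labels);

    cout << "basis vectors in reduced decision function: "
         << df.basis_vectors.size() << endl;
}

As the patched comments note, the reduced trainer drops in anywhere the original would, so it can also be handed straight to cross_validate_trainer(), which is exactly what the unchanged cout lines above do.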