CascadeClassifier GPU / CPU Detection Mismatches (Bug #2067)
This occurs under OpenCV 2.4.1 on Linux x86_64 (Arch), built with GCC 4.7.1 using TBB, CUDA, and Eigen.
The CPU and GPU CascadeClassifiers report different results when loaded with the same classifier file. This was tested on multiple video cards, most recently a GTX 580.
I have put together some test code, the classifier, a test image, and the expected results: http://www.gitpaste.com/paste/720/
The classifier was built with the old Haar trainer.
Duplicates Feature #1525: output of GPU detectMultiScale returns multiple of detect... (Open)
The GPU implementation is not a "parallelized" copy of the CPU implementation; the two can produce different results on the same input.
For instance, the GPU cascade supports only integer scaling factors. If you choose a 1.2 scaling factor, the CPU version of the classifier runs at the geometric progression of scales 1.0, 1.2, 1.44, 1.73, 2.07, 2.49, and so on (successive powers of 1.2), while the GPU version works only at 1.0, 2.0, 3.0. Perhaps your problem is that the scale at which the CPU version detects your desired objects is not supported by the GPU implementation. I agree this needs to be documented.
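The difference between the two scale progressions can be sketched directly. This is a minimal illustration, not OpenCV code: it assumes, per the explanation above, that the CPU pyramid grows geometrically by the scale factor while the GPU Haar path only steps in whole integers:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Scales the CPU cascade visits for a given scaleFactor:
// a geometric progression 1.0, f, f^2, ... up to maxScale.
std::vector<double> cpuScales(double factor, double maxScale) {
    std::vector<double> scales;
    for (double s = 1.0; s <= maxScale; s *= factor)
        scales.push_back(s);
    return scales;
}

// Scales the GPU Haar cascade effectively supports (per the
// explanation above): integer steps 1.0, 2.0, 3.0, ... up to maxScale.
std::vector<double> gpuScales(double maxScale) {
    std::vector<double> scales;
    for (double s = 1.0; s <= maxScale; s += 1.0)
        scales.push_back(s);
    return scales;
}
```

With `factor = 1.2` and `maxScale = 2.5`, the CPU side visits six scales (1.0 through ~2.49) while the GPU side visits only 1.0 and 2.0, so an object detectable only near scale 1.2 or 1.7 on the CPU is simply never evaluated by the GPU cascade.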
- Assignee set to Marina Kolpakova
To be clear: the lower detection rate is a limitation of the current implementation. The GPU Haar classifier can only detect objects at integer scales. Closing this ticket as a duplicate of feature request #1525.
- Status changed from Open to Cancelled
- Target version set to 3.0