CascadeClassifier GPU / CPU Detection Mismatches (Bug #2067)
Description
Specs:
This is OpenCV 2.4.1 on Linux x86_64 (Arch), built with GCC 4.7.1 and with TBB, CUDA, and Eigen enabled.
Issue:
The CPU and GPU CascadeClassifiers report different results when loaded with the same classifier file. This was tested on multiple video cards, most recently a GTX 580.
Test Code:
I have put together some test code, the classifier, a test image, and the expected results: http://www.gitpaste.com/paste/720/
The classifier was built with the old Haar trainer.
-- Justin
History
Updated by Kirill Kornyakov over 12 years ago
- Category changed from objdetect to gpu (cuda)
Updated by Marina Kolpakova over 12 years ago
The GPU implementation is not a "parallelized" copy of the CPU implementation; the two can produce different results on the same input.
For instance, the GPU cascade supports only integer scaling factors. If you choose a 1.2 scaling factor, the CPU version of the classifier is launched at scales 1.0, 1.2, 1.44, 1.73, 2.07, 2.49 and so on, while the GPU version works only at 1.0, 2.0, 3.0. Your problem may be that the scale at which the CPU version detects your desired objects is not supported by the GPU implementation. I guess this needs to be documented.
- Assignee set to Marina Kolpakova
Updated by Justin Van Horne over 12 years ago
I was under the impression that the scaling factor must be > 1; so that constraint does not apply to the GPU implementation?
Updated by Marina Kolpakova over 12 years ago
I'm trying to say that the lower detection rate is a limitation of the current implementation: the GPU Haar classifier can only detect objects at integer scales. I'm closing this ticket as a duplicate of feature request #1525.
- Status changed from Open to Cancelled
- Target version set to 3.0