Vision Challenge
OpenCV is launching a community-wide challenge to update and extend the OpenCV library. An award pool of $50,000 will be provided to the best-performing algorithms in the following 11 CV application areas:
- image segmentation
- image registration
- human pose estimation
- SLAM
- multi-view stereo matching
- object recognition
- face recognition
- gesture recognition
- action recognition
- text recognition
- tracking
We have prepared code to read the existing datasets in each of these areas: modules/datasets (in opencv_contrib); a loading sketch follows below.
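To illustrate, here is a minimal sketch of reading one of the supported datasets through the opencv_contrib datasets API (the FR_lfw loader for the face recognition block is used as an example; the dataset path is a placeholder):

```cpp
// Minimal sketch: reading a challenge dataset with the opencv_contrib
// datasets module. FR_lfw is one of the provided loaders; the path is a
// placeholder for the unpacked dataset's root directory.
#include <opencv2/datasets/fr_lfw.hpp>
#include <cstdio>

using namespace cv::datasets;

int main()
{
    cv::Ptr<FR_lfw> dataset = FR_lfw::create();
    dataset->load("/path/to/lfw/");

    // Each loader exposes its splits as vectors of dataset Objects.
    printf("train split size: %u\n", (unsigned)dataset->getTrain().size());
    return 0;
}
```

The same create()/load()/getTrain() pattern applies to the other loaders in the module.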
Conditions:
The OpenCV Vision Challenge Committee will select up to five winning entries.
- You may submit a new algorithm developed by yourself.
- You may submit an existing algorithm, whether or not it was developed by you (as long as you own it or re-implement it yourself).
- Up to 5 winning algorithms will receive $1,000 each.
- For an additional $7,500 to $9,000*, you must submit your winning code as an OpenCV pull request under a BSD or compatible license.
- You acknowledge that your code may be included, with citation, in OpenCV.
You may explicitly enter code for any work you have submitted to CVPR 2015 or its workshops. We will not unveil it until after CVPR.
Winners and prizes are at the sole discretion of the committee.
The list of selected datasets and other details are described here: OpenCV Vision Challenge
* We will have a professional programmer assist people with their pull requests. Depending on how much total time the programmer spends, the final prize amount will be adjusted; the minimum will be an additional $7,500 for a pull request.
Timeline:
Submission Period:
Now - May 15th 2015
Winners Announcement:
June 8th 2015 at CVPR 2015
Contact:
E-mail: [email protected]
Q&A:
Q.: What should be in the performance evaluation report? Should we send a report or paper along with the code?
A.: Participants are required to send the source code and a performance evaluation report for their algorithm. The report should be in the standard form of a paper, with a description of the algorithm. Evaluation should be performed on at least one of the chosen benchmark datasets associated with the building block. The evaluation methodology should be the same as specified by the author of each dataset; this includes using the same train/validation/test splits, evaluation metrics, etc. In addition, we ask that you report the algorithm's running time and platform details to help with comparison. The algorithm's accuracy should be compared with state-of-the-art algorithms and, whenever possible, with the algorithms implemented in OpenCV. The source code and supplied documentation should contain a clear description of how to reproduce the evaluation results. The source code has to compile and run under Ubuntu 14.
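For the running-time figure requested above, one simple way to measure it is OpenCV's tick counter; a minimal sketch (processFrame is a hypothetical stand-in for the algorithm under evaluation):

```cpp
// Minimal sketch: measuring an algorithm's running time with OpenCV's
// tick counter. processFrame() is a hypothetical placeholder.
#include <opencv2/core.hpp>
#include <iostream>

static void processFrame(const cv::Mat& frame) { (void)frame; } // placeholder

int main()
{
    cv::Mat frame = cv::Mat::zeros(480, 640, CV_8UC3); // dummy input

    int64 start = cv::getTickCount();
    processFrame(frame);
    double seconds = (cv::getTickCount() - start) / cv::getTickFrequency();

    std::cout << "running time: " << seconds << " s" << std::endl;
    return 0;
}
```

Averaging over many runs and reporting the platform details (CPU, RAM, compiler flags) makes the numbers easier to compare.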
Q.: Can I participate in this Vision Challenge by addressing building blocks different from the current 11 categories?
A.: For this Vision Challenge, we have selected 11 categories and 21 supporting datasets. To participate, you need to address at least one of the building blocks we have selected and obtain results on at least one of the associated datasets. Results on additional datasets (e.g., ones with a depth channel) will be evaluated accordingly by the awarding committee.
This may be just the first in a series of challenges, and we want to hear from the vision community which building blocks should come next. Please send your suggestions to our e-mail: [email protected].
- Background Subtraction - 1 vote
- Point Cloud Registration - 1 vote
Q.: Which external algorithms or libraries can we use?
A.: All third-party code used should have a permissive free-software license. The most popular such licenses are BSD, Apache 2.0, and MIT.
Q.: I can't find the tracking dataset loader in the opencv_contrib/modules/datasets module.
A.: We have not implemented loading/evaluation code for the VOT tracking dataset, because it already has its own toolkit.
Q.: Where can I find the dataset benchmarks?
A.: They are located with the samples in modules/datasets/samples.