VisionChallenge

Version 29 (Grace Vesom, 2016-06-20 06:47 am)

h1. State of the Art Vision Challenge 
OpenCV is launching a community-wide challenge to update and extend the OpenCV library with state-of-the-art algorithms. An award pool of $20,000 will be provided to the best-performing algorithms in the following 11 CV application areas:
# image segmentation
# image registration
# human pose estimation
# SLAM
# multi-view stereo matching
# object recognition
# face recognition
# gesture recognition
# action recognition
# text recognition
# tracking
We have prepared code that reads from existing data sets in each of these areas: "modules/datasets":http://docs.opencv.org/master/modules/datasets/doc/datasets.html
h2. Conditions:
The OpenCV Vision Challenge Committee will judge the entries and select up to five winners.
# You may submit a new algorithm developed by yourself.
# You may submit an existing algorithm *whether or not developed by yourself* (as long as you own or re-implement it yourself).
# Up to 5 winning algorithms will receive $1000 each.
# To receive an additional $3000 to $15,000*, you must submit your winning code as an OpenCV pull request under a BSD or compatible license.
** You acknowledge that your code may be included, with citation, in OpenCV.
You may explicitly enter code for any work you have submitted to CVPR 2015 or its workshops. We will not unveil it until after CVPR.
Winners and prizes are at the sole discretion of the committee.
*List of selected datasets and other details described here:* "OpenCV Vision Challenge":http://code.opencv.org/attachments/1672/OpenCVVisionChallenge.pdf
??* We will have a professional programmer assist people with their pull requests. The final amount will be adjusted by the number of pull requests. The minimum will be an additional $3000 for a pull request. The prize committee may adjust the amounts upwards, depending on the remaining budget, at the committee's sole discretion.??
h2. Timeline:
*Submission Period:*
_Now - May 15th 2015_
*Winners Announcement:* 
_June 8th 2015 at CVPR 2015_
h2. Contact:
[email protected]
h2. Q&A:
*Q.:* _What should be in the performance evaluation report? Should we send a report or paper along with the code?_
*A.:* Participants are required to send the source code and a performance evaluation report for their algorithm. The report should be in the standard form of a paper, with a description of the algorithm. Evaluation should be performed on at least one of the chosen benchmark datasets associated with the building block, using the same methodology specified by the author of each dataset, including the same train/validation/test splits, evaluation metrics, etc. In addition, we ask you to report the running time of the algorithm and the platform details, to help with comparison. The algorithm's accuracy should be compared with state-of-the-art algorithms, and it is also useful to compare it with algorithms implemented in OpenCV whenever possible. The source code and supplied documentation should contain a clear description of how to reproduce the evaluation results. The source code has to compile and run under Ubuntu 14.
*Q.:* _Can I participate in this Vision Challenge by addressing building blocks different from the current 11 categories?_
55 20 Grace Vesom
*A.:* For this Vision Challenge, we have selected 11 categories and 21 supporting datasets. To participate in the Vision Challenge you need to address at least one of the building blocks we have selected and report results on at least one of the chosen associated datasets. Results on additional datasets (e.g., with a depth channel) will be evaluated accordingly by the awarding committee.
This may be just the first in a series of challenges, and we want to hear from the vision community which building blocks should come next. Please send your suggestions to our e-mail: [email protected].
*Current propositions list:*
* Background Subtraction - 1 vote
* Point Cloud Registration - 1 vote
* Pedestrian Detection - 1 vote
* Text Recognition for Arabic language - 1 vote
*Q.:* _Which external algorithms or libraries can we use?_
*A.:* All 3rd-party code used should have a permissive free-software license. The most popular such licenses are BSD, Apache 2.0, and MIT.
*Q.:* _I can't find loading code for the tracking dataset in the opencv_contrib/modules/datasets module._
*A.:* We have not implemented loading and evaluation code for the VOT tracking dataset, because it already has its own "toolkit":http://www.votchallenge.net/vot2014/participation.html.
*Q.:* _Where can I find the dataset benchmarks?_
*A.:* They are provided with the samples in "modules/datasets/samples":https://github.com/Itseez/opencv_contrib/tree/master/modules/datasets/samples.
h3. Back to Developer page:
"OpenCV":http://code.opencv.org/projects/opencv/wiki