VisionChallenge


h1. OpenCV’s People’s Vote Winning Papers and State of the Art Vision Challenge Winners

*Sponsored by Intel*

This is a 2-for-1 "CVPR 2015 Workshop":http://www.pamitc.org/cvpr15/workshops.php covering
* People’s choice awards for winning papers from CVPR 2015
** _Vote on the CVPR 2015 papers that you most want to see implemented, and we'll pay the winners to implement them in_ "opencv_contrib":https://github.com/Itseez/opencv_contrib
* Winning algorithms of the OpenCV Vision Challenge
** _Start collecting implementations of the best-in-class algorithms in_ "opencv_contrib":https://github.com/Itseez/opencv_contrib

This is a short workshop, one hour before lunch, to announce and describe the winners of two separate contests:

<pre>
Location: Room 101
Time: 11am-12pm
</pre>

h2. (1) CV Community Top Paper Award: CVPR 2015

We will tally the people’s vote !Vote5.png! for the paper you’d most like to see implemented (you may vote for as many as you want). We'll describe the top 5 winners.

Prizes will be awarded in two stages:
* a modest award for winning, and
* a larger award for presenting the code as a pull request to OpenCV by December 1st, 2015, as detailed here:
** http://code.opencv.org/projects/opencv/wiki/How_to_contribute

*Prizes:*
* 1st winner: $500; Submit code: $6000
* 2nd winner: $300; Submit code: $4000
* 3rd winner: $100; Submit code: $3000
* 4th winner: $50; Submit code: $3000
* 5th winner: $50; Submit code: $3000
* 6th winner: $50; Submit code: $2000

h3. *Winners:*

TBA

Results will be listed on OpenCV’s website:
* (user) http://opencv.org/ and
* (developer) http://code.opencv.org/projects/opencv/wiki

h2. (2) State of the Art Vision Challenge

Our aim is to make state-of-the-art vision algorithms available in OpenCV. We thus ran a vision challenge to meet or exceed the state of the art in various areas. We will present the results, some of which are quite compelling. The contest details are available at:

http://code.opencv.org/projects/opencv/wiki/VisionChallenge

*Prizes:*
# Win: $1000; Submit code: $3000
# Win: $1000; Submit code: $3000
# Win: $1000; Submit code: $3000
# Win: $1000; Submit code: $3000
# Win: $1000; Submit code: $3000

h3. *Winners by Categories:*

*Tracking*
* Martin Danelljan, Gustav Häger, Fahad Shahbaz Khan, Michael Felsberg - DCFSIR, DSST, fDSST
** Took first place on the overall results table; also submitted modified versions that trade some accuracy for faster speed.
** No pull request yet
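
Neither of the winning trackers has been merged yet, but a contributed DSST/fDSST would plug into the opencv_contrib tracking interface. A minimal sketch of that interface follows, assuming an OpenCV 3.x build with the contrib tracking module; "MIL" is only a stand-in algorithm name, since the winning trackers are not yet available.

<pre>
// Minimal sketch of the opencv_contrib tracking interface (OpenCV 3.x assumed).
// "MIL" is a stand-in; a merged DSST/fDSST tracker would be created the same way.
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/videoio.hpp>
#include <opencv2/tracking.hpp>

int main()
{
    cv::VideoCapture cap("video.avi");          // placeholder video source
    cv::Mat frame;
    cap >> frame;

    cv::Rect2d bbox(100, 100, 80, 80);          // initial target location (placeholder)
    cv::Ptr<cv::Tracker> tracker = cv::Tracker::create("MIL");
    tracker->init(frame, bbox);

    while (cap.read(frame))
    {
        if (tracker->update(frame, bbox))       // bbox is updated in place
        {
            cv::Rect box = bbox;                // draw the current estimate
            cv::rectangle(frame, box, cv::Scalar(0, 255, 0), 2);
        }
    }
    return 0;
}
</pre>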

*Image registration*
* Gil Levi and Tal Hassner - LATCH descriptor
** State of the art among binary descriptors for accuracy (at somewhat slower speed). It also outperforms several floating-point descriptors while running much faster.
** Pull request done!
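
Since the LATCH pull request has been merged into opencv_contrib (the xfeatures2d module), here is a minimal usage sketch; the ORB keypoint detector and the parameter values are illustrative choices, not part of the winning entry.

<pre>
// Minimal sketch: LATCH binary descriptors computed on ORB keypoints
// (requires the opencv_contrib xfeatures2d module).
#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/features2d.hpp>
#include <opencv2/xfeatures2d.hpp>
#include <vector>

int main()
{
    cv::Mat img = cv::imread("scene.png", cv::IMREAD_GRAYSCALE);   // placeholder image

    std::vector<cv::KeyPoint> keypoints;
    cv::Ptr<cv::ORB> detector = cv::ORB::create(1000);
    detector->detect(img, keypoints);

    // 32-byte descriptors with rotation invariance enabled (the defaults).
    cv::Ptr<cv::xfeatures2d::LATCH> latch = cv::xfeatures2d::LATCH::create(32, true);
    cv::Mat descriptors;
    latch->compute(img, keypoints, descriptors);

    // Being binary, LATCH descriptors are matched with the Hamming norm.
    cv::BFMatcher matcher(cv::NORM_HAMMING);
    return 0;
}
</pre>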

* Olexa Bilaniuk, Hamid Bazargani, and Robert Laganière - RHO for findHomography()
** Comparable results to RANSAC with 25x speedup.
** Already in OpenCV!
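
RHO is exposed through the regular findHomography() call and is selected with the cv::RHO flag. A minimal sketch, assuming two already-matched point sets:

<pre>
// Minimal sketch: robust homography estimation using the RHO method.
// srcPts/dstPts stand for already-matched point correspondences.
#include <opencv2/core.hpp>
#include <opencv2/calib3d.hpp>
#include <vector>

cv::Mat estimateH(const std::vector<cv::Point2f>& srcPts,
                  const std::vector<cv::Point2f>& dstPts)
{
    cv::Mat inlierMask;
    // cv::RHO replaces cv::RANSAC; 3.0 is the reprojection threshold in pixels.
    return cv::findHomography(srcPts, dstPts, cv::RHO, 3.0, inlierMask);
}
</pre>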

*Image segmentation*
* Beat Kueng, Alex Locher, Michael Van den Bergh, Gemma Roig, Xavier Boix, Luc Van Gool - SEEDS Superpixel
** State-of-the-art superpixel algorithm in terms of accuracy and computational performance
** Already in OpenCV!
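
SEEDS lives in the opencv_contrib ximgproc module. A minimal sketch of segmenting an image into superpixels; the parameter values are only illustrative.

<pre>
// Minimal sketch: SEEDS superpixel segmentation via the ximgproc module.
#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/ximgproc.hpp>

int main()
{
    cv::Mat img = cv::imread("scene.png");     // placeholder BGR image

    // Ask for roughly 400 superpixels built over 4 block levels (illustrative values).
    cv::Ptr<cv::ximgproc::SuperpixelSEEDS> seeds =
        cv::ximgproc::createSuperpixelSEEDS(img.cols, img.rows, img.channels(),
                                            400 /*num_superpixels*/, 4 /*num_levels*/);

    seeds->iterate(img, 10);                   // refine the segmentation for 10 iterations

    cv::Mat labels, contours;
    seeds->getLabels(labels);                  // per-pixel superpixel index
    seeds->getLabelContourMask(contours);      // mask of superpixel boundaries
    return 0;
}
</pre>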

*Gesture recognition*
* Natalia Neverova, Christian Wolf, Graham Taylor and Florian Nebout - ModDrop: adaptive multi-modal gesture recognition
** State of the art for the ChaLearn Looking at People 2014 gesture challenge (ECCV 2014).
** No pull request yet. It will depend on the GSoC 2015 work on reading in any deep net and running it.
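
ModDrop itself is not in OpenCV; the dependency above refers to being able to read in and run an arbitrary deep net. As a hedged illustration only, here is roughly what that looks like with the dnn module as it later shipped (the prototxt/caffemodel file names are placeholders, and this is not the GSoC 2015 snapshot of the API):

<pre>
// Hedged sketch: loading and running a Caffe-format network with the OpenCV dnn module.
// File names are placeholders; ModDrop itself is not distributed with OpenCV.
#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/dnn.hpp>

int main()
{
    cv::dnn::Net net = cv::dnn::readNetFromCaffe("deploy.prototxt", "weights.caffemodel");

    cv::Mat img = cv::imread("gesture_frame.png");
    // Pack the frame into the 4-D blob the network expects (input size is a placeholder).
    cv::Mat blob = cv::dnn::blobFromImage(img, 1.0, cv::Size(224, 224));

    net.setInput(blob);
    cv::Mat prob = net.forward();   // class scores for the frame
    return 0;
}
</pre>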

h2. Proposers

* *Dr. Gary Rost Bradski*, Chief Scientist, Computer Vision and AI at Magic Leap, Inc.
** [email protected]
* *Vadim Pisarevsky*, Principal Engineer at Itseez
** [email protected]
* *Vincent Rabaud*, Perception Team Manager at Aldebaran Robotics
** [email protected]
* *Grace Vesom*, 3D Vision Senior Engineer at Magic Leap, Inc.
** [email protected]

h2. Presenters

*Dr. Gary Rost Bradski* is Chief Scientist of Computer Vision at Magic Leap. Gary founded OpenCV at Intel Research in 2000 and is currently CEO of the nonprofit OpenCV.org. He ran the vision team for Stanley, the autonomous vehicle that completed and won the $2M DARPA Grand Challenge robot race across the desert. Dr. Bradski helped start up NeuroScan (sold to Marmon), Video Surf (sold to Microsoft), and Willow Garage (absorbed into Suitable Tech). In 2012, he founded Industrial Perception (sold to Google, August 2013). Gary has more than 100 publications and more than 30 patents, and is co-author of a bestseller in its category, Learning OpenCV: Computer Vision with the OpenCV Library (O'Reilly Press).

*Vadim Pisarevsky* is the chief architect of OpenCV. He graduated from the NNSU Cybernetics Department in 1998 with a Master’s degree in Applied Math. Afterwards, Vadim worked as a software engineer and the team leader of the OpenCV project at Intel Corp from 2000 to 2008. Since May 2008 he has been an employee of Itseez Corp and now works full time on OpenCV under a Magic Leap contract.

*Vincent Rabaud* is the perception team manager at Aldebaran Robotics. He co-founded the non-profit OpenCV.org with Gary Bradski in 2012 while a research engineer at Willow Garage. His research interests include 3D processing, object recognition and anything that involves underusing CPUs by feeding them fast algorithms. Dr. Rabaud completed his PhD at UCSD, advised by Serge Belongie. He also holds an MS in space mechanics and space imagery from SUPAERO and an MS in optimization from the Ecole Polytechnique.

*Grace Vesom* is a senior engineer in 3D vision at Magic Leap and Director of Development for the OpenCV Foundation. Previously, she was a research scientist at Lawrence Livermore National Laboratory working on global security applications and completed her DPhil at the University of Oxford in 2010.

h1. (1) Details of the People's Choice Winning Paper -- vote using the CVPR app

You may vote for as many papers as you want. This is the people's choice of the algorithms they'd most like to have. We will award the winners and encourage them to submit their working code to "opencv_contrib":https://github.com/Itseez/opencv_contrib.

h2. Voting instructions:

To vote:
* on your phone, search for the official CVPR app; its name is CVF (Computer Vision Foundation). Download it.
* or, on your computer, go to the online app "http://www.cvfapp.org":http://www.cvfapp.org/

Open the app; you'll have to log in the first time.

On the homescreen:

!App11.png!

Click on the calendar view (lower left blue icon), then scroll down to the day you are on:

!Day22.png!

Find the talk, paper, or demo that you like:

!Chose33.png!

Click on the PeoplesChoiceVote square:

!Vote44.png!

We will record this vote. Vote for the papers you'd really love/need to see implemented. We will award extra money to encourage people to submit working code for these popular algorithms!

h1. (2) Details of the State of the Art Vision Challenge:

OpenCV is launching a community-wide challenge to update and extend the OpenCV library with state of the art algorithms. An award pool of $20,000 will be provided to the best performing algorithms in the following 11 CV application areas:

# image segmentation
# image registration
# human pose estimation
# SLAM
# multi-view stereo matching
# object recognition
# face recognition
# gesture recognition
# action recognition
# text recognition
# tracking

We have prepared code to read from existing data sets in each of these areas: "modules/datasets":http://docs.opencv.org/master/modules/datasets/doc/datasets.html
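
Each dataset has a loader in that module following the same create()/load() pattern. A minimal sketch, using the LFW face recognition loader as an example; the path is a placeholder for a locally downloaded copy of the data:

<pre>
// Minimal sketch: loading one of the supported datasets with the datasets module.
// The path is a placeholder for a locally downloaded copy of LFW.
#include <opencv2/datasets/fr_lfw.hpp>
#include <cstdio>

int main()
{
    cv::Ptr<cv::datasets::FR_lfw> dataset = cv::datasets::FR_lfw::create();
    dataset->load("/path/to/lfw/");   // parses the annotation files on disk

    std::printf("train objects: %zu\n", dataset->getTrain().size());
    std::printf("test objects:  %zu\n", dataset->getTest().size());
    return 0;
}
</pre>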

h2. Conditions:

The OpenCV Vision Challenge Committee will judge and select up to five best entries.

# You may submit a new algorithm developed by yourself.
# You may submit an existing algorithm *whether or not developed by yourself* (as long as you own it or re-implement it yourself).
# Up to 5 winning algorithms will receive $1000 each.
# For an additional $3000 to $15,000*, you must submit your winning code as an OpenCV pull request under a BSD or compatible license by December 1st, 2015.
** You acknowledge that your code may be included, with citation, in OpenCV.

You may explicitly enter code for any work you have submitted to CVPR 2015 or its workshops. We will not unveil it until after CVPR.

Winners and prizes are at the sole discretion of the committee.

*The list of selected datasets and other details are described here:* "OpenCV Vision Challenge":http://code.opencv.org/attachments/1672/OpenCVVisionChallenge.pdf

??* We will have a professional programmer assist people with their pull requests. The final amount will be adjusted by the number of pull requests. The minimum will be an additional $3000 for a pull request. The prize committee may adjust the amounts upwards depending on the remaining budget, at the committee's sole discretion.??

h2. Timeline:

*Submission Period:*
_Now - May 15th 2015_

*Winners Announcement:*
_June 8th 2015 at CVPR 2015_

*Submit pull request:*
_December 1st, 2015_

h2. Contact:

[email protected]

h2. Q&A:

*Q.:* _What should be in the performance evaluation report? Shall we send any report or paper along with the code?_
*A.:* Participants are required to send the source code and a performance evaluation report of their algorithms. The report should be in the standard form of a paper, with a description of the algorithm. Evaluation should be performed on at least one of the chosen benchmark datasets associated with the building block. The evaluation methodology should be the same as that specified by the author of each dataset; this includes using the same train/validation/test splits, evaluation metrics, etc. In addition, we ask you to report the running time of the algorithm and platform details to help with comparisons. The algorithm's accuracy should be compared with state-of-the-art algorithms. In addition, it’ll be useful to compare it with algorithms implemented in OpenCV whenever possible. The source code and supplied documentation should contain a clear description of how to reproduce the evaluation results. The source code has to compile and run under Ubuntu 14.
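
For the running-time part of the report, a minimal sketch of measuring wall-clock time with OpenCV's tick counter; runAlgorithm() is a placeholder for the code under test:

<pre>
// Minimal sketch: timing an algorithm for the evaluation report.
// runAlgorithm() is a placeholder for the method being benchmarked.
#include <opencv2/core.hpp>
#include <cstdio>

void runAlgorithm(const cv::Mat& input);   // provided by the participant

void timeAlgorithm(const cv::Mat& input)
{
    double start = (double)cv::getTickCount();
    runAlgorithm(input);
    double seconds = ((double)cv::getTickCount() - start) / cv::getTickFrequency();

    // Report this together with platform details (CPU, RAM, OS, compiler).
    std::printf("runtime: %.3f s per run\n", seconds);
}
</pre>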

*Q.:* _Can I participate in this Vision Challenge by addressing building blocks different from the current 11 categories?_
*A.:* For this Vision Challenge, we have selected 11 categories and 21 supporting datasets. To participate in the Vision Challenge, you need to address at least one of the building blocks we have selected and get results on at least one of the chosen associated datasets. Results on additional datasets (e.g., depth channel) will be evaluated accordingly by the awarding committee.
This may be just the first in a series of challenges, and we want to hear from the vision community which building blocks should come next. Please send your suggestions to our e-mail: [email protected].

*Current propositions list:*
* Background Subtraction - 1 vote
* Point Cloud Registration - 1 vote
* Pedestrian Detection - 1 vote
* Text Recognition for Arabic language - 1 vote

*Q.:* _Which external algorithms or libraries can we use?_
*A.:* All 3rd-party code used should have a permissive free software license. The most popular such licenses are BSD, Apache 2.0, and MIT.

*Q.:* _I can't find the tracking dataset loading code in the opencv_contrib/modules/datasets module._
*A.:* We did not implement loading and evaluation code for the VOT tracking dataset, because it already has its own "toolkit":http://www.votchallenge.net/vot2014/participation.html.

*Q.:* _Where can I find the dataset benchmarks?_
*A.:* They are placed with the samples in "modules/datasets/samples":https://github.com/Itseez/opencv_contrib/tree/master/modules/datasets/samples.

h3. Back to Developer page:

"OpenCV":http://code.opencv.org/projects/opencv/wiki