VisionChallenge

Version 33 (Grace Vesom, 2016-06-20 06:47 am)

h1. OpenCV’s People’s Vote Winning Papers and State of the Art Vision Challenge Winners

People’s choice awards for best papers from CVPR 2015 and winning algorithms of the Vision Challenge.

This is a short workshop, one hour before lunch, to announce and describe the winners of two separate contests:

<pre>
Location: Room 101 (~123) 
Time: 11am-12pm
</pre>

h2. (1) People’s Choice: Winning Papers CVPR 2015

We will tally the people’s vote for the CVPR 2015 paper you’d most like to see implemented in OpenCV. We’ll present a histogram of the results, which indicates the algorithms people are most interested in overall, and then list the top 5 winners. We’ll also announce the people’s choice for best demo.

Prizes will be awarded in two stages:
* A modest award for winning, and
* a larger award for submitting the code within 5 months as a pull request to OpenCV, as detailed here:
** http://code.opencv.org/projects/opencv/wiki/How_to_contribute

*Prizes:*
# Win: $500; Submit code: $6000 
# Win: $300; Submit code: $4000
# Win: $100; Submit code: $3000
# Win: $50; Submit code: $3000
# Win: $50; Submit code: $3000

Results will be listed on OpenCV’s website:
* (user) http://opencv.org/ and 
* (developer) http://code.opencv.org/projects/opencv/wiki 

h2. (2) State of the Art Vision Challenge

Our aim is to make state-of-the-art vision algorithms available in OpenCV. We thus ran a vision challenge to meet or exceed the state of the art in various areas. We will present the results, some of which are quite compelling. The contest details are available at:

http://code.opencv.org/projects/opencv/wiki/VisionChallenge

*Prizes:*
# Win: $1000; Submit code: $3000 
# Win: $1000; Submit code: $3000
# Win: $1000; Submit code: $3000
# Win: $1000; Submit code: $3000
# Win: $1000; Submit code: $3000

In this contest, if someone does not submit the code, the unclaimed money may be reallocated to those who do, at the sole discretion of the prize committee.

h2. Proposers

* *Dr. Gary Rost Bradski*, Chief Scientist, Computer Vision and AI at Magic Leap, Inc.
** [email protected]
* *Vadim Pisarevsky*, Principal Engineer at Itseez
** [email protected]
* *Vincent Rabaud*, Perception Team Manager at Aldebaran Robotics
** [email protected]
* *Grace Vesom*, 3D Vision Senior Engineer at Magic Leap, Inc.
** [email protected]

h2. Presenters

*Dr. Gary Rost Bradski* is Chief Scientist of Computer Vision at Magic Leap. Gary founded OpenCV at Intel Research in 2000 and is currently CEO of the nonprofit OpenCV.org. He ran the vision team for Stanley, the autonomous vehicle that completed and won the $2M DARPA Grand Challenge robot race across the desert. Dr. Bradski helped start up NeuroScan (sold to Marmon), Video Surf (sold to Microsoft), and Willow Garage (absorbed into Suitable Tech). In 2012, he founded Industrial Perception (sold to Google, August 2013). Gary has more than 100 publications and more than 30 patents, and is co-author of _Learning OpenCV: Computer Vision with the OpenCV Library_ (O'Reilly Press), a bestseller in its category.

*Vadim Pisarevsky* is the chief architect of OpenCV. He graduated from the NNSU Cybernetics Department in 1998 with a Master’s degree in Applied Math. Afterwards, Vadim worked as a software engineer and the team leader of the OpenCV project at Intel Corp. from 2000 to 2008. Since May 2008 he has been an employee of Itseez and now works full time on OpenCV under a Magic Leap contract.

*Vincent Rabaud* is the perception team manager at Aldebaran Robotics. He co-founded the non-profit OpenCV.org with Gary Bradski in 2012 while a research engineer at Willow Garage. His research interests include 3D processing, object recognition, and anything that involves underusing CPUs by feeding them fast algorithms. Dr. Rabaud completed his PhD at UCSD, advised by Serge Belongie. He also holds an MS in space mechanics and space imagery from SUPAERO and an MS in optimization from the École Polytechnique.

*Grace Vesom* is a senior engineer in 3D vision at Magic Leap and Director of Development for the OpenCV Foundation.  Previously, she was a research scientist at Lawrence Livermore National Laboratory working on global security applications and completed her DPhil at the University of Oxford in 2010.

h1. Details of the People's Choice Best Paper -- vote using the CVPR app

h1. Details of the State of the Art Vision Challenge:

OpenCV is launching a community-wide challenge to update and extend the OpenCV library with state-of-the-art algorithms. An award pool of $20,000 will be provided to the best-performing algorithms in the following 11 CV application areas:

# image segmentation
# image registration
# human pose estimation
# SLAM
# multi-view stereo matching
# object recognition
# face recognition
# gesture recognition
# action recognition
# text recognition
# tracking

We have prepared code to read from the existing datasets in each of these areas: "modules/datasets":http://docs.opencv.org/master/modules/datasets/doc/datasets.html
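To give a flavor of the module, here is a minimal sketch of loading one of the supported datasets (LFW, for face recognition). It assumes opencv_contrib is built with the datasets module and that the program is pointed at a local copy of the LFW data:

<pre>
#include <cstdio>
#include "opencv2/datasets/fr_lfw.hpp" // LFW loader from opencv_contrib/modules/datasets

using namespace cv;
using namespace cv::datasets;

int main(int argc, char *argv[])
{
    if (argc < 2)
    {
        printf("usage: %s <path to LFW folder>\n", argv[0]);
        return -1;
    }
    // Create the loader and parse the dataset layout on disk.
    Ptr<FR_lfw> dataset = FR_lfw::create();
    dataset->load(argv[1]);
    // getTrain()/getTest() return the annotated samples for each split.
    printf("train size: %u\n", (unsigned int)dataset->getTrain().size());
    printf("test size: %u\n", (unsigned int)dataset->getTest().size());
    return 0;
}
</pre>

The other areas follow the same pattern with their own loader classes (for example OR_imagenet for object recognition or SLAM_kitti for SLAM), all sharing the same create()/load()/getTrain()/getTest() interface.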

h2. Conditions:

The OpenCV Vision Challenge Committee will select up to five winning entries.

# You may submit a new algorithm developed by yourself.
# You may submit an existing algorithm *whether or not developed by yourself* (as long as you own the rights to it or re-implement it yourself).
# Up to 5 winning algorithms will receive $1000 each.
# For an additional $3000 to $15,000*, you must submit your winning code as an OpenCV pull request under a BSD or compatible license.
** You acknowledge that your code may be included, with citation, in OpenCV.

You may explicitly enter code for any work you have submitted to CVPR 2015 or its workshops. We will not unveil it until after CVPR.

Winners and prizes are at the sole discretion of the committee.

*List of selected datasets and other details described here:* "OpenCV Vision Challenge":http://code.opencv.org/attachments/1672/OpenCVVisionChallenge.pdf

??* We will have a professional programmer assist people with their pull requests. The final amount will be adjusted by the number of pull requests. The minimum will be an additional $3000 for a pull request. The prize committee may adjust the amounts upwards depending on the remaining budget, at the committee's sole discretion.??

h2. Timeline:

*Submission Period:*
_Now - May 15th 2015_

*Winners Announcement:*
_June 8th 2015 at CVPR 2015_

h2. Contact:

[email protected]

h2. Q&A:

*Q.:* _What should be in the performance evaluation report? Should we send a report or paper along with the code?_
*A.:* Participants are required to send the source code and a performance evaluation report for their algorithms. The report should be in the standard form of a paper, with a description of the algorithm. Evaluation should be performed on at least one of the chosen benchmark datasets associated with the building block. The evaluation methodology should be the same as specified by the author of each dataset; this includes using the same train/validation/test splits, evaluation metrics, etc. In addition, we ask you to report the running time of the algorithm and platform details to help with comparison. The algorithm's accuracy should be compared with state-of-the-art algorithms and, whenever possible, with algorithms already implemented in OpenCV. The source code and supplied documentation should contain a clear description of how to reproduce the evaluation results. The source code must compile and run under Ubuntu 14.
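Since the report must include running time, one minimal way to measure it is with OpenCV's tick counter. In the sketch below, runAlgorithm is a hypothetical placeholder for the algorithm under evaluation:

<pre>
#include <cstdio>
#include <cstdint>
#include "opencv2/core.hpp"
#include "opencv2/imgproc.hpp"

// Hypothetical stand-in for the algorithm being evaluated.
static void runAlgorithm(const cv::Mat &input, cv::Mat &output)
{
    cv::GaussianBlur(input, output, cv::Size(5, 5), 1.5);
}

int main()
{
    cv::Mat input(1080, 1920, CV_8UC3, cv::Scalar::all(128));
    cv::Mat output;
    runAlgorithm(input, output); // warm-up run so one-time initialization is not timed
    int64_t start = cv::getTickCount();
    runAlgorithm(input, output);
    double seconds = (cv::getTickCount() - start) / cv::getTickFrequency();
    std::printf("running time: %.3f s\n", seconds);
    return 0;
}
</pre>

Report the timing together with the platform details (CPU, RAM, compiler flags) so the committee can compare entries fairly.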

*Q.:* _Can I participate in this Vision Challenge by addressing building blocks different from the current 11 categories?_
*A.:* For this Vision Challenge, we have selected 11 categories and 21 supporting datasets. To participate in the Vision Challenge you need to address at least one of the building blocks we have selected and report results on at least one of the chosen associated datasets. Results on additional datasets (e.g., with a depth channel) will be evaluated accordingly by the awarding committee.
This may be just the first of a series of challenges, and we want to hear from the vision community which building blocks should come next. Please send your suggestions to [email protected].

*Current suggestions:*
* Background Subtraction - 1 vote
* Point Cloud Registration - 1 vote
* Pedestrian Detection - 1 vote
* Text Recognition for the Arabic language - 1 vote

*Q.:* _Which external algorithms or libraries can we use?_
*A.:* All third-party code used should have a permissive free software license. The most popular such licenses are BSD, Apache 2.0, and MIT.

*Q.:* _I can't find the tracking dataset loader in the opencv_contrib/modules/datasets module._
*A.:* We did not implement loading/evaluation code for the VOT tracking dataset because it already has its own "toolkit":http://www.votchallenge.net/vot2014/participation.html.

*Q.:* _Where can I find the dataset benchmarks?_
*A.:* They are located together with the samples in "modules/datasets/samples":https://github.com/Itseez/opencv_contrib/tree/master/modules/datasets/samples.

h3. Back to Developer page:

"OpenCV":http://code.opencv.org/projects/opencv/wiki