VisionChallenge
Version 47 (Grace Vesom, 2016-06-20 06:47 am)
h1. OpenCV's People's Vote Winning Papers and State of the Art Vision Challenge Winners

This is a 2-for-1 "CVPR 2015 Workshop":http://www.pamitc.org/cvpr15/workshops.php covering
* People's choice awards for winning papers from CVPR 2015
** _Vote on the CVPR 2015 papers that you most want to see implemented and we'll pay the winners to implement them in_ "opencv_contrib":https://github.com/Itseez/opencv_contrib
* Winning algorithms of the OpenCV Vision Challenge
** _Start collecting implementations of the best-in-class algorithms in_ "opencv_contrib":https://github.com/Itseez/opencv_contrib

This is a short workshop, one hour before lunch, to announce and describe the winners of two separate contests:

<pre>
Location: Room 101
Time: 11am-12pm
</pre>

h2. (1) CV Community Top Paper Award: CVPR 2015

We will tally the people's vote !Vote5.png! for the paper you'd most like to see implemented (you may vote for as many as you want). We'll describe the 5 top winners.

Prizes will be awarded in two stages:
* a modest award for winning, and
* a larger award for presenting the code as a pull request to OpenCV by December 1st, 2015, as detailed here:
** http://code.opencv.org/projects/opencv/wiki/How_to_contribute

*Prizes:*
# Winner: $500; submit code: $6000
# Winner: $300; submit code: $4000
# Winner: $100; submit code: $3000
# Winner: $50; submit code: $3000
# Winner: $50; submit code: $3000

Results will be listed on OpenCV's website:
* (user) http://opencv.org/ and
* (developer) http://code.opencv.org/projects/opencv/wiki

h2. (2) State of the Art Vision Challenge

Our aim is to make state of the art vision algorithms available in OpenCV. We thus ran a vision challenge to meet or exceed the state of the art in various areas. We will present the results, some of which are quite compelling. The contest details are available at:

http://code.opencv.org/projects/opencv/wiki/VisionChallenge

*Prizes:*
# Win: $1000; submit code: $3000
# Win: $1000; submit code: $3000
# Win: $1000; submit code: $3000
# Win: $1000; submit code: $3000
# Win: $1000; submit code: $3000

In this contest, if someone does not submit their code, the unclaimed money may be reallocated to those who do, at the sole discretion of the prize committee.
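The reallocation rule above amounts to simple arithmetic. A minimal sketch of one possible even split (the function name and the even-split policy are illustrative only; the actual division is entirely at the prize committee's discretion):

```python
# Illustrative arithmetic only: redistribute unclaimed "submit code"
# awards evenly among entrants who did submit a pull request.
# The real split is at the prize committee's sole discretion.
def reallocate(awards, submitted):
    # awards: {entrant: amount}; submitted: set of entrants who sent code
    unclaimed = sum(a for name, a in awards.items() if name not in submitted)
    bonus = unclaimed / len(submitted) if submitted else 0
    return {name: awards[name] + bonus for name in submitted}

# Three $3000 awards, but entrant "C" never submits code:
payouts = reallocate({"A": 3000, "B": 3000, "C": 3000}, {"A", "B"})
print(payouts["A"])  # 4500.0
```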

h2. Proposers

* *Dr. Gary Rost Bradski*, Chief Scientist, Computer Vision and AI at Magic Leap, Inc.
** [email protected]
* *Vadim Pisarevsky*, Principal Engineer at Itseez
** [email protected]
* *Vincent Rabaud*, Perception Team Manager at Aldebaran Robotics
** [email protected]
* *Grace Vesom*, 3D Vision Senior Engineer at Magic Leap, Inc.
** [email protected]

h2. Presenters

*Dr. Gary Rost Bradski* is Chief Scientist of Computer Vision at Magic Leap. Gary founded OpenCV at Intel Research in 2000 and is currently CEO of the nonprofit OpenCV.org. He ran the vision team for Stanley, the autonomous vehicle that completed and won the $2M DARPA Grand Challenge robot race across the desert. Dr. Bradski helped start up NeuroScan (sold to Marmon), VideoSurf (sold to Microsoft), and Willow Garage (absorbed into Suitable Tech). In 2012, he founded Industrial Perception (sold to Google, August 2013). Gary has more than 100 publications and more than 30 patents, and is co-author of a bestseller in its category, _Learning OpenCV: Computer Vision with the OpenCV Library_, O'Reilly Press.

*Vadim Pisarevsky* is the chief architect of OpenCV. He graduated from the NNSU Cybernetics Department in 1998 with a Master's degree in Applied Math. Afterwards, Vadim worked as a software engineer and the team leader of the OpenCV project at Intel Corp. from 2000 to 2008. Since May 2008 he has been an employee of Itseez Corp. and now works full time on OpenCV under a Magic Leap contract.

*Vincent Rabaud* is the perception team manager at Aldebaran Robotics. He co-founded the non-profit OpenCV.org with Gary Bradski in 2012 while a research engineer at Willow Garage. His research interests include 3D processing, object recognition, and anything that involves underusing CPUs by feeding them fast algorithms. Dr. Rabaud completed his PhD at UCSD, advised by Serge Belongie. He also holds an MS in space mechanics and space imagery from SUPAERO and an MS in optimization from the Ecole Polytechnique.

*Grace Vesom* is a senior engineer in 3D vision at Magic Leap and Director of Development for the OpenCV Foundation. Previously, she was a research scientist at Lawrence Livermore National Laboratory working on global security applications. She completed her DPhil at the University of Oxford in 2010.

h1. (1) Details of the People's Choice Winning Paper -- vote using the CVPR app

You may vote for as many papers as you want. This is the people's choice of the algorithms they'd most like to have. We will award the winners and encourage them to submit their working code to "opencv_contrib":https://github.com/Itseez/opencv_contrib

h2. Voting instructions:

Either:
* on your phone, search for the official CVPR app; its name is CVF (Computer Vision Foundation). Download it.
* or, on your computer, go to the online app "http://www.cvfapp.org":http://www.cvfapp.org/

Open the app; you'll have to log in the first time.

On the home screen:

!App11.png!

Click on the calendar view (lower left blue icon), then scroll down to the day you are on:

!Day22.png!

Find the talk, paper, or demo that you like:

!Chose33.png!

Click on the PeoplesChoiceVote square:

!Vote44.png!

We will record this vote. Vote for the papers you'd really love or need to see implemented. We will award extra money to encourage people to submit working code for these popular algorithms!

h1. (2) Details of the State of the Art Vision Challenge:

OpenCV is launching a community-wide challenge to update and extend the OpenCV library with state of the art algorithms. An award pool of $20,000 will be provided to the best performing algorithms in the following 11 CV application areas:

# image segmentation
# image registration
# human pose estimation
# SLAM
# multi-view stereo matching
# object recognition
# face recognition
# gesture recognition
# action recognition
# text recognition
# tracking

We have prepared code to read from existing data sets in each of these areas: "modules/datasets":http://docs.opencv.org/master/modules/datasets/doc/datasets.html
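The loaders in the datasets module share a common pattern: each dataset class parses its on-disk annotation layout once and then exposes the author-defined splits through a uniform interface. The following is a rough Python illustration of that pattern only; @DatasetLoader@ and its methods are hypothetical, not the module's actual C++ API:

```python
# Hypothetical sketch of the loader pattern used by dataset modules:
# parse the dataset's annotations once, then expose the splits that the
# dataset authors defined through a common get_train/get_test interface.
class DatasetLoader:
    def __init__(self):
        self.train = []  # list of (sample_path, label) pairs
        self.test = []

    def load(self, records):
        # 'records' stands in for parsing the dataset's annotation files;
        # each record carries the split assigned by the dataset authors.
        for path, label, split in records:
            target = self.train if split == "train" else self.test
            target.append((path, label))

    def get_train(self):
        return self.train

    def get_test(self):
        return self.test

loader = DatasetLoader()
loader.load([
    ("img001.png", "cat", "train"),
    ("img002.png", "dog", "test"),
])
print(len(loader.get_train()), len(loader.get_test()))  # 1 1
```

Keeping the splits exactly as the dataset authors published them is what makes results comparable across entries.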

h2. Conditions:

The OpenCV Vision Challenge Committee will judge up to five best entries.

# You may submit a new algorithm developed by yourself.
# You may submit an existing algorithm, *whether or not developed by yourself* (as long as you own it or re-implement it yourself).
# Up to 5 winning algorithms will receive $1000 each.
# For an additional $3000 to $15,000*, you must submit your winning code as an OpenCV pull request under a BSD or compatible license by December 1st.
** You acknowledge that your code may be included, with citation, in OpenCV.

You may explicitly enter code for any work you have submitted to CVPR 2015 or its workshops. We will not unveil it until after CVPR.

Winners and prizes are at the sole discretion of the committee.

*List of selected datasets and other details described here:* "OpenCV Vision Challenge":http://code.opencv.org/attachments/1672/OpenCVVisionChallenge.pdf

??* We will have a professional programmer assist people with their pull requests. The final amount will be adjusted by the number of pull requests; the minimum will be an additional $3000 for a pull request. The prize committee may adjust the amounts upwards depending on the remaining budget, at the committee's sole discretion.??

h2. Timeline:

*Submission period:*
_Now - May 15th, 2015_

*Winners announcement:*
_June 8th, 2015 at CVPR 2015_

*Submit pull request:*
_December 1st, 2015_

h2. Contact:

[email protected]

h2. Q&A:

*Q.:* _What should be in the performance evaluation report? Shall we send any report or paper along with the code?_
*A.:* Participants are required to send the source code and a performance evaluation report for their algorithms. The report should be in the standard form of a paper, with a description of the algorithm. Evaluation should be performed on at least one of the chosen benchmark datasets associated with the building block. The evaluation methodology should be the same as specified by the author of each dataset; this includes using the same train/validation/test splits, evaluation metrics, etc. In addition, we ask you to report the running time of the algorithm and platform details to help with comparison. The algorithm's accuracy should be compared with state-of-the-art algorithms. It will also be useful to compare it with algorithms implemented in OpenCV whenever possible. The source code and supplied documentation should contain a clear description of how to reproduce the evaluation results. The source code has to compile and run under Ubuntu 14.
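As a toy illustration of the reporting the answer above asks for (the names and data here are made up, not part of any challenge toolkit), an evaluation script scores a predictor on the dataset authors' predefined test split and records both the metric and the running time:

```python
import time

# Illustrative sketch only (not part of the challenge toolkit): score a
# predictor on the dataset's predefined test split and record the two
# numbers the committee asks for: the accuracy metric and running time.
def evaluate(predict, test_split):
    start = time.time()
    correct = sum(1 for sample, label in test_split if predict(sample) == label)
    elapsed = time.time() - start
    return correct / len(test_split), elapsed

# Toy stand-ins for a real algorithm and a real dataset split.
test_split = [("a", 0), ("b", 1), ("c", 1), ("d", 0)]
accuracy, elapsed = evaluate(lambda sample: 1, test_split)
print(round(accuracy, 2))  # 0.5
```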

*Q.:* _Can I participate in this Vision Challenge by addressing building blocks different from the current 11 categories?_
*A.:* For this Vision Challenge, we have selected 11 categories and 21 supporting datasets. To participate in the Vision Challenge, you need to address at least one of the building blocks we have selected and get results on at least one of the chosen associated datasets. Results on additional datasets (e.g., a depth channel) will be evaluated accordingly by the awarding committee.
This may be just the first of a series of challenges, and we want to hear from the vision community which building blocks should come next for possible future challenges. Please send your suggestions to our e-mail: [email protected].

*Current propositions list:*
* Background Subtraction - 1 vote
* Point Cloud Registration - 1 vote
* Pedestrian Detection - 1 vote
* Text Recognition for the Arabic language - 1 vote

*Q.:* _Which external algorithms or libraries can we use?_
*A.:* All 3rd-party code used should have a permissive free software license. The most popular such licenses are BSD, Apache 2.0, and MIT.

*Q.:* _I can't find the tracking dataset loading code in the opencv_contrib/modules/datasets module._
*A.:* We have not implemented loading and evaluation code for the VOT tracking dataset, because it already has its own "toolkit":http://www.votchallenge.net/vot2014/participation.html.

*Q.:* _Where can I find the dataset benchmarks?_
*A.:* They are placed with the samples in "modules/datasets/samples":https://github.com/Itseez/opencv_contrib/tree/master/modules/datasets/samples.

h3. Back to Developer page:

"OpenCV":http://code.opencv.org/projects/opencv/wiki