The VOC challenge 2005 has now ended. The development kit which was provided to participants is available for download. This includes code for:
The two datasets provided for the challenge have been added to the main PASCAL image databases page. To run the challenge code you will need to download these two databases:
Results of the challenge were presented at the PASCAL Challenges workshop in April 2005 in Southampton, UK. A chapter reporting the results of the challenge will appear in Lecture Notes in Artificial Intelligence:
An earlier report, which includes some more detailed results, is also available for download, along with PowerPoint slides of the challenge workshop presentation:
The goal of this challenge is to recognize objects from a number of visual object classes in realistic scenes (i.e. not pre-segmented objects). It is fundamentally a supervised learning problem, in that a training set of labelled images will be provided. The four object classes that have been selected are:
There will be two main competitions:
Contestants may enter either (or both) of these competitions, and can choose to tackle any (or all) of the four object classes. The challenge allows for two approaches to each of the competitions:
The intention in the first case is to establish what level of success can currently be achieved on these problems, and by what method; in the second case, to establish which method is most successful given a specified training set.
The training data provided will consist of a set of images; each image has an annotation file giving a bounding box and object class label for each object in one of the four classes present in the image. Note that multiple objects from multiple classes may be present in the same image.
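As a sketch, an annotation of this kind might be represented as follows. The class names are the four used in VOC2005; the data structure itself is illustrative, not the actual annotation file format distributed with the development kit:

```python
from dataclasses import dataclass

# The four VOC2005 object classes
CLASSES = ("motorbikes", "bicycles", "people", "cars")

@dataclass
class ObjectAnnotation:
    label: str   # one of CLASSES
    xmin: int    # bounding box corners, in image pixel coordinates
    ymin: int
    xmax: int
    ymax: int

# A single image may carry several annotated objects from several classes.
image_annotations = [
    ObjectAnnotation("cars", 10, 20, 110, 90),
    ObjectAnnotation("people", 120, 30, 160, 150),
]
```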
The data will be made available in two stages; in the first stage, a development kit will be released consisting of training and validation data, plus evaluation software (written in MATLAB). One purpose of the validation set is to demonstrate how the evaluation software works ahead of the competition submission.
In the second stage, two test sets will be made available for the actual competition:
Contestants are free to submit results for any (or all) of the test sets provided.
Contestants may run several experiments for each competition, for example using alternative methods or different training data. Contestants must assess their results using the software provided. This software writes standardized output files recording classifier output, ROC and precision/recall curves. For submission, contestants must prepare:
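The evaluation functions themselves are provided in MATLAB, but the underlying ROC and precision/recall computation can be sketched in Python. The function below is illustrative, not the challenge code: it sweeps a threshold over classifier confidence scores, in decreasing order, and records a curve point at each step:

```python
def roc_pr_points(scores, labels):
    """Compute ROC and precision/recall curve points by sweeping a
    decision threshold over classifier confidence scores.
    labels: 1 for positive (class present), 0 for negative."""
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    n_pos = sum(labels)
    n_neg = len(labels) - n_pos
    tp = fp = 0
    roc, pr = [], []
    for i in order:
        if labels[i]:
            tp += 1
        else:
            fp += 1
        roc.append((fp / n_neg, tp / n_pos))     # (false-positive rate, true-positive rate)
        pr.append((tp / n_pos, tp / (tp + fp)))  # (recall, precision)
    return roc, pr
```

Plotting the `roc` points gives the ROC curve, and the `pr` points the precision/recall curve reported by the evaluation software.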
To submit your results, please prepare a single archive file (gzipped tar/zip) and place it on a publicly accessible web/ftp server. The contents should be as listed above. See the development kit documentation for information on the VOCroc and VOCpr functions needed to generate output files, and the location of these files. When you have prepared your archive and checked that it is accessible, send an email with the URL and any necessary explanatory notes to Mark Everingham.
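Building the gzipped tar archive can be done with Python's standard `tarfile` module. The sketch below uses a stand-in `results` directory and file name; in practice the directory would hold the output files written by the evaluation software:

```python
import tarfile
from pathlib import Path

# Illustrative only: create a stand-in results directory. In practice this
# would hold the output files written by the evaluation software.
out = Path("results")
out.mkdir(exist_ok=True)
(out / "notes.txt").write_text("explanatory notes for the submission\n")

# Bundle everything into a single gzipped tar archive for submission.
with tarfile.open("voc2005_results.tar.gz", "w:gz") as tar:
    tar.add(out)

# Check the archive is readable and lists the expected members.
with tarfile.open("voc2005_results.tar.gz") as tar:
    members = tar.getnames()
```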
Participants who cannot place their results on a web/ftp server may instead send them by email as an attachment. Please include details of the attachment in the email body, and please do not send large files (>200KB) in this way.
To aid administration of the challenge, entrants will be required to register when downloading the test set.