
News

Introduction

This challenge evaluates algorithms for object detection and image classification at large scale. This year there will be three competitions:
  1. A PASCAL-style detection challenge on fully labeled data for 200 categories of objects (new),
  2. An image classification challenge with 1000 categories, and
  3. An image classification plus object localization challenge with 1000 categories.
One high-level motivation is to allow researchers to compare progress in detection across a wider variety of objects -- taking advantage of the expensive labeling effort. Another motivation is to measure the progress of computer vision for large-scale image indexing for retrieval and annotation.

History

Data

Dataset 1: Detection

This year there is a new object detection task, similar in style to the PASCAL VOC challenge. There are 200 basic-level categories for this task, fully annotated on the test data: bounding boxes for all instances of these categories have been labeled in every test image. The categories were carefully chosen considering factors such as object scale, level of image clutter, average number of object instances, and several others. Some of the test images will contain none of the 200 categories.

Comparative scale

                               PASCAL VOC 2012   ILSVRC 2013
Number of object classes                    20           200
Training     Num images                   5717        395909
             Num objects                 13609        345854
Validation   Num images                   5823         20121
             Num objects                 13841         55502
Testing      Num images                  10991         40152
             Num objects                   ---           ---

Comparative statistics (on validation set)

                                       PASCAL VOC 2012   ILSVRC 2013
Average image resolution                469x387 pixels   482x415 pixels
Average object classes per image                 1.521            1.534
Average object instances per image               2.711            2.758
Average object scale (bounding box
area as fraction of image area)                  0.207           0.170*

* corrected from a previous version

Example ILSVRC2013 images:

Note: people detection on ILSVRC2013 may be of particular interest. There are 12125 images for training (9877 of them contain people, for a total of 17728 instances), 20121 images for validation (5756 of them contain people, for a total of 12823 instances) and 40152 images for testing. There is significant variability in pose and appearance, in part due to interaction with a variety of objects. In the validation set, people appear in the same image with 196 of the other labeled object categories.

Dataset 2: Classification and classification with localization

The data for the classification and classification-with-localization tasks will remain unchanged from ILSVRC 2012. The validation and test data will consist of 150,000 photographs, collected from Flickr and other search engines, hand-labeled with the presence or absence of 1000 object categories. The 1000 object categories contain both internal nodes and leaf nodes of ImageNet, but do not overlap with each other. A random subset of 50,000 of the images with labels will be released as validation data, included in the development kit along with a list of the 1000 categories. The remaining images will be used for evaluation and will be released without labels at test time.

The training data, the subset of ImageNet containing the 1000 categories and 1.2 million images, will be packaged for easy downloading. The validation and test data for this competition are not contained in the ImageNet training data.

Tasks

Task 1: Detection (new)

For each image, algorithms will produce a set of annotations $(l_j, b_j, c_j)$ of class labels $l_j$, bounding boxes $b_j$, and confidence scores $c_j$. This set is expected to contain every instance of each of the 200 object categories. Objects which were not annotated will be penalized, as will duplicate detections (two annotations for the same object instance). The winner of the detection challenge will be the team which achieves first-place accuracy on the most object categories.
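To make the duplicate-detection penalty concrete, below is a minimal sketch of PASCAL-style greedy matching for a single category in a single image. The box format, the IoU > 0.5 criterion, and all function names are illustrative assumptions; the official evaluation code is part of the development kit.

    def iou(a, b):
        """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
        iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
        ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
        inter = iw * ih
        union = ((a[2] - a[0]) * (a[3] - a[1])
                 + (b[2] - b[0]) * (b[3] - b[1]) - inter)
        return inter / union if union > 0 else 0.0

    def match_detections(detections, gt_boxes, iou_thresh=0.5):
        """Greedy PASCAL-style matching for one category in one image.

        detections: list of (confidence, box) pairs.
        gt_boxes:   list of ground-truth boxes for this category.
        Returns one true/false-positive flag per detection, in
        descending-confidence order. Each ground-truth box can be
        matched at most once, so a second detection of the same
        instance counts as a false positive (the duplicate penalty),
        and unmatched ground-truth boxes count as misses.
        """
        matched = [False] * len(gt_boxes)
        flags = []
        for conf, box in sorted(detections, key=lambda d: d[0], reverse=True):
            # Find the best-overlapping, still-unmatched ground-truth box.
            best_iou, best_idx = iou_thresh, -1
            for i, gt in enumerate(gt_boxes):
                o = iou(box, gt)
                if not matched[i] and o > best_iou:
                    best_iou, best_idx = o, i
            if best_idx >= 0:
                matched[best_idx] = True
                flags.append(True)   # true positive
            else:
                flags.append(False)  # false positive (miss or duplicate)
        return flags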

Task 2: Classification

For each image, algorithms will produce a list of at most 5 object categories, in descending order of confidence. The quality of a labeling will be evaluated based on the label that best matches the ground truth label for the image. The idea is to allow an algorithm to identify multiple objects in an image and not be penalized if one of the objects identified was in fact present but not included in the ground truth. For each image, an algorithm will produce 5 labels $l_j$, $j = 1, \dots, 5$. The ground truth labels for the image are $g_k$, $k = 1, \dots, n$, with $n$ classes of objects labeled. The error of the algorithm for that image is
$$e = \frac{1}{n} \sum_k \min_j d(l_j, g_k),$$
where $d(x, y) = 0$ if $x = y$ and 1 otherwise. The overall error score for an algorithm is the average error over all test images. Note that for this version of the competition $n = 1$, that is, there is one ground truth label per image. Also note that this year we no longer evaluate hierarchical cost as in ILSVRC2010 and ILSVRC2011.
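As a concrete illustration, here is a minimal sketch of this flat error in Python; the function name and data layout are assumptions, and the official evaluation routine ships with the development kit.

    def classification_error(predicted, ground_truth):
        """Flat top-5 error for one image: e = (1/n) * sum_k min_j d(l_j, g_k).

        predicted:    up to 5 labels l_j, in descending order of confidence.
        ground_truth: the n ground-truth labels g_k (n = 1 this year).
        """
        n = len(ground_truth)
        return sum(min(0 if l == g else 1 for l in predicted)
                   for g in ground_truth) / n

    # The overall score is the average of classification_error over all
    # test images.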

Task 3: Classification with localization

In this task, an algorithm will produce 5 class labels $l_j$, $j = 1, \dots, 5$, and 5 bounding boxes $b_j$, $j = 1, \dots, 5$, one for each class label. The ground truth labels for the image are $g_k$, $k = 1, \dots, n$, with $n$ classes labeled. For each ground truth class label $g_k$, the ground truth bounding boxes are $z_{km}$, $m = 1, \dots, M_k$, where $M_k$ is the number of instances of the $k$-th object class in the current image. The error of the algorithm for that image is
$$e = \frac{1}{n} \sum_k \min_j \min_{m = 1, \dots, M_k} \max\{ d(l_j, g_k), f(b_j, z_{km}) \},$$
where $f(b_j, z_{km}) = 0$ if $b_j$ and $z_{km}$ overlap by more than 50%, and $f(b_j, z_{km}) = 1$ otherwise. In other words, the error is the same as defined in Task 2 if the localization is correct (i.e. the predicted bounding box overlaps by more than 50% with the ground truth bounding box, or, in the case of multiple instances of the same class, with any of the ground truth bounding boxes); otherwise the error is 1 (the maximum).
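Below is a minimal sketch of this criterion in Python, assuming boxes are (x1, y1, x2, y2) tuples and measuring overlap as intersection over union via the same iou helper as in the detection sketch above; the exact box format and overlap definition are specified in the development kit.

    def localization_error(predictions, ground_truth, iou_thresh=0.5):
        """e = (1/n) * sum_k min_j min_m max{ d(l_j, g_k), f(b_j, z_km) }.

        predictions:  list of (label, box) pairs, at most 5.
        ground_truth: dict mapping each label g_k to its instance boxes
                      z_k1, ..., z_kMk.
        """
        n = len(ground_truth)
        total = 0.0
        for g, boxes in ground_truth.items():
            per_class = 1.0  # worst case: no prediction matches label and box
            for l, b in predictions:
                d = 0.0 if l == g else 1.0                      # label term
                f = min(0.0 if iou(b, z) > iou_thresh else 1.0  # box term
                        for z in boxes)
                per_class = min(per_class, max(d, f))
            total += per_class
        return total / n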

Development Kit

Timetable

Submission

Please submit your results here, and refer to the FAQ for details.

Organizers

Advisors

Sponsors

Contact

Please feel free to send any questions or comments to ilsvrc2013@image-net.org.