David Novotny, Diane Larlus, Andrea Vedaldi
BMVC, York, UK, 19-22 September, 2016.
The recent successes of deep learning have been possible due to the availability of
increasingly large quantities of annotated data. A natural question, however, is whether
further progress can be indefinitely sustained by annotating more data, or whether there
is a saturation point beyond which a problem is essentially solved, or the capacity of a
model is saturated. In this paper we examine this question from the viewpoint of learning
shareable semantic parts, a fundamental building block to generalize visual knowledge
between object categories. We ask two often-neglected research questions: whether semantic
parts are also visually shareable between classes, and how many annotations are
required to learn them. In order to answer such questions, we collect 15,000 images of
100 animal classes and annotate them with parts. We then thoroughly test active learning
and domain adaptation techniques to generalize parts learned from a limited number of
classes and example images to unseen classes. Our experiments show that, for
a majority of the classes, part annotations transfer well, and that performance reaches
98% of the accuracy of the fully annotated scenario by providing only a few thousand examples.