
ICCV 2017

NAVER LABS Europe at the International Conference on Computer Vision in Venice (ICCV 2017) with invited talks, conference papers and workshop contributions.

Experts in computer vision from around the world gathered in Venice last week for the International Conference on Computer Vision, a premier biennial event in the field that lasts over a week and is packed with papers, posters and workshops. NAVER LABS Europe was proud to have several papers, posters and invited talks during the week. Below is a list of our presentations and other contributions to the event, with the odd photo!

Main Conference papers

Joint learning of object and action detectors, Vicky Kalogeiton (INRIA and Univ. of Edinburgh), Philippe Weinzaepfel (NAVER LABS Europe), Cordelia Schmid (INRIA) and Vittorio Ferrari (Univ. of Edinburgh). Full paper PDF

Abstract: While most existing approaches for detection in videos focus on objects or human actions separately, we aim at jointly detecting objects performing actions, such as cat eating or dog jumping. We introduce an end-to-end multitask objective that jointly learns object-action relationships. We compare it with different training objectives, validate its effectiveness for detecting objects-actions in videos, and show that both tasks of object and action detection benefit from this joint learning. Moreover, the proposed architecture can be used for zero-shot learning of actions: our multitask objective leverages the commonalities of an action performed by different objects, e.g. dog and cat jumping, enabling the detection of actions of an object without training with these object-action pairs. In experiments on the A2D dataset, we obtain state-of-the-art results on segmentation of object-action pairs. We finally apply our multitask architecture to detect visual relationships between objects in images of the VRD dataset.
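For readers curious what a joint objective like this looks like in practice, here is a minimal, hypothetical PyTorch sketch: two classification heads (one for objects, one for actions) share the same region features, and their cross-entropy losses are summed. All names and dimensions are illustrative; this is not the authors' code.

```python
import torch
import torch.nn as nn

class JointObjectActionHead(nn.Module):
    """Two classifiers over shared region features: one for the
    object class, one for the action class (illustrative only)."""
    def __init__(self, feat_dim, num_objects, num_actions):
        super().__init__()
        self.object_head = nn.Linear(feat_dim, num_objects)
        self.action_head = nn.Linear(feat_dim, num_actions)
        self.ce = nn.CrossEntropyLoss()

    def forward(self, region_feats, object_labels, action_labels):
        obj_logits = self.object_head(region_feats)
        act_logits = self.action_head(region_feats)
        # Multitask objective: both losses back-propagate into the same
        # features, so the two tasks shape a shared representation.
        return self.ce(obj_logits, object_labels) + self.ce(act_logits, action_labels)

# Toy usage with random features and labels.
head = JointObjectActionHead(feat_dim=512, num_objects=10, num_actions=9)
feats = torch.randn(4, 512)
loss = head(feats, torch.randint(0, 10, (4,)), torch.randint(0, 9, (4,)))
```

Because the action head is factored apart from the object head, an action learned on one object class can in principle be scored for another, which is the intuition behind the zero-shot setting mentioned in the abstract.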

Action Tubelet Detector for Spatio-Temporal Action Localization, Vicky Kalogeiton (INRIA and Univ. of Edinburgh), Philippe Weinzaepfel (NAVER LABS Europe), Cordelia Schmid (INRIA) and Vittorio Ferrari (Univ. of Edinburgh). Full paper PDF

Abstract: Current state-of-the-art approaches for spatio-temporal action localization rely on detections at the frame level that are then linked or tracked across time. In this paper, we leverage the temporal continuity of videos instead of operating at the frame level. We propose the ACtion Tubelet detector (ACT-detector) that takes as input a sequence of frames and outputs tubelets, i.e., sequences of bounding boxes with associated scores. In the same way that state-of-the-art object detectors rely on anchor boxes, our ACT-detector is based on anchor cuboids. We build upon the SSD framework [18]. Convolutional features are extracted for each frame, while scores and regressions are based on the temporal stacking of these features, thus exploiting information from a sequence. Our experimental results show that leveraging sequences of frames significantly improves detection performance over using individual frames. The gain of our tubelet detector can be explained by both more accurate scores and more precise localization. Our ACT-detector outperforms the state-of-the-art methods for frame-mAP and video-mAP on the J-HMDB [12] and UCF-101 [30] datasets, in particular at high overlap thresholds.
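The key idea of temporal stacking is easy to see in code. Below is a loose sketch (assumed shapes and layer sizes, not the released ACT-detector): per-frame convolutional features are concatenated along the channel axis, one score is produced per anchor cuboid for the whole sequence, and a separate box is regressed for each frame.

```python
import torch
import torch.nn as nn

class TubeletHead(nn.Module):
    """Scores anchor cuboids from temporally stacked per-frame features
    (assumed shapes; illustrative only)."""
    def __init__(self, feat_ch, seq_len, num_anchors, num_classes):
        super().__init__()
        stacked = feat_ch * seq_len
        # One classification score per anchor cuboid for the whole sequence...
        self.cls = nn.Conv2d(stacked, num_anchors * num_classes, 3, padding=1)
        # ...but one regressed box per frame, so the tubelet can move over time.
        self.reg = nn.Conv2d(stacked, num_anchors * seq_len * 4, 3, padding=1)

    def forward(self, per_frame_feats):
        # per_frame_feats: list of seq_len tensors, each (N, feat_ch, H, W)
        x = torch.cat(per_frame_feats, dim=1)  # temporal stacking along channels
        return self.cls(x), self.reg(x)

# Toy usage: a 6-frame sequence with 64-channel features on a 10x10 grid.
head = TubeletHead(feat_ch=64, seq_len=6, num_anchors=4, num_classes=25)
scores, boxes = head([torch.randn(1, 64, 10, 10) for _ in range(6)])
```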

David Novotny presenting

Learning 3D Object Categories by Looking Around Them, David Novotny (Univ. of Oxford and NAVER LABS Europe), Diane Larlus (NAVER LABS Europe) and Andrea Vedaldi (Univ. of Oxford). Oral. Full paper PDF

Abstract: Traditional approaches for learning 3D object categories use either synthetic data or manual supervision. In this paper, we propose a method which does not require manual annotations and is instead cued by observing objects from a moving vantage point. Our system builds on two innovations: a Siamese viewpoint factorization network that robustly aligns different videos together without explicitly comparing 3D shapes; and a 3D shape completion network that can extract the full shape of an object from partial observations. We also demonstrate the benefits of configuring networks to perform probabilistic predictions as well as of geometry-aware data augmentation schemes. We obtain state-of-the-art results on publicly available benchmarks.
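As a rough illustration of the Siamese ingredient, the sketch below embeds two frames of the same video with a shared encoder and predicts their relative viewpoint as a unit quaternion. This is a hypothetical simplification for intuition, not the paper's architecture.

```python
import torch
import torch.nn as nn

class SiameseViewpoint(nn.Module):
    """A shared encoder embeds two frames of the same video; the paired
    embedding predicts their relative rotation as a unit quaternion."""
    def __init__(self, emb_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, emb_dim),
        )
        self.rel_pose = nn.Linear(2 * emb_dim, 4)

    def forward(self, frame_a, frame_b):
        ea, eb = self.encoder(frame_a), self.encoder(frame_b)  # shared weights
        q = self.rel_pose(torch.cat([ea, eb], dim=1))
        return q / q.norm(dim=1, keepdim=True)  # normalise to a unit quaternion

# Toy usage on two random 64x64 frames.
net = SiameseViewpoint()
q = net(torch.randn(2, 3, 64, 64), torch.randn(2, 3, 64, 64))
```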

Invited workshop talks:

Workshop spotlight presentation

Discrepancy-based networks for unsupervised domain adaptation: a comparative study, Gabriela Csurka (NAVER LABS Europe), Fabien Baradel (INSA-LIRIS), Boris Chidlovskii and Stephane Clinchant (NAVER LABS Europe), TASK-CV. Full paper PDF
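As background on what "discrepancy-based" means here: these methods train on labelled source images while minimising a statistical distance between source and target feature distributions. The snippet below shows one common choice, an RBF-kernel Maximum Mean Discrepancy (MMD); it is a generic illustration, not the specific comparison carried out in the paper.

```python
import torch

def mmd_rbf(source, target, gamma=1.0):
    """Maximum Mean Discrepancy with an RBF kernel (biased estimator)."""
    def k(a, b):
        return torch.exp(-gamma * torch.cdist(a, b) ** 2)
    return k(source, source).mean() + k(target, target).mean() \
        - 2 * k(source, target).mean()

# Usage: features of labelled source images vs. unlabelled target images;
# this term would be added to the source classification loss during training.
src, tgt = torch.randn(32, 128), torch.randn(32, 128)
discrepancy = mmd_rbf(src, tgt)
```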

VisDA2017 2nd place: NAVER LABS Europe

Maxime Bucher - Best Paper Award

NAVER LABS Europe was 2nd in the Visual Domain Adaptation (VisDA2017) Classification Challenge of TASK-CV. Boris Chidlovskii presented our methods at the workshop.

NAVER LABS sponsored the Best Paper Award at the TASK-CV workshop (Transferring and Adapting Source Knowledge in Computer Vision).

Congratulations go to Maxime Bucher, Stephane Herbin and Frederic Jurie for their paper Generating Visual Representations for Zero-Shot Classification.

Finally, Florent Perronnin, who heads up the NAVER LABS Europe scientific team, was industry co-chair of ICCV 2017.