Speaker: Edouard Oyallon, assistant professor at CentraleSupélec, Paris, France


The outstanding supervised classification performance of Convolutional Neural Networks (CNNs) indicates that they are able to build invariants relevant to classification. We show numerically that this can be achieved through architectures that progressively incorporate invariances, and that such invariances can still preserve most of the signal attributes. On the other hand, we build perfectly invertible CNN architectures, which shows that there is no need to build representations that discard information in order to obtain good performance on ImageNet. Illustrations are given through Hybrid Scattering Networks [1], based on a geometric representation, and $i$-RevNets [2], a class of invertible CNNs. We make explicit several empirical properties, such as progressive linear separability [2,3], in order to shed light on the inner mechanisms implemented by CNNs.

Bibliography:

  • [1] E. Oyallon, E. Belilovsky, S. Zagoruyko, Scaling the Scattering Transform: Deep Hybrid Networks
  • [2] J.H. Jacobsen, A.W.M. Smeulders, E. Oyallon, i-RevNet: Deep Invertible Networks
  • [3] E. Oyallon, Building a Regular Decision Boundary with Deep Networks
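
The invertibility claim above rests on reversible building blocks. As a minimal sketch (not the authors' implementation), an additive coupling block splits the activations into two halves and updates one half with a function of the other, which makes the block exactly invertible regardless of the residual function; the `residual` function below is a hypothetical stand-in for the learned convolutional branch.

```python
import numpy as np

def residual(x):
    # Stand-in for the learned residual branch F; invertibility of the
    # block does not depend on what F computes, only on the coupling form.
    return 0.5 * np.tanh(x)

def coupling_forward(x1, x2):
    # Additive coupling: (x1, x2) -> (y1, y2) = (x2, x1 + F(x2)).
    # No information is discarded, since the map can be undone exactly.
    return x2, x1 + residual(x2)

def coupling_inverse(y1, y2):
    # Exact inverse: x1 = y2 - F(y1), x2 = y1.
    return y2 - residual(y1), y1

rng = np.random.default_rng(0)
x1, x2 = rng.standard_normal(4), rng.standard_normal(4)
y1, y2 = coupling_forward(x1, x2)
r1, r2 = coupling_inverse(y1, y2)
assert np.allclose(r1, x1) and np.allclose(r2, x2)
```

Stacking such blocks (with invertible reshufflings between them) yields a deep network whose input can always be reconstructed from its output, illustrating why good ImageNet performance does not require discarding information.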