• Gabriela Csurka, Boris Chidlovskii, Stéphane Clinchant, Sofia Michel
ECCV, Amsterdam, the Netherlands, October 8-16, 2016.
We propose to extend the marginalized denoising autoencoder (MDA)
framework with a domain regularization whose aim is to denoise both the source
and target data in such a way that the features become domain invariant and
adaptation becomes easier. The domain regularization, based either on the maximum
mean discrepancy (MMD) measure or on the domain prediction, aims to reduce
the distance between the source and the target data. We also exploit the source
class labels as another way to regularize the loss, via a domain classifier regularizer. We show that in these cases, the noise marginalization reduces to solving either the linear matrix system AX = B, which has a closed-form
solution, or the Sylvester linear matrix equation AX + XB = C, which can
be solved efficiently with the Bartels-Stewart algorithm. We conducted an extensive
study of how these regularization terms improve the baseline performance, and
we present experiments on three image benchmark datasets conventionally used
for domain adaptation. We report our findings and comparisons with state-of-the-art methods.
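The two solver cases mentioned in the abstract can be sketched numerically. This is a minimal illustration with random placeholder matrices, not the actual moment matrices derived from the paper's noise marginalization: it shows that AX = B has a direct solution and that AX + XB = C can be solved with SciPy's `solve_sylvester`, which implements the Bartels-Stewart algorithm.

```python
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(0)
d = 5

# Case 1: linear matrix system A X = B, solved in closed form.
# A is a random toy matrix shifted toward the identity so it is well conditioned;
# in the paper A and B would come from the marginalized denoising objective.
A = rng.normal(size=(d, d)) + d * np.eye(d)
B = rng.normal(size=(d, d))
X_closed = np.linalg.solve(A, B)

# Case 2: Sylvester equation A X + X B = C, solved via Bartels-Stewart
# (scipy.linalg.solve_sylvester wraps that algorithm).
A2 = rng.normal(size=(d, d)) + d * np.eye(d)
B2 = rng.normal(size=(d, d)) + d * np.eye(d)
C = rng.normal(size=(d, d))
X_syl = solve_sylvester(A2, B2, C)

# Verify both residuals vanish (up to floating-point tolerance).
print(np.allclose(A @ X_closed, B))             # True
print(np.allclose(A2 @ X_syl + X_syl @ B2, C))  # True
```

Both solves cost O(d^3) in the feature dimension, which is what makes the marginalized formulation efficient compared with iterating over explicit noise samples.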