6 December 2018

GETALP (Groupe d'Étude en Traduction Automatique/Traitement Automatisé des Langues et de la Parole)

Marc Dymetman

Prior knowledge and deep learning: some principles and applications to NLP

Abstract: In the last few years, neural networks have quickly gained a dominant position in computational linguistics. In application domains where supervised data is abundant, such as Machine Translation between some of the major world languages, the superior learning capabilities of neural networks have produced models that outperform previously available techniques. Under such abundant data conditions, these models can be trained from raw data, in an end-to-end fashion, with little injection of external knowledge. However, in less favorable data conditions, prior knowledge continues to play an important role: it allows the neural components to be guided not only by direct data observations, but also by hypotheses and principles that come from an understanding of the problem at hand. In my talk, I will try to provide some intuitions about the role of prior knowledge in deep learning for NLP, with examples from my own experience in applications such as Language Modelling, NLG, and Semantic Parsing.