Publications
Authors:
Quentin Grail, Julien Perez, Tomi Silander
Citation:
Revue TAL, 59-2
Abstract:
Machine reading has recently progressed remarkably with the help of differentiable reasoning models. In this context, End-to-End trainable Memory Networks (MemN2N) have demonstrated promising performance on simple natural-language reasoning tasks such as factual reasoning and basic deduction. However, training machine comprehension models commonly requires an annotated question-answer dataset for supervised learning. In this paper we explore adversarial learning and self-play for training machine reading comprehension models. Inspired by successes in the domain of game learning, we present a novel approach to train machine comprehension models based on a coupled attention-based memory model. In our approach, a reader network is in charge of finding answers to questions about a passage of text, while a narrator network tries to obfuscate spans of the text in order to minimize the reader's probability of success. We tested the model on several question-answering corpora. The proposed learning paradigm and associated models show promising results.
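To make the reader/narrator setup described in the abstract concrete, the following is a minimal, hypothetical PyTorch sketch of such an adversarial loop. The module definitions, toy sizes, the REINFORCE-style narrator update, and names such as Reader, Narrator, and MASK_ID are illustrative assumptions, not the authors' implementation or the coupled attention-based memory model from the paper.

```python
# Hypothetical sketch of an adversarial reader/narrator training loop.
# All modules, shapes, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

VOCAB, DIM, SPAN_LEN = 1000, 64, 20  # assumed toy sizes
MASK_ID = 0                          # assumed id of a special [MASK] token

class Reader(nn.Module):
    """Predicts an answer token from a (possibly obfuscated) passage and a question."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, DIM)
        self.out = nn.Linear(2 * DIM, VOCAB)

    def forward(self, passage, question):
        p = self.embed(passage).mean(dim=1)       # bag-of-words passage encoding
        q = self.embed(question).mean(dim=1)      # bag-of-words question encoding
        return self.out(torch.cat([p, q], dim=-1))  # answer logits

class Narrator(nn.Module):
    """Scores each passage position for obfuscation (masking)."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, DIM)
        self.score = nn.Linear(DIM, 1)

    def forward(self, passage):
        return self.score(self.embed(passage)).squeeze(-1)  # one score per token

reader, narrator = Reader(), Narrator()
opt_r = torch.optim.Adam(reader.parameters(), lr=1e-3)
opt_n = torch.optim.Adam(narrator.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Toy batch: random passage, question, and answer token ids.
passage = torch.randint(1, VOCAB, (8, SPAN_LEN))
question = torch.randint(1, VOCAB, (8, 5))
answer = torch.randint(1, VOCAB, (8,))

for step in range(10):
    # Narrator obfuscates the k highest-scoring positions in the passage.
    scores = narrator(passage)
    _, top = scores.topk(k=3, dim=1)
    obfuscated = passage.scatter(1, top, MASK_ID)

    # Reader update: answer the question despite the obfuscation.
    reader_loss = loss_fn(reader(obfuscated, question), answer)
    opt_r.zero_grad()
    reader_loss.backward()
    opt_r.step()

    # Narrator update: REINFORCE-style reward = reader's loss (higher is better
    # for the narrator), credited to the positions it chose to mask.
    with torch.no_grad():
        reward = loss_fn(reader(obfuscated, question), answer)
    log_probs = torch.log_softmax(narrator(passage), dim=1).gather(1, top).sum(dim=1)
    narrator_loss = -(reward * log_probs).mean()
    opt_n.zero_grad()
    narrator_loss.backward()
    opt_n.step()
```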
Year:
2018
Report number:
2018/248