Publications
Authors:
James Henderson
Abstract:
Modelling entailment is a fundamental issue in computational semantics. This paper proposes distributional semantic models which efficiently learn word embeddings for entailment, using a recently proposed framework for modelling entailment in a vector space. These models postulate a latent vector which is the consistent unification of two neighbouring word vectors, thereby modelling both the semantic consistency and semantic redundancy between neighbouring words. We investigate whether it is better to model words in terms of the evidence they contribute about this latent vector, or in terms of the posterior distribution of such a latent vector, and find that the posterior vectors perform better. The resulting word embeddings outperform the best previous results on predicting hyponymy between words, in unsupervised and semi-supervised experiments.
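For readers skimming the abstract, the sketch below illustrates the general idea of scoring hyponymy with an asymmetric, entailment-style comparison of two word vectors, where each dimension is read as the log-odds of a latent binary feature. The factorised scoring function, the function names, and the decision threshold are illustrative assumptions for this sketch only; they do not reproduce the operators or models defined in the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def entailment_log_prob(x, y):
    """Log-probability that word vector x entails word vector y.

    Each dimension is read as the log-odds of a latent binary feature.
    x entails y if every feature asserted by y is also asserted by x;
    assuming independent dimensions, a violation in dimension k occurs
    with probability sigmoid(y[k]) * sigmoid(-x[k]).
    (Illustrative scoring function, not the paper's operator.)
    """
    violation = sigmoid(y) * sigmoid(-x)
    return np.sum(np.log1p(-violation))

def predicts_hyponym(hypo_vec, hyper_vec, threshold=-1.0):
    """Unsupervised hyponymy decision: the hyponym should entail the
    hypernym above a threshold tuned on held-out data (value arbitrary)."""
    return entailment_log_prob(hypo_vec, hyper_vec) > threshold

# Toy usage with random vectors standing in for learned embeddings.
rng = np.random.default_rng(0)
dog, animal = rng.normal(size=200), rng.normal(size=200)
print(predicts_hyponym(dog, animal))
```

In such a setup the score is deliberately asymmetric, so swapping the hyponym and hypernym generally changes the decision, which is what distinguishes entailment-based embeddings from symmetric similarity measures.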
Year:
2017
Report number:
2017/200