Modelling Structure in Language
Computational models of language built upon recent advances in Artificial Intelligence are able to produce remarkably accurate predictive distributions when trained on large text corpora. However, there is significant evidence that such models are not discovering and using the latent syntactic and semantic structure inherent in language. In the first part of this talk I will discuss recent work at DeepMind and Oxford University aimed at understanding to what extent current deep learning models are learning structure, and whether models equipped with a preference for memorisation or hierarchical composition are better able to discover lexical and syntactic units. In the second part of this talk I will describe initial work at DeepMind to train agents in simulated 3D worlds to ground simple linguistic expressions.
Phil Blunsom is a Research Scientist at DeepMind, where he leads the Natural Language Team, and an Associate Professor in the Department of Computer Science at Oxford University. His research groups work at the intersection of machine learning and computational linguistics, in pursuit of algorithms capable of understanding and exploiting natural language.