A Tensor-Based Distributional Model of Semantics
Remember "The Hitchhiker's Guide to the Galaxy", in which a supercomputer with the profound name "Deep Thought" was built to calculate "the Answer to Life, the Universe, and Everything"? It took the computer 7.5 million years to respond with just one word: "42". Seeing the humans' dissatisfaction with the answer, the computer suddenly wondered what the question actually was. Nowadays, computers need only fractions of a second to respond to our inquiries, and we turn to search engines for all kinds of information, from business to private matters: the latest events in politics, opinions on the last election campaign, tomorrow's weather, or even how to dress for the New Year's party.
The speed of computers has increased immensely, but the quality of human-computer interaction in natural language still leaves much to be desired. One of the most prominent obstacles on the way to a computer passing the Turing Test is the question of how English, German, or any other language in the world can be represented in such a way that a computer can interpret it and the semantics of the language is correctly reflected in the representation.
Vector space models (VSMs) of meaning are arguably one of the most successful paradigms in computational semantics. They embody the distributional hypothesis of meaning, which claims that a word is known "by the company it keeps" (Firth, 1957).
These models have proven useful and adequate in a variety of natural language processing tasks. However, most of them lack at least one of the following two key requirements in order to serve as an adequate model of natural language semantics: (1) sensitivity to structural information such as word order and (2) linguistically justified operations for semantic composition.
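To make the distributional idea concrete, the following is a minimal sketch, not taken from the talk: word vectors are built from co-occurrence counts in a tiny invented corpus, and words that appear in similar contexts end up with similar vectors. The corpus, window size, and similarity measure are illustrative assumptions.

```python
import numpy as np

# Hypothetical toy corpus, purely for illustration.
corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat chased the dog",
]

# Build a vocabulary and co-occurrence counts within a +/-2 word window.
vocab = sorted({w for sent in corpus for w in sent.split()})
index = {w: i for i, w in enumerate(vocab)}
counts = np.zeros((len(vocab), len(vocab)))

window = 2
for sent in corpus:
    words = sent.split()
    for i, w in enumerate(words):
        for j in range(max(0, i - window), min(len(words), i + window + 1)):
            if i != j:
                counts[index[w], index[words[j]]] += 1

def cosine(u, v):
    """Cosine similarity between two distributional vectors."""
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

# "cat" and "dog" keep similar company (both sit, both are chased/chasing),
# so their context vectors are closer to each other than "cat" is to "mat".
sim = cosine(counts[index["cat"]], counts[index["dog"]])
```

Note that a plain bag-of-words count like this is exactly what the talk criticizes: the window is symmetric, so word order leaves no trace in the vectors.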
In this talk, I will introduce a novel approach featuring both aspects by employing a tensor model that accounts for order-dependent word contexts and assigns to words characteristic matrices such that semantic composition can be realized in a natural way via matrix multiplication. I will discuss the structural and cognitive plausibility of the suggested model. Finally, I will present first experimental validations of the approach as well as discuss directions for future work.
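The compositional idea can be sketched as follows. This is a toy illustration under my own assumptions, not the speaker's implementation: each word is assigned a characteristic matrix (here random placeholders; in the actual model they would be learned from order-dependent contexts), and the meaning of a phrase is the ordered product of its word matrices. Because matrix multiplication is associative but not commutative, composition is well defined while word order still matters.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 3

# Hypothetical lexicon: each word gets a dim x dim matrix.
# Random placeholders stand in for matrices learned from corpus data.
lexicon = {w: rng.standard_normal((dim, dim)) for w in ["not", "very", "good"]}

def compose(phrase):
    """Meaning of a phrase = ordered product of its word matrices."""
    result = np.eye(dim)
    for word in phrase.split():
        result = result @ lexicon[word]
    return result

# Non-commutativity: reordering the words changes the representation.
a = compose("not very good")
b = compose("very not good")

# Associativity: any bracketing of the same sequence gives the same result.
bracketed = lexicon["not"] @ (lexicon["very"] @ lexicon["good"])
```

The design choice worth noting is that a single operation, matrix multiplication, serves as the uniform composition function, in contrast to vector addition or pointwise multiplication, which discard word order.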
Start: February 1, 2012, 2:00 p.m.
End: February 1, 2012, 3:00 p.m.
Building 11.40, Room 253