What do you know about an alligator when you know the company it keeps?


Katrin Erk

Abstract

Distributional models describe the meaning of a word in terms of its observed contexts, and they have been very successful in computational linguistics. They have also been suggested as a model of how humans acquire (partial) knowledge of word meanings. But this raises two questions: what, exactly, can distributional models learn, and how would distributional information interact with everything else that an agent knows?

For the first question, I build on recent work that indicates that distributional models can in fact distinguish to some extent between semantic relations, and argue that (the right kind of) distributional similarity indicates property overlap. For the second question, I suggest that if an agent does not know what an alligator is but knows that alligator is similar to crocodile, the agent can probabilistically infer properties of alligators from known properties of crocodiles. Distributional evidence is noisy and partial, so I adopt a probabilistic account of semantic knowledge that can learn from such data.
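The kind of inference sketched in the abstract can be illustrated in a few lines of code. This is only a toy sketch, not the paper's probabilistic model: the property probabilities, the similarity score, and the simple similarity-weighted transfer rule are all made-up assumptions for illustration.

```python
# Toy illustration: an agent knows properties of "crocodile" but not of
# "alligator", and knows only that the two words are distributionally similar.
# It transfers each known property to the unknown word, discounted by
# similarity. All numbers below are hypothetical.

known_properties = {
    # hypothetical P(property | crocodile)
    "is-animal": 0.99,
    "is-reptile": 0.95,
    "has-teeth": 0.90,
    "lives-in-water": 0.80,
}

similarity = 0.85  # hypothetical distributional sim(alligator, crocodile)

def infer_properties(neighbor_props, sim):
    """Transfer each property of the known neighbor to the unknown word,
    discounted linearly by distributional similarity (a crude stand-in for
    a full probabilistic account)."""
    return {prop: sim * p for prop, p in neighbor_props.items()}

alligator_props = infer_properties(known_properties, similarity)
# e.g. the inferred belief that an alligator is a reptile is
# 0.85 * 0.95, weaker than the direct knowledge about crocodiles.
```

Because distributional similarity is noisy and partial, the inferred beliefs stay weaker than the direct knowledge they were transferred from, which is exactly why a probabilistic treatment of such evidence is attractive.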

