Cornell Cognitive Studies Symposium

Statistical Learning across Cognition

Transforming Real-world Language into Semantic Representations

Curt Burgess
University of California, Riverside
curt@citrus.ucr.edu


Global co-occurrence memory models, such as the Hyperspace Analogue to Language (HAL) model, encode the symbols (words) in language input. In HAL, statistical learning involves encoding weighted word co-occurrences, gathered by a moving n-width window, into a memory matrix. This methodology captures a broader range of co-occurrence information than local co-occurrence approaches. The weighted co-occurrence patterns form the basis of high-dimensional word meaning vectors that have the hallmarks of distributed representations: graceful degradation, concepts formed from a large array of elements, and straightforward generalization. An advantage of these models is that their learning procedures scale up to real-world language problems. The HAL model has been used to investigate a wide range of cognitive phenomena (associative and semantic priming, semantic and grammatical categorization, connotative definitions, semantic judgments, parsing constraints, deep dyslexia, cerebral asymmetries, concept acquisition, aspects of aging and development, and decision making); some of these results will be presented. Recent work will also be presented showing that a global co-occurrence learning algorithm produces virtually the same output as a simple recurrent network (SRN), and that relatively few trials are required for learning. Co-occurrence models have become well known for using a standard Minkowski distance metric. Part of this presentation will discuss a range of memory metrics that have proven useful in theorizing about memory function and that may be candidates for a more complete model of memory.
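
As a concrete illustration of the learning step and the distance metric, the sketch below implements a HAL-style moving window in Python. The linear weighting scheme (window_size - distance + 1), the toy corpus, and all identifiers are illustrative assumptions for this abstract, not code from the published model.

from collections import defaultdict

def hal_counts(tokens, window_size=10):
    # counts[target][context] accumulates weight each time `context`
    # appears within `window_size` words BEFORE `target`; the weight
    # window_size - distance + 1 makes adjacent words count most
    # (an assumed, simplified version of HAL's inverse-distance ramp).
    counts = defaultdict(lambda: defaultdict(float))
    for i, target in enumerate(tokens):
        for d in range(1, window_size + 1):
            if i - d < 0:
                break
            counts[target][tokens[i - d]] += window_size - d + 1
    return counts

def meaning_vector(counts, word, vocab):
    # Row of the matrix: contexts that precede `word`.
    row = [counts[word][w] for w in vocab]
    # Column of the matrix: words that `word` itself precedes.
    col = [counts[w][word] for w in vocab]
    # Concatenating row and column yields one high-dimensional vector.
    return row + col

def minkowski(u, v, r=2.0):
    # Minkowski distance: r = 2 is Euclidean, r = 1 is city-block.
    return sum(abs(a - b) ** r for a, b in zip(u, v)) ** (1.0 / r)

# Toy demonstration (the actual model was trained on a large Usenet corpus).
tokens = "the cat sat on the mat the dog sat on the rug".split()
counts = hal_counts(tokens, window_size=4)
vocab = sorted(set(tokens))
cat = meaning_vector(counts, "cat", vocab)
dog = meaning_vector(counts, "dog", vocab)
print(minkowski(cat, dog))  # small distance: 'cat' and 'dog' share contexts

Varying r in the Minkowski metric (r = 1 for city-block, r = 2 for Euclidean) is one simple way to explore the range of memory metrics mentioned above.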
