Cornell Cognitive Studies Symposium
Statistical Learning across Cognition
Some Cross-modal Interactions Between Language and Vision
Previous evidence from eye-tracking has demonstrated that visual contexts associated with lexical items can immediately influence spoken word recognition. This influence of vision on language has been shown to be modulated by lexical frequency in artificial language tasks, and to be largely insensitive to separate language modes in bilinguals. In recent studies, this cross-modal interaction between language and vision appears to work in the other direction as well. Using a modified visual search task, we report evidence that the incremental nature of linguistic input can essentially convert a serial conjunction search into something approximating a pair of nested parallel feature searches. I will suggest that the incrementality of language comprehension allows its intermediate representations to be modulated by visual input, and that the automaticity of language allows those intermediate representations to influence real-time visual feature extraction.