The sources that guide attention are manifold and interact in complex ways. Internal goals, task rules, and salient external stimuli have been shown to be among the principal sources of attentional control. But what guides attention in complex, real-world environments?
Following Wertheimer’s Gestalt ideas, I will argue that a scene is more than the sum of its objects. That is, attention during scene viewing is mainly controlled by generic knowledge regarding the meaningful composition of the objects that make up a scene. Unlike arbitrary target objects placed in random arrays of distractors, objects in naturalistic scenes are arranged in a highly rule-governed manner. Thus, scene priors, i.e., expectations regarding which objects (scene semantics) belong where (scene syntax) within a scene, strongly guide attention. Violating such semantic and syntactic scene priors elicits differential ERP responses similar to those observed in sentence processing, which might suggest some commonality in the mechanisms for processing meaning and structure across a wide variety of cognitive tasks.
In this talk, I will highlight recent projects from my lab in which we have tried to shed more light on the influence of scene grammar on visual search, object perception, and memory; on its developmental trajectory; and on its role in the ad hoc creation of scenes in virtual reality scenarios.