News Release

Researchers increase understanding of coarse-to-fine human visual perception

Peer-Reviewed Publication

Chinese Academy of Sciences Headquarters

How the Primate Brain Preserves Visual Acuity and Processes Local-Global Features

image: An abstract artistic creation depicting a central question in vision: how the primate brain preserves visual acuity and processes local-global features along the object-processing hierarchy.

Credit: Designed and photographed by Drs. WANG Wei and LU Yiliang

A long-standing paradox in vision is that the complexity of neural encoding increases along the visual hierarchy even as visual resolution dramatically decreases. Put differently, how do people recognize the face of a child while simultaneously resolving individual eyelashes?

The idea that sensory transformation discards low-level detail to yield invariant classification is central to many models of brain function. Some complex models then invoke large re-entrant loops to resolve this fundamental paradox of increasing complexity vis-à-vis decreasing resolution.

Primates can identify objects in the central 10° of the visual field within 150 ms, suggesting an initial fast cascade of largely feedforward processing. How they can effortlessly perceive both global and local features of objects in such a short time and in such detail remains a mystery. A central question in vision research is how the cortex integrates local visual cues to form global representations along the object-processing hierarchy.

In a recent study published in Neuron, Dr. WANG Wei's lab at the Institute of Neuroscience of the Chinese Academy of Sciences revealed unexpected neural clustering that preserves visual acuity from V1 through V4, enabling the spatiotemporal separation of local- and global-feature processing along the hierarchy.

The work from Dr. WANG's group aims to evaluate whether low-level information such as spatial resolution is preserved along the visual hierarchy and, if so, what its functional implications are. This question is fundamental to understanding how the brain performs sensory transformation.

Dr. WANG's lab studied the simultaneous transformation of spatial resolution (i.e., visual acuity) across macaque parafoveal V1, V2, and V4. Spatial resolution is often measured in terms of spatial frequency (SF) discrimination. The researchers focused in particular on spatial analysis in V4, which links the analysis of local features by V1 and V2 with the global object representation provided by inferotemporal cortex (IT).
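As a rough illustration of how a neuron's preferred SF is typically estimated from recorded responses, here is a minimal sketch using hypothetical firing rates and a log-Gaussian tuning model; it is not the authors' actual analysis code.

```python
# Minimal sketch (hypothetical data, not the study's analysis code): estimating a
# neuron's optimal spatial frequency by fitting a log-Gaussian tuning curve to its
# mean responses to gratings of different spatial frequencies.
import numpy as np
from scipy.optimize import curve_fit

def log_gaussian(sf, peak, optimal_sf, bandwidth, baseline):
    """Response peaks at optimal_sf (cycles/deg); bandwidth is in octaves."""
    return baseline + peak * np.exp(-np.log2(sf / optimal_sf) ** 2 / (2 * bandwidth ** 2))

# Hypothetical mean firing rates (spikes/s) at seven tested spatial frequencies
sf_tested = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 12.0, 16.0])     # cycles/deg
responses = np.array([6.0, 9.0, 15.0, 31.0, 55.0, 62.0, 40.0])  # spikes/s

# Fit with loose bounds so the optimal SF and bandwidth stay positive
p0 = [50.0, 8.0, 1.0, 5.0]
bounds = ([0.0, 0.1, 0.1, 0.0], [200.0, 32.0, 4.0, 50.0])
params, _ = curve_fit(log_gaussian, sf_tested, responses, p0=p0, bounds=bounds)

print(f"Estimated optimal SF: {params[1]:.1f} cycles/deg")
```

In this toy example the fitted optimum lands near the high end of the tested range, the kind of tuning the study reports for its high-SF clusters.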

Surprisingly, they discovered clustered "islands" of V4 neurons selective for high SFs up to 12 cycles/°, far exceeding the average optimal SFs of V1 and V2 neurons at similar retinal eccentricities. These neural clusters violate the inverse relationship between visual acuity and retinal eccentricity.
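To put 12 cycles/° in perspective, a back-of-the-envelope calculation based only on that figure:

\[
\text{grating period} = \frac{1}{12\ \text{cycles/}^{\circ}} \approx 0.083^{\circ} \approx 5\ \text{arcmin},
\]

so each individual light or dark bar of such a grating spans roughly 2.5 arcmin of visual angle.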

They proceeded to show that higher-acuity clusters represent local features, whereas lower-acuity clusters represent global features of the same stimuli. Furthermore, the clustered neurons with high-SF selectivity were found to respond 10 ms later than those in low-SF domains, providing direct neural evidence for the coarse-to-fine nature of human perception at intermediate levels of the visual processing hierarchy.
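For context on how such a latency difference might be quantified, below is a minimal sketch using simulated firing rates and a simple threshold-crossing rule; the 10 ms gap is built into the toy data, and this is not the authors' method.

```python
# Illustrative sketch (simulated data, not the study's method): estimating response
# onset latency from a peristimulus time histogram (PSTH) as the first post-stimulus
# time at which the firing rate exceeds the pre-stimulus baseline by 3 standard deviations.
import numpy as np

def onset_latency(psth, time_ms, baseline_window=(-100, 0), threshold_sd=3.0):
    """Return the first post-stimulus time (ms) where the rate crosses threshold."""
    baseline = psth[(time_ms >= baseline_window[0]) & (time_ms < baseline_window[1])]
    thresh = baseline.mean() + threshold_sd * baseline.std()
    above = np.where((time_ms >= 0) & (psth > thresh))[0]
    return time_ms[above[0]] if above.size else None

# Simulated PSTHs sampled every 1 ms from -100 to +200 ms around stimulus onset;
# the high-SF response is made to start 10 ms later than the low-SF response.
time_ms = np.arange(-100, 200)
rng = np.random.default_rng(0)
low_sf_psth  = 5 + rng.normal(0, 1, time_ms.size) + 40 * (time_ms >= 50)
high_sf_psth = 5 + rng.normal(0, 1, time_ms.size) + 40 * (time_ms >= 60)

print("low-SF domain latency :", onset_latency(low_sf_psth, time_ms), "ms")
print("high-SF domain latency:", onset_latency(high_sf_psth, time_ms), "ms")
```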

The study demonstrated that neurons in V4 (and most likely also in IT) are not limited to low visual resolution. The research will prompt further studies to probe how this preservation of low-level information supports higher-level vision.

The study showed, for the first time, an unexpected compartmentalization of area V4 into SF-selective functional domains that extend to high visual acuity. This preservation of high acuity into later stages of the visual hierarchy, where more complex visual cognitive behavior occurs, may begin to resolve the long-standing paradox of fine visual discrimination in visual perception.

The data from Dr. WANG's lab prompt a conceptual reevaluation of the processing models that currently dominate systems neuroscience and artificial neural networks such as deep neural networks (DNNs).

###

