News Release

Study reveals how the brain categorizes thousands of objects and actions

Peer-Reviewed Publication

Cell Press

Graphical Visualization of the Group Semantic Space

Image: (A) Coefficients of all 1,705 categories in the first group PC, organized according to the graphical structure of WordNet. Links indicate "is a" relationships (e.g., an athlete is a person); some relationships used in the model have been omitted for clarity. Each marker represents a single noun (circle) or verb (square). Red markers indicate positive coefficients and blue markers indicate negative coefficients. The area of each marker indicates the magnitude of the coefficient. This PC distinguishes between categories with high stimulus energy (e.g., moving objects like "person" and "vehicle") and those with low stimulus energy (e.g., stationary objects like "sky" and "city"). (B) The three-dimensional RGB colormap used to visualize PCs 2–4. The category coefficient in the second PC determined the value of the red channel, the third PC the green channel, and the fourth PC the blue channel. Under this scheme, categories that are represented similarly in the brain are assigned similar colors, and categories with zero coefficients appear neutral gray. (C) Coefficients of all 1,705 categories in group PCs 2–4, organized according to the WordNet graph. The color of each marker is determined by the RGB colormap in (B). Marker sizes reflect the magnitude of the three-dimensional coefficient vector for each category. This graph shows that categories thought to be semantically related (e.g., "athletes" and "walking") are represented similarly in the brain.

Credit: Huth et al., Neuron, Figure 4
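The colormap scheme described in panel (B) can be illustrated with a minimal Python sketch: each category's coefficients on PCs 2–4 drive the red, green, and blue channels, with zero coefficients mapping to neutral gray. The function names, the linear scaling, and the demo coefficients below are illustrative assumptions, not the authors' actual plotting code.

```python
import numpy as np

def pc_coefficients_to_rgb(coeffs, scale=None):
    """Map each category's coefficients on PCs 2-4 to an RGB color.

    coeffs : array of shape (n_categories, 3), columns are PCs 2, 3, 4.
    Zero coefficients map to neutral gray (0.5, 0.5, 0.5); positive
    coefficients push a channel toward 1, negative toward 0.
    """
    coeffs = np.asarray(coeffs, dtype=float)
    if scale is None:
        # Normalize by the largest magnitude so colors span the full range.
        scale = np.abs(coeffs).max() or 1.0
    rgb = 0.5 + 0.5 * coeffs / scale  # center zero coefficients at gray
    return np.clip(rgb, 0.0, 1.0)

def marker_sizes(coeffs, base=40.0):
    # Marker size reflects the length of the 3-D coefficient vector.
    return base * np.linalg.norm(np.asarray(coeffs, dtype=float), axis=1)

# Example: three hypothetical categories with made-up coefficients.
demo = np.array([[ 0.8, -0.2, 0.1],   # strongly positive on PC 2 -> reddish
                 [ 0.0,  0.0, 0.0],   # zero on all PCs -> neutral gray
                 [-0.3,  0.6, 0.5]])  # mixed -> greenish-blue
print(pc_coefficients_to_rgb(demo))
print(marker_sizes(demo))
```

Applied to real PC coefficients, this reproduces the qualitative scheme of panels (B) and (C): categories with similar coefficient vectors receive similar colors, and categories with small coefficients fade toward gray.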

Humans perceive numerous categories of objects and actions, but where are these categories represented spatially in the brain? Researchers reporting in the December 20 issue of the Cell Press journal Neuron took on the task of determining how the brain maps more than a thousand object and action categories as subjects watched natural movie clips. The results demonstrate that the brain represents this diversity of categories efficiently, in a compact space. Previous work had identified distinct brain areas devoted to particular categories, but only for some types of stimuli; rather than a separate area for each category, the researchers found that brain activity is organized by the relationships between categories.

"Humans can recognize thousands of categories. Given the limited size of the human brain, it seems unreasonable to expect that every category is represented in a distinct brain area," says first author Alex Huth, a graduate student working in Dr. Jack Gallant's laboratory at the University of California, Berkeley.

The authors proposed that a more efficient way for the brain to represent object and action categories would be to organize them into a continuous space that reflects the similarity between categories.

To test this hypothesis, they used blood oxygen level-dependent functional magnetic resonance imaging (BOLD fMRI) to measure brain activity evoked by natural movies in five people. They then mapped how 1,705 distinct object and action categories are represented across the cortical surface. The results show that categories are organized as smooth gradients covering much of the cortex, in visual as well as nonvisual areas, with similar categories located next to each other. Notably, this organization was shared across the individuals imaged.
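The release does not detail the analysis pipeline, but the general recipe it implies (fit a model that predicts each voxel's response from the categories present in the movie, then find a low-dimensional space capturing how those category weights vary across voxels) can be sketched as follows. The ridge regression, the array shapes, and the random placeholder data are all assumptions for illustration, not the study's exact procedure.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.decomposition import PCA

# Illustrative shapes: T time points, C categories, V voxels.
rng = np.random.default_rng(0)
T, C, V = 3600, 1705, 5000               # hypothetical sizes
X = rng.random((T, C)) < 0.01            # binary indicators: category present on screen
Y = rng.standard_normal((T, V))          # BOLD responses (placeholder data)

# Step 1: regularized regression gives each voxel a weight per category,
# i.e., how strongly that category drives the voxel's response.
model = Ridge(alpha=100.0).fit(X.astype(float), Y)
weights = model.coef_                    # shape (V, C): one category profile per voxel

# Step 2: PCA across voxels' category profiles recovers a compact,
# continuous "semantic space"; related categories load similarly.
pca = PCA(n_components=4)
pca.fit(weights)
pc_coeffs = pca.components_.T            # (C, 4): each category's position in the space

# Each voxel can then be projected into the space and visualized on cortex.
voxel_coords = pca.transform(weights)    # (V, 4)
```

A real fMRI pipeline would also model the delayed hemodynamic response and validate the fit on held-out data; the sketch omits those steps for brevity.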

"Discovering the feature space that the brain uses to represent information helps us to recover functional maps across the cortical surface. The brain probably uses similar mechanisms to map other kinds of information across the cortical surface, so our approach should be widely applicable to other areas of cognitive neuroscience," says Dr. Gallant.

###

Huth et al.: "A continuous semantic space describes the representation of thousands of object and action categories across the human brain."

