"When we hear things, we naturally process them in a series," says Daphne Bavelier, associate professor of brain and cognitive sciences at the University of Rochester. "When we hear music, for instance, it comes to us second by second, so the part of our brains that processes auditory information has evolved to absorb information in sequence. This means hearing a spoken list, such as numbers in an ATM code, corresponds more closely with what the auditory brain does naturally." Conversely, visual information comes to us simultaneously: we might see a sunset, clouds, and a skyline all at the same time. While the visual processes in the brain can still remember ordered lists, they tend to be less effective at it, recalling an average of five numbers instead of seven.
In the 1960s, cognitive scientists showed that for speakers of nearly all languages, list retention peaked at around seven items, plus or minus two. As more languages were tested, a few exceptions emerged: Chinese speakers can hold about nine items, while Welsh speakers hold closer to five. In every case, though, the variation was predicted by how long the words take to utter in each language. The Chinese number words used in the tests happen to be very short and simple to pronounce, whereas the Welsh ones are longer and more complex. In this context, deaf users of American Sign Language, who had been known to recall only about five items, were thought to do so because signs take longer to produce.
Bavelier's graduate student, Mrim Boutla, who was investigating visual memory, wanted to know more about American Sign Language and decided to test this view. The team devised studies that put sign language tests on an equal footing with tests designed for hearing subjects. To their surprise, they found that even when signs were faster to produce than spoken words, signers recalled only five items. More surprising still, hearing individuals fluent in American Sign Language, such as people who had grown up with deaf siblings or parents, scored differently when asked to recall spoken lists in order than when they recalled signed lists. The discrepancy broke down as expected: seven heard items remembered, five signed items remembered. It was clear that the standard ordered-item tests were not accurately evaluating the cognitive ability of deaf individuals relative to those who could hear.
Until then, the predominant idea was that the magic number seven was a good measure of overall cognitive capacity, tapping the brain's centers for memory and language. No one had considered that a test built for spoken language might not work well for a signed one; researchers had always assumed the tests evaluated the same cognitive faculties of the brain, whether the language was spoken or signed.
To evaluate the memory test itself, Bavelier designed an experiment that probed the brain's memory centers for language directly, without favoring auditory or visual processing. Instead of asking her subjects to recall the order of a list, a task at which the auditory brain excels, Bavelier devised a test that required recall, but not in the temporal order of the items. Both hearing and deaf subjects were given a list of words like "boat" and "table" and asked to use those words in well-formed sentences, such as "The boat is on the water. The table is square." The order doesn't matter. On this test, speakers and signers performed equally well, showing that such a test is likely a much better evaluator of cognitive ability than the old ordered-item test.
"It's a better test because it does not require temporal or spatial information to be maintained, but requires people to manipulate language information on the fly, which is really the hallmark of what language use is about," says Bavelier. "Unfortunately, right now the old ordered-list test is still the test of choice in most educational and clinical settings."
Co-authors of the research include Elissa Newport, chair of the Department of Brain and Cognitive Sciences at the University of Rochester, and Ted Supalla, associate professor of brain and cognitive sciences. This research was funded by the National Institutes of Health and the James S. McDonnell Foundation.