PUBLIC RELEASE DATE:
23-Jul-2012

Contact: Zheng Wang
Wang.1243@osu.edu
614-247-8031
Ohio State University
@osuresearch

Study shows why some types of multitasking are more dangerous than others

COLUMBUS, Ohio - In a new study that has implications for distracted drivers, researchers found that people are better at juggling some types of multitasking than they are at others.

Trying to do two visual tasks at once hurt performance in both tasks significantly more than combining a visual and an audio task, the research found.

Alarmingly, though, people who tried to do two visual tasks at the same time rated their performance as better than did those who combined a visual and an audio task - even though their actual performance was worse.

"Many people have this overconfidence in how well they can multitask, and our study shows that this particularly is the case when they combine two visual tasks," said Zheng Wang, lead author of the study and assistant professor of communication at Ohio State University.

"People's perception about how well they're doing doesn't match up with how they actually perform."

Eye-tracking technology used in the study showed that people's gaze moved around much more when they had two visual tasks compared to a visual and an audio task, and they spent much less time fixated on any one task. That suggests distracted visual attention, Wang said.

People in the study who had two visual tasks had to complete a pattern-matching puzzle on a computer screen while giving walking directions to another person using instant messaging (IM) software.

Those who combined a visual and an audio task tried to complete the same pattern-matching task on the screen while giving voice directions using audio chat.

The two multitasking scenarios used in this study can be compared to situations drivers may face, Wang said.

People who try to text while they are driving are combining two mostly visual tasks, she said. People who talk on a phone while driving are combining a visual and an audio task.

"They're both dangerous, but as both our behavioral performance data and eye-tracking data suggest, texting is more dangerous to do while driving than talking on a phone, which is not a surprise," Wang said.

"But what is surprising is that our results also suggest that people may perceive that texting is not more dangerous - they may think they can do a good job at two visual tasks at one time."

The study appears in a recent issue of the journal Computers in Human Behavior.

The study involved 32 college students who sat at computer screens. All of the students completed a matching task in which they saw two grids on the screen, each with nine cells containing random letters or numbers. They had to determine, as quickly as possible, whether the two grids were a "match" or "mismatch" by clicking a button on the screen. They were told to complete as many trials as possible within two minutes.

After testing the participants on the matching task with no distractions, the researchers had the students repeat the matching task while giving walking directions to a fellow college student, "Jennifer," who they were told needed to get to an important job interview. Participants had to help "Jennifer" get to her interview within six minutes. In fact, "Jennifer" was a trained confederate experimenter. She had been trained to interact with participants in a realistic but scripted way, to keep the direction task as similar as possible across all participants.

Half of the participants used instant messaging software (Google Chat) to type directions, while the other half used voice chat (Google Talk with headphones and an attached microphone) to help "Jennifer" reach her destination.

Results showed that multitasking, of any kind, seriously hurt performance.

Participants who gave audio directions showed a 30 percent drop in visual pattern-matching performance. But those who used instant messaging did even worse - they had a 50 percent drop in pattern-matching performance.

In addition, those who gave audio directions completed more steps in the directions task than did those who used IM.

But when participants were asked to rate how well they did on their tasks, those who used IM gave themselves higher ratings than did those who used audio chat.

"It may be that those using IM felt more in control because they could respond when they wanted without being hurried by a voice in their ears," Wang said.

"Also, processing several streams of information in the visual channel may give people the illusion of efficiency. They may perceive visual tasks as relatively effortless, which may explain the tendency to combine tasks like driving and texting."

Eye-tracking results from the study showed that people paid much less attention to the matching task when they were multitasking, Wang said. As expected, the results were worse for those who used IM than for those who used voice chat.

Overall, the percentage of eye fixations on the matching-task grids declined from 76 percent when that was the participants' only task to 33 percent during multitasking.

Fixations on the grid task decreased by 53 percent for those using IM, compared with a smaller 35 percent drop for those who used voice chat.

"When people are using IM, their visual attention is split much more than when they use voice chat," she said.

These results suggest we need to teach media and multitasking literacy to young people before they start driving, Wang said.

"Our results suggest many people may believe they can effectively text and drive at the same time, and we need to make sure young people know that is not true."

In addition, the findings show that technology companies need to be aware of how people respond to multitasking when they are designing products.

For example, these results suggest GPS voice guidance should be preferred over image guidance because people are more effective when they combine visual with aural tasks compared to two visual tasks.

"We need to design media environments that emphasize processing efficiency and activity safety. We can take advantage of the fact that we do better when we can use visual and audio components rather than two visual components," Wang said.

###

The work was supported by a grant from the National Science Foundation.

Co-authors of the study were Prabu David of Washington State University, Jatin Srivastava of Ohio University and Stacie Powers, Christine Brady, Jonathan D'Angelo and Jennifer Moreland, all of Ohio State.

Contact: Zheng Wang, (614) 247-8031; Wang.1243@osu.edu

Written by Jeff Grabmeier, (614) 292-8457; Grabmeier.1@osu.edu


