Public release date: 6-Nov-2007

Contact: Jason Bardi
jbardi@aip.org
301-209-3091
American Institute of Physics

Highlights of upcoming acoustics meeting in New Orleans

Whale songs, hurricanes, sound medicine, dams, drums and all that jazz

Nov. 2, 2007 -- A gumbo of great science will be served at the 154th meeting of the Acoustical Society of America (ASA) from Tuesday, Nov. 27, through Friday, Nov. 30. ASA meetings feature papers that are among the most accessible and diverse of any scientific gathering. Scientists and engineers will convene to present some 600 talks and posters on topics in acoustics related to fields as diverse as psychology, physics, sound engineering, marine biology, medicine, meteorology, and music. Some specific highlights:

1) ACOUSTICS AND BRAIN CANCER

Doctors often treat brain cancers with a combination of radiation therapy and surgery—basically removing part of the skull and excising the tumor. When a tumor is surgically removed, doctors will often implant a thin, drug-laden wafer before replacing the skull; the wafer diffuses its chemotherapy agent over time to help ensure that no remaining tumor cells survive.

This approach is too often unsuccessful, and brain cancers like neuroblastomas and neurofibromatosis remain the leading cause of cancer-related death in people under the age of 35. Part of the problem may be that cancerous cells migrate beyond the range of the slowly diffusing drugs.

Now George Lewis Jr. (george@cornellbme.com) and his colleagues at Cornell University are testing whether acoustic pulses can help brain tissue absorb chemotherapy drugs faster. Using various pulse sequences, they showed that focused ultrasound could enhance the uptake of cancer drugs in brain-like tissues. They believe that focused ultrasound agitates the tissue matrix, enhancing its permeability to the drug, and also mechanically pushes the drug, via radiation forces, toward where the acoustic waves are focused. The drugs can then spread farther and faster into the tissue than by unassisted diffusion.
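The underlying intuition can be sketched with a toy one-dimensional diffusion model. (Illustrative only: the diffusion coefficients, grid, and "4x boost" below are invented for the sketch, not taken from the Cornell experiments.) Raising the effective diffusivity, as the agitated tissue matrix is thought to do, lets more drug reach a given depth in the same time.

```python
import numpy as np

def diffuse_1d(c, d_coeff, dx, dt, steps):
    """Explicit finite-difference update for 1-D diffusion of a drug profile."""
    c = c.copy()
    for _ in range(steps):
        c[1:-1] += d_coeff * dt / dx**2 * (c[2:] - 2 * c[1:-1] + c[:-2])
    return c

# Initial condition: drug concentrated at a wafer on the left boundary.
n = 200
c0 = np.zeros(n)
c0[:5] = 1.0
dx, dt = 0.1, 0.01

baseline = diffuse_1d(c0, d_coeff=0.1, dx=dx, dt=dt, steps=2000)
enhanced = diffuse_1d(c0, d_coeff=0.4, dx=dx, dt=dt, steps=2000)  # hypothetical 4x boost

# The enhanced profile penetrates farther: more drug beyond a given depth.
depth = 50
print(baseline[depth:].sum(), enhanced[depth:].sum())
```

With four times the diffusivity, substantially more of the drug lies beyond the chosen depth after the same elapsed time, which is the qualitative effect the researchers report.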

The talk “Acoustic targeted drug delivery in neurological tissue” (3aBB5) will be at 9:15 a.m. on Thursday, Nov. 29.

2) ACOUSTIC “MINISCALPELS” FOR NON-INVASIVE SURGERY

The idea of making surgical incisions inside the body without ever opening or puncturing the skin might sound like something out of a science fiction novel, but the technology may be just around the corner. High-intensity ultrasound may someday allow acoustic waves to be focused into the world’s tiniest scalpels.

Zhen Xu (zhenx@eecs.umich.edu) and colleagues at the University of Michigan are using high-intensity ultrasound pulses to test whether they can deliver energy to tissues deep within the body without heating them. The energetic waves cause microbubbles to form at the focus. These bubbles expand and collapse forcefully, and this energetic bubble activity can mechanically fragment tissue—presumably because cell membranes cannot withstand the pressures the bubbles produce.

Currently the acoustical beams can be focused into a cluster of “miniscalpels” about the size of an individual cell. The action of these beams can be easily controlled and maneuvered electronically using a computer mouse or joystick. Moreover, the surgery can be precisely targeted and monitored in real time because the microbubbles themselves are easily spotted via conventional ultrasound or MRI.

The talk, “Image-guided cavitational ultrasound therapy (histotripsy)” (4pBB4), will be at 2:30 p.m. on Friday, Nov. 30.

3) DAM TOMOGRAPHY

A new way to look at dams with acoustic tomography -- a large-scale equivalent of the acoustic scans done on human bodies -- will help catch failures before they happen. Nearly 11,000 earth dams have been built in the United States in the past half century. Dam failures have many causes, including erosion, excessive water seepage, and mudslides.

Seismic imaging involves sending sound waves into the ground and then detecting them at different locations along the surface of the ground. An image of the problem area of the dam, below ground and away from visual inspection, is constructed using the time it takes for the sound waves to travel different distances through the dam. Craig Hickey of the National Center for Physical Acoustics at the University of Mississippi (chickey@olemiss.edu) will report on the latest advances in this approach by looking at studies performed at the Drewery Lake Dam in Mississippi.
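The travel-time principle behind this kind of imaging can be illustrated with a toy model. (All numbers and the simplified straight-ray geometry are hypothetical, not from the Drewery Lake study; real refraction tomography inverts many crossing ray paths.) Rays crossing a slow, water-weakened zone arrive late, and the pattern of delays locates the anomaly.

```python
import numpy as np

# Toy dam cross-section: a row of cells, each with a seismic slowness (1/velocity).
# A wet, weakened zone is slower, so rays crossing it arrive late.
n_cells = 10
cell_len = 1.0
slowness = np.full(n_cells, 1.0 / 2000.0)   # healthy soil: ~2000 m/s
slowness[6] = 1.0 / 800.0                   # hypothetical seepage zone: slower

# Each "ray" here runs from the surface down to one cell boundary and back,
# so ray i samples cells 0..i (a crude stand-in for refraction geometry).
def travel_times(s):
    return np.array([2 * cell_len * s[: i + 1].sum() for i in range(n_cells)])

observed = travel_times(slowness)
expected = travel_times(np.full(n_cells, 1.0 / 2000.0))

# The first ray whose arrival is anomalously late reveals the anomaly's depth.
delay = observed - expected
anomaly_cell = int(np.argmax(delay > 1e-6))
print(anomaly_cell)  # cell 6
```

The same comparison of observed versus expected travel times, carried out over many source-receiver pairs, is what lets a full tomographic inversion build an image of the dam's interior.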

The talk, “Seismic Refraction Tomography of a Small Earth Dam” (4aPA10) will be at 10:15 a.m. on Friday, Nov. 30.

4) PUBLIC LECTURE: THE ART AND SCIENCE OF JAZZ

One of the most popular sessions at past ASA meetings has been the concert/talk—a brief public lecture introducing the theory of sound and music, followed by a performance by talented musicians who put it all together. This year’s meeting in the uniquely musical city of New Orleans provides a great opportunity to take advantage of the city’s rich musical history and culture.

Uwe J. Hansen of Indiana State University (uhansen@isugw.indstate.edu) will analyze the harmonic overtones and sound qualities of percussion, string, piano, and woodwind or brass instruments. After each analysis, the Marlon Jordan Quartet (http://www.marlonjordan.com), a local jazz combo, will play a number featuring that instrument. The musical performances will comprise the major portion of the session.

The talk and concert, “Musical Acoustics: Science and Performance” (3pMU1), will be at 7:00 p.m. on Thursday, Nov. 29, in the Gallery of the Sheraton New Orleans Hotel at 500 Canal Street. The session is free and open to the public.

5) INVENTOR OF DIGITAL MUSIC HONORED

Max Mathews rocked the world with his digital music. No, really. In 1968, Mathews designed a program that would process digital code into acoustic waves of varying frequencies and amplitudes, the components of pitch and loudness. (The building blocks of that program, called unit generators, will be explained by Julius O. Smith III, Stanford University, jos@ccrma.stanford.edu, session 4pMU3.) Another talk will describe the increasingly complex computer music technologies Mathews built from his initial program, such as the GROOVE system for live computer music performance (F. Richard Moore, University of California, San Diego, frm@ucsd.edu, session 4pMU2).

Mathews’ most sophisticated instrument, dubbed the Radio Baton (John Chowning and Maureen Chowning, Stanford University, jc@ccrma.stanford.edu, 4pMU7), uses the positions of two wands over a flat receiving surface to generate musical tones. The talk will include samples of music generated by that instrument. The technology moves into the future with music-generating software on the laptops of the One Laptop Per Child program (Barry Vercoe, MIT Media Laboratory, bv@media.mit.edu, session 4pMU4) and with distributed musical instruments (Chris Chafe, Stanford University, cc@ccrma.stanford.edu, session 4pMU5). The session will conclude with a computer music concert.

The session "Musical Acoustics and Speech Communication: Session in Honor of Max Mathews" is slated for 1:30-5:30 p.m. on Friday, Nov. 30.

6) GET RID OF THE EDGE RING WITH HOLOGRAMS

Drummers are often beset by a high-pitched ring coming from their snare drums. From swathing the edges in duct tape to installing felt and steel muffles, musicians have tried many ways to get rid of the mosquito-y sound. Now acousticians have found exactly where that sound comes from. They took three-dimensional holographic images of the vibrating drum head, analyzed them by computer, and translated them into video images. The images show that the offending pitch stands out because it is just a little bit more than an octave higher than the drum's fundamental sound; that "little bit" strikes the ear differently from the rest of the sound. In finding the source of the sound, acousticians can also offer ways to eliminate it – ones that are probably less sticky than duct tape. (Barry Larkin, Iowa State University, blarkin@iastate.edu).
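That "little bit more than an octave" can be checked against a textbook idealization: for an ideal circular membrane, mode frequencies scale with the zeros of Bessel functions. (This is a standard physics approximation, not the researchers' holographic analysis; a real snare head with air loading and snare wires behaves somewhat differently.)

```python
from scipy.special import jn_zeros

# For an ideal circular membrane, mode frequencies scale with the zeros of
# the Bessel functions J_m. Ratios to the fundamental (first zero of J_0)
# show which overtones sit near musical intervals.
fundamental = jn_zeros(0, 1)[0]          # j_{0,1} ≈ 2.405

ratios = {}
for m in range(4):
    for k, z in enumerate(jn_zeros(m, 2), start=1):
        ratios[(m, k)] = z / fundamental

# The (2,1) mode lies just above an octave (ratio 2.0) over the fundamental --
# one plausible candidate for a ring that "strikes the ear differently".
print(round(ratios[(2, 1)], 3))  # ≈ 2.136
```

An exact octave (ratio 2.0) would blend consonantly with the fundamental; a ratio slightly above 2 does not, which is consistent with why such an overtone stands out to the ear.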

The talk, "Vibration modes of the snare drum batter head" (4aMU11) will be at 10:45 a.m. on Friday, Nov. 30.

7) CAR STEREO-TYPES

New tests of musical acoustics within automobiles have uncovered an unexpected difference in how young men and young women rate sound systems. In the German studies, subjects listened to various kinds of music and then, using a personal digital assistant (PDA, a kind of handheld computer), rated the sound quality in acoustical terms such as dull/bright, pure/impure, natural/unnatural, and bad/good. Hugo Fastl (fastl@mmk.ei.tum.de), head of the acoustics group of the Institute for Human-Machine Communication at the Technische Universitaet Muenchen, reports that at mid and upper frequency ranges, women’s and men’s responses were similar, but at lower frequencies women were more sensitive to (and negative about) booming sounds.

The talk, “Psychoacoustic evaluation of music reproduction in passenger cars” (1pAA1), will be at 1:05 p.m. on Friday, Nov. 30.

8) TISSUE STIFFNESS AS A MEASURE OF HEALTH

Monitoring a tissue’s material properties may not be as obvious a gauge of its health as looking at its biological or chemical properties, but changes to these properties can be a good indicator of disease. Areas of stiffness in a tissue, for instance, are often a good warning sign of cancer—the basic premise behind breast self-examination. Likewise, when cancerous tumors form on the liver or another of the body’s organs, they are often stiffer than the surrounding tissue because more blood vessels grow to support the tumors. The problem is, how can you measure stiffness in tissues deep within the body? There is no such thing as a liver self-exam.

Matthew Urban (Urban.Matthew@mayo.edu) and his colleagues at the Mayo Clinic College of Medicine are designing ways to measure the stiffness of tissues as a non-invasive diagnostic tool. Urban will present his latest experiments, in which he and his colleagues used focused ultrasound waves to deliver tiny vibrations to a steel sphere encased in gelatin—a model of a tissue with a stiff lesion. They measured the sphere's response to acoustic waves of multiple frequencies, and from this frequency response they can determine the stiffness of the tissue-mimicking material. The method also points to new ways of non-invasively inducing vibration to assess tissue stiffness without the steel sphere.

Moreover, they were able to deliver the energy to the sphere without heating the surrounding gelatin. This is one of the challenges of using highly focused ultrasound, because acoustic energy absorbed by nearby tissues turns into heat.
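A highly simplified version of the inference treats the embedded sphere as a mass on a spring whose stiffness reflects the surrounding material. (The mass, stiffness, and damping values below are invented for illustration; the actual Mayo analysis uses a full elastic model of the sphere and medium.)

```python
import numpy as np

# Toy model: sweep the drive frequency of the radiation force, locate the
# resonance peak of a mass-spring-damper, and back out the stiffness.
m = 0.002                      # sphere mass, kg (assumed)
k_true = 5000.0                # effective stiffness, N/m (assumed)
c = 0.05                       # damping, kg/s (assumed)

freqs = np.linspace(50.0, 500.0, 2000)          # drive frequencies, Hz
w = 2 * np.pi * freqs
# Steady-state amplitude of a driven, damped harmonic oscillator.
amplitude = 1.0 / np.sqrt((k_true - m * w**2) ** 2 + (c * w) ** 2)

f_peak = freqs[np.argmax(amplitude)]            # "measured" resonance
k_est = m * (2 * np.pi * f_peak) ** 2           # k = m * omega_0^2
print(round(k_est))
```

Because a stiffer surrounding material pushes the resonance to a higher frequency, locating the peak of the frequency response recovers the stiffness, which is the essence of the measurement.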

The talk, “Modulated ultrasound and multifrequency radiation force” (3pBB1) will be at 1:15 p.m. on Thursday, Nov. 29.

9) DIGITALLY ENHANCED PRACTICE

Musical practice rooms, even when they achieve the necessary sound isolation, are often acoustically poor on the inside. Specifically, practice rooms tend to be dry and absorptive, which means musicians often overplay in an attempt to hear their “true” sound. A musician would naturally prefer to practice in an environment that matches performance conditions as closely as possible. Digital signal processing (DSP), by which sound waves are detected in analog form, digitized, and processed to add or remove certain sounds, might solve this dilemma. At first, in the late 1980s and early 1990s, DSP was used primarily in recording applications and in enhancing the acoustics of historic theaters. With DSP, the effect of performing in a large room can now be simulated even in a small practice room. Ron Freiheit of the Wenger Corporation in Minnesota (ron.freiheit@wengercorp.com) will summarize the latest innovations in DSP systems for practice rooms.
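One core DSP operation behind such systems is easy to sketch: convolve the "dry" signal captured in the practice room with the impulse response of a larger hall. (The synthetic decaying-noise impulse response below is a common stand-in for a measured one; a real product like Wenger's would use measured responses and real-time processing.)

```python
import numpy as np

# Simulate "performing in a hall": convolve a dry signal with a room
# impulse response. Decaying noise approximates diffuse reverberation.
rate = 8000                                     # sample rate, Hz
t = np.arange(rate) / rate

dry = np.sin(2 * np.pi * 440.0 * t)             # one second of A440
rng = np.random.default_rng(0)
n_ir = rate // 2                                # half-second reverb tail
impulse = rng.standard_normal(n_ir) * np.exp(-6.0 * np.arange(n_ir) / rate)

wet = np.convolve(dry, impulse)                 # the note "played" in the hall
print(len(wet))                                 # len(dry) + len(impulse) - 1
```

Real-time systems perform this convolution continuously on the microphone signal and play the result back through loudspeakers, so the musician hears the small room respond like a large one.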

The talk, “Changing the Paradigm of Practice Rooms and Music Teachers Coaching Studios” (1pAA2) will be at 1:25 p.m. on Friday, Nov. 30.

10) WHALES SING TO EXPLORE, NOT TO MATE

Enter a humpback whale, singing. Is he looking for a mate? A new idea from an acoustician suggests he is singing to explore the world around him. A humpback whale's song changes after he moves to a new location, to match the sounds coming from the other whales nearby. The fact that he learns the song from his peers has long suggested to researchers that he's doing what birds do: learning from their peers the songs that will attract potential mates. However, a talk by Eduardo Mercado of the State University of New York at Buffalo (emiii@buffalo.edu) will suggest that the whale is using the song to broadcast his position and find other whales. He may copy the songs of other nearby whales to improve his ability to track those singers, which may help him find a mate, but he'll have to learn to get lucky on his own.

The talk, “Why humpback whales learn songs: Courting or ranging?” (2aPA10), will be at 4:25 p.m. on Wednesday, Nov. 28.

11) RECREATING THE WORLD INSIDE YOUR HEAD

The “cocktail party” effect -- how we make sense of a conversation at a crowded party even as several other potentially distracting conversations proceed at the same time -- might now have a scientific explanation. New brain scans using functional MRI (fMRI) are helping researchers understand how the brain segregates objects in space when a person hears, but does not necessarily see, multiple sources of sound. At Kourosh Saberi's lab at the University of California, Irvine, human subjects are exposed to several sounds. Sometimes the sounds come from different locations near the subject; sometimes several sounds come from a single location.

Looking at fMRI scans of enhanced blood flow, which provide 2-mm-resolution maps of brain activity, the U.C. Irvine scientists report two main results. First, no specific brain region accounts exclusively for identifying auditory motion, in contrast to the visual cortex, which does have specific motion-sensing regions. Second, spatial auditory information seems to be processed in a neural region called the planum temporale in a way that can facilitate the segregation of multiple sound sources. Saberi (saberi@uci.edu) and his colleagues are the first to use individualized virtual-reality sounds in an fMRI environment to reproduce a naturalistic acoustic experience for studying brain function.

The talk, “Neuroimaging correlates of spatial cues in speech and noise stimuli” (2aPP8) will be at 10:35 a.m. on Friday, Nov. 30.

12) HOW HUMAN SOUND AFFECTS ANIMALS

A lot of humans don't want to live near human-made noise like airports, noisy city buses or military training ranges. But what about birds and fish? They have fewer real estate options, and scientists want to know what effect sound can have on the habitats and behavior of the animals that live near human-produced sound. One talk will show that helicopter noise did not affect the mating behavior or number of offspring of a bird living near an air station (Don Hunsaker, Hubbs-SeaWorld Research Institute, dhunsaker@hswri.org, session 4aNS6). Another will examine changes in bowhead whale calling behavior due to noise from boats and from an artificial oil-production island in the whales' migration pathway (Susanna Blackwell, Greeneridge Sciences, Inc., susanna@greeneridge.com, session 4aNS7). To keep track of the behaviors of animals that human-made sound can affect, the University of Rhode Island's Coastal Institute coordinates a database. The Marine Wildlife Behavior Database (Kathleen Vigness Raposa, Marine Acoustics, Inc., kathleen.vigness@marineacoustics.com) could help companies plan acoustically animal-friendly projects.

The session, "Noise and Animal Bioacoustics: Advances in Measurement of Noise and Noise Effects in Animals and Humans in the Environment I," will be held from 8:00 a.m. to noon on Friday, Nov. 30.

13) TUTORIAL LECTURE ON WEATHER AND ACOUSTICS

The relationships between sound and weather can be fascinating, frightening, useful, and at times mystifying. This tutorial lecture explores the range of intersections between weather and acoustics. Weather can affect acoustic environments, causing noise increases, noise reductions, and sound focusing. One part of the tutorial reviews results from propagation modeling indicating that, under some conditions, the atmosphere can produce vertical wave guides. Conversely, sound can be used to actively probe the atmosphere and provide information valuable for weather prediction and warning. The probing capabilities reviewed, with examples, show that wind profiles, temperature profiles, wind shears, gravity waves, and inversions can be characterized acoustically.

There are also possibilities for monitoring other difficult-to-observe parameters, such as humidity profiles. In addition, weather processes can generate sound that is detectable at long ranges using lower frequencies. Observing networks have monitored infrasound from a growing number of meteorological events (e.g., severe weather, tornadoes, funnels aloft, atmospheric turbulence, hurricanes, and avalanches). Efforts to develop an infrasonic tornado detection system are described in some detail. Results show promise for improving tornado detection and warning lead times while reducing false alarms. Clear opportunities exist for infrasonic systems to provide operational weather data.

A tutorial presentation on "Weather and Acoustics" will be given by Alfred Bedard of the National Oceanic and Atmospheric Administration at 7:00 p.m. on Tuesday, Nov. 27.

WORLD WIDE PRESS ROOM

Even if you can't leave your desk, you can visit ASA's "World Wide Press Room" before and during the meeting. By the week of November 15, the site (http://www.acoustics.org/press) will be updated for the New Orleans meeting and will include numerous lay-language versions of selected meeting papers, enabling you to cover the meeting remotely. A detailed release describing the World Wide Press Room will be sent out in mid-November.

PRESS ATTENDANCE

While there are no press conferences scheduled for the meeting, AIP and ASA staff will be onsite to help you set up interviews with presenters, obtain background information on a story, and provide any other resources you may need. If you would like to attend or want to receive additional information, please contact Jason Bardi at jbardi@aip.org or (cell) 858-775-4080. During the meeting, the on-site press contact will be Martha Heil mheil@aip.org or (cell) 626-354-5613.

GENERAL INFORMATION

More information about the meeting is at http://asa.aip.org/neworleans/information.html. Reporters can download meeting abstracts at http://asa.aip.org/neworleans/program.html. For the latest information about the city of New Orleans, please visit the New Orleans Convention and Visitors Bureau website at http://www.neworleanscvb.com/.

HOTEL INFORMATION

The Sheraton New Orleans Hotel is the headquarters hotel. Reservations can be made by phone at 504-525-2500 (toll free: 1-888-627-7033) or online at: http://www.starwoodmeeting.com/StarGroupsWeb/res?id=0705236872&key=37743

###

ABOUT THE ACOUSTICAL SOCIETY OF AMERICA

The Acoustical Society of America is the premier international scientific society in acoustics devoted to the science and technology of sound. Its approximately 7000 worldwide members represent a broad spectrum of the study of acoustics. ASA publications include the Journal of the Acoustical Society of America, Acoustics Today magazine, books and standards on acoustics. ASA holds two major scientific meetings each year. It is a member society of the American Institute of Physics. For more information about the Society, please visit ASA's Web site at http://asa.aip.org.


