Note: ALL information to be presented at the Annual Meeting is strictly EMBARGOED, meaning it cannot be published, broadcast, posted online, or otherwise placed in the public domain until the start time of the news briefing or the presentation, whichever comes first.
All times are listed in Eastern Standard Time (EST)
NOTE: Not all speakers on this page have uploaded content.
* = Indicates speaker's presentation is part of a news briefing.
Martha Tyrone, Long Island University, USA
This research examines American Sign Language (ASL) to uncover information about its structure. ASL is not the same language as English; it has its own vocabulary and its own grammatical rules. Sign languages like ASL are natural languages, with all the linguistic structure and expressive potential of spoken languages, but less is known about how signs are produced and perceived in everyday signing. We know that some signs can be produced in very different ways yet still be recognized as the same sign. For example, the sign ‘why’ is sometimes produced with the hand next to the forehead, but it can also be produced with the hand lower and farther away from the body. (This is similar to how the English word ‘either’ can be said with a ‘long e’ or a ‘long i’ sound, and either one is understood as the same word.)
To understand this type of variation in ASL, we are collecting three-dimensional movement data as people sign. This lets us compare many productions of the same signs in precise detail to see what remains consistent from one production to another. Our hypothesis is that the elements of a sign that are highly consistent across productions carry the information that signers rely on to perceive signs correctly. Thus far, our research suggests that signs are modified during fluent production in much the same ways that spoken words are: both are influenced by where they occur in a sentence, by what signs or words surround them, and by how fast someone is signing or speaking.
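As an illustration of the kind of comparison this makes possible, the sketch below time-normalizes several motion-capture recordings of the same sign and measures, at each point in time, how far the productions spread around their average trajectory. This is a minimal sketch, not the project's actual analysis pipeline; the array layout, the resampling scheme, and the dispersion measure are all illustrative assumptions.

```python
# A minimal sketch (not the project's pipeline) of comparing repeated
# productions of one sign from 3-D motion-capture data. Each production
# is assumed to be an (n_frames, 3) NumPy array of, e.g., wrist positions.
import numpy as np

def time_normalize(traj: np.ndarray, n_points: int = 100) -> np.ndarray:
    """Linearly resample an (n_frames, 3) trajectory to n_points samples."""
    old_t = np.linspace(0.0, 1.0, len(traj))
    new_t = np.linspace(0.0, 1.0, n_points)
    return np.column_stack(
        [np.interp(new_t, old_t, traj[:, d]) for d in range(traj.shape[1])]
    )

def consistency_profile(productions: list[np.ndarray]) -> np.ndarray:
    """Per-timepoint dispersion of several productions of the same sign.

    Low values mark the portions of the sign that stay consistent from
    one production to the next."""
    aligned = np.stack([time_normalize(p) for p in productions])
    mean_traj = aligned.mean(axis=0)  # the average trajectory
    # mean Euclidean distance of each production from the average
    return np.linalg.norm(aligned - mean_traj, axis=2).mean(axis=0)

# Demo on synthetic data: three noisy, differently timed "productions"
rng = np.random.default_rng(0)
base = np.column_stack([np.sin(np.linspace(0, np.pi, 80))] * 3)
productions = []
for _ in range(3):
    k = rng.integers(60, 80)
    productions.append(base[:k] + rng.normal(0.0, 0.05, size=(k, 3)))
print(consistency_profile(productions).round(3))
```

Stretches of the sign where this dispersion stays low are the kind of highly consistent elements hypothesized above to carry the information signers rely on.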
Related Research Papers
- Sign Language Prosody (PDF): A research paper presented at the Speech Prosody conference in 2010.
- Martha Tyrone's biography (DOC): A narrative description of her research.
Eric Vatikiotis-Bateson, University of British Columbia, Canada
This talk addresses the non-invasive measurement and analysis of coordination in expressive performance. Biological systems actively and ubiquitously coordinate their behavior. Skilled behaviors such as speech and musical performance require coordination of structures within and between individuals. When we speak, we coordinate elaborate structures such as the lungs, larynx (voice box), and vocal tract (oral and nasal cavities) to produce sounds. Additionally, motions of the head, the hands, and even the rest of the body are involved and can convey communicatively relevant information. The same type of coordination is seen in singing or playing a musical instrument, where the musician must coordinate many subsystems to produce sound.
Coordination between individuals during conversation may take many forms, including convergent word use, grammatical phrasing, accent, and visible gestures. Such coordination is generally unintentional and not easily noticed. Coordination between musicians might appear to be a different problem because of the coordinating influence of a musical score. But, in fact, musicians playing in an ensemble introduce fluctuations that keep the music from sounding too "mechanical". That is, they are coordinated, but not perfectly synchronized. This avoidance of perfect synchrony, which enriches musical performances, may provide a common index of health in biological systems. Indeed, strict synchrony can be a sign of pathology, as it is in epilepsy, stuttering, Parkinson's disease, and certain cardiac conditions.
In the talk, we present a new way to compute coordination from time-varying signals associated with different structures within and across individuals. The method, which we call correlation map analysis (see the accompanying technical paper by Barbosa et al., in press), affords both qualitative and quantitative analyses of time-varying behavior. We also present a non-invasive method for retrieving motion measures from video recordings, based on optical flow analysis (Horn & Schunck, 1981). The correlation analysis and optical flow techniques are exemplified for infant coordination during perception, coordination of the body and vocal tract during conversation in English and Plains Cree, different traditions of musical performance (Art Song, Plains Cree drumming), the coordination of large crowds, and sleep studies of children with Restless Legs Syndrome (RLS).
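A minimal sketch of both techniques, under stated assumptions, follows. It is not the published implementation of Barbosa et al.; OpenCV's Farneback algorithm is used as a stand-in for Horn and Schunck (1981) optical flow, and the window size, lag range, and file names are illustrative.

```python
# A minimal sketch of the two methods described above. This is NOT the
# published implementation of Barbosa et al.; OpenCV's Farneback algorithm
# stands in for Horn & Schunck (1981), and all parameters are illustrative.
import cv2
import numpy as np

def motion_signal(video_path: str) -> np.ndarray:
    """Reduce a video to a 1-D signal: mean optical-flow magnitude per frame."""
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    if not ok:
        raise IOError(f"could not read {video_path}")
    prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    signal = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        signal.append(np.linalg.norm(flow, axis=2).mean())  # overall motion
        prev = gray
    cap.release()
    return np.asarray(signal)

def correlation_map(a: np.ndarray, b: np.ndarray,
                    max_lag: int = 30, win: int = 60) -> np.ndarray:
    """Windowed correlation between two signals, over time and lag.

    Entry [i, t] is the Pearson correlation, in a win-frame window centred
    on frame t, between a and b shifted by lag (i - max_lag) frames."""
    n, half = min(len(a), len(b)), win // 2
    lags = range(-max_lag, max_lag + 1)
    cmap = np.full((len(lags), n), np.nan)
    for i, lag in enumerate(lags):
        for t in range(half + max_lag, n - half - max_lag):
            wa = a[t - half : t + half]
            wb = b[t - half + lag : t + half + lag]
            if wa.std() > 0 and wb.std() > 0:
                cmap[i, t] = np.corrcoef(wa, wb)[0, 1]
    return cmap
```

For two conversational partners, something like correlation_map(motion_signal('speaker1.mov'), motion_signal('speaker2.mov')) (hypothetical file names) yields a lag-by-time map; sustained high correlation that wanders off the zero-lag row is the signature of coordination without strict synchrony described above.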
Related Research Papers
- avsp_2008 flow measures paper (PDF): Technical paper showcasing optical flow analysis.
- JASA article (PDF): Technical paper explaining correlation map analysis and giving an in-depth qualitative analysis of Plains Cree speech and gesture data.
- Correlation Tutorial - 1 (MOV): Should be viewed first, using QuickTime 7 (not QuickTime 10).
- Vocal Tracts movie - 2 (MOV): Provides a simple demonstration of the correlation map analysis (see movie 1 and the technical paper).
- Optical Flow Tutorial - 3 (MOV): Shows the steps of optical flow analysis; should follow movies 1 and 2. Use QuickTime 7. See also the AVSP 2008 technical paper.
- Plains Cree movie - 4 (MOV): Shows analysis of speech and hand-motion data; should be played after the Optical Flow Tutorial. Use QuickTime 7.
- Conversation demo movie - 5 (MOV): Shows optical flow and correlation map analysis of two speakers' body motion during conversation; should follow the Optical Flow Tutorial (movie 3). Use QuickTime 7.
- Eric Vatikiotis-Bateson Bio (DOC)
Philip Rubin, Haskins Laboratories, USA
Communication, language, performance, and cognition are all shaped in varying ways by our embodiment (our physicality, including brain and body) and our embeddedness (our place in the world: physical, social, and cultural). This symposium brings together experts spanning linguistics, computer science, engineering, and psychology to describe new approaches for measuring and modeling movement, gesture, and coordination. The focus will be on the dynamic control of speech articulators, limbs, face, and body, and the coordination of movement and gesture, by and between individuals. Sidney Fels, a professor in the Department of Electrical and Computer Engineering at UBC, will discuss the creation of DIgital Ventriloquized Actors (DIVAs), which use hand gestures to synthesize speech and song by means of an intermediate conversion of these gestures to articulatory parameters of a voice synthesizer. Martha Tyrone, from Long Island University and Haskins Laboratories, will describe work on American Sign Language that focuses on its underlying gestures and timing, investigated using optical motion capture. Eric Vatikiotis-Bateson, chair of Cognitive Systems at UBC, will demonstrate the ubiquity of spatial-temporal coordination within and between performing individuals, drawing on computational modeling and analyses of conversational data from English, Shona, and Plains Cree, and on the integration of posture, respiration, and vocalization in speech and song.
- Gesture, Language and Performance summary (PDF): A summary/abstract of the symposium, "Gesture, Language, and Performance: Aspects of Embodiment".
- Philip Rubin's biography (PDF): Biography of Philip Rubin.
Sidney Fels, University of British Columbia, Canada
In this talk I will present some of the latest developments from our research, which aims to build a complete computer-based biomechanical model of the human vocal tract that can talk. This is called an Articulatory Speech Synthesizer, because it mimics the processes humans use when they control their own vocal apparatus. We are connecting a hand-gesture-based device directly to the articulatory speech synthesizer so that a person can speak and sing using only their hands, much like playing a musical instrument whose sound is the human voice. Applications could include new forms of musical expression and aids for people with speaking disabilities.
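The pipeline can be pictured as hand gesture to articulatory parameters to sound. The toy sketch below is not the DIVA system itself: it maps a two-dimensional hand position directly to the first two vowel formants and filters a pulse train through simple resonators, whereas the real system drives a full biomechanical articulatory synthesizer. All mapping ranges and parameter values here are assumptions.

```python
# A toy illustration (NOT the actual DIVA system) of the pipeline described
# above: a 2-D hand position is mapped to articulatory-style parameters
# (here, the first two vowel formants), which drive a simple source-filter
# synthesizer. Mapping ranges and parameter values are assumptions.
import numpy as np
from scipy.signal import lfilter

FS = 16000  # sample rate in Hz

def hand_to_formants(x: float, y: float) -> tuple[float, float]:
    """Map normalized hand coordinates in [0, 1] to vowel formants.

    x controls openness (F1); y controls frontness (F2). The ranges
    below roughly span the vowel space."""
    f1 = 250 + 600 * x    # ~250 Hz (close) to ~850 Hz (open)
    f2 = 800 + 1600 * y   # ~800 Hz (back) to ~2400 Hz (front)
    return f1, f2

def resonator(signal: np.ndarray, freq: float, bw: float = 80.0) -> np.ndarray:
    """Two-pole formant resonator at the given centre frequency."""
    r = np.exp(-np.pi * bw / FS)
    theta = 2 * np.pi * freq / FS
    b, a = [1.0 - r], [1.0, -2.0 * r * np.cos(theta), r * r]
    return lfilter(b, a, signal)

def synthesize_vowel(x: float, y: float, f0: float = 120.0,
                     dur: float = 0.5) -> np.ndarray:
    """Synthesize a vowel for one hand position: a glottal pulse train
    filtered through two formant resonators in series."""
    n = int(FS * dur)
    source = np.zeros(n)
    source[:: int(FS / f0)] = 1.0  # impulse train at the pitch period
    f1, f2 = hand_to_formants(x, y)
    out = resonator(resonator(source, f1), f2)
    return out / np.abs(out).max()

# e.g., an open, fairly front vowel from a raised, forward hand position:
audio = synthesize_vowel(x=0.9, y=0.6)
```

Even this crude mapping makes the musical-instrument analogy concrete: moving the hand continuously through the coordinate space glides the synthesized voice through the vowel space.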
- Image of Marguerite Witvoet singing with the DIVA (JPG): Taken during a performance at ACM SIGCHI in Vancouver in 2011.

Related Research Papers
- Creating a Biomechanical Model of the Oral, Pharyngeal and Laryngeal Complex for use in Speech Research (PDF): Provides background for the presentation; published at IEICE 2011 in Fukuoka, Japan. The section on the DIVA is particularly useful.
- Sid Fels Short Biography (PDF): Biography for Sid Fels.