Picking out one voice from many at a crowded party is a challenge for assistive hearing devices. Now, Cong Han and colleagues have developed a speech separation system that automatically separates the audio of different speakers in a crowded environment and compares those voices to the listener's brainwaves, so that the voice of the speaker the listener is attending to sounds the loudest. The system is suited to real-world situations where the device has not previously been trained on the specific voices of the speakers.

Some assistive hearing devices can suppress background noise in a crowded acoustic environment, but this approach fails in a situation like a noisy party, where the device does not know which voice to amplify (the speaker the person is listening to) and which to suppress (everyone else).

Combining speech-processing technology with a brain-computer interface, Han et al. created an algorithm that first separates the voices of the speakers in a multi-talker audio environment, then compares the audio "fingerprint", or spectrogram, of each separated voice against a spectrogram reconstructed from neural responses in the listener's auditory cortex, to determine which voice belongs to the person the listener is focusing on. That speaker's voice is then amplified before the sound is delivered to the listener.

While the current system uses invasive measurements to determine the listener's neural response, the researchers say noninvasive measurements of brainwaves through the scalp or over the ear could be used as well.
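The attention-decoding step described above, matching each separated speaker's spectrogram against one reconstructed from neural activity, can be sketched as follows. This is a minimal illustration, not the authors' implementation: all function names are hypothetical, the inputs are assumed to be magnitude spectrograms (time x frequency arrays) already aligned in time, and the similarity measure is assumed to be a simple Pearson correlation.

```python
import numpy as np

def attended_speaker(separated_specs, neural_recon):
    """Return the index of the separated speaker whose spectrogram
    best matches the neurally reconstructed spectrogram.

    separated_specs: list of (time x frequency) arrays, one per speaker.
    neural_recon:    (time x frequency) array reconstructed from the
                     listener's auditory-cortex responses (assumption:
                     same shape and time window as the inputs).
    """
    target = neural_recon.ravel()
    # Pearson correlation between each speaker's spectrogram and the
    # neural reconstruction; the attended speaker should score highest.
    scores = [np.corrcoef(spec.ravel(), target)[0, 1]
              for spec in separated_specs]
    return int(np.argmax(scores))

def mixing_gains(separated_specs, neural_recon, gain=4.0):
    """Per-speaker gains for re-mixing: boost the attended speaker,
    leave the others at unit gain (a stand-in for whatever mixing
    strategy a real device would use)."""
    weights = np.ones(len(separated_specs))
    weights[attended_speaker(separated_specs, neural_recon)] = gain
    return weights
```

In a real device the gains would be applied to the separated audio waveforms before playback; correlating spectrograms is only one plausible way to score the match between a candidate voice and the neural reconstruction.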