The Capital Region of Denmark - a part of Copenhagen University Hospital

Brain-informed speech separation (BISS) for enhancement of target speaker in multitalker speech perception

Research output: Contribution to journal › Journal article › Research › peer-review

  1. A Contrast-Adaptive Method for Simultaneous Whole-Brain and Lesion Segmentation in Multiple Sclerosis
  2. Transcranial magnetic stimulation and magnetic resonance spectroscopy: opportunities for a bimodal approach in human neuroscience
  3. On the cortical connectivity in the macaque brain: a comparison of diffusion tractography and histological tracing data

  1. Effects of sensorineural hearing loss on cortical synchronization to competing speech during selective attention
  2. Cortical oscillations and entrainment in speech processing during working memory load
  3. Multiway canonical correlation analysis of brain data
  4. A Comparison of Regularization Methods in Forward and Backward Models for Auditory Attention Decoding
  5. Decoding the auditory brain with canonical component analysis

  • Enea Ceolini
  • Jens Hjortkjær
  • Daniel D E Wong
  • James O'Sullivan
  • Vinay S Raghavan
  • Jose Herrero
  • Ashesh D Mehta
  • Shih-Chii Liu
  • Nima Mesgarani

Hearing-impaired people often struggle to follow the speech stream of an individual talker in noisy environments. Recent studies show that the brain tracks attended speech and that the attended talker can be decoded from neural data on a single-trial level. This raises the possibility of “neuro-steered” hearing devices in which the brain-decoded intention of a hearing-impaired listener is used to enhance the voice of the attended speaker from a speech separation front-end. So far, methods that use this paradigm have focused on optimizing the brain decoding and the acoustic speech separation independently. In this work, we propose a novel framework called brain-informed speech separation (BISS) in which the information about the attended speech, as decoded from the subject's brain, is directly used to perform speech separation in the front-end. We present a deep learning model that uses neural data to extract the clean audio signal that a listener is attending to from a multi-talker speech mixture. We show that the framework can be applied successfully to the decoded output from either invasive intracranial electroencephalography (iEEG) or non-invasive electroencephalography (EEG) recordings from hearing-impaired subjects. It also results in improved speech separation, even in scenes with background noise. The generalization capability of the system renders it a perfect candidate for neuro-steered hearing-assistive devices.
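The abstract contrasts BISS with the conventional two-stage pipeline, in which brain decoding and acoustic speech separation are optimized independently and the decoded signal is only used afterwards to pick the attended stream. As a hedged illustration of that selection step (not the paper's actual model), the toy sketch below uses entirely synthetic envelopes and envelope correlation — one common decoding criterion — to identify which of two candidate talkers matches a simulated brain-decoded envelope; all signal names and parameters are invented for this example:

```python
import numpy as np

fs = 100  # assumed envelope sampling rate in Hz
t = np.arange(0, 10, 1 / fs)

def toy_envelope(seed):
    """Synthetic 'speech envelope': smoothed positive noise."""
    x = np.abs(np.random.default_rng(seed).standard_normal(t.size))
    kernel = np.hanning(25)
    return np.convolve(x, kernel / kernel.sum(), mode="same")

env_a = toy_envelope(1)  # talker A (attended, by construction)
env_b = toy_envelope(2)  # talker B (ignored)

# Simulated brain-decoded envelope: a noisy copy of the attended talker's envelope.
rng = np.random.default_rng(0)
decoded = env_a + 0.5 * env_a.std() * rng.standard_normal(t.size)

def corr(x, y):
    """Pearson correlation between two 1-D signals."""
    x = x - x.mean()
    y = y - y.mean()
    return float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y) + 1e-12))

scores = {"A": corr(decoded, env_a), "B": corr(decoded, env_b)}
attended = max(scores, key=scores.get)
print(attended)  # the talker whose envelope best matches the decoded signal
```

BISS instead feeds the decoded signal directly into the separation front-end, so the network extracts the attended voice in one step rather than separating all talkers first and selecting afterwards.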

Original language: English
Article number: 117282
Journal: NeuroImage
Volume: 223
Pages (from-to): 1-12
Number of pages: 12
ISSN: 1053-8119
DOIs
Publication status: Published - Dec 2020

Research areas: Cognitive control, Deep learning, EEG, Hearing aid, Neuro-steered, Speech separation

ID: 60727470