Region Hovedstaden - part of Copenhagen University Hospital
Published

Brain-informed speech separation (BISS) for enhancement of target speaker in multitalker speech perception

Publication: Contribution to journal › Journal article › Research › peer review


  • Enea Ceolini
  • Jens Hjortkjær
  • Daniel D E Wong
  • James O'Sullivan
  • Vinay S Raghavan
  • Jose Herrero
  • Ashesh D Mehta
  • Shih-Chii Liu
  • Nima Mesgarani

Hearing-impaired people often struggle to follow the speech stream of an individual talker in noisy environments. Recent studies show that the brain tracks attended speech and that the attended talker can be decoded from neural data on a single-trial level. This raises the possibility of “neuro-steered” hearing devices in which the brain-decoded intention of a hearing-impaired listener is used to enhance the voice of the attended speaker from a speech separation front-end. So far, methods that use this paradigm have focused on optimizing the brain decoding and the acoustic speech separation independently. In this work, we propose a novel framework called brain-informed speech separation (BISS) in which the information about the attended speech, as decoded from the subject's brain, is directly used to perform speech separation in the front-end. We present a deep learning model that uses neural data to extract the clean audio signal that a listener is attending to from a multi-talker speech mixture. We show that the framework can be applied successfully to the decoded output from either invasive intracranial electroencephalography (iEEG) or non-invasive electroencephalography (EEG) recordings from hearing-impaired subjects. It also results in improved speech separation, even in scenes with background noise. The generalization capability of the system renders it a perfect candidate for neuro-steered hearing-assistive devices.
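The core idea of the abstract — using the brain-decoded envelope of the attended talker directly to steer speech extraction from a mixture — can be illustrated with a minimal toy sketch. This is not the paper's deep learning model; it is a hypothetical NumPy illustration in which amplitude-modulated tones stand in for talkers, a noisy copy of the attended envelope stands in for the brain decoder's output, and a simple time-domain gain stands in for the trained separation front-end.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 8000
t = np.arange(fs) / fs

# Two hypothetical talkers: amplitude-modulated tones stand in for speech.
env_a = 0.5 * (1 + np.sin(2 * np.pi * 3 * t))   # slow envelope, attended talker A
env_b = 0.5 * (1 + np.sin(2 * np.pi * 5 * t))   # envelope of ignored talker B
talker_a = env_a * np.sin(2 * np.pi * 220 * t)
talker_b = env_b * np.sin(2 * np.pi * 330 * t)
mixture = talker_a + talker_b

# Stand-in for the brain decoder: a noisy estimate of the attended envelope.
decoded_env = env_a + 0.1 * rng.standard_normal(env_a.size)

# "Brain-informed" extraction sketch: apply the decoded envelope as a
# time-domain gain on the mixture. A real BISS system would instead feed
# the decoded envelope into a trained separation network.
gain = np.clip(decoded_env, 0, None)
extracted = gain / (gain.max() + 1e-9) * mixture

# The extracted signal correlates more with the attended talker than
# with the ignored one, showing the steering effect of the decoded envelope.
c_att = np.corrcoef(extracted, talker_a)[0, 1]
c_ign = np.corrcoef(extracted, talker_b)[0, 1]
print(c_att > c_ign)
```

The sketch only demonstrates the conditioning principle: because the decoded envelope co-varies with the attended speech, using it as side information biases the output toward that talker, even when the decoder output is noisy.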

Original language: English
Article number: 117282
Journal: NeuroImage
Volume: 223
Pages (from-to): 1-12
Number of pages: 12
ISSN: 1053-8119
DOI
Status: Published - 20 Aug 2020

Bibliographical note

Copyright © 2020. Published by Elsevier Inc.

ID: 60727470