Region Hovedstaden - a part of Københavns Universitetshospital

Upper Airway Classification in Sleep Endoscopy Examinations using Convolutional Recurrent Neural Networks

Publication: Contribution to journal › Journal article › peer review


Assessing the upper airway (UA) of obstructive sleep apnea patients using drug-induced sleep endoscopy (DISE) before potential surgery is standard clinical practice for determining the location of UA collapse. According to the VOTE classification system, UA collapse can occur at the velum (V), oropharynx (O), tongue (T), and/or epiglottis (E). Analyzing DISE videos is not trivial due to anatomical variation, simultaneous UA collapse at several locations, and video distortion caused by mucus or saliva. The first step toward automated analysis of DISE videos is determining which UA region the endoscope is in at any time throughout the video: V (velum) or OTE (oropharynx, tongue, or epiglottis). An additional class, denoted X, is introduced for periods when the video is so distorted that the region cannot be determined. This paper is a proof of concept for classifying UA regions using 24 annotated DISE videos. We propose a convolutional recurrent neural network combining a ResNet18 architecture with a two-layer bidirectional long short-term memory network. Classification was performed on 5-second video sequences. The network achieved an overall accuracy of 82% and an F1-score of 79% on the three-class problem, showing potential for recognizing regions across patients despite anatomical variation. The results indicate that large-scale training on videos could further predict the location(s), type(s), and degree(s) of UA collapse, eventually enabling automatic diagnosis from DISE videos.

Original language: English
Journal: Proceedings of the International Conference of the IEEE Engineering in Medicine and Biology Society
Volume: 2021
Pages (from-to): 3957-3960
Number of pages: 4
ISSN: 2375-7477
DOI:
Status: Published - Nov. 2021

ID: 72155964