Published

Artificial Intelligence to Detect Papilledema from Ocular Fundus Photographs

Research output: Contribution to journal › Journal article › Research › peer-review

BibTeX

@article{fdd897c679d648b588ff41f4e32c8f2d,
title = "Artificial Intelligence to Detect Papilledema from Ocular Fundus Photographs",
abstract = "BACKGROUND: Nonophthalmologist physicians do not confidently perform direct ophthalmoscopy. The use of artificial intelligence to detect papilledema and other optic-disk abnormalities from fundus photographs has not been well studied.METHODS: We trained, validated, and externally tested a deep-learning system to classify optic disks as being normal or having papilledema or other abnormalities from 15,846 retrospectively collected ocular fundus photographs that had been obtained with pharmacologic pupillary dilation and various digital cameras in persons from multiple ethnic populations. Of these photographs, 14,341 from 19 sites in 11 countries were used for training and validation, and 1505 photographs from 5 other sites were used for external testing. Performance at classifying the optic-disk appearance was evaluated by calculating the area under the receiver-operating-characteristic curve (AUC), sensitivity, and specificity, as compared with a reference standard of clinical diagnoses by neuro-ophthalmologists.RESULTS: The training and validation data sets from 6779 patients included 14,341 photographs: 9156 of normal disks, 2148 of disks with papilledema, and 3037 of disks with other abnormalities. The percentage classified as being normal ranged across sites from 9.8 to 100%; the percentage classified as having papilledema ranged across sites from zero to 59.5%. In the validation set, the system discriminated disks with papilledema from normal disks and disks with nonpapilledema abnormalities with an AUC of 0.99 (95% confidence interval [CI], 0.98 to 0.99) and normal from abnormal disks with an AUC of 0.99 (95% CI, 0.99 to 0.99). In the external-testing data set of 1505 photographs, the system had an AUC for the detection of papilledema of 0.96 (95% CI, 0.95 to 0.97), a sensitivity of 96.4% (95% CI, 93.9 to 98.3), and a specificity of 84.7% (95% CI, 82.3 to 87.1).CONCLUSIONS: A deep-learning system using fundus photographs with pharmacologically dilated pupils differentiated among optic disks with papilledema, normal disks, and disks with nonpapilledema abnormalities. (Funded by the Singapore National Medical Research Council and the SingHealth Duke-NUS Ophthalmology and Visual Sciences Academic Clinical Program.).",
keywords = "Algorithms, Area Under Curve, Datasets as Topic, Deep Learning, Diagnosis, Differential, Fundus Oculi, Humans, Neural Networks, Computer, Ophthalmoscopy/methods, Papilledema/diagnosis, Photography, Predictive Value of Tests, ROC Curve, Retina/diagnostic imaging, Retrospective Studies, Sensitivity and Specificity",
author = "Dan Milea and Najjar, {Raymond P} and Jiang Zhubo and Daniel Ting and Caroline Vasseneix and Xinxing Xu and {Aghsaei Fard}, Masoud and Pedro Fonseca and Kavin Vanikieti and Lagr{\`e}ze, {Wolf A} and {La Morgia}, Chiara and Cheung, {Carol Y} and Steffen Hamann and Christophe Chiquet and Nicolae Sanda and Hui Yang and Mejico, {Luis J} and Marie-B{\'e}n{\'e}dicte Rougier and Richard Kho and {Thi Ha Chau}, Tran and Shweta Singhal and Philippe Gohier and Catherine Clermont-Vignal and Ching-Yu Cheng and Jonas, {Jost B} and Patrick Yu-Wai-Man and Fraser, {Clare L} and Chen, {John J} and Selvakumar Ambika and Miller, {Neil R} and Yong Liu and Newman, {Nancy J} and Wong, {Tien Y} and Val{\'e}rie Biousse and {BONSAI Group} and Karlesand, {Anna Isabelle}",
note = "Copyright {\textcopyright} 2020 Massachusetts Medical Society.",
year = "2020",
month = apr,
day = "30",
doi = "10.1056/NEJMoa1917130",
language = "English",
volume = "382",
pages = "1687--1695",
journal = "New England Journal of Medicine",
issn = "0028-4793",
publisher = "Massachusetts Medical Society",
number = "18",

}
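
The abstract above reports the system's performance as AUC, sensitivity, and specificity against a neuro-ophthalmologist reference standard. As an illustrative sketch only (not the authors' code), the Python snippet below shows one way such metrics could be computed for the binary papilledema-versus-rest task, assuming the model outputs a per-photograph papilledema probability; the function name, decision threshold, and toy data are hypothetical.

import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

def papilledema_metrics(y_true, y_prob, threshold=0.5):
    # y_true: 1 = papilledema, 0 = normal disk or other abnormality (hypothetical labels)
    # y_prob: model-estimated probability of papilledema for each photograph
    auc = roc_auc_score(y_true, y_prob)                  # area under the ROC curve
    y_pred = (np.asarray(y_prob) >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    sensitivity = tp / (tp + fn)                         # true-positive rate
    specificity = tn / (tn + fp)                         # true-negative rate
    return auc, sensitivity, specificity

# Toy usage with made-up labels and scores (illustrative only)
print(papilledema_metrics([1, 0, 1, 0, 0, 1, 0, 1],
                          [0.92, 0.10, 0.85, 0.40, 0.05, 0.77, 0.20, 0.95]))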

RIS

TY - JOUR

T1 - Artificial Intelligence to Detect Papilledema from Ocular Fundus Photographs

AU - Milea, Dan

AU - Najjar, Raymond P

AU - Zhubo, Jiang

AU - Ting, Daniel

AU - Vasseneix, Caroline

AU - Xu, Xinxing

AU - Aghsaei Fard, Masoud

AU - Fonseca, Pedro

AU - Vanikieti, Kavin

AU - Lagrèze, Wolf A

AU - La Morgia, Chiara

AU - Cheung, Carol Y

AU - Hamann, Steffen

AU - Chiquet, Christophe

AU - Sanda, Nicolae

AU - Yang, Hui

AU - Mejico, Luis J

AU - Rougier, Marie-Bénédicte

AU - Kho, Richard

AU - Thi Ha Chau, Tran

AU - Singhal, Shweta

AU - Gohier, Philippe

AU - Clermont-Vignal, Catherine

AU - Cheng, Ching-Yu

AU - Jonas, Jost B

AU - Yu-Wai-Man, Patrick

AU - Fraser, Clare L

AU - Chen, John J

AU - Ambika, Selvakumar

AU - Miller, Neil R

AU - Liu, Yong

AU - Newman, Nancy J

AU - Wong, Tien Y

AU - Biousse, Valérie

AU - BONSAI Group

A2 - Karlesand, Anna Isabelle

N1 - Copyright © 2020 Massachusetts Medical Society.

PY - 2020/4/30

Y1 - 2020/4/30

N2 - BACKGROUND: Nonophthalmologist physicians do not confidently perform direct ophthalmoscopy. The use of artificial intelligence to detect papilledema and other optic-disk abnormalities from fundus photographs has not been well studied. METHODS: We trained, validated, and externally tested a deep-learning system to classify optic disks as being normal or having papilledema or other abnormalities from 15,846 retrospectively collected ocular fundus photographs that had been obtained with pharmacologic pupillary dilation and various digital cameras in persons from multiple ethnic populations. Of these photographs, 14,341 from 19 sites in 11 countries were used for training and validation, and 1505 photographs from 5 other sites were used for external testing. Performance at classifying the optic-disk appearance was evaluated by calculating the area under the receiver-operating-characteristic curve (AUC), sensitivity, and specificity, as compared with a reference standard of clinical diagnoses by neuro-ophthalmologists. RESULTS: The training and validation data sets from 6779 patients included 14,341 photographs: 9156 of normal disks, 2148 of disks with papilledema, and 3037 of disks with other abnormalities. The percentage classified as being normal ranged across sites from 9.8 to 100%; the percentage classified as having papilledema ranged across sites from zero to 59.5%. In the validation set, the system discriminated disks with papilledema from normal disks and disks with nonpapilledema abnormalities with an AUC of 0.99 (95% confidence interval [CI], 0.98 to 0.99) and normal from abnormal disks with an AUC of 0.99 (95% CI, 0.99 to 0.99). In the external-testing data set of 1505 photographs, the system had an AUC for the detection of papilledema of 0.96 (95% CI, 0.95 to 0.97), a sensitivity of 96.4% (95% CI, 93.9 to 98.3), and a specificity of 84.7% (95% CI, 82.3 to 87.1). CONCLUSIONS: A deep-learning system using fundus photographs with pharmacologically dilated pupils differentiated among optic disks with papilledema, normal disks, and disks with nonpapilledema abnormalities. (Funded by the Singapore National Medical Research Council and the SingHealth Duke-NUS Ophthalmology and Visual Sciences Academic Clinical Program.).

AB - BACKGROUND: Nonophthalmologist physicians do not confidently perform direct ophthalmoscopy. The use of artificial intelligence to detect papilledema and other optic-disk abnormalities from fundus photographs has not been well studied. METHODS: We trained, validated, and externally tested a deep-learning system to classify optic disks as being normal or having papilledema or other abnormalities from 15,846 retrospectively collected ocular fundus photographs that had been obtained with pharmacologic pupillary dilation and various digital cameras in persons from multiple ethnic populations. Of these photographs, 14,341 from 19 sites in 11 countries were used for training and validation, and 1505 photographs from 5 other sites were used for external testing. Performance at classifying the optic-disk appearance was evaluated by calculating the area under the receiver-operating-characteristic curve (AUC), sensitivity, and specificity, as compared with a reference standard of clinical diagnoses by neuro-ophthalmologists. RESULTS: The training and validation data sets from 6779 patients included 14,341 photographs: 9156 of normal disks, 2148 of disks with papilledema, and 3037 of disks with other abnormalities. The percentage classified as being normal ranged across sites from 9.8 to 100%; the percentage classified as having papilledema ranged across sites from zero to 59.5%. In the validation set, the system discriminated disks with papilledema from normal disks and disks with nonpapilledema abnormalities with an AUC of 0.99 (95% confidence interval [CI], 0.98 to 0.99) and normal from abnormal disks with an AUC of 0.99 (95% CI, 0.99 to 0.99). In the external-testing data set of 1505 photographs, the system had an AUC for the detection of papilledema of 0.96 (95% CI, 0.95 to 0.97), a sensitivity of 96.4% (95% CI, 93.9 to 98.3), and a specificity of 84.7% (95% CI, 82.3 to 87.1). CONCLUSIONS: A deep-learning system using fundus photographs with pharmacologically dilated pupils differentiated among optic disks with papilledema, normal disks, and disks with nonpapilledema abnormalities. (Funded by the Singapore National Medical Research Council and the SingHealth Duke-NUS Ophthalmology and Visual Sciences Academic Clinical Program.).

KW - Algorithms

KW - Area Under Curve

KW - Datasets as Topic

KW - Deep Learning

KW - Diagnosis, Differential

KW - Fundus Oculi

KW - Humans

KW - Neural Networks, Computer

KW - Ophthalmoscopy/methods

KW - Papilledema/diagnosis

KW - Photography

KW - Predictive Value of Tests

KW - ROC Curve

KW - Retina/diagnostic imaging

KW - Retrospective Studies

KW - Sensitivity and Specificity

U2 - 10.1056/NEJMoa1917130

DO - 10.1056/NEJMoa1917130

M3 - Journal article

C2 - 32286748

VL - 382

SP - 1687

EP - 1695

JO - New England Journal of Medicine

JF - New England Journal of Medicine

SN - 0028-4793

IS - 18

ER -

ID: 61987780