Authors: Henry H. Li, Joseph R. Abraham, Duriye Damla Sevgi, Sunil K. Srivastava, Jenna M. Hach
DOI: 10.1167/TVST.9.2.52
Keywords:
Abstract: Purpose: Numerous angiographic images with high variability in quality are obtained during each ultra-widefield fluorescein angiography (UWFA) acquisition session. This study evaluated the feasibility of an automated system for image classification and selection using deep learning. Methods: The training set comprised 3543 UWFA images. Ground truth assessed by expert review classified each image into one of four categories (ungradable, poor, good, or best) based on contrast, field of view, media opacity, and obscuration from external features. Two test sets, including a randomly selected set of 392 images and a separate, independent, balanced set composed of 50 ungradable/poor and good/best images, were used to assess model performance and bias. Results: In these assessments, the model showed overall accuracy of 89.0% and 94.0% in distinguishing between gradable and ungradable images, with sensitivity of 90.5% and 98.6% and specificity of 87.0% and 81.5%, respectively. The receiver operating characteristic curve measuring two-class performance (ungradable vs. gradable) had an area under the curve of 0.920 and 0.980 for each test set. Conclusions: A deep learning model demonstrates feasible automatic assessment of UWFA image quality. Clinical application of this system might greatly reduce the manual grading workload, allow quality-based presentation of images to clinicians, and provide near-instantaneous feedback to photographers. Translational Relevance: This tool may significantly reduce clinical- and research-related grading work by providing instantaneous and reliable assessment of image quality.
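The abstract describes a deep learning classifier that sorts UWFA images into four quality categories. The sketch below shows one plausible way such a classifier could be set up, by fine-tuning a pretrained ResNet-18 in PyTorch; the class names come from the abstract, but the architecture, directory layout, preprocessing, and hyperparameters are assumptions for illustration, not the authors' published implementation.

```python
# Minimal sketch: four-class image-quality classifier via transfer learning.
# Assumes a hypothetical folder layout uwfa_train/<class_name>/*.png.
import torch
import torch.nn as nn
from torchvision import models, transforms
from torchvision.datasets import ImageFolder
from torch.utils.data import DataLoader

CLASSES = ["ungradable", "poor", "good", "best"]  # quality categories from the study

# Generic ImageNet-style preprocessing; the paper's actual pipeline is not specified.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def build_model(num_classes: int = 4) -> nn.Module:
    """Replace the final fully connected layer of a pretrained ResNet-18."""
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

def train_one_epoch(model, loader, optimizer, device):
    """One pass over the training data with standard cross-entropy loss."""
    criterion = nn.CrossEntropyLoss()
    model.train()
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

if __name__ == "__main__":
    device = "cuda" if torch.cuda.is_available() else "cpu"
    dataset = ImageFolder("uwfa_train", transform=preprocess)
    loader = DataLoader(dataset, batch_size=16, shuffle=True)
    model = build_model(len(CLASSES)).to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    train_one_epoch(model, loader, optimizer, device)
```

The evaluation in the abstract collapses the four categories into a two-class decision (ungradable/poor vs. good/best) and reports sensitivity, specificity, and area under the ROC curve. The following sketch shows how those metrics can be computed with scikit-learn; the probability values, threshold, and label convention are illustrative only and do not reproduce the study's data.

```python
# Minimal sketch: two-class gradable/ungradable evaluation from four-class
# probabilities, reporting sensitivity, specificity, and AUC.
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

# Hypothetical probabilities over [ungradable, poor, good, best] for four images.
probs = np.array([
    [0.70, 0.20, 0.05, 0.05],   # likely ungradable
    [0.30, 0.25, 0.30, 0.15],   # borderline, actually gradable
    [0.10, 0.60, 0.20, 0.10],   # poor
    [0.02, 0.08, 0.30, 0.60],   # best
])
# Ground truth: 1 = ungradable/poor, 0 = good/best.
y_true = np.array([1, 0, 1, 0])

# Score for the "ungradable/poor" decision: summed probability of those classes.
y_score = probs[:, 0] + probs[:, 1]
y_pred = (y_score >= 0.5).astype(int)   # illustrative 0.5 decision threshold

tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
sensitivity = tp / (tp + fn)   # fraction of ungradable/poor images correctly flagged
specificity = tn / (tn + fp)   # fraction of good/best images correctly passed
auc = roc_auc_score(y_true, y_score)

print(f"sensitivity={sensitivity:.3f} specificity={specificity:.3f} AUC={auc:.3f}")
```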