MULTI-DEEP: A novel CAD system for coronavirus (COVID-19) diagnosis from CT images using multiple convolution neural networks

Authors: Omneya Attallah, Dina A. Ragab, Maha Sharkas

DOI: 10.7717/PEERJ.10086

Keywords: Support vector machine, Segmentation, Deep learning, CAD, Artificial intelligence, Principal component analysis, Artificial neural network, Classifier (UML), Convolutional neural network, Pattern recognition, Computer science

Abstract: Coronavirus (COVID-19) was first observed in Wuhan, China, and quickly propagated worldwide. It is considered the supreme crisis of the present era and one of the most crucial hazards threatening worldwide health. Therefore, early detection of COVID-19 is essential. The common way to detect it is the reverse transcription-polymerase chain reaction (RT-PCR) test, although it has several drawbacks. Computed tomography (CT) scans can enable the detection of suspected patients; however, the overlap between patterns of COVID-19 and other types of pneumonia makes it difficult for radiologists to diagnose COVID-19 accurately. On the other hand, deep learning (DL) techniques, especially the convolutional neural network (CNN), can classify COVID-19 and non-COVID-19 cases. In addition, DL techniques that use CT images can deliver an accurate diagnosis faster than the RT-PCR test, which consequently saves time for disease control and provides an efficient computer-aided diagnosis (CAD) system. Owing to the shortage of publicly available datasets of CT images, the design of such a CAD system is a challenging task. The CAD systems in the literature are based on either an individual CNN or two fused CNNs; one is used for segmentation and the other for classification and diagnosis. In this article, a novel CAD system is proposed for diagnosing COVID-19 based on the fusion of multiple CNNs. First, an end-to-end classification is performed. Afterward, the deep features extracted from each CNN are individually classified using a support vector machine (SVM) classifier. Next, principal component analysis is applied to each feature set extracted from each network. Such feature sets are then used to train an SVM classifier individually. Afterward, a selected number of principal components from each deep feature set are fused and compared with the fusion of the deep features extracted from each CNN. The results show that the proposed system is effective and capable of detecting COVID-19 and distinguishing it from non-COVID-19 cases, with an accuracy of 94.7%, an AUC of 0.98 (98%), a sensitivity of 95.6%, and a specificity of 93.7%. Moreover, the proposed system is efficient, as fusing a selected number of principal components reduced the computational cost of the final model by almost 32%.
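The pipeline described in the abstract (per-CNN deep features → PCA on each feature set → fusion of the selected principal components → SVM classification) can be sketched as below. This is a minimal illustration, not the authors' code: the three feature matrices are random placeholders standing in for activations from three CNN backbones, and the dimensionalities, component count, and labels are assumptions for demonstration only.

```python
# Sketch of PCA-based fusion of multi-CNN deep features with an SVM classifier.
# All data here is synthetic; real feature sets would come from CNN activations.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_samples = 200

# Placeholder "deep features" for three hypothetical CNN backbones.
feature_sets = [rng.normal(size=(n_samples, d)) for d in (512, 1024, 2048)]
y = rng.integers(0, 2, size=n_samples)  # COVID-19 vs. non-COVID-19 labels

# Apply PCA to each feature set individually, keep a selected number of
# principal components, then fuse them by concatenation.
n_components = 20
fused = np.hstack([
    PCA(n_components=n_components).fit_transform(X) for X in feature_sets
])

# Train a single SVM on the fused, reduced representation.
X_train, X_test, y_train, y_test = train_test_split(
    fused, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf").fit(X_train, y_train)
print(fused.shape)  # (200, 60): 3 networks x 20 components each
print(accuracy_score(y_test, clf.predict(X_test)))
```

Fusing only a selected number of components per network (rather than the full feature vectors) is what gives the computational saving the abstract reports: the SVM trains on 60 fused dimensions here instead of 512 + 1024 + 2048 = 3584.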
