Malevolent Melanoma diagnosis using Deep Convolution Neural Network
Karthiga M1, Priyadarshini R K2, Bazila Banu A3
Dept. of Computer Science and Engineering, Bannari Amman Institute of Technology,
Sathyamangalam, Tamilnadu, India
Computer-based analysis has become an important part of the medical industry. Skin cancer is among the most dreadful disorders, and it can be diagnosed using computer-based techniques. Dermoscopy images can be used to train a classifier for the correct prediction of melanoma. A novel methodology based on a deep convolutional neural network is utilized for the absolute diagnosis of malevolent melanoma. A transfer learning technique is employed along with a deep convolutional neural network based on the Inception v3 framework. The outcomes are obtained by applying the proposed methodology to a total of 2700 dermoscopic images. The proposed implementation attains high rates of accuracy, sensitivity and specificity, and its results outperform the classification of skin lesions by dermatologists.
Feature extraction is an important step in classification. Among the many features that can be extracted, color features, geometric features, texture features, dermal features, ABCD-rule features and contour features are some of the main categories. Constraints such as border irregularity, diameter variation and color variation are handled by the ABCD-rule features. Sensitivity and specificity are used to evaluate clinical tests: sensitivity measures how well a test recognizes patients with the disease, whereas specificity measures how well it recognizes patients without the disease, as given in Equations 1 and 2:

Sensitivity = TP / (TP + FN)    (1)
Specificity = TN / (TN + FP)    (2)

where TP, TN, FP and FN denote true positives, true negatives, false positives and false negatives, respectively.
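As a concrete illustration of the sensitivity and specificity measures defined in Equations 1 and 2, the following minimal sketch computes both from binary predictions. The function name and the toy labels are illustrative, not taken from the paper.

```python
# Minimal sketch: sensitivity and specificity from binary labels,
# following Equations 1 and 2. Convention: 1 = melanoma, 0 = benign.

def sensitivity_specificity(y_true, y_pred):
    """Return (sensitivity, specificity) for binary label lists."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    sens = tp / (tp + fn)   # Eq. 1: TP / (TP + FN)
    spec = tn / (tn + fp)   # Eq. 2: TN / (TN + FP)
    return sens, spec

# Toy example: 4 diseased and 4 healthy cases, one error in each group
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 1, 0]
sens, spec = sensitivity_specificity(y_true, y_pred)
print(sens, spec)  # 0.75 0.75
```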
In this paper, two major problems in the automatic diagnosis of skin melanoma are resolved: the recognition of skin melanoma, and its correct judgment. Transfer learning with a deep convolutional neural network based on Google's Inception v3 framework is utilized for the absolute prediction of skin melanoma. The dataset is collected from the International Skin Imaging Collaboration (ISIC) Melanoma Project.
Automated screening of melanoma has been performed using deep learning with transfer learning, where a network pretrained on ImageNet provides high-quality features for classifying lesions. Skin mole imaging is a related technique that has seen significant progress owing to advances in imaging sensors and processing power. However, earlier schemes relied on hand-crafted features, which are hard to tune and perform poorly on new cases because of their lack of generalization. In one investigation, a trained deep neural network is used to automatically extract a set of representative features and to analyze a sample of a skin lesion; experimental tests on a clinical dataset demonstrate that DNN-based features outperform state-of-the-art techniques. Another work exhibits a methodology for diagnosing melanoma from images captured with a dermoscope that integrates deep learning, support vector machine (SVM) learning and sparse coding.
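The deep-features-plus-SVM pipeline mentioned above can be sketched in miniature. This is an illustrative stand-in, not the cited implementation: a fixed random projection plays the role of the frozen pretrained network, a hinge-loss SGD loop plays the role of the SVM learner, and all names and toy data are assumptions.

```python
import random

random.seed(0)

# Frozen "feature extractor": a fixed random projection standing in for a
# network pretrained on ImageNet. Purely illustrative.
D_IN, D_FEAT = 8, 4
W_FROZEN = [[random.gauss(0, 1) for _ in range(D_IN)] for _ in range(D_FEAT)]

def extract_features(x):
    """Apply the frozen projection (the 'deep feature' step)."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in W_FROZEN]

def train_linear_svm(feats, labels, epochs=200, lr=0.01, lam=0.01):
    """Linear SVM head trained by hinge-loss SGD; labels in {-1, +1}."""
    w, b = [0.0] * len(feats[0]), 0.0
    for _ in range(epochs):
        for f, y in zip(feats, labels):
            margin = y * (sum(wi * fi for wi, fi in zip(w, f)) + b)
            if margin < 1:  # inside margin or misclassified: hinge gradient
                w = [wi + lr * (y * fi - lam * wi) for wi, fi in zip(w, f)]
                b += lr * y
            else:           # correct with margin: only regularization shrink
                w = [wi * (1 - lr * lam) for wi in w]
    return w, b

def predict(w, b, x):
    f = extract_features(x)
    return 1 if sum(wi * fi for wi, fi in zip(w, f)) + b >= 0 else -1

# Toy "lesion" vectors: two well-separated clusters
pos = [[1 + random.gauss(0, 0.1) for _ in range(D_IN)] for _ in range(10)]
neg = [[-1 + random.gauss(0, 0.1) for _ in range(D_IN)] for _ in range(10)]
X, y = pos + neg, [1] * 10 + [-1] * 10

feats = [extract_features(x) for x in X]
w, b = train_linear_svm(feats, y)
acc = sum(predict(w, b, x) == t for x, t in zip(X, y)) / len(X)
```

The key design point the sketch preserves is that only the SVM head is trained; the feature extractor stays fixed, which is what makes transfer of pretrained features cheap.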
One of the helpful aspects of the proposed methodology is that unsupervised learning within the domain, together with feature transfer from the domain of natural photographs, eliminates the need for annotated data in the target task in order to learn good features. The applied feature transfer enables the framework to relate its observations to real-world observations when describing skin lesion patterns. Deep convolutional neural networks have also been used to classify skin lesions at the pixel level, and the most prevalent cancers are determined from the binary classification of benign seborrheic keratoses versus keratinocyte carcinomas.
Malignant melanomas are identified using an end-to-end training approach to prevent this deadliest disease. A deep learning algorithm has been employed to classify types of cutaneous tumors, with twelve diseases considered in the training phase; Microsoft's ResNet-152 model was adopted to train and test the Asan dataset of skin lesion images, and the results improved when age and ethnicity were added as features. Deep residual networks have also been trained to automatically analyze skin lesions using a large dataset of dermoscopic images. This ResDNN exploited hidden features such as the different sizes and shapes of the lesions, and also considered hair and skin color in melanoma analysis.
The CNN was composed of multiple tracts, each of which considers the same image at a different resolution. An end layer then combines the outputs from the multiple resolutions into a single layer. The CNN identifies interactions across the different image resolutions, and the weight parameters are optimized by end-to-end learning.
An image at different resolutions is provided to different parts of the CNN during training, and an end-to-end learning approach is adopted to optimize over the varied resolutions. The Dermofit image library was used for this approach, which attained a classification accuracy of 79.5%. Two CNN models, CaffeNet and VGGNet, were adopted to classify images in the DermQuest dataset of around 6500 images. The training classes of the architecture involved finely defined features; the parameters were fine-tuned by adjusting the weights, yet the approach attained only 50.27% accuracy.
The major challenge in melanoma diagnosis is that the public datasets do not cover the entire population of the world: they contain only skin images of light-skinned people such as Australians, Europeans and Americans. A dataset should also contain important features such as skin type, anatomic location, race, age and gender. To perform diagnosis for dark-skinned people, the CNN should be trained with dark-skin images from clinical datasets.
MATERIALS AND METHODS:
The skin lesions in the dataset fall into three categories:
1. Benevolent/Benign lesions
2. Malevolent/Malignant lesions
3. Non-neoplastic lesions
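A minimal sketch of the classification step for these three categories, under the transfer-learning scheme used in this work: a trainable softmax head is fitted on fixed feature vectors, standing in for the layer retrained atop frozen Inception v3 features. The toy 2-D features and all names are illustrative assumptions, not the paper's data.

```python
import math
import random

random.seed(1)

CLASSES = ["benign", "malignant", "non-neoplastic"]

def softmax(z):
    m = max(z)                      # shift for numerical stability
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def train_softmax_head(X, y, n_cls, epochs=300, lr=0.1):
    """Multinomial logistic regression via SGD: the trainable head that
    sits atop frozen pretrained features in transfer learning."""
    d = len(X[0])
    W = [[0.0] * d for _ in range(n_cls)]
    b = [0.0] * n_cls
    for _ in range(epochs):
        for x, t in zip(X, y):
            logits = [sum(wi * xi for wi, xi in zip(W[k], x)) + b[k]
                      for k in range(n_cls)]
            p = softmax(logits)
            for k in range(n_cls):  # cross-entropy gradient step
                g = p[k] - (1.0 if k == t else 0.0)
                W[k] = [wi - lr * g * xi for wi, xi in zip(W[k], x)]
                b[k] -= lr * g
    return W, b

def predict(W, b, x):
    scores = [sum(wi * xi for wi, xi in zip(W[k], x)) + b[k]
              for k in range(len(W))]
    return max(range(len(scores)), key=lambda k: scores[k])

# Toy 2-D "feature vectors": one cluster per lesion class
centers = [(0, 0), (4, 0), (0, 4)]
X, y = [], []
for cls, (cx, cy) in enumerate(centers):
    for _ in range(15):
        X.append([cx + random.gauss(0, 0.4), cy + random.gauss(0, 0.4)])
        y.append(cls)

W, b = train_softmax_head(X, y, len(CLASSES))
acc = sum(predict(W, b, x) == t for x, t in zip(X, y)) / len(X)
```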
RESULTS AND DISCUSSION:
· Choosing an intermediate threshold value gives an equal chance of predicting a person with or without cancer
· Choosing a lower threshold value flags virtually every person who has cancer, and the flagged patients are sent for further tests
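The threshold trade-off described in the points above can be sketched as follows. The scores and labels are invented toy values, not results from the paper: lowering the threshold raises sensitivity (fewer missed cancers) at the cost of specificity (more healthy patients sent for tests).

```python
# Toy classifier scores: higher = more suspicious. Label 1 = cancer, 0 = benign.
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.35, 0.3, 0.2, 0.1, 0.05]
labels = [1,   1,   1,   0,   1,   0,    0,   0,   0,   0]

def sens_spec_at(threshold, scores, labels):
    """Sensitivity and specificity when predicting 'cancer' for
    every score at or above the threshold."""
    preds = [1 if s >= threshold else 0 for s in scores]
    tp = sum(p == 1 and t == 1 for p, t in zip(preds, labels))
    fn = sum(p == 0 and t == 1 for p, t in zip(preds, labels))
    tn = sum(p == 0 and t == 0 for p, t in zip(preds, labels))
    fp = sum(p == 1 and t == 0 for p, t in zip(preds, labels))
    return tp / (tp + fn), tn / (tn + fp)

# Sweeping the threshold downward: sensitivity rises, specificity falls
for th in (0.8, 0.5, 0.2):
    sens, spec = sens_spec_at(th, scores, labels)
    print(th, round(sens, 2), round(spec, 2))
```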
1. Nami N, Giannini E, Burroni M, Fimiani M, Rubegni P. Teledermatology: state-of-the-art and future perspectives. Expert Review of Dermatology. 2012;7(1):1-3.
2. Fabbrocini G, Triassi M, Mauriello MC, Torre G, Annunziata MC, De Vita V, Pastore F, D'Arco V, Monfrecola G. Epidemiology of skin cancer: role of some environmental factors. Cancers. 2010;2(4):1980-1989.
3. Esteva A, Kuprel B, Novoa RA, Ko J, Swetter SM, Blau HM, Thrun S. Dermatologist-level classification of skin cancer with deep neural networks. Nature. 2017;542(7639):115.
4. Ali AR, Deserno TM. A systematic review of automated melanoma detection in dermatoscopic images and its ground truth data. In: Medical Imaging 2012: Image Perception, Observer Performance, and Technology Assessment. 2012;8318:83181I.
5. Kittler H, Pehamberger H, Wolff K, Binder M. Diagnostic accuracy of dermoscopy. The Lancet Oncology. 2002;3(3):159-165.
6. Ali AR, Deserno TM. A systematic review of automated melanoma detection in dermatoscopic images and its ground truth data. In: Medical Imaging 2012: Image Perception, Observer Performance, and Technology Assessment. 2012;8318:83181I.
7. Fabbrocini G, De Vita V, Pastore F, D'Arco V, Mazzella C, Annunziata MC, Cacciapuoti S, Mauriello MC, Monfrecola A. Teledermatology: from prevention to diagnosis of nonmelanoma and melanoma skin cancer. International Journal of Telemedicine and Applications. 2011.
8. Menegola A, Fornaciali M, Pires R, Bittencourt FV, Avila S, Valle E. Knowledge transfer for melanoma screening with deep learning. In: 2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017). 2017:297-300.
9. Pomponiu V, Nejati H, Cheung NM. Deepmole: Deep neural networks for skin mole lesion classification. In: 2016 IEEE International Conference on Image Processing (ICIP). 2016:2623-2627.
10. Codella N, Cai J, Abedini M, Garnavi R, Halpern A, Smith JR. Deep learning, sparse coding, and SVM for melanoma recognition in dermoscopy images. In: International Workshop on Machine Learning in Medical Imaging. Springer, Cham; 2015:118-126.
11. Kawahara J, BenTaieb A, Hamarneh G. Deep features to classify skin lesions. In: 2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI). 2016:1397-1400.
12. Esteva A, Kuprel B, Novoa RA, Ko J, Swetter SM, Blau HM, Thrun S. Dermatologist-level classification of skin cancer with deep neural networks. Nature. 2017;542(7639):115.
13. Han SS, Kim MS, Lim W, Park GH, Park I, Chang SE. Classification of the clinical images for benign and malignant cutaneous tumors using a deep learning algorithm. Journal of Investigative Dermatology. 2018;138(7):1529-1538.
14. Bi L, Kim J, Ahn E, Feng D. Automatic skin lesion analysis using large-scale dermoscopy images and deep residual networks. arXiv preprint arXiv:1703.04197. 2017:1-2.
15. Kawahara J, Hamarneh G. Multi-resolution-tract CNN with hybrid pretrained and skin-lesion trained layers. In: International Workshop on Machine Learning in Medical Imaging. 2016:164-171.
16. Navarrete-Dechent C, Dusza SW, Liopyris K, Marghoob AA, Halpern AC, Marchetti MA. Automated dermatological diagnosis: hype or reality? The Journal of Investigative Dermatology. 2018;138(10):2277.
17. Shin HC, Roth HR, Gao M, Lu L, Xu Z, Nogues I, Yao J, Mollura D, Summers RM. Deep convolutional neural networks for computer-aided detection: CNN architectures, dataset characteristics and transfer learning. IEEE Transactions on Medical Imaging. 2016;35(5):1285-1298.
18. Sharif Razavian A, Azizpour H, Sullivan J, Carlsson S. CNN features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops. 2014:806-813.
19. Zhou B, Lapedriza A, Xiao J, Torralba A, Oliva A. Learning deep features for scene recognition using places database. In: Advances in Neural Information Processing Systems. 2014:487-495.
20. Szegedy C, Vanhoucke V, Ioffe S, Shlens J, Wojna Z. Rethinking the inception architecture for computer vision. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016:2818-2826.
21. Szegedy C, Liu W, Jia Y, Sermanet P, Reed S, Anguelov D, Erhan D, Vanhoucke V, Rabinovich A. Going deeper with convolutions. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2015:1-9.
22. Lin M, Chen Q, Yan S. Network in network. arXiv preprint arXiv:1312.4400. 2013.
23. Szegedy C, Ioffe S, Vanhoucke V, Alemi AA. Inception-v4, Inception-ResNet and the impact of residual connections on learning. In: Thirty-First AAAI Conference on Artificial Intelligence. 2017.
Accepted on 18.10.2019 © RJPT All rights reserved
Research J. Pharm. and Tech 2020; 13(3):1248-1252.