Using Dermoscopic Images and Texture Features for Skin Cancer Detection
Abstract
Deep learning has shown considerable promise for detecting skin cancer, particularly in dermoscopic images. In this study, a two-stage classification strategy was developed to improve diagnostic accuracy using the PH2 and ISIC 2019 datasets. First, transfer learning with VGG19 and EfficientNet was employed to extract deep features from the images. These features were then combined with 34 texture-based features and fed into a CNN, enabling the model to distinguish between benign and malignant cases with 99.33% validation accuracy. For images classified as malignant, a second stage determined the specific type of skin cancer among four categories: DenseNet121 was used for feature extraction, its deep features were combined with the texture descriptors, and the fused vector was fed into another CNN. This second model achieved 91.20% validation accuracy. The findings demonstrate the efficacy of combining transfer learning with texture analysis, offering a viable strategy for accurate, automated skin cancer classification.
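The core idea of both stages is feature fusion: a deep feature vector extracted by a pretrained backbone is concatenated with hand-crafted texture descriptors before classification. The sketch below illustrates this in NumPy only; the texture statistics shown (mean, variance, histogram energy, entropy) are a small illustrative subset of the 34 descriptors used in the paper, and the 512-dimensional random vector is a stand-in for the VGG19/EfficientNet or DenseNet121 output, since the actual extraction code is not part of the abstract.

```python
import numpy as np

def texture_features(img):
    """A few first-order texture statistics from a grayscale image.
    (Illustrative subset of the paper's 34 texture-based features.)"""
    hist, _ = np.histogram(img, bins=32, range=(0, 256))
    p = hist / hist.sum()                      # normalized gray-level histogram
    nz = p[p > 0]
    return np.array([
        img.mean(),                            # mean intensity
        img.var(),                             # variance (contrast proxy)
        np.sum(p ** 2),                        # energy / uniformity
        -np.sum(nz * np.log2(nz)),             # Shannon entropy
    ])

def fuse_features(deep_feats, img):
    """Concatenate backbone-derived deep features with texture descriptors,
    producing the fused vector that would be fed into the classifier CNN."""
    return np.concatenate([deep_feats, texture_features(img)])

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(224, 224)).astype(float)  # synthetic lesion image
deep = rng.standard_normal(512)    # stand-in for a pretrained backbone's features
fused = fuse_features(deep, img)
print(fused.shape)                 # (516,) = 512 deep + 4 texture features
```

In the paper's pipeline the fused vector goes to a small CNN classifier; any downstream classifier can consume it, since the fusion step is independent of the model that follows.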
Keywords