Comput Methods Programs Biomed. 2025 Sep 25;273:109081. doi: 10.1016/j.cmpb.2025.109081. Online ahead of print.
ABSTRACT
BACKGROUND AND OBJECTIVE: Cutaneous melanoma remains the most lethal form of skin cancer. Although it is incurable at advanced stages, the five-year survival rate is remarkably high when the disease is diagnosed at an early, localized stage. Recent advances in artificial intelligence have paved the way for early skin lesion diagnosis, turning digital image processing into effective diagnostic tools. Most existing approaches, however, apply machine learning and deep learning techniques in isolation, without combining their predictions.
METHODS: This paper introduces MultiExCam, a novel multi-approach, explainable architecture for skin cancer detection that integrates both machine learning and deep learning. Three heterogeneous data sources are used: dermatoscopic images, features extracted by deep learning techniques, and hand-crafted statistical features. A convolutional neural network performs both deep feature extraction and an initial classification, and the extracted features are combined with the handcrafted ones to train four additional machine learning models. An advanced ensemble model, implemented as a feed-forward neural network with gating and attention mechanisms, produces the final classification. To enhance interpretability, the architecture employs GradCAM to visualize critical regions in the input images and SHAP to evaluate the contribution of individual features to the predictions.
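The abstract does not include implementation details; the following is a minimal, illustrative sketch in PyTorch of how a gated-attention feed-forward ensemble of this kind could fuse a lesion's combined feature vector with the melanoma probabilities predicted by the base models. All class names, dimensions, and design choices here are assumptions for illustration, not the authors' code.

# Minimal sketch (not the authors' implementation): a feed-forward ensemble
# with a gating/attention mechanism that weights the predictions of
# heterogeneous base models (CNN + four classical ML classifiers) per lesion.
# All dimensions and names are illustrative assumptions.
import torch
import torch.nn as nn

class GatedAttentionEnsemble(nn.Module):
    def __init__(self, n_base_models=5, feat_dim=64):
        super().__init__()
        # Encode the concatenated deep + handcrafted feature vector.
        self.encoder = nn.Sequential(nn.LazyLinear(feat_dim), nn.ReLU())
        # Gating network: per-lesion attention weights over the base models.
        self.gate = nn.Sequential(nn.Linear(feat_dim, n_base_models),
                                  nn.Softmax(dim=-1))
        # Final feed-forward classifier on the fused evidence.
        self.classifier = nn.Sequential(nn.Linear(feat_dim + 1, 32),
                                        nn.ReLU(), nn.Linear(32, 1))

    def forward(self, features, base_probs):
        # features:   (batch, d) concatenated deep + handcrafted features
        # base_probs: (batch, n_base_models) melanoma probabilities
        h = self.encoder(features)
        weights = self.gate(h)                           # per-lesion weights
        fused = (weights * base_probs).sum(-1, keepdim=True)
        return torch.sigmoid(self.classifier(torch.cat([h, fused], dim=-1)))

# Hypothetical usage: 8 lesions, 128-dim fused features, 5 base models.
model = GatedAttentionEnsemble()
probs = model(torch.randn(8, 128), torch.rand(8, 5))

In this sketch the gate produces per-lesion weights over the base models, which loosely mirrors the adaptive, per-lesion decision strategy the abstract describes for the ensemble.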
RESULTS: MultiExCam demonstrates robust performance across three diverse datasets (HAM10000, ISIC, MED-NODE), achieving AUC scores of 97%, 91%, and 98% respectively, with corresponding F1-scores of 92%, 87%, and 94%. Comprehensive ablation studies validate the importance of the preprocessing pipeline and ensemble integration, with the hybrid approach consistently outperforming baseline deep learning models by 1-3 percentage points. Unlike existing compartmentalized hybrid solutions, MultiExCam’s adaptive ensemble architecture learns personalized decision strategies for individual lesions, mimicking expert dermatological workflows that integrate multiple evidence sources. The explainability analysis reveals clinically meaningful activation patterns corresponding to established diagnostic criteria including asymmetry, border irregularity, and color variation.
CONCLUSION: MultiExCam establishes a new paradigm for AI-assisted dermatological diagnosis by demonstrating that true hybrid integration of deep learning and machine learning, combined with comprehensive explainability techniques, can achieve both superior diagnostic performance and clinical interpretability. The architecture’s ability to provide accurate classifications while explaining prediction rationale addresses critical requirements for medical AI adoption, offering a promising foundation for clinical decision support systems in melanoma detection.
PMID:41021995 | DOI:10.1016/j.cmpb.2025.109081