B. LAKSHMI DHEVI, T. PANDISELVI, M. SONY JENUSHA, R. ASHOK, R. RAJ PRABU, P. MUTHUMARI, S. ALWYN RAJIV
DOI: https://doi.org/

Skin cancer remains one of the most prevalent and potentially life-threatening forms of cancer worldwide, and early, accurate diagnosis is critical for effective treatment and improved patient outcomes. Traditional diagnostic methods, including visual inspection and dermoscopic evaluation, are subject to human error and inter-observer variability, emphasizing the need for robust computational approaches.

This study proposes a novel framework for skin cancer diagnosis utilizing Generative Stacked Recurrent Neural Networks (GSRNNs), specifically designed for advanced medical image processing. The generative capability of the model enables it to synthesize high-fidelity feature representations from dermoscopic images, capturing both spatial and sequential patterns that are crucial for distinguishing malignant from benign lesions. The proposed architecture comprises multiple recurrent layers stacked hierarchically, enabling the network to learn complex temporal dependencies within image feature sequences. Each layer leverages gated recurrent units (GRUs) to mitigate vanishing gradient issues, ensuring efficient propagation of critical diagnostic information across the network. A generative pre-training phase enhances the model’s capability to recognize subtle morphological variations in skin lesions, effectively augmenting the training dataset and addressing challenges associated with limited annotated medical images.

The model is evaluated on a combination of benchmark dermoscopic datasets and real-world clinical images, with performance metrics including accuracy, sensitivity, specificity, F1-score, and area under the receiver operating characteristic curve (AUC). Experimental results demonstrate that the GSRNN-based framework achieves superior diagnostic accuracy compared to conventional convolutional neural network (CNN) and standard recurrent neural network (RNN) approaches.
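The stacked recurrent design described above can be illustrated with a minimal numpy sketch of a GRU cell and a hierarchical stack, where the hidden state of one layer feeds the next at every time step. This is an illustrative reconstruction, not the authors' implementation; all class and function names here are our own, and the weights are random rather than trained:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUCell:
    """Minimal GRU cell (illustrative sketch; untrained random weights)."""
    def __init__(self, input_size, hidden_size, seed=0):
        rng = np.random.default_rng(seed)
        s = hidden_size
        # One weight matrix each for the update gate (z), reset gate (r),
        # and candidate hidden state; each sees [input, hidden] concatenated.
        self.Wz = rng.standard_normal((s, input_size + s)) * 0.1
        self.Wr = rng.standard_normal((s, input_size + s)) * 0.1
        self.Wh = rng.standard_normal((s, input_size + s)) * 0.1
        self.hidden_size = s

    def step(self, x, h):
        xh = np.concatenate([x, h])
        z = sigmoid(self.Wz @ xh)          # update gate: how much to rewrite h
        r = sigmoid(self.Wr @ xh)          # reset gate: how much past to expose
        h_tilde = np.tanh(self.Wh @ np.concatenate([x, r * h]))
        # Gated interpolation between old state and candidate; this additive
        # path is what mitigates vanishing gradients over long sequences.
        return (1 - z) * h + z * h_tilde

def run_stack(layers, sequence):
    """Feed a feature sequence through hierarchically stacked GRU layers."""
    states = [np.zeros(layer.hidden_size) for layer in layers]
    for x in sequence:
        inp = x
        for i, layer in enumerate(layers):
            states[i] = layer.step(inp, states[i])
            inp = states[i]  # output of one layer is input to the next
    return states[-1]        # top-layer state summarizes the sequence
```

In a full diagnostic pipeline, `sequence` would be a sequence of image feature vectors (e.g. patch or scan-line features from a dermoscopic image), and the final top-layer state would feed a classification head.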
The generative pre-training enables the network to generalize effectively to unseen data, reducing misclassification rates, particularly in complex or ambiguous cases. Additionally, the model exhibits robust performance across varying skin types, lesion sizes, and imaging conditions, highlighting its potential applicability in diverse clinical settings. The proposed methodology not only facilitates rapid and reliable skin cancer detection but also provides interpretable feature representations that can assist clinicians in decision-making.

In conclusion, this research establishes Generative Stacked Recurrent Neural Networks as a powerful tool for medical image analysis, particularly in the early detection of skin cancer. By combining generative learning with sequential feature extraction, the framework offers a promising approach to improving diagnostic accuracy, reducing human error, and enhancing patient care in dermatology.
