IJIGSP Vol. 18, No. 1, 8 Feb. 2026
Keywords: High Grade Serous Cancer (HGSC), Histopathological Images, Denoising, Dual Attention Mechanism, Fuzzy Logic, U-Net
High-quality image reconstruction plays an important role in histopathological image analysis, especially for the diagnosis of high-grade serous ovarian cancer (HGSOC), because the fine cellular structures on which diagnosis depends must remain clearly visible. In practice, however, medical images suffer from a range of degradations caused by acquisition limitations, which can obscure significant diagnostic features. This work presents FUDA-NET, a new image-denoising framework that enhances noisy histopathological images while preserving structural and textural integrity. The architecture is based on an improved U-Net design integrated with a dual attention mechanism (channel and spatial attention), which enables the network to selectively emphasize meaningful features and suppress background noise. Additionally, a fuzzy logic layer is incorporated at the bottleneck to handle uncertainty and enhance contextual reasoning during feature extraction. FUDA-NET is trained with a loss function that combines Mean Squared Error (MSE) and the Structural Similarity Index Measure (SSIM) to ensure both pixel-wise accuracy and perceptual similarity. Experiments conducted on a High Grade Serous Ovarian Cancer histopathological dataset of 12,019 training images and 1,188 testing images show that FUDA-NET achieves superior denoising performance, outperforming traditional and recent deep learning methods such as DnCNN, U-Net, U-Net with attention, and Noise2Noise in terms of PSNR, SSIM, MSE, MAE, and FSIM. This approach contributes to improved visual clarity and diagnostic reliability in medical imaging.
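As a rough illustration of the combined MSE/SSIM objective described in the abstract, the sketch below shows one way such a blended loss can be formed. The abstract does not specify the weighting scheme, so the blend factor `alpha` and the single-window (global) SSIM approximation are assumptions, not the paper's actual formulation:

```python
import numpy as np

def ssim_global(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Single-window (global) SSIM approximation for images scaled to [0, 1]."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()  # cross-covariance of the two images
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def combined_loss(pred, target, alpha=0.5):
    """Blend of pixel-wise MSE and a structural (1 - SSIM) term."""
    mse = np.mean((pred - target) ** 2)
    return alpha * mse + (1 - alpha) * (1 - ssim_global(pred, target))
```

For identical images the loss is zero (MSE vanishes and SSIM equals 1), and noise raises both terms. Published SSIM-based losses typically compute SSIM over a sliding Gaussian window rather than the single global statistic used here for brevity.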
Anandakumar K., Chandrasekar C., "Fuzzy-Enhanced U-Net with Dual Attention for Histopathological Image Analysis in High Grade Serous Ovarian Cancer", International Journal of Image, Graphics and Signal Processing (IJIGSP), Vol. 18, No. 1, pp. 13-32, 2026. DOI: 10.5815/ijigsp.2026.01.02