IJIGSP Vol. 18, No. 1, 8 Feb. 2026
Index Terms: Dentistry, Object Detection, Source-Free Domain Adaptation (SFDA), Image Segmentation, Computed Tomography
The aim of this study is to evaluate the efficacy of combining source-free domain adaptation techniques with quantitative uncertainty assessment for enhancing image segmentation in new domains. The research employs an uncertainty-aware source-free domain adaptation (SFDA) strategy that comprises the generation of pseudo-labels, their filtering based on the entropy and variance of predictions, an Exponential Moving Average (EMA) teacher, and a tailored loss function. For validation, segmentation models pre-trained on one image dataset were adapted to another dataset. A comprehensive comparative and ablation analysis was conducted, together with visualization of the relationship between segmentation errors and the degree of uncertainty. The ablation study confirmed that the complete configuration with the EMA teacher yielded the best results, and the visualizations revealed a direct correlation between high uncertainty and an increased risk of segmentation errors. These findings support the viability of uncertainty assessment within source-free domain adaptation for clinical dentistry. The proposed methodology enables models to adapt to new conditions without retraining on source data, while rendering the decision-making process more transparent. Future studies should assess the efficacy of the proposed approach in additional dental visualization tasks, such as implant planning or orthodontic analysis.
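For illustration, the PyTorch sketch below shows one way the pipeline described in the abstract could be realized: repeated stochastic forward passes through an EMA teacher yield pseudo-labels together with per-pixel entropy and variance estimates, and pixels exceeding uncertainty thresholds are masked out of the adaptation loss. This is a minimal sketch under stated assumptions, not the authors' implementation; the function names, the number of Monte Carlo passes, and both threshold values are illustrative.

# Minimal sketch (assumption, not the authors' released code) of
# uncertainty-aware pseudo-label filtering with an EMA teacher.
# Typical setup: teacher = copy.deepcopy(student); then, per target
# batch, generate filtered pseudo-labels, compute the masked loss on
# the student, and update the teacher.
import torch
import torch.nn.functional as F

@torch.no_grad()
def update_ema_teacher(teacher, student, momentum=0.999):
    # EMA update: teacher weights track a slow moving average of the student.
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.mul_(momentum).add_(s, alpha=1.0 - momentum)

@torch.no_grad()
def filtered_pseudo_labels(teacher, image, n_passes=8,
                           entropy_thr=0.5, var_thr=0.05):
    # Keeping the teacher in train mode lets dropout act as a stochastic
    # sampler for Monte Carlo passes; n_passes and thresholds are illustrative.
    teacher.train()
    probs = torch.stack([F.softmax(teacher(image), dim=1)
                         for _ in range(n_passes)])        # (T, B, C, H, W)
    mean_p = probs.mean(dim=0)                             # (B, C, H, W)
    # Predictive entropy of the mean softmax, per pixel.
    entropy = -(mean_p * mean_p.clamp_min(1e-8).log()).sum(dim=1)
    # Variance of per-pass probabilities, averaged over classes, per pixel.
    variance = probs.var(dim=0).mean(dim=1)
    pseudo = mean_p.argmax(dim=1)                          # hard pseudo-labels
    mask = (entropy < entropy_thr) & (variance < var_thr)  # reliable pixels
    return pseudo, mask

def masked_adaptation_loss(student_logits, pseudo, mask):
    # Cross-entropy restricted to the low-uncertainty pixels only.
    loss = F.cross_entropy(student_logits, pseudo, reduction="none")
    return (loss * mask.float()).sum() / mask.float().sum().clamp_min(1.0)

Masking the loss in this way is what ties the uncertainty estimate to the adaptation step: pixels the teacher is unsure about contribute nothing to the gradient, which is consistent with the reported correlation between high uncertainty and segmentation errors.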
Sviatoslav Dziubenko, Tymur Dovzhenko, Andriy Kyrylyuk, Kamila Storchak, "Uncertainty-Aware Source-Free Domain Adaptation for Dental CBCT Image Segmentation", International Journal of Image, Graphics and Signal Processing (IJIGSP), Vol. 18, No. 1, pp. 1-12, 2026. DOI: 10.5815/ijigsp.2026.01.01