An Effective Semi-Supervised Feature Extraction Model with Reduced Architectural Complexity for Image Forgery Classification


Author(s)

Jisha K. R. 1,*, Sabna N. 2

1. APJ Abdul Kalam Technological University, Thiruvananthapuram, Department of Electronics and Communication Engineering, Rajagiri School of Engineering and Technology, Kakkanad, Kerala, India

2. Department of Electronics and Communication Engineering, Rajagiri School of Engineering and Technology, Kakkanad, Kerala, India

* Corresponding author.

DOI: https://doi.org/10.5815/ijigsp.2026.01.03

Received: 5 Jun. 2025 / Revised: 12 Aug. 2025 / Accepted: 6 Jan. 2026 / Published: 8 Feb. 2026

Index Terms

Encoder-Decoder Architecture, Feature Extraction, Generalizability, Image Forgery Detection, Reduced Architectural Complexity

Abstract

This paper presents a generalized deep learning approach that detects image forgeries of any category with reduced architectural complexity and without compromising performance. A convolutional encoder-decoder image reconstruction model is designed to extract the pertinent information from the images. This design was selected after comparing the performance of similar networks built with varying architectural complexity. The best reconstruction feature extractor converged faster and achieved higher accuracy, as observed from the training and validation performance curves. The dimensionally compressed representation produced by the reconstruction model is passed to dense layers for classification. Experiments on forgery datasets spanning different forgery types confirm the generalizability of the model. Compared with reconstruction models that adopt transfer learning on the encoder side using MobileNet, ResNet 50, and VGG 19, the proposed model exhibited competitive and consistently higher mean Precision and F1-score across multiple datasets, as validated through multi-seed experimentation. Moreover, despite its reduced architecture, the proposed model performed on par with the state-of-the-art approaches against which it was compared.
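The two-stage pipeline the abstract describes (unsupervised reconstruction, then supervised classification on the compressed features) can be sketched as follows. This is a minimal illustrative sketch in Keras, not the authors' exact architecture: the input size, filter counts, and dense-layer widths are assumptions chosen for clarity.

```python
# Hypothetical sketch of a semi-supervised forgery classifier: a convolutional
# encoder-decoder is first trained to reconstruct images; its bottleneck
# features then feed dense layers for classification. All layer sizes are
# illustrative assumptions, not the paper's reported configuration.
from tensorflow import keras
from tensorflow.keras import layers

inputs = keras.Input(shape=(128, 128, 3))

# Encoder: convolution + pooling compresses the image representation.
x = layers.Conv2D(32, 3, activation="relu", padding="same")(inputs)
x = layers.MaxPooling2D(2)(x)
x = layers.Conv2D(16, 3, activation="relu", padding="same")(x)
bottleneck = layers.MaxPooling2D(2)(x)  # 32x32x16 compressed features

# Decoder: mirrors the encoder to reconstruct the input image.
y = layers.Conv2DTranspose(16, 3, strides=2, activation="relu", padding="same")(bottleneck)
y = layers.Conv2DTranspose(32, 3, strides=2, activation="relu", padding="same")(y)
reconstruction = layers.Conv2D(3, 3, activation="sigmoid", padding="same")(y)

# Stage 1: train on reconstruction loss (no labels needed).
autoencoder = keras.Model(inputs, reconstruction)
autoencoder.compile(optimizer="adam", loss="mse")

# Stage 2: reuse the encoder's bottleneck; dense layers classify the image.
z = layers.Flatten()(bottleneck)
z = layers.Dense(64, activation="relu")(z)
label = layers.Dense(2, activation="softmax")(z)  # authentic vs. forged

classifier = keras.Model(inputs, label)
classifier.compile(optimizer="adam", loss="categorical_crossentropy")
```

Because both models share the same encoder layers, weights learned during the reconstruction stage carry over to the classifier, which is what lets a compact encoder substitute for heavier transfer-learning backbones such as MobileNet, ResNet 50, or VGG 19.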

Cite This Paper

Jisha K. R., Sabna N., "An Effective Semi-Supervised Feature Extraction Model with Reduced Architectural Complexity for Image Forgery Classification", International Journal of Image, Graphics and Signal Processing (IJIGSP), Vol.18, No.1, pp. 33-50, 2026. DOI: 10.5815/ijigsp.2026.01.03
