IJIGSP Vol. 17, No. 4, 8 Aug. 2025
Index Terms: In-loop filter, Deblocking filter, Sample Adaptive Offset filter, Residual Encoder-Decoder Network, High Efficiency Video Coding, Convolutional Neural Network
High Efficiency Video Coding (HEVC), also known as H.265, is a video compression standard that outperforms its predecessor, H.264. In HEVC, the in-loop filter is an additional processing stage that removes compression artifacts from decoded video frames and improves visual quality. This article proposes an improved in-loop filter that integrates a Residual Encoder-Decoder Network based Deblocking Filter (REDNetDF) with a Convolutional Neural Network based Sample Adaptive Offset (CNN-SAO) filter; together, the two stages eliminate even minute artifacts in compressed video frames. The quantized frame is first processed by REDNetDF, which removes fine blocking artifacts from the compressed frame, and the CNN-SAO filter is then applied to eliminate ringing artifacts. The proposed method is evaluated on the publicly available UVG dataset using a variety of metrics. The model achieves better results than existing techniques, with a PSNR of 49.7 dB, an SSIM of 0.97, and an MSE of 1.8, while saving 24.9% more bits on average at the same quality level. The proposed framework also reduces time complexity, with average encoding and decoding times of 90.5 and 4.5 seconds, respectively.
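The two-stage design described in the abstract can be pictured as a residual encoder-decoder deblocking pass followed by a lightweight CNN that plays the role of the SAO stage. The sketch below is a minimal PyTorch illustration of that idea only; the layer counts, channel widths, and class names (REDNetDF, CNNSAO) are assumptions made for illustration and are not taken from the paper's actual architecture or training setup.

```python
# Minimal sketch of a two-stage learned in-loop filter:
# stage 1: residual encoder-decoder network for deblocking (REDNet-style),
# stage 2: small CNN predicting per-pixel offsets, standing in for the SAO stage.
# Layer counts, channel widths, and class names are illustrative assumptions only.
import torch
import torch.nn as nn


class REDNetDF(nn.Module):
    """Encoder-decoder with symmetric skip connections; predicts a residual
    that is added back to the reconstructed frame to suppress blocking artifacts."""

    def __init__(self, channels=64, depth=5):
        super().__init__()
        self.encoders = nn.ModuleList(
            [nn.Conv2d(1 if i == 0 else channels, channels, 3, padding=1) for i in range(depth)]
        )
        self.decoders = nn.ModuleList(
            [nn.ConvTranspose2d(channels, 1 if i == depth - 1 else channels, 3, padding=1) for i in range(depth)]
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        skips, out = [], x
        for enc in self.encoders:
            out = self.relu(enc(out))
            skips.append(out)
        for i, dec in enumerate(self.decoders):
            out = dec(out)
            if i < len(self.decoders) - 1:
                # symmetric skip connection back to the matching encoder output
                out = self.relu(out + skips[-(i + 2)])
        return x + out  # residual learning: filtered = degraded + predicted residual


class CNNSAO(nn.Module):
    """Small CNN that predicts per-pixel offsets, mimicking the role of SAO
    (mitigating ringing artifacts) after deblocking."""

    def __init__(self, channels=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, x):
        return x + self.net(x)  # add learned offsets to the deblocked frame


if __name__ == "__main__":
    # Toy forward pass on a single-channel (luma) frame patch.
    frame = torch.rand(1, 1, 64, 64)        # reconstructed (compressed) frame
    deblocked = REDNetDF()(frame)           # stage 1: suppress blocking artifacts
    filtered = CNNSAO()(deblocked)          # stage 2: suppress ringing artifacts
    print(filtered.shape)                   # torch.Size([1, 1, 64, 64])
```

Both stages use residual learning (the network predicts a correction added to its input), which is a common choice for restoration-style filters because the output stays close to the reconstructed frame when little correction is needed.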
Vanishree Moji, Bharathi Gururaj, Mathivanan Murugavelu, "Enhancing In-loop Filter of HEVC with Integrated Residual Encoder-Decoder Network and Convolutional Neural Network", International Journal of Image, Graphics and Signal Processing (IJIGSP), Vol. 17, No. 4, pp. 49-67, 2025. DOI: 10.5815/ijigsp.2025.04.04
[1]T. Li, M. Xu, C. Zhu, R. Yang, Z. Wang, and Z. Guan, "A deep learning approach for multi-frame in-loop filter of HEVC," IEEE Trans. Image Process., vol. 28, no. 11, pp. 5663–5678, Nov. 2019.
[2]S. Kuanar, K. R. Rao, C. Conly, and N. Gorey, "Deep learning based HEVC in-loop filter and noise reduction," Signal Process. Image Commun., vol. 99, Art. no. 116409, 2021.
[3]Z. Pan, X. Yi, Y. Zhang, B. Jeon, and S. Kwong, "Efficient in-loop filtering based on enhanced deep convolutional neural networks for HEVC," IEEE Trans. Image Process., vol. 29, pp. 5352–5366, 2020.
[4]S. Kuanar, C. Conly, and K. R. Rao, "Deep learning based HEVC in-loop filtering for decoder quality enhancement," in Proc. 2018 Picture Coding Symp. (PCS), pp. 164–168, Jun. 2018.
[5]Y. Zhang, T. Shen, X. Ji, Y. Zhang, R. Xiong, and Q. Dai, "Residual highway convolutional neural networks for in-loop filtering in HEVC," IEEE Trans. Image Process., vol. 27, no. 8, pp. 3827–3841, Aug. 2018.
[6]G. Raja, A. Khan, A. K. Khan, and M. H. Yousaf, "Performance analysis of HEVC in-loop filter," Life Sci. J., vol. 10, pp. 331–336, 2013.
[7]I. Hautala, J. Boutellier, J. Hannuksela, and O. Silven, "Programmable low-power multicore coprocessor architecture for HEVC/H.265 in-loop filtering," IEEE Trans. Circuits Syst. Video Technol., vol. 25, no. 7, pp. 1217–1230, Jul. 2015.
[8]V. Moji and M. Murugavelu, "Adaptive and efficient hybrid in-loop filter based on enhanced generative adversarial networks with sample adaptive offset filter for HEVC/H.265," Period. Polytech. Electr. Eng. Comput. Sci., 2023. [Online]. Available: https://doi.org/10.3311/PPee.20881
[9]X. Meng, C. Chen, S. Zhu, and B. Zeng, "A new HEVC in-loop filter based on multi-channel long-short-term dependency residual networks," in Proc. 2018 Data Compression Conf., pp. 187–196, Mar. 2018.
[10]W. S. Park and M. Kim, "CNN-based in-loop filtering for coding efficiency improvement," in Proc. 2016 IEEE 12th IVMSP Workshop, pp. 1–5, Jul. 2016.
[11]W. Sun, X. He, H. Chen, S. Xiong, and Y. Xu, "A nonlocal HEVC in-loop filter using CNN-based compression noise estimation," Appl. Intell., vol. 52, no. 15, pp. 17810–17828, 2022.
[12]X. Lu, C. Yu, and X. Jin, "A fast HEVC intra-coding algorithm based on texture homogeneity and spatio-temporal correlation," EURASIP J. Adv. Signal Process., vol. 2018, Art. no. 1, 2018.
[13]Z. Pan, X. Yi, Y. Zhang, B. Jeon, and S. Kwong, "Efficient in-loop filtering based on enhanced deep convolutional neural networks for HEVC," IEEE Trans. Image Process., vol. 29, pp. 5352–5366, 2020.
[14]Q. Xu, X. Jiang, T. Sun, and A. C. Kot, "Detection of transcoded HEVC videos based on in-loop filtering and PU partitioning analyses," Signal Process. Image Commun., vol. 92, Art. no. 116109, 2021.
[15]A. Dhanalakshmi and G. Nagarajan, "Combined spatial temporal-based in-loop filter for scalable extension of HEVC," ICT Express, vol. 6, no. 4, pp. 306–311, 2020.
[16]P. R. Christopher and S. Sathasivam, "Quality assessment of dual-parallel edge deblocking filter architecture for HEVC/H.265," Appl. Sci., vol. 12, no. 24, Art. no. 12952, 2022.
[17]S. Y. Lim, M. K. Choi, and Y. L. Lee, "Frequency-based adaptive interpolation filter in intra prediction," Appl. Sci., vol. 13, no. 3, Art. no. 1475, 2023.
[18]T. Mallikarachchi, D. Talagala, H. Kodikara Arachchi, C. Hewage, and A. Fernando, "A decoding-complexity and rate-controlled video-coding algorithm for HEVC," Future Internet, vol. 12, no. 7, Art. no. 120, 2020.
[19]M. Li and W. Ji, "Screen content-aware video coding through non-local model embedded with intra-inter in-loop filtering," IEEE Trans. Circuits Syst. Video Technol., vol. 35, no. 2, pp. 1870–1883, Feb. 2025. [Online]. Available: https://doi.org/10.1109/TCSVT.2024.3473543
[20]F. Bossen, B. Bross, K. Suhring, and D. Flynn, "HEVC complexity and implementation analysis," IEEE Trans. Circuits Syst. Video Technol., vol. 22, no. 12, pp. 1685–1696, Dec. 2012.
[21]M. Zhou, W. Gao, M. Jiang, and H. Yu, "HEVC lossless coding and improvements," IEEE Trans. Circuits Syst. Video Technol., vol. 22, no. 12, pp. 1839–1843, Dec. 2012.
[22]G. J. Sullivan and J. R. Ohm, "Recent developments in standardization of high-efficiency video coding (HEVC)," in Proc. Appl. Digit. Image Process. XXXIII, vol. 7798, pp. 239–245, 2010.
[23]I. K. Kim, J. Min, T. Lee, W. J. Han, and J. Park, "Block partitioning structure in the HEVC standard," IEEE Trans. Circuits Syst. Video Technol., vol. 22, no. 12, pp. 1697–1706, Dec. 2012.
[24]J. L. Lin, Y. W. Chen, Y. W. Huang, and S. M. Lei, "Motion vector coding in the HEVC standard," IEEE J. Sel. Topics Signal Process., vol. 7, no. 6, pp. 957–968, Dec. 2013.
[25]A. Mercat, M. Viitanen, and J. Vanne, "UVG dataset: 50/120fps 4K sequences for video codec analysis and development," in Proc. 11th ACM Multimedia Syst. Conf., pp. 297–302, May 2020. [Online]. Available: https://doi.org/10.1145/3339825.3394937; dataset: https://ultravideo.fi/dataset.html
[26]Z. Pan, X. Yi, Y. Zhang, B. Jeon, and S. Kwong, "Efficient in-loop filtering based on enhanced deep convolutional neural networks for HEVC," IEEE Trans. Image Process., vol. 29, pp. 5352–5366, 2020.
[27]A. Dhanalakshmi and G. Nagarajan, "Convolutional neural network-based deblocking filter for SHVC in H.265," Signal Image Video Process., vol. 14, no. 8, pp. 1635–1645, 2020.
[28]C. Jia, S. Wang, X. Zhang, S. Wang, J. Liu, S. Pu, and S. Ma, "Content-aware convolutional neural network for in-loop filtering in high efficiency video coding," IEEE Trans. Image Process., vol. 28, no. 7, pp. 3343–3356, Jul. 2019.
[29]Y. Li, L. Li, Z. Zhuang, Y. Fang, H. Peng, and N. Ling, "Transformer-based data-driven video coding acceleration for industrial applications," Math. Probl. Eng., vol. 2022, Art. no. 1440323, 2022.
[30]N. Li, Y. Zhang, L. Zhu, W. Luo, and S. Kwong, "Reinforcement learning based coding unit early termination algorithm for high efficiency video coding," J. Vis. Commun. Image Represent., vol. 60, pp. 276–286, 2019.
[31]P. Du, Y. Liu, N. Ling, L. Liu, Y. Ren, and M. K. Hsu, "A generative adversarial network for video compression," in Proc. SPIE Big Data IV: Learning, Analytics, and Applications, vol. 12097, pp. 129–136, 2022.