Accident Detection and Estimation of Vehicle Speed and Count by Type in Road CCTV Images Using Machine Vision

pp. 183–201


Author(s)

I. Kadek Rai Pramana 1,*, I. Putu Agung Bayupati 1, Gusti Made Arya Sasmita 1, Ngoc Le 2

1. Department of Information Technology, Faculty of Engineering, Udayana University, Badung, Indonesia

2. Faculty of Information Technology, Swinburne Vietnam – FPT University, Hanoi, Vietnam

* Corresponding author.

DOI: https://doi.org/10.5815/ijitcs.2026.02.11

Received: 21 Nov. 2025 / Revised: 1 Jan. 2026 / Accepted: 13 Feb. 2026 / Published: 8 Apr. 2026

Index Terms

Detection, Accident, Vehicle, Machine Vision, YOLOv11

Abstract

This study presents an integrated traffic monitoring system for accident detection, vehicle counting by type, and vehicle speed estimation using roadside Closed-Circuit Television (CCTV) footage and machine vision based on the YOLOv11 architecture. The proposed methodology comprises data collection from heterogeneous sources, data preprocessing and augmentation, model fine-tuning on a custom Vehicle–Accident dataset, system deployment through a web-based application, and real-world evaluation. The YOLOv11 models were optimized to detect multiple vehicle categories and clearly defined accident classes under real traffic conditions. Experimental results indicate that the YOLOv11 Large (l) model achieves superior detection performance, with 81.8% precision, 75.8% recall, 82.1% mAP50, and 53.3% mAP50–95. Real-world testing further confirms its effectiveness, yielding an object detection accuracy of 99.24% and low speed estimation errors, with Mean Absolute Percentage Error (MAPE) of 3.56% for video-based evaluation and 5.54% for real-time evaluation. In contrast, the YOLOv11 Nano (n) model offers faster inference and lower computational requirements but exhibits reduced robustness in complex accident scenarios. The trained models are deployed in an interactive web application supporting image, video, and real-time inputs, enabling practical traffic monitoring and decision support. Overall, the YOLOv11l-Vehicle-Accident model is identified as the most suitable configuration for accuracy-critical traffic management systems, while Nano variants are better suited for resource-constrained deployments.
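The speed-estimation accuracy reported above is measured with the Mean Absolute Percentage Error (MAPE). As an illustrative sketch (not code from the paper, and using made-up speed values), MAPE over a set of ground-truth and estimated vehicle speeds can be computed as:

```python
def mape(actual, predicted):
    """Mean Absolute Percentage Error: (100/n) * sum(|a - p| / |a|)."""
    if len(actual) != len(predicted) or not actual:
        raise ValueError("inputs must be non-empty and equal length")
    return 100.0 * sum(
        abs(a - p) / abs(a) for a, p in zip(actual, predicted)
    ) / len(actual)

# Hypothetical ground-truth vs. estimated speeds (km/h), for illustration only:
gt = [40.0, 52.0, 61.0, 45.0]
est = [41.2, 50.5, 63.0, 44.1]
print(round(mape(gt, est), 2))  # → 2.79
```

A MAPE of 3.56% (video) or 5.54% (real time), as reported in the abstract, thus corresponds to estimated speeds deviating from ground truth by roughly 3–6% on average.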

Cite This Paper

I. Kadek Rai Pramana, I. Putu Agung Bayupati, Gusti Made Arya Sasmita, Ngoc Le, "Accident Detection and Estimation of Vehicle Speed and Count by Type in Road CCTV Images Using Machine Vision", International Journal of Information Technology and Computer Science (IJITCS), Vol.18, No.2, pp.183-201, 2026. DOI: 10.5815/ijitcs.2026.02.11

Reference

[1]M. Tzampazaki, C. Zografos, E. Vrochidou, and G. A. Papakostas, “Machine Vision—Moving from Industry 4.0 to Industry 5.0,” Applied Sciences (Switzerland), vol. 14, no. 4, pp. 1471–1503, 2024, doi: 10.3390/app14041471.
[2]I. G. P. S. Wijaya, M. N. Azmi, and A. Y. Husodo, “Comparative Analysis of YOLOv8 and HSV Methods for Traffic Density Measurement,” Lontar Komputer: Jurnal Ilmiah Teknologi Informasi, vol. 15, no. 2, pp. 134–148, 2024, doi: 10.24843/LKJITI.2024.v15.i02.p06.
[3]N. Suciati, N. P. Sutramiani, and D. Siahaan, “LONTAR_DETC: Dense and High Variance Balinese Character Detection Method in Lontar Manuscripts,” IEEE Access, vol. 10, pp. 14600–14609, 2022, doi: 10.1109/ACCESS.2022.3147069.
[4]I. M. A. D. Suarjaya et al., “Deep Learning Model Size Performance Evaluation for Lightning Whistler Detection on Arase Satellite Dataset,” Remote Sens (Basel), vol. 16, no. 22, 2024, doi: 10.3390/rs16224264.
[5]I. K. Gunawan, I. P. A. Bayupati, K. S. Wibawa, I. M. Sukarsa, and L. A. Kurniawan, “Indonesian Plate Number Identification Using YOLACT and Mobilenetv2 in the Parking Management System,” JUITA: Jurnal Informatika, vol. 9, no. 1, pp. 69–76, 2021.
[6]J. S. W. Hutauruk, T. Matulatan, and N. Hayaty, “Deteksi Kendaraan Secara Real Time Menggunakan Metode YOLO Berbasis Android,” Jurnal Sustainable: Jurnal Hasil Penelitian dan Industri Terapan, vol. 9, no. 1, pp. 8–14, 2020.
[7]F. Rofii, G. Priyandoko, M. I. Fanani, and A. Suraji, “Peningkatan Akurasi Penghitungan Jumlah Kendaraan dengan Membangkitkan Urutan Identitas Deteksi Berbasis Yolov4 Deep Neural Networks,” TEKNIK, vol. 42, no. 2, pp. 169–177, 2021, doi: 10.14710/teknik.v42i2.37019.
[8]M. Zulfikri, Hairani, Ahmad, K. Abd. Latif, R. Hammad, and Moch. Syahrir, “Deteksi dan Estimasi Kecepatan Kendaraan dalam Sistem Pengawasan Lalu Lintas Menggunakan Pengolahan Citra,” Techno.Com, vol. 20, no. 3, pp. 455–467, 2021.
[9]D. I. Mulyana and M. A. Rofik, “Implementasi Deteksi Real Time Klasifikasi Jenis Kendaraan Di Indonesia Menggunakan Metode YOLOV5,” Jurnal Pendidikan Tambusai, vol. 6, no. 3, pp. 13971–13982, 2022.
[10]A. Rezky, A. Bagir, D. Pamerean, and F. Makhrus, “Deteksi Kecelakaan Lalu Lintas Otomatis Pada Rekaman CCTV Indonesia Menggunakan Deep Learning,” Buletin Pagelaran Mahasiswa Nasional Bidang Teknologi Informasi dan Komunikasi, vol. 1, no. 1, pp. 1–5, 2023.
[11]L. A. Kurniawan, I. P. A. Bayupati, and K. S. Wibawa, “Sistem Hitung Kendaraan Berdasarkan Jenis Menggunakan Metode Background Subtraction,” Jurnal Ilmiah Teknologi dan Komputer, vol. 1, no. 2, pp. 265–273, 2021.
[12]L. A. Kurniawan, I. P. A. Bayupati, K. S. Wibawa, I. M. Sukarsa, and I. K. Gunawan, “Sistem Klasifikasi Jenis dan Warna Kendaraan Secara Real-time Menggunakan Metode k-Nearest Neighbor dan Framework YOLACT,” Jurnal Edukasi dan Penelitian Informatika (JEPIN), vol. 7, no. 1, pp. 12–17, 2021.
[13]C. J. Lin, S. Y. Jeng, and H. W. Lioa, “A Real-Time Vehicle Counting, Speed Estimation, and Classification System Based on Virtual Detection Zone and YOLO,” Math Probl Eng, vol. 2021, no. 1, pp. 1–10, 2021, doi: 10.1155/2021/1577614.
[14]Y. Zhang, Z. Guo, J. Wu, Y. Tian, H. Tang, and X. Guo, “Real-Time Vehicle Detection Based on Improved YOLOv5,” Sustainability (Switzerland), vol. 14, no. 19, pp. 12274–12292, 2022, doi: 10.3390/su141912274.
[15]Z. Chen, L. Cao, and Q. Wang, “YOLOv5-Based Vehicle Detection Method for High-Resolution UAV Images,” Mobile Information Systems, vol. 2022, no. 1, pp. 1–11, 2022, doi: 10.1155/2022/1828848.
[16]Puyush, “Real Time Accident Detection Dataset,” Roboflow Universe. Accessed: Nov. 08, 2024. [Online]. Available: https://universe.roboflow.com/puyush-fipgg/real-time-accident-detection
[17]P. Zhu et al., “Detection and Tracking Meet Drones Challenge,” IEEE Trans Pattern Anal Mach Intell, vol. 44, no. 11, pp. 7380–7399, 2021, [Online]. Available: http://arxiv.org/abs/2001.06303
[18]N. Pethiyagoda, “Vehicle Dataset for YOLO,” Dataset Ninja. Accessed: Nov. 08, 2024. [Online]. Available: https://datasetninja.com/vehicle-dataset-for-yolo
[19]Z. Song et al., “Synthetic Datasets for Autonomous Driving: A Survey,” ArXiv, vol. 2304.12205, pp. 1–19, 2024, [Online]. Available: http://arxiv.org/abs/2304.12205
[20]A. Kazemi, Q. ul ain Fatima, V. Kindratenko, and C. Tessum, “AIDOVECL: AI-generated Dataset of Outpainted Vehicles for Eye-level Classification and Localization,” ArXiv, vol. 2410.24116, pp. 1–34, 2025, [Online]. Available: http://arxiv.org/abs/2410.24116
[21]N. Jegham, C. Y. Koh, M. Abdelatti, and A. Hendawi, “YOLO Evolution: A Comprehensive Benchmark and Architectural Review of YOLOv12, YOLO11, and Their Previous Versions,” ArXiv, vol. 2411.00201v2, pp. 1–22, 2024, [Online]. Available: http://arxiv.org/abs/2411.00201
[22]A. F. Rasheed and M. Zarkoosh, “YOLOv11 Optimization for Efficient Resource Utilization,” ArXiv, vol. 2412.14790, pp. 1–12, 2024, [Online]. Available: http://arxiv.org/abs/2412.14790
[23]R. Khanam and M. Hussain, “YOLOv11: An Overview of the Key Architectural Enhancements,” ArXiv, vol. 2410.17725, pp. 1–9, 2024, [Online]. Available: http://arxiv.org/abs/2410.17725
[24]T. Jiang and Y. Zhong, “ODverse33: Is the New YOLO Version Always Better? A Multi-Domain Benchmark from YOLO v5 to v11,” ArXiv, vol. 2502.14314v2, pp. 1–20, 2025, [Online]. Available: http://arxiv.org/abs/2502.14314
[25]M. A. R. Alif, “YOLOv11 for Vehicle Detection: Advancements, Performance, and Applications in Intelligent Transportation Systems,” ArXiv, vol. 2410.22898, pp. 1–16, 2024, [Online]. Available: http://arxiv.org/abs/2410.22898
[26]R. Sapkota et al., “YOLOv11 to Its Genesis: A Decadal and Comprehensive Review of The You Only Look Once (YOLO) Series,” ArXiv, vol. 2406.19407v5, pp. 1–26, 2025, [Online]. Available: http://arxiv.org/abs/2406.19407v5
[27]N. Dahiya et al., “Hyper-parameter Tuned Deep Learning Approach for Effective Human Monkeypox Disease Detection,” Sci Rep, vol. 13, no. 1, pp. 15930–15947, 2023, doi: 10.1038/s41598-023-43236-1.
[28]Z. Ning, X. Wu, J. Yang, and Y. Yang, “MT-YOLOv5: Mobile Terminal Table Detection Model Based on YOLOv5,” J Phys Conf Ser, vol. 1978, no. 1, pp. 1–9, 2021, doi: 10.1088/1742-6596/1978/1/012010.
[29]A. E. Maxwell, T. A. Warner, and L. A. Guillén, “Accuracy Assessment in Convolutional Neural Network-Based Deep Learning Remote Sensing Studies—Part 1: Literature Review,” Remote Sens (Basel), vol. 13, no. 13, pp. 209–220, 2021, doi: 10.3390/rs13132450.
[30]F. Liantoni and A. Agusti, “Forecasting Bitcoin using Double Exponential Smoothing Method Based on Mean Absolute Percentage Error,” JOIV: International Journal on Informatics Visualization, vol. 4, no. 2, pp. 91–95, 2020.
[31]I. K. R. Pramana, “Vehicle-AccidentYoloApp,” GitHub. Accessed: Jan. 08, 2025. [Online]. Available: https://github.com/rai-pramana/Vehicle-AccidentYoloApp
[32]Ultralytics Inc, “Ultralytics YOLO Docs,” Ultralytics. Accessed: Jan. 08, 2025. [Online]. Available: https://docs.ultralytics.com/#how-can-ultralytics-yolo-be-used-for-real-time-object-tracking
[33]P. Skalski, “How to Estimate Speed with Computer Vision,” Roboflow Blog. Accessed: Jan. 08, 2025. [Online]. Available: https://blog.roboflow.com/estimate-speed-computer-vision/