Design and Implementation of Intelligent Traffic Control Systems with Vehicular Ad Hoc Networks


Author(s)

Osita Miracle Nwakeze 1,*, Christopher Odeh 2, Obaze Caleb Akachukwu 2

1. Department of Computer Science, Chukwuemeka Odumegwu Ojukwu University, Uli, Anambra State, Nigeria

2. Department of Computer Science, Dennis Osadebay University, Asaba, Delta State, Nigeria

* Corresponding author.

DOI: https://doi.org/10.5815/ijmsc.2026.01.05

Received: 2 Nov. 2025 / Revised: 11 Dec. 2025 / Accepted: 20 Jan. 2026 / Published: 8 Feb. 2026

Index Terms

Intelligent Traffic Control System (ITCS), Reinforcement Learning, Vehicular Ad Hoc Networks (VANETs), Traffic Signal Optimization, Urban Traffic Management

Abstract

Urban traffic congestion is a significant problem that contributes to longer travel times, higher fuel consumption, and environmental harm. This paper introduces an Intelligent Traffic Control System (ITCS) that combines Vehicular Ad Hoc Networks (VANETs) with Reinforcement Learning (RL) to optimise traffic signal control. The system provides real-time two-way communication between vehicles and roadside units, allowing an RL agent to adapt signal phases to traffic metrics such as average delay, queue length, and throughput. The Kaggle VANET Malicious Node Dataset was used to simulate malicious or unreliable nodes and to test the robustness of the system. The RL agent was trained in the SUMO simulator through TraCI over multiple episodes, learning actions that improve traffic flow while minimising congestion. Training progressed steadily: cumulative rewards grew while average delays and queue lengths declined across epochs. Performance evaluation of the ITCS under peak-hour, off-peak, incident, and malicious-node scenarios demonstrated substantial gains over conventional fixed-time controllers, with average delays reduced by 48–55%, queue lengths by 49–57%, and throughput increased by 28–35%. These results indicate that combining reinforcement learning with VANET-supported traffic control yields an adaptive, data-driven, and robust solution for urban intersections. The RL-based ITCS not only improves traffic flow and reduces congestion but is also resilient to communication anomalies, indicating its suitability for deployment in current smart-city traffic management.
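
To make the control loop described above concrete, the following is a minimal, illustrative sketch of adaptive signal control through SUMO's TraCI interface. It is not the authors' implementation: the configuration file name ("cross.sumocfg"), the traffic light id ("tls0"), the incoming-lane ids, the two-phase program, and the use of tabular Q-learning in place of the paper's RL agent are all assumptions made for illustration. The reward simply penalises waiting time and queue length, in the spirit of the delay and queue metrics named in the abstract.

# Minimal illustrative sketch: adaptive signal control over SUMO/TraCI with
# tabular Q-learning. All ids and file names below are hypothetical placeholders.
import random
from collections import defaultdict

import traci  # SUMO's Python client; requires a local SUMO installation

SUMO_CMD = ["sumo", "-c", "cross.sumocfg"]        # hypothetical SUMO configuration
TLS_ID = "tls0"                                   # hypothetical traffic light id
LANES = ["n_in_0", "s_in_0", "e_in_0", "w_in_0"]  # hypothetical incoming lanes
PHASES = [0, 2]                                   # assumed green phases (NS, EW); yellow handling omitted

ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1            # learning rate, discount, exploration rate
q_table = defaultdict(lambda: [0.0] * len(PHASES))

def get_state():
    # Discretise per-lane queue length (halting vehicles) into small bins.
    return tuple(min(traci.lane.getLastStepHaltingNumber(l) // 3, 5) for l in LANES)

def get_reward():
    # Penalise accumulated waiting time and queue length: less congestion => higher reward.
    wait = sum(traci.lane.getWaitingTime(l) for l in LANES)
    queue = sum(traci.lane.getLastStepHaltingNumber(l) for l in LANES)
    return -(wait + queue)

def choose_action(state):
    # Epsilon-greedy selection over the available green phases.
    if random.random() < EPSILON:
        return random.randrange(len(PHASES))
    return max(range(len(PHASES)), key=lambda a: q_table[state][a])

def run_episode(steps=1000, green_duration=10):
    traci.start(SUMO_CMD)
    total_reward, state = 0.0, get_state()
    for _ in range(steps // green_duration):
        action = choose_action(state)
        traci.trafficlight.setPhase(TLS_ID, PHASES[action])
        for _ in range(green_duration):           # hold the chosen phase for a fixed interval
            traci.simulationStep()
        next_state, reward = get_state(), get_reward()
        # Standard tabular Q-learning update.
        best_next = max(q_table[next_state])
        q_table[state][action] += ALPHA * (reward + GAMMA * best_next - q_table[state][action])
        state, total_reward = next_state, total_reward + reward
    traci.close()
    return total_reward

if __name__ == "__main__":
    for episode in range(5):
        print(f"episode {episode}: cumulative reward = {run_episode():.1f}")

A deployed agent would additionally fold VANET-reported measurements and throughput into the state and reward, handle yellow and transition phases, and typically replace the Q-table with a function approximator; the sketch above only shows the sense–decide–actuate loop between the controller and the simulator.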

Cite This Paper

Osita Miracle Nwakeze, Christopher Odeh, Obaze Caleb Akachukwu, "Design and Implementation of Intelligent Traffic Control Systems with Vehicular Ad Hoc Networks", International Journal of Mathematical Sciences and Computing (IJMSC), Vol.12, No.1, pp. 55-65, 2026. DOI: 10.5815/ijmsc.2026.01.05
