Reinforcement Learning-Based Self-Healing Routing in Fault-prone Wireless Sensor Networks


Author(s)

Dipti Chauhan 1,*, Pritika Bahad 1, Jay Kumar Jain 2

1. Department of Artificial Intelligence & Data Science, Prestige Institute of Engineering Management & Research Indore M.P., India

2. Department of Mathematics, Bio-informatics & Computer Applications, Maulana Azad National Institute of Technology Bhopal, M.P., India

* Corresponding author.

DOI: https://doi.org/10.5815/ijcnis.2026.02.07

Received: 16 Nov. 2025 / Revised: 2 Jan. 2026 / Accepted: 25 Feb. 2026 / Published: 8 Apr. 2026

Index Terms

Fault Tolerance, Q-Learning, Self-Healing Routing, Reinforcement Learning (RL), Wireless Sensor Networks (WSNs)

Abstract

Wireless Sensor Networks (WSNs) consist of energy-constrained sensor nodes that monitor their environment and transmit data to a central base station. These networks are highly susceptible to link and node failures, which degrade performance and reduce overall network reliability. In this paper, we address these challenges and propose a reinforcement learning-based self-healing routing (RL-SHR) protocol, implemented in the NS2 simulation environment. In this work, each node functions as an autonomous RL agent that learns optimal routing paths by interacting with the network environment and adapting to failure conditions. The protocol enables nodes to dynamically avoid unreliable paths, recover from faults, and optimize performance over time. Simulation results show that the proposed protocol significantly outperforms traditional routing protocols such as AODV and DSR in terms of packet delivery ratio, end-to-end delay, energy consumption, and network lifetime under varying failure scenarios. This work lays the groundwork for integrating learning-based resilience mechanisms into next-generation sensor networks.
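
The page does not reproduce the protocol's pseudocode, so the following is only a minimal illustrative sketch of the per-node Q-learning idea summarized in the abstract: each node keeps a Q-value per candidate next hop, updates it from transmission outcomes, and selects next hops epsilon-greedily so it can route around failed links. The class and parameter names (RLNode, alpha, gamma, epsilon) and the +1/-1 reward values are assumptions for illustration, not details taken from RL-SHR.

import random

# Sketch only: one Q-learning agent per sensor node, with the action
# space equal to the node's set of candidate next hops.
class RLNode:
    def __init__(self, node_id, neighbors, alpha=0.5, gamma=0.9, epsilon=0.1):
        self.node_id = node_id
        self.alpha = alpha        # learning rate (assumed value)
        self.gamma = gamma        # discount factor (assumed value)
        self.epsilon = epsilon    # exploration probability (assumed value)
        # One Q-value per candidate next hop.
        self.q = {n: 0.0 for n in neighbors}

    def choose_next_hop(self):
        # Epsilon-greedy selection: occasionally explore an alternative
        # neighbor so the node can discover a recovery path after a failure.
        if random.random() < self.epsilon:
            return random.choice(list(self.q))
        return max(self.q, key=self.q.get)

    def update(self, next_hop, reward, next_hop_best_q):
        # One-step Q-learning update:
        # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        old = self.q[next_hop]
        self.q[next_hop] = old + self.alpha * (
            reward + self.gamma * next_hop_best_q - old
        )

# Example: node 3 first forwards via neighbor 5, the link fails (negative
# reward), then a forward via neighbor 7 succeeds, so traffic shifts to 7.
node = RLNode(3, neighbors=[5, 7])
node.update(5, reward=-1.0, next_hop_best_q=0.0)   # failed transmission
node.update(7, reward=+1.0, next_hop_best_q=0.5)   # successful delivery
print(node.choose_next_hop())                      # most likely 7 now

In a full protocol the reward would also reflect residual energy and delay, and Q-values would propagate hop by hop toward the base station; the snippet above shows only the self-healing mechanism of penalizing a broken link and rerouting through a better-valued neighbor.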

Cite This Paper

Dipti Chauhan, Pritika Bahad, Jay Kumar Jain, "Reinforcement Learning-Based Self-Healing Routing in Fault-prone Wireless Sensor Networks", International Journal of Computer Network and Information Security (IJCNIS), Vol.18, No.2, pp.112-125, 2026. DOI: 10.5815/ijcnis.2026.02.07
