Dinakaran. M

Work place: School of Information Technology and Engineering, Vellore Institute of Technology, Vellore, India

E-mail: dinakaran.m@vit.ac.in


Research Interests: Computational Science and Engineering, Image Compression, Image Manipulation, Computer Networks, Image Processing, Information Retrieval


Dr. Dinakaran M completed his B.Tech (IT) and M.Tech (IT-Networking) at Vellore Institute of Technology and his Ph.D at Anna University, Chennai, Tamil Nadu, India. He worked at TATA Consultancy Services as an Assistant System Manager from September 2006 to July 2009. He is currently an Associate Professor and Head of the Department at the School of Information Technology and Engineering, VIT University, Vellore. He has published 25 articles in various international conferences and journals. His research interests are mobile networks, image retrieval, and machine learning.

Author Articles
A New Quantum Tunneling Particle Swarm Optimization Algorithm for Training Feedforward Neural Networks

By Geraldine Bessie Amali. D, Dinakaran. M

DOI: https://doi.org/10.5815/ijisa.2018.11.07, Pub. Date: 8 Nov. 2018

In this paper, a new Quantum Tunneling Particle Swarm Optimization (QTPSO) algorithm is proposed and applied to the training of feedforward Artificial Neural Networks (ANNs). In the classical Particle Swarm Optimization (PSO) algorithm, the value of the cost function at the location of the personal best solution found by each particle cannot increase. This can significantly reduce the explorative ability of the entire swarm. In this paper, a new PSO algorithm is proposed in which the personal best solution of each particle is allowed to tunnel through hills in the cost function, analogous to the tunneling effect in quantum physics. In quantum tunneling, a particle that has insufficient energy to cross a potential barrier can still cross it with a small probability that decreases exponentially with the barrier length. The introduction of the quantum tunneling effect allows particles in the PSO algorithm to escape from local minima, thereby increasing the explorative ability of the PSO algorithm and preventing premature convergence to local minima. The proposed algorithm significantly outperforms three state-of-the-art PSO variants on a majority of benchmark neural network training problems.
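The tunneling idea described above can be sketched in code. The following is a minimal, hypothetical illustration, not the authors' exact QTPSO update rule: a standard PSO loop in which a particle's personal best may move to a *worse* position with a probability that decays exponentially with the size of the cost increase (here the "barrier" is measured by cost height rather than length, a simplification). The function names, parameter values, and the sphere benchmark are all assumptions for demonstration.

```python
import numpy as np

def sphere(x):
    # Benchmark cost function: global minimum 0 at the origin.
    return float(np.sum(x ** 2))

def tunneling_pso(cost, dim=2, n_particles=20, iters=200,
                  w=0.7, c1=1.5, c2=1.5, gamma=5.0, seed=0):
    """Classical PSO plus a tunneling-style personal-best update.

    With probability exp(-gamma * barrier), a personal best is allowed
    to move to a worse location, letting particles escape local minima.
    This is a sketch of the tunneling idea, not the published algorithm.
    """
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (n_particles, dim))   # positions
    v = np.zeros_like(x)                             # velocities
    pbest = x.copy()                                 # personal bests
    pbest_cost = np.array([cost(p) for p in x])
    gi = int(np.argmin(pbest_cost))
    gbest, gbest_cost = pbest[gi].copy(), pbest_cost[gi]

    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        for i in range(n_particles):
            c = cost(x[i])
            barrier = c - pbest_cost[i]
            if barrier < 0:
                # Standard PSO update: strictly better position.
                pbest[i], pbest_cost[i] = x[i].copy(), c
            elif rng.random() < np.exp(-gamma * barrier):
                # Tunneling: accept a worse position with a probability
                # that decays exponentially with the barrier height.
                pbest[i], pbest_cost[i] = x[i].copy(), c
            # The global best only ever improves, so the reported
            # solution is unaffected by tunneling steps.
            if cost(pbest[i]) < gbest_cost:
                gbest, gbest_cost = pbest[i].copy(), cost(pbest[i])
    return gbest, gbest_cost

best, best_cost = tunneling_pso(sphere)
```

Keeping the global best strictly monotone while letting personal bests tunnel preserves a usable final answer while still injecting the extra exploration the abstract describes.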

[...] Read more.
Other Articles