Workplace: Department of Mechanical Engineering, Netaji Subhas Institute of Technology, Amhara, Bihta, Patna, India.

E-mail: pushpamsinha1@gmail.com


Research Interests: Computational Science and Engineering

Biography

Pushpam Kumar Sinha is an Assistant Professor in the Department of Mechanical Engineering at Netaji Subhas Institute of Technology, Amhara, Bihta, Patna, India. He received his Bachelor of Engineering (B.E.) in 1997 from Motilal Nehru Regional Engineering College, Allahabad, India, with a gold medal, and his Master of Science in Engineering in 2002 from the Indian Institute of Science, Bangalore.

DOI: https://doi.org/10.5815/ijmsc.2020.04.02, Pub. Date: 8 Aug. 2020

Classification is the task of assigning each data point in a data set a class based upon the values and/or characteristics of its attributes. In machine learning, a simple and powerful tool for this is the k-Nearest Neighbor (kNN) algorithm, which is based on the idea that the data points of a particular class are neighbors of each other. For a given test or unknown data point, kNN finds its class by measuring the Euclidean distances from that point to all the data points of all the classes in the training data. Then, out of the k nearest distances, where k is any number greater than or equal to 1, the class that occurs the most times among those k nearest neighbors is the class assigned to the test or unknown data point. In this paper, I propose a variation of kNN, which I call the ANN (Alternative Nearest Neighbor) method to distinguish it from kNN. The striking feature of ANN that makes it different from kNN is its definition of neighbor: in ANN, the unknown data point is a neighbor of a class if its maximum Euclidean distance from the data points of that class is less than or equal to the maximum Euclidean distance between all the training data points of the class. It follows naturally that ANN gives a unique solution for each unknown data point, whereas in kNN the solution may vary depending on the value of the number of nearest neighbors k. So, in kNN, the performance may vary as k is varied; this is not the case in ANN, whose performance for a particular training data set is unique.

For the training data [1] considered in this paper, ANN gives a 100% accurate result.
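The ANN decision rule described in the abstract can be sketched as follows. This is a minimal illustration only, with made-up two-dimensional data; the class labels, point values, and function names are not from the paper.

```python
# Sketch of the ANN (Alternative Nearest Neighbor) rule as described above:
# an unknown point is a neighbor of a class if its maximum Euclidean distance
# to the class's training points does not exceed the maximum Euclidean
# distance between any two training points of that class.
import math

def euclidean(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def ann_classify(training, unknown):
    """training: dict mapping class label -> list of points.
    Returns the labels of every class whose 'neighborhood' contains
    the unknown point under the ANN definition."""
    neighbors = []
    for label, points in training.items():
        # Maximum distance between any two training points of this class
        intra_max = max(euclidean(a, b) for a in points for b in points)
        # Maximum distance from the unknown point to this class's points
        to_unknown = max(euclidean(unknown, p) for p in points)
        if to_unknown <= intra_max:
            neighbors.append(label)
    return neighbors

# Illustrative data: two well-separated classes in the plane.
training = {
    "A": [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)],
    "B": [(5.0, 5.0), (6.0, 5.0), (5.0, 6.0)],
}
print(ann_classify(training, (0.5, 0.5)))  # the unknown lies within class A's spread
```

Note that, unlike kNN, this rule has no parameter k to tune, which is why the result for a given training set is unique.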


By Pushpam Kumar Sinha and Sonali Sinha

DOI: https://doi.org/10.5815/ijmsc.2019.04.02, Pub. Date: 8 Nov. 2019

We choose a better pseudo-random number generator from a list of eight pseudo-random number generators derived from the library function rand() in C/C++, including rand() itself; that is, the random number generator in the list which is more random than all the others. rand() is a repeatable pseudo-random number generator. It is called pseudo because it uses a specific formula to generate the numbers; that is, the numbers generated are not truly random in the strict literal sense. Several tests of randomness are available in the literature, some easy to pass and others difficult. However, we do not subject the eight sets of pseudo-random numbers generated in this work to any of these known tests of randomness. Instead, we compare the eight sets of random numbers using a statistical technique: the correlation coefficient.
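The comparison described in the abstract can be illustrated as below: computing the serial (lag-1) correlation coefficient of a pseudo-random sequence, where a value nearer zero suggests less dependence between consecutive numbers. The simple linear congruential generator here is only a stand-in for C's rand() (its parameters match a common glibc-style implementation); the paper's eight derived generators are not reproduced.

```python
# Sketch: serial correlation coefficient of a pseudo-random sequence.
def lcg(seed, n, a=1103515245, c=12345, m=2**31):
    # Linear congruential generator with glibc-style parameters,
    # standing in for C's rand().
    x, out = seed, []
    for _ in range(n):
        x = (a * x + c) % m
        out.append(x)
    return out

def corr(xs, ys):
    # Pearson correlation coefficient between two equal-length sequences.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

seq = lcg(seed=1, n=10000)
# Correlation between each number and its successor; a generator whose
# consecutive outputs are less correlated is "more random" in this sense.
r = corr(seq[:-1], seq[1:])
print(round(r, 4))
```

The same `corr` function applied to the outputs of several candidate generators allows them to be ranked, which mirrors the comparison the abstract describes.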
