Keywords: neural network, neural network training, neural network verification, reachability set, verification criterion.
One of the current trends in information technology is the implementation of neural networks in modern software packages. A distinctive feature of neural networks is that they cannot be directly programmed; they must be trained. In this regard, ensuring sufficient speed and quality of neural network training procedures is an urgent task. The training process can differ significantly depending on the problem. Verification methods that correspond to the task's constraints are used to assess the training results. Such methods provide an estimate over the entire cardinal set of examples but do not allow identifying which subset of examples causes a significant error. As a consequence, a neural network may fail with the given set of hyperparameters, and training a new one is time-consuming.
On the other hand, existing empirical methods for assessing neural network training use discrete sets of examples. With this approach, it is impossible to conclude that the network is suitable for classification on the whole cardinal set of examples.
This paper proposes a criterion for assessing the quality of classification results. The criterion is formed by describing the training states of the neural network. Each state is specified by the correspondence of the set of errors to the function range representing a cardinal set of test examples. Using the criterion makes it possible to track the network's classification defects and mark them as safe or unsafe. As a result, one can formally assess how the training and validation data sets must be altered to improve the network's performance, whereas existing verification methods give no information on which part of the dataset causes the network to underperform.
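To make the contrast with discrete testing concrete, the following is a minimal sketch (not the paper's own criterion) of reachability-set verification by interval bound propagation: a whole box of inputs is pushed through a tiny fully connected ReLU network, over-approximating the continuous set of reachable outputs, after which a classification property is checked on that set and the region is marked safe or unsafe. All weights, the input box, and the decision check are illustrative assumptions.

```python
def interval_affine(W, b, lo, hi):
    """Propagate the input box [lo, hi] through x -> W x + b."""
    new_lo, new_hi = [], []
    for row, bias in zip(W, b):
        lo_acc = hi_acc = bias
        for w, l, h in zip(row, lo, hi):
            if w >= 0:          # positive weight: lower bound uses l, upper uses h
                lo_acc += w * l
                hi_acc += w * h
            else:               # negative weight: bounds swap
                lo_acc += w * h
                hi_acc += w * l
        new_lo.append(lo_acc)
        new_hi.append(hi_acc)
    return new_lo, new_hi

def reach_set_bounds(layers, lo, hi):
    """Over-approximate the network's reachable output set as a box."""
    for i, (W, b) in enumerate(layers):
        lo, hi = interval_affine(W, b, lo, hi)
        if i < len(layers) - 1:  # ReLU on hidden layers only
            lo = [max(v, 0.0) for v in lo]
            hi = [max(v, 0.0) for v in hi]
    return lo, hi

# Toy two-layer network: 2 inputs, 2 class scores (assumed weights).
layers = [
    ([[1.0, -1.0], [0.5, 0.5]], [0.0, 0.0]),
    ([[1.0, 0.0], [0.0, 1.0]], [0.0, 0.0]),
]
lo, hi = reach_set_bounds(layers, [0.0, 0.0], [1.0, 1.0])

# The input box is "safe" for class 0 only if the lowest class-0 score
# still exceeds the highest class-1 score anywhere in the reachable set;
# otherwise the region is flagged as a potential classification defect.
safe = lo[0] > hi[1]
print("output box:", lo, hi, "safe:", safe)
```

In this toy case the reachable output box is [0, 1] × [0, 1], so class 0 does not dominate over the whole input region and the box would be flagged unsafe. A discrete test set sampled from the same box could easily miss the offending inputs, which is exactly the gap the reachability-based view addresses.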
Zhengbing Hu, Mykhailo Ivashchenko, Lesya Lyushenko, Dmytro Klyushnyk, "Artificial Neural Network Training Criterion Formulation Using Error Continuous Domain", International Journal of Modern Education and Computer Science (IJMECS), Vol. 13, No. 3, pp. 13-22, 2021. DOI: 10.5815/ijmecs.2021.03.02