Artificial Neural Networks, Unsupervised Learning, Self-learning Systems, General Learning, Bayesian Inference
In this study, we investigate the connection between the training process of unsupervised neural network models based on self-encoding and regeneration and the information structure of the representations such models create. We propose theoretical arguments, supported by previously published experimental results, that unsupervised representations obtained under certain training constraints consistent with the Bayesian inference principle favor configurations with better categorization of hidden concepts in the observable data. The results establish an important connection between the training of unsupervised machine learning models and the structure of the representations they create. They can inform new methods and approaches in self-learning, and they offer insight into common principles underlying the emergence of intelligence in machine and biological systems.
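The self-encoding and regeneration process described above can be illustrated with a minimal sketch (an assumption for illustration, not the authors' experimental setup): a linear autoencoder in NumPy is trained purely on reconstruction error over synthetic data containing two hidden "concepts"; the concept labels are never used in training, only afterwards to measure how well the learned latent representation separates the concepts. All names, sizes, and parameters here are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic observable data: two hidden "concepts" (clusters) in 10-D space.
n, d, k = 200, 10, 2
centers = rng.normal(0, 3, size=(2, d))
labels = rng.integers(0, 2, size=n)          # hidden concept of each sample
X = centers[labels] + rng.normal(0, 0.5, size=(n, d))
X = X - X.mean(axis=0)                       # center the observations

# Linear autoencoder: encoder W (d -> k), decoder V (k -> d), trained by
# gradient descent on reconstruction error alone -- no labels are used,
# so the learning is fully unsupervised.
W = rng.normal(0, 0.1, size=(d, k))
V = rng.normal(0, 0.1, size=(k, d))
lr = 0.005
for _ in range(1000):
    Z = X @ W                # latent representation
    E = Z @ V - X            # reconstruction error
    gV = Z.T @ E / n         # gradient w.r.t. decoder
    gW = X.T @ (E @ V.T) / n # gradient w.r.t. encoder
    V -= lr * gV
    W -= lr * gW

# Measure categorization in the latent space: distance between the latent
# centroids of the two hidden concepts relative to within-concept spread.
Z = X @ W
c0 = Z[labels == 0].mean(axis=0)
c1 = Z[labels == 1].mean(axis=0)
spread = Z[labels == 0].std() + Z[labels == 1].std()
separation = np.linalg.norm(c0 - c1) / spread
```

A `separation` well above 1 indicates that the hidden concepts occupy distinct regions of the learned representation even though training never saw the labels, which is the kind of emergent categorization the study examines.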
Serge Dolgikh, "Categorization in Unsupervised Generative Self-learning Systems", International Journal of Modern Education and Computer Science (IJMECS), vol.13, no.3, pp. 68-78, 2021. DOI: 10.5815/ijmecs.2021.03.06
Q.V. Le, M.A. Ranzato, R. Monga et al., “Building high-level features using large scale unsupervised learning”, arXiv:1112.6209, 2012.
A. Banino, C. Barry, D. Kumaran, “Vector-based navigation using grid-like representations in artificial agents”, Nature, vol.557, pp. 429–433, 2018.
S. Dolgikh, “Categorized Representations and General Learning”, Proceedings of the 10th International Conference on Theory and Application of Soft Computing, Computing with Words and Perceptions (ICSCCW-2019), vol.1095, pp.93-100, 2019.
K. Friston, “A free energy principle for biological systems”, Entropy, vol.14, pp.2100 – 2121, 2012.
M.A. Ranzato, Y-L. Boureau, S. Chopra, Y. LeCun, “A uniﬁed energy-based framework for unsupervised learning”, Proceedings of the Eleventh International Conference on Artificial Intelligence and Statistics, vol.2, pp. 371-379, 2007.
C.M. Bishop, Pattern Recognition and Machine Learning. Springer, 2006.
N. Tishby, F.C. Pereira, W. Bialek, “The Information Bottleneck method”, arXiv:physics/0004057, 2000.
S. Mandt, M.D. Hoffman, D.M. Blei, “Stochastic gradient descent as approximate Bayesian inference”, Journal of Machine Learning Research, vol.18, pp. 1 – 35, 2017.
A. Fischer, C. Igel, “Training restricted Boltzmann machines: an introduction”, Pattern Recognition, vol.47, pp. 25 – 39, 2014.
G.E. Hinton, S. Osindero, Y.W. Teh, “A fast learning algorithm for deep belief nets”, Neural Computation, vol.18, no.7, pp. 1527 – 1554, 2006.
Y. Bengio, “Learning deep architectures for AI”, Foundations and Trends in Machine Learning, vol.2, no.1, pp. 1–127, 2009.
N. Zeng, H. Zhang, B. Song, W. Liu, Y. Li et al., “Facial expression recognition via learning deep sparse autoencoders”, Neurocomputing, vol.273, pp. 643–649, 2018.
Y.M. Elbarawy, N.I. Ghali, R.S. El-Sayed, “Facial expressions recognition in thermal images based on deep learning techniques”, International Journal of Image, Graphics and Signal Processing (IJIGSP), vol.11, no.10, pp. 1-7, 2019.
J. Spall, Introduction to Stochastic Search and Optimization, Wiley, 2003.
D.E. Rumelhart, G.E. Hinton, R.J. Williams, “Learning representations by back-propagating errors”, Nature, vol.323, no.6088, pp. 533–536, 1986.
K. Zimebi, N. Souissi, K. Tikto, “Selecting qualitative features of driver behavior via Pareto analysis”, International Journal of Modern Education and Computer Science (IJMECS), vol.10, no.10, pp. 1-10, 2018.
M. Ribeiro, A.E. Lazzaretti, H.S. Lopes, “A study of deep convolutional autoencoders for anomaly detection in videos”, Pattern Recognition Letters, vol.105, pp. 13-22, 2018.
X. Wang, W. Gu, “The stability of memory rules associative with the mathematical thinking core”, International Journal of Modern Education and Computer Science (IJMECS), vol.3, no.1, pp. 24-30, 2011.
K. Hornik, M. Stinchcombe, H. White, “Multilayer feedforward neural networks are universal approximators”, Neural Networks, vol.2 no.5, pp. 359-366, 1989.
X. Zhou, M. Belkin, Semi-supervised learning, In: Academic Press Library in Signal Processing, Elsevier, vol.1, pp. 1239 – 1269, 2014.
D. Karpenko, P. Prystavka, O. Cholyshkina, “Automated object recognition system based on aerial photography”, submitted for publication, February 2020.
K. Fukunaga, L.D. Hostetler, “The estimation of the gradient of a density function, with applications in pattern recognition”, IEEE Transactions on Information Theory, vol.21, no.1, pp. 32 – 40, 1975.
D. Hassabis, D. Kumaran, C. Summerfield, M. Botvinick, “Neuroscience inspired Artificial Intelligence”, Neuron, vol.95, pp. 245-258, 2017.
T. Yoshida, K. Ohki, “Natural images are reliably represented by sparse and variable populations of neurons in visual cortex”, Nature Communications, vol.11, p.872, 2020.
X. Bao, E. Gjorgieva, L.K. Shanahan et al., “Grid-like neural representations support olfactory navigation of a two-dimensional odor space”, Neuron, vol.102, no.5, pp. 1066 – 1075, May 2019.
S. Dolgikh, “Spontaneous concept learning with deep autoencoder”, International Journal of Computational Intelligence Systems (IJCIS), vol.12, no.1, pp. 1-12, 2018.