Workplace: National Technical University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”, Kyiv, Ukraine
Research Interests: Educational Technology, Network Security, Artificial Intelligence, Graph and Image Processing, Computer Science & Information Technology, Communications
Zhengbing Hu, Prof., Deputy Director, International Center of Informatics and Computer Science, Faculty of Applied Mathematics, National Technical University of Ukraine “Kyiv Polytechnic Institute”, Ukraine (2017- ).
Adjunct Professor, School of Computer Science, Hubei University of Technology, China.
D.Sc., National Aviation University, Ukraine (2019-2021, Supervisor of Cooperation, Prof. Felix Yanovsky).
Visiting Professor, National Technical University of Ukraine "KPI", Ukraine, 2017-2018.
Honorary Associate Researcher, Hong Kong University, CS, Hong Kong (2011-2012, Supervisor of Cooperation, Prof. Francis Y.L. Chin).
Associate Professor, School of Educational Information Technology, Central China Normal University, China (2011-2019).
Postdoctoral Researcher, Huazhong University of Science and Technology, CS, China (2008).
Ph.D., National Technical University of Ukraine "KPI", CS, Ukraine (2006, Supervisor of Ph.D. thesis, Prof. Valerii P. Shyrochyn)
MSc, National Technical University of Ukraine "KPI", CS, Ukraine (2002).
BSc, National Technical University of Ukraine "KPI", CS, Ukraine (2000).
Research Interests
Computer Science and Technology Applications, Artificial Intelligence, Network Security, Communications, Data Processing, Cloud Computing, Education Technology.
DOI: https://doi.org/10.5815/ijigsp.2024.01.02, Pub. Date: 8 Feb. 2024
Traditional methods of imaging Mueller-matrix polarimetry produce large arrays of experimental data in the form of 16 Mueller-matrix images, whose processing and comparative analysis is quite time-consuming. A new algorithmic polarization-singular approach to the analysis of coordinate distributions of matrix elements (Mueller-matrix maps) of the polycrystalline birefringent structure of biological tissues is considered. A Mueller-matrix model for describing the optical anisotropy of biological layers is proposed. Analytical correlations between polarization-singular states of the object field and characteristic values of Mueller-matrix images of birefringent soft-tissue objects were found. The proposed algorithmic polarization-singular theory is experimentally verified. Examples of polarization-singularity networks of Mueller-matrix maps of histological preparations of real tissues of the female reproductive system are given. The diagnostic possibilities of the developed polarization-singular algorithms in the diagnosis and differentiation of the stages of extragenital endometriosis are illustrated. Another area of biomedical diagnostics has also been successfully tested: polarization-singular criteria for forensic Mueller-matrix determination of the age of myocardial injury of the deceased have been defined.[...] Read more.
DOI: https://doi.org/10.5815/ijigsp.2023.06.04, Pub. Date: 8 Dec. 2023
A new local-topological approach to describing the spatial and angular distributions of polarization parameters of laser fields multiply scattered by optically anisotropic biological layers is considered. A new analytical parameter describing the local polarization structure of a set of points of coherent object fields, the degree of local depolarization (DLD), is introduced for the first time. The experimental scheme and the technique of measuring coordinate distributions (maps) of the DLD are presented. The new method of local polarimetry was experimentally tested on histological specimens of biopsy sections of operatively extracted breast tumors. The measured DLD maps were processed using statistical, autocorrelation and scale-sampling approaches. Markers for the differential diagnosis of benign (fibroadenoma) and malignant (sarcoma) breast tumors were defined.[...] Read more.
DOI: https://doi.org/10.5815/ijmecs.2023.06.03, Pub. Date: 8 Dec. 2023
Software for clustering students according to their educational achievements using fuzzy logic was developed in Python using the Google Colab cloud service. In the process of analyzing educational data, Data Mining problems are solved, since only certain characteristics of the educational process are extracted from a large data sample. Data clustering was performed using the classic K-Means method, which is characterized by simplicity and high speed. Cluster analysis was performed in the space of two features using the machine learning library scikit-learn (Python). The obtained clusters are described by fuzzy triangular membership functions, which makes it possible to correctly determine the membership of each student in a certain cluster. The fuzzy membership functions are created using the scikit-fuzzy library. Developing fuzzy functions of objects' membership in clusters is also useful for educational purposes, as it allows a better understanding of the principles of fuzzy logic. Processing test educational data with the developed software produced correct results. It is shown that the use of fuzzy membership functions makes it possible to correctly determine the membership of students in certain clusters even if such clusters are not clearly separated. Due to this, it is possible to more accurately determine the recommended difficulty level of tasks for each student, depending on their previous assessments.[...] Read more.
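The two core steps of the pipeline above can be sketched in plain Python: a 1-D k-means pass over average grades, then a triangular membership function over the resulting centroids. This is a stdlib-only illustration of the principle; the paper itself uses scikit-learn's KMeans and scikit-fuzzy, and the grades below are made up.

```python
def kmeans_1d(xs, k, iters=50):
    """Plain 1-D k-means; returns sorted centroids."""
    cs = sorted(xs)[:: max(1, len(xs) // k)][:k]   # spread initial centroids
    for _ in range(iters):
        groups = [[] for _ in cs]
        for x in xs:
            j = min(range(len(cs)), key=lambda j: abs(x - cs[j]))
            groups[j].append(x)
        cs = [sum(g) / len(g) if g else c for g, c in zip(groups, cs)]
    return sorted(cs)

def tri_membership(x, left, peak, right):
    """Triangular membership degree mu(x) for a cluster peaking at `peak`."""
    if x <= left or x >= right:
        return 0.0
    return (x - left) / (peak - left) if x <= peak else (right - x) / (right - peak)

grades = [52, 55, 58, 71, 74, 76, 90, 93, 95]   # hypothetical average scores
c = kmeans_1d(grades, 3)
# Build the middle cluster's triangle spanning its neighbours' centroids.
print(c, round(tri_membership(74, c[0], c[1], c[2]), 2))
```

A grade near a centroid gets a membership degree close to 1 in that cluster and a small but nonzero degree in the neighbouring one, which is exactly what resolves students sitting between two clusters.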
DOI: https://doi.org/10.5815/ijmecs.2023.04.06, Pub. Date: 8 Aug. 2023
A generalized model of population migration is proposed. On its basis, models are developed of the set of directions of population flows and of the duration of migration, which is determined by its character in time, type and form of migration. A model of indicators of actual migration (resettlement) is developed and the indicators are divided into groups. The results of population migration are described by a number of absolute and relative indicators for the purpose of regression analysis of the data. To obtain the results of migration, the author takes into account the power of migration flows, which depends on the population of the territories between which the exchange takes place and on their location, on the basis of the coefficients of the effectiveness of migration ties and the intensity of migration ties. Types of migration intensity coefficients are formed depending on these properties. The LightGBM algorithm for predicting population migration is implemented in an intelligent geographic information system. The forecasting system is also capable of predicting international migration, i.e. migration between different countries. The significance of this study lies in the increasing need for accurate and reliable migration forecasts. With globalization and the connectivity of nations, understanding and predicting migration patterns have become crucial for various domains, including social planning, resource allocation and economic development. Through extensive experimentation and evaluation, the developed migration forecasting system has demonstrated its ability to forecast human migration with machine learning algorithms. Performance metrics of the migration flow forecasting models are investigated, and the results of evaluating these models are presented using various performance indicators, including the mean squared error (MSE), root mean squared error (RMSE) and R-squared (R2).
The MSE measures the mean squared difference between predicted and actual values, the RMSE is its square root, and R2 represents the proportion of variance explained by the model.[...] Read more.
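The three metrics named above can be stated directly in code; the values below are toy numbers for illustration, not the paper's migration data.

```python
def mse(y, p):
    """Mean squared error between actual y and predicted p."""
    return sum((a - b) ** 2 for a, b in zip(y, p)) / len(y)

def rmse(y, p):
    """Root mean squared error: the square root of the MSE."""
    return mse(y, p) ** 0.5

def r2(y, p):
    """Proportion of the variance in y explained by the predictions p."""
    mean = sum(y) / len(y)
    ss_res = sum((a - b) ** 2 for a, b in zip(y, p))
    ss_tot = sum((a - mean) ** 2 for a in y)
    return 1 - ss_res / ss_tot

actual    = [120, 150, 90, 200]     # hypothetical migration counts
predicted = [110, 155, 95, 190]
print(mse(actual, predicted), rmse(actual, predicted), r2(actual, predicted))
```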
DOI: https://doi.org/10.5815/ijmecs.2023.02.06, Pub. Date: 8 Apr. 2023
A method of choosing swarm optimization algorithms and using swarm intelligence for solving a certain class of optimization tasks in industry-specific geographic information systems was developed considering the stationarity characteristic of such systems. The method consists of 8 stages. Classes of swarm algorithms were studied. It is shown which classes of swarm algorithms should be used depending on the stationarity, quasi-stationarity or dynamics of the task solved by an industry geographic information system. An information model of geodata that consists in a formalized combination of their spatial and attributive components, which allows considering the relational, semantic and frame models of knowledge representation of the attributive component, was developed. A method of choosing optimization methods designed to work as part of a decision support system within an industry-specific geographic information system was developed. It includes conceptual information modeling, optimization criteria selection, and objective function analysis and modeling. This method allows choosing the most suitable swarm optimization method (or a set of methods).[...] Read more.
DOI: https://doi.org/10.5815/ijigsp.2022.06.01, Pub. Date: 8 Dec. 2022
Currently, the means of semantic segmentation of images, which are based on the use of neural networks, are increasingly being used in computer systems for various purposes. Despite significant progress in this industry, one of the most important unsolved problems is the task of adapting a neural network model to the conditions for selecting an object mask in an image. The features of such a task necessitate determining the type and parameters of convolutional neural networks underlying the encoder and decoder. As a result of the research, an appropriate method has been developed that allows adapting the neural network encoder and decoder to the following conditions of the segmentation problem: image size, number of color channels, acceptable minimum segmentation accuracy, acceptable maximum computational complexity of segmentation, the need to label segments, the need to select several segments, the need to select deformed, displaced and rotated objects, allowable maximum computational complexity of training a neural network model, allowable training time for a neural network model. The main stages of the method are related to the following procedures: determination of the list of image parameters to be registered; formation of training example parameters for the neural network model used for object selection; determination of the type of CNN encoder and decoder that are most effective under the conditions of the given task; formation of a representative training sample; substantiation of the parameters that should be used to assess the accuracy of selection; calculation of the values of the design parameters of the CNN of the specified type for the encoder and decoder; assessment of the accuracy of selection and, if necessary, refinement of the architecture of the neural network model. The developed method was verified experimentally on examples of semantic segmentation of images containing objects such as cars.
The obtained experimental results show that the proposed method allows building, without complex long-term experiments, a neural network that with a sufficiently short training period achieves an image segmentation accuracy of about 0.8, which corresponds to the best systems of similar purpose. It is shown that further research should be directed at using special modules such as ResNet and Inception, and mechanisms of the partial-convolution type used in modern deep neural networks, to increase the computational efficiency of the encoder and decoder.[...] Read more.
DOI: https://doi.org/10.5815/ijcnis.2020.06.01, Pub. Date: 8 Dec. 2020
The presented paper is topical because of the year-on-year increase in the number and diversity of attacks on computer networks, which causes significant losses for companies. This work addresses the following problems: existing methods for locating anomalies and current threats in networks; statistical methods as effective means of anomaly detection; and experimental evaluation of the effectiveness of the chosen method. A method of network traffic capture and analysis during passive monitoring of a network segment is considered. A way of processing numerous network traffic indices for subsequent evaluation of the network's information security level is also proposed. Using the presented methods and concepts increases network segment reliability through prompt capture of network anomalies that may indicate possible threats; such information is very useful for the network administrator. To demonstrate the method's effectiveness, several network attacks, whose data is stored in the specialised DARPA dataset, were chosen, and relevant parameters for every attack type were calculated. In this way, the start and end times of an attack can be obtained with insignificant error for some methods.[...] Read more.
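A minimal sketch of the statistical idea described above: estimate a baseline mean and deviation from a known-clean traffic window, then flag new intervals whose packet counts deviate too far. The traffic numbers and threshold are illustrative, not taken from the DARPA dataset.

```python
def baseline(counts):
    """Mean and population standard deviation of a clean traffic window."""
    mean = sum(counts) / len(counts)
    std = (sum((c - mean) ** 2 for c in counts) / len(counts)) ** 0.5
    return mean, std

def is_anomalous(count, mean, std, threshold=3.0):
    """Flag a packet count lying more than `threshold` deviations from baseline."""
    return abs(count - mean) > threshold * max(std, 1e-9)

# Hypothetical packets-per-interval during normal operation.
clean = [100, 104, 98, 101, 99, 103, 102, 97]
m, s = baseline(clean)
# A burst of 900 packets stands out against the baseline; 101 and 95 do not.
print([is_anomalous(c, m, s) for c in [101, 900, 95]])
```

The attack's start and end times fall out of this naturally: they are the first and last flagged intervals of a run.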
DOI: https://doi.org/10.5815/ijcnis.2020.03.01, Pub. Date: 8 Jun. 2020
Due to the fundamentally different approach underlying quantum cryptography (QC), it has not only become competitive but also has significant advantages over traditional cryptographic methods. Its key advantage, information-theoretic security, is achieved through the use of unique quantum particles and the inviolability of the postulates of quantum physics; in addition, it does not depend on the intruder's computational capabilities. However, even with such impressive reliability results, QC methods have some disadvantages. For instance, quantum secure direct communication, a promising trend, eliminates the problem of key distribution, since it allows transmitting information over an open channel without encrypting it. However, in these protocols each bit is confidential and must not be compromised; therefore, the requirements for protocol stability increase and additional security methods are needed. For a whole class of methods ensuring the stability of qutrit QC protocols, a reliable trit generation method is required. In this paper the authors have developed and studied a trit generation method and the software tool TriGen v.2.0 PRNG. The developed PRNG is important for various practical cryptographic applications (for example, trit QC systems, IoT and Blockchain technologies). Future research can be related to developing a fully functional version of the testing technique and software tool.[...] Read more.
DOI: https://doi.org/10.5815/ijcnis.2019.06.03, Pub. Date: 8 Jun. 2019
One of the most important problems of modern cryptocurrency networks is the problem of scaling: advanced cryptocurrencies like Bitcoin can handle around 5 transactions per second. One of the most promising solutions to this problem are second layer payment protocols: payment networks implemented on top of base cryptocurrency network layer, based on the idea of delaying publication of intermediate transactions and using base network only as a finalization layer. Such networks consist of entities that interact with the cryptocurrency system via a payment channel protocol, and can send, receive and forward payments. This paper describes a formal actor-based model of payment channel network and uses it to formulate a modified payment protocol that can be executed in the network without requiring any information about its topology and thus can hide information about financial relations between nodes.[...] Read more.
DOI: https://doi.org/10.5815/ijisa.2019.01.07, Pub. Date: 8 Jan. 2019
In this paper, the dynamic behavior of a damping system with a two-mass pendulum absorber is analyzed, and the equations of motion of the nonlinear mechanical system are built accordingly. The amplitude-frequency characteristic (AFC) equation systems have been identified in the nonlinear formulation. To obtain the frequency response, the Ritz averaging method is used. A new numerical method for determining the parameters of optimal tuning of a two-mass pendulum absorber in the nonlinear formulation has been proposed and implemented.[...] Read more.
DOI: https://doi.org/10.5815/ijisa.2018.12.03, Pub. Date: 8 Dec. 2018
As elliptic curve cryptography is one of the popular ways of constructing encoding and decoding processes, public-key algorithms based on it provide a convenient way of exchanging encoded information. Over time, many algorithms have emerged: some are still in use today, while others are still being developed into new forms. The main point of algorithm innovation is to reduce the number of operations processed at every possible step, in search of maximum efficiency and speed of calculation. This article describes an improved method of López-Dahab-Montgomery (LD-Montgomery) scalar point multiplication for binary elliptic curves. It is shown that the improvement lies in reordering the set of operations used in the LD-Montgomery scalar point multiplication algorithm. The algorithm is used to compute point multiplication on curves over binary Galois fields for several values of m. The article also presents experimental results based on different scalars.[...] Read more.
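The core of any LD-Montgomery-style method is the Montgomery ladder, which performs one "add" and one "double" per scalar bit regardless of the bit's value; that fixed operation pattern is what makes the reordering studied in the article possible. Below is a hedged, curve-free sketch run in the additive group of integers (where "double" is x+x and "add" is x+y, so k·P reduces to plain multiplication) rather than over GF(2^m) as in the paper.

```python
def montgomery_ladder(k, P, add, double):
    """Compute k*P for k >= 1 using the Montgomery ladder.

    The invariant R1 == R0 + P holds after every step, and each bit
    triggers exactly one add and one double, whatever its value.
    """
    bits = bin(k)[2:]
    R0, R1 = P, double(P)          # consume the leading 1-bit
    for bit in bits[1:]:
        if bit == '0':
            R1 = add(R0, R1)
            R0 = double(R0)
        else:
            R0 = add(R0, R1)
            R1 = double(R1)
    return R0

# In the integer group the ladder reproduces multiplication: 13 * 7 = 91.
print(montgomery_ladder(13, 7, lambda a, b: a + b, lambda a: a + a))
```

On a real binary curve, `add` and `double` would be the projective López-Dahab formulas over GF(2^m); only those two callables change.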
DOI: https://doi.org/10.5815/ijisa.2018.07.07, Pub. Date: 8 Jul. 2018
The techniques of Dynamic Time Warping (DTW) have shown great efficiency for clustering time series. On the other hand, DTW may lead to considerable computational loads when it comes to processing long data sequences. For this reason, an iterative DTW procedure capable of shrinking time sequences is developed, and a clustering approach is then proposed for the data reduced by means of the iterative DTW. Experimental modeling tests were performed to prove its efficiency.[...] Read more.
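The DTW distance underlying the procedure is a short dynamic program; this sketch shows only the basic distance, not the paper's iterative shrinking step, and the series below are made up.

```python
def dtw(a, b):
    """DTW distance between sequences a and b with absolute-difference cost.

    D[i][j] holds the cheapest cumulative cost of aligning a[:i] with b[:j];
    each cell extends the best of the three predecessor alignments.
    """
    INF = float('inf')
    n, m = len(a), len(b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

print(dtw([1, 2, 3, 4], [1, 2, 2, 3, 4]))   # 0.0: warping absorbs the repeat
print(dtw([1, 2, 3], [4, 5, 6]))
```

The quadratic table is exactly the load the iterative reduction attacks: shrinking the sequences first shrinks both dimensions of D.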
DOI: https://doi.org/10.5815/ijisa.2018.05.02, Pub. Date: 8 May 2018
The paper presents a method of medical image similarity estimation based on feature extraction and analysis. The proposed method has been developed for and tested on rat brain histological images; however, it can be applied to other types of medical images, since the general approach is based on consideration of the shape of the core components present in a given template image. The proposed method can be used in image analysis tools in a wide range of image-based medical investigations, in particular in brain research.
The theoretical background of the proposed method is presented in the paper. The expert evaluation approach used to assess the effectiveness of the proposed method is explained and illustrated by examples. The method of medical image similarity estimation based on feature analysis consists of several stages: colour model conversion, image normalization, anti-noise filtering, contour search, conversion, and feature analysis. The results of the algorithmic realization of the proposed method are demonstrated and discussed.
DOI: https://doi.org/10.5815/ijisa.2017.12.05, Pub. Date: 8 Dec. 2017
The paper analyzes modern methods of modeling impacts on information systems, which makes it possible to determine the most effective approaches and use them to optimize the parameters of security systems. A method of optimizing data security is also proposed that takes the security settings into account (the number of security measures, the type of security subsystems, safety resources and the total cost of information) and allows determining the optimal behavior in the “impact-security” space. Special software was also developed to verify the proposed method.[...] Read more.
DOI: https://doi.org/10.5815/ijisa.2017.11.02, Pub. Date: 8 Nov. 2017
Temporal clustering (segmentation) of video streams has revolutionized the world of multimedia. Detected shots are the principal units of consecutive sets of images for semantic structuring. Evaluation of time series similarity is based on Dynamic Time Warping and provides various solutions for Content-Based Video Information Retrieval. Time series clustering in terms of iterative Dynamic Time Warping and time series reduction is discussed in the paper.[...] Read more.
DOI: https://doi.org/10.5815/ijisa.2017.10.07, Pub. Date: 8 Oct. 2017
The paper is dedicated to the problem of increasing the efficiency of applying a multilayer perceptron to parameter estimation for technical systems. It is shown that the increase in efficiency is possible by adapting the structure of the multilayer perceptron to the given problem specification. The structure adaptation lies in determining the following parameters:
1. The number of hidden neuron layers;
2. The number of neurons within each layer.
We introduce a mathematical apparatus that allows conducting the structure adaptation to minimize the relative generalization error of the neural network model. A numerical experiment demonstrating the efficiency of this apparatus was developed and is described in the article. Further research in this area lies in developing a method for calculating the optimal relationship between the number of hidden neuron layers and the number of hidden neurons within each layer.
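The adaptation of the two structural parameters (hidden-layer count and neurons per layer) amounts to searching for the configuration with minimal generalization error. A hedged sketch with a stand-in error function follows; a real run would train and validate an MLP for each candidate configuration rather than evaluate the toy surface below.

```python
def select_structure(candidates, validation_error):
    """Return the (layers, neurons) pair minimizing the given error measure."""
    best = min(candidates, key=validation_error)
    return best, validation_error(best)

def toy_error(cfg):
    """Hypothetical error surface: too little capacity underfits, too much
    overfits, and extra layers carry a small complexity penalty."""
    layers, neurons = cfg
    capacity = layers * neurons
    return abs(capacity - 24) / 24 + 0.05 * layers   # optimum near capacity 24

grid = [(l, n) for l in (1, 2, 3) for n in (4, 8, 12, 16)]
print(select_structure(grid, toy_error))
```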
DOI: https://doi.org/10.5815/ijisa.2017.09.04, Pub. Date: 8 Sep. 2017
The article introduces a modified architecture of the neo-fuzzy neuron, a "multidimensional extended neo-fuzzy neuron" (MENFN), for face recognition problems. This architecture is marked by enhanced approximating capabilities. A characteristic property of the MENFN is also its computational simplicity in comparison with neuro-fuzzy systems and neural networks. These qualities make the proposed system effective for solving image recognition problems. The introduced adaptive learning algorithm of the MENFN allows solving classification problems in a real-time fashion.[...] Read more.
DOI: https://doi.org/10.5815/ijisa.2017.08.04, Pub. Date: 8 Aug. 2017
The proposed method of graphical data protection is a combined crypto-steganographic method. It is based on a bit values transformation according to both a certain Boolean function and a specific scheme of correspondence between MSB and LSB. The scheme of correspondence is considered as a secret key. The proposed method should be used for protection of large amounts of secret graphical data.[...] Read more.
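A toy sketch of the combined crypto-steganographic idea: each secret bit is transformed (here simply XORed with a key bit standing in for the Boolean function) and written into a cover byte's least significant bit. The actual Boolean function and MSB-LSB correspondence scheme constitute the paper's secret key and are not reproduced here; all values below are made up.

```python
def embed(cover, secret_bits, key_bits):
    """Write transformed secret bits into the LSBs of cover bytes."""
    out = list(cover)
    for i, (s, k) in enumerate(zip(secret_bits, key_bits)):
        b = s ^ k                          # stand-in for the Boolean function
        out[i] = (out[i] & 0xFE) | b       # overwrite the pixel's LSB
    return out

def extract(stego, key_bits, n):
    """Invert the transform: read LSBs back and undo the XOR."""
    return [(p & 1) ^ k for p, k in zip(stego[:n], key_bits)]

cover  = [200, 201, 202, 203, 204, 205, 206, 207]   # hypothetical pixel bytes
secret = [1, 0, 1, 1, 0, 0, 1, 0]
key    = [0, 1, 1, 0, 1, 0, 0, 1]
stego = embed(cover, secret, key)
print(extract(stego, key, 8) == secret)   # True: the secret round-trips
```

Note that each pixel changes by at most 1, which is what keeps the embedding visually imperceptible.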
DOI: https://doi.org/10.5815/ijisa.2017.07.05, Pub. Date: 8 Jul. 2017
In this paper, we present the development of a decentralized mechanism for resource control in a distributed computer system based on a network-centric approach. Initially, the network-centric approach was proposed for military purposes, and now its principles are successfully applied to other areas of complex system control. The features of control systems based on the network-centric approach, namely the addition of horizontal links between components of the same level and of general knowledge control in the system, give rise to new properties and characteristics. The concept of implementing a resource control module for a distributed computer system based on the network-centric approach is proposed in this study. Based on this concept, we implemented the resource control module and analyzed its operating parameters in comparison with resource control modules implemented on the hierarchical approach and on the decentralized approach with the creation of communities of computing resources. The experiments showed the advantages of the proposed mechanism for resource control over the control mechanisms based on the hierarchical and decentralized approaches.[...] Read more.
DOI: https://doi.org/10.5815/ijcnis.2017.06.04, Pub. Date: 8 Jun. 2017
In this paper a method of network-centric monitoring of cyberincidents was developed; it is based on the network-centric concept and is implemented in 8 stages. The method allows determining the most important objects to protect and predicting the category of cyberincidents that will arise as a result of a cyberattack, as well as their level of criticality.[...] Read more.
DOI: https://doi.org/10.5815/ijisa.2017.06.03, Pub. Date: 8 Jun. 2017
An adaptive neural system which solves a problem of clustering data with missing values in an online mode with a permanent correction of restorable table elements and clusters’ centroids is proposed in this article. The introduced neural system is characterized by both a high speed and a simple numerical implementation. It can process information in a real-time mode.[...] Read more.
DOI: https://doi.org/10.5815/ijisa.2017.05.07, Pub. Date: 8 May 2017
Fuzzy clustering procedures for categorical data are proposed in the paper. Most of well-known conventional clustering methods face certain difficulties while processing this sort of data because a notion of similarity is missing in these data. A detailed description of a possibilistic fuzzy clustering method based on frequency-based cluster prototypes and dissimilarity measures for categorical data is given.[...] Read more.
DOI: https://doi.org/10.5815/ijcnis.2017.04.02, Pub. Date: 8 Apr. 2017
Continuous growth in the use of information technologies in the modern world causes a gradual increase in the amount of data circulating in information and telecommunication systems. This creates an urgent need for large-scale data storage and accumulation facilities and generates many new threats that are not easy to detect. The task of accumulation and storage is solved by datacenters: tools able to support and automate any business process. For now, almost all service providers use a quite promising technology for building datacenters, Cloud Computing, which has some advantages over its traditional counterparts. Nevertheless, the problem of protecting the provider's data is so serious that the risk of losing all data in the “cloud” is almost constant. This makes it necessary to process great amounts of data in real time and to quickly report possible threats. Therefore, it is reasonable to implement in the datacenter's network an intellectual system able to process large datasets and detect possible breaches. Usual threat detection methods are based on signatures, the main idea of which is comparing the incoming traffic with databases of known threats. However, such methods become ineffective when the threat is new and has not yet been added to the database. In that case, it is preferable to use intellectual methods capable of tracking any unusual activity in a specific system: anomaly detection methods. However, a signature module will detect known threats faster, so it is logical to include it in the system too. Big Data methods and tools (e.g. a distributed file system and parallel computing on many servers) provide the speed of such a system and allow processing data dynamically. This paper aims to present the developed anomaly detection system for a secure cloud computing environment, give its theoretical description and conduct an appropriate simulation.
The results demonstrate that the developed system provides a high percentage (>90%) of anomaly detection in a secure cloud computing environment.[...] Read more.
DOI: https://doi.org/10.5815/ijisa.2017.03.03, Pub. Date: 8 Mar. 2017
We analyzed the dynamic behavior of a damping system with a two-mass pendulum damper. The equations of motion of the nonlinear system were built. The AFC equation systems have been identified in the linear formulation. A new numerical method for determining the optimal tuning parameters of the two-mass damper is proposed and implemented.[...] Read more.
DOI: https://doi.org/10.5815/ijigsp.2017.03.02, Pub. Date: 8 Mar. 2017
Multi-scale segmentation is one of the most important methods for object-oriented classification, and selecting optimal scale segmentation parameters has become a difficult and active topic in current research. This paper takes aerial images and IKONOS images as the experimental objects and proposes an automatic selection method of the optimal segmentation scale for high-resolution remote sensing images based on a multi-scale MRF model. The method introduces region features into the objects and obtains the hierarchical structure of the image from the bottom up through message propagation between the objects. Finally, the optimal segmentation scale is obtained automatically by computing the marginal probabilities of the objects in each scale image. Experimental results show that this method can effectively avoid the subjectivity and one-sidedness of the segmentation process and improve the accuracy and efficiency of high-resolution segmentation.[...] Read more.
DOI: https://doi.org/10.5815/ijisa.2017.02.01, Pub. Date: 8 Feb. 2017
The task of clustering data given on an ordinal scale under conditions of overlapping clusters has been considered. It is proposed to use an approach based on the joint use of membership and likelihood functions. A number of experiments proved the effectiveness of the proposed method. The method is robust to outliers due to the way values are ordered while constructing the membership functions.[...] Read more.
DOI: https://doi.org/10.5815/ijisa.2017.01.07, Pub. Date: 8 Jan. 2017
A fuzzy clustering algorithm for multidimensional data is proposed in this article. The data is described by vectors whose components are linguistic variables defined in an ordinal scale. The obtained results confirm the efficiency of the proposed approach.[...] Read more.
DOI: https://doi.org/10.5815/ijisa.2016.12.07, Pub. Date: 8 Dec. 2016
The article deals with the security of distributed and scalable computer systems based on a risk-based approach. The main existing methods for predicting the consequences of dangerous actions of intrusion agents are described. A generalized structural scheme of the job manager in the context of a risk-based approach is shown. The suggested analytical assessments of the security risk level in distributed computer systems allow forecasting the critical time values for situation analysis and decision-making for the current configuration of a distributed computer system. These assessments are based on the number of used nodes and data link channels, the number of active security and monitoring mechanisms in the current period, as well as on the intensity of the realization of security threats and the activation intensity of the intrusion prevention mechanisms. The proposed comprehensive analytical risk assessments allow analyzing the dynamics of intrusion processes, the dynamics of security level recovery and the corresponding dynamics of the risk level in a distributed computer system.[...] Read more.
DOI: https://doi.org/10.5815/ijigsp.2016.12.03, Pub. Date: 8 Dec. 2016
Remote sensing texture image classification has been one of the hottest topics in the field of remote sensing, and texture is the most helpful feature for image classification. Commonly, terrain types are complex and multiple texture features are extracted for classification; in addition, there is noise in remote sensing images, and a single classifier can hardly obtain optimal classification results. Integration of multiple classifiers is able to make good use of the characteristics of different classifiers and improve the classification accuracy to the largest extent. In the paper, based on the diversity measurement of the base classifiers, the J48 classifier, IBk classifier, sequential minimal optimization (SMO) classifier, Naive Bayes classifier and multilayer perceptron (MLP) classifier are selected for ensemble learning. In order to evaluate the influence of the proposed method, the approach is compared with the five base classifiers by calculating the average classification accuracy. Experiments on five UCI data sets and remote sensing image data sets are performed to testify to the effectiveness of the proposed method.[...] Read more.
DOI: https://doi.org/10.5815/ijisa.2016.11.02, Pub. Date: 8 Nov. 2016
This article presents an investigation of search operations for the multiplicative inverse in the ring of integers modulo m for error control coding tasks and for data security. A classification of methods for finding the multiplicative inverse in the ring of integers modulo m is provided. The best parameter values for the Joye-Paillier method and the Lehmer algorithm were also found. An improved Bradley modification of the extended Euclidean algorithm is offered, which improves operating speed by 10-15%. An integrated experimental study of the basic classes of methods for finding the multiplicative inverse in the ring of integers modulo m is conducted for the first time, and analytical formulas are given for the random-access memory space necessary for k-ary RS-algorithms and their modifications.[...] Read more.
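The extended Euclidean algorithm, the baseline that the Bradley modification improves upon, finds the multiplicative inverse modulo m as follows; the values are illustrative, not the paper's benchmarks.

```python
def ext_gcd(a, b):
    """Return (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

def mod_inverse(a, m):
    """Inverse of a modulo m; exists only when gcd(a, m) == 1."""
    g, x, _ = ext_gcd(a % m, m)
    if g != 1:
        raise ValueError("inverse does not exist")
    return x % m

print(mod_inverse(7, 26))   # 15, since 7*15 = 105 = 4*26 + 1
```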
DOI: https://doi.org/10.5815/ijitcs.2016.10.01, Pub. Date: 8 Oct. 2016
An evolving weighted neuro-neo-fuzzy-ANARX model and its learning procedures are introduced in the article. This system is basically used for time series forecasting. It's based on neo-fuzzy elements. This system may be considered as a pool of elements that process data in a parallel manner. The proposed evolving system may provide online processing data streams.[...] Read more.
DOI: https://doi.org/10.5815/ijisa.2016.09.01, Pub. Date: 8 Sep. 2016
Neo-fuzzy elements are used as nodes for an evolving cascade system. The proposed system can tune both its parameters and architecture in an online mode. It can be used for solving a wide range of Data Mining tasks (namely time series forecasting). The evolving cascade system with neo-fuzzy nodes can process rather large data sets with high speed and effectiveness.[...] Read more.
DOI: https://doi.org/10.5815/ijcnis.2016.06.01, Pub. Date: 8 Jun. 2016
The paper describes the simulation process for situation analysis and decision-making about the functioning of Distributed Computer System (DCS) nodes on the basis of a special stochastic RA-networks mechanism. The main problems in estimating the functioning parameters of DCS nodes are presented, and it is shown that the suggested RA-networks mechanism allows simulating data flows with different, including significantly different, intensities, which is particularly important for situation analysis and decision-making in controlling the dynamics of DCS node parameters.[...] Read more.
DOI: https://doi.org/10.5815/ijcnis.2015.01.03, Pub. Date: 8 Dec. 2014
WSNs are usually deployed in open wireless environments, so their data is easily intercepted by attackers. It is necessary to adopt encryption measures to protect WSN data, but the battery capacity, CPU performance and RAM of WSN sensors are all limited, and complex encryption algorithms are not suitable for them. The paper proposes a lightweight symmetric encryption algorithm, LWSEA, which adopts fewer encryption rounds, shorter data packets and a simplified scrambling function, so its computational cost is very low. A longer key and a circular interpolation method for producing the child key raise the security of LWSEA. The experiments demonstrate that LWSEA possesses a better “avalanche effect” and degree of data confusion; furthermore, its calculation speed is far faster than that of DES while its resource cost remains very low. These qualities make LWSEA well suited for resource-constrained WSNs.[...] Read more.