ISSN: 2074-9090 (Print)
ISSN: 2074-9104 (Online)
DOI: https://doi.org/10.5815/ijcnis
Website: https://www.mecs-press.org/ijcnis
Published By: MECS Press
Frequency: 6 issues per year
Number(s) Available: 132
IJCNIS is committed to bridging the theory and practice of computer network and information security. From innovative ideas to specific algorithms and full system implementations, IJCNIS publishes original, peer-reviewed, and high-quality articles in the areas of computer networks and information security. IJCNIS is a well-indexed scholarly journal and an indispensable reference for people working at the cutting edge of computer networks, information security, and their applications.
IJCNIS has been abstracted or indexed by several world-class databases: Scopus, SCImago, Google Scholar, Microsoft Academic Search, CrossRef, Baidu Wenku, IndexCopernicus, IET Inspec, EBSCO, VINITI, JournalSeek, ULRICH's Periodicals Directory, WorldCat, Scirus, Academic Journals Database, Stanford University Libraries, Cornell University Library, UniSA Library, CNKI Scholar, ProQuest, J-Gate, ZDB, BASE, OhioLINK, iThenticate, Open Access Articles, Open Science Directory, National Science Library of Chinese Academy of Sciences, The HKU Scholars Hub, and others.
IJCNIS Vol. 16, No. 4, Aug. 2024
REGULAR PAPERS
Multi-access Edge Computing (MEC) optimizes computation in proximity to smart mobile devices, addressing the limitations of devices with insufficient capabilities. In scenarios featuring multiple compute-intensive and delay-sensitive applications, computation offloading becomes essential. The objective of this research is to enhance user experience, minimize service time, and balance workloads while optimizing computation offloading and resource utilization. In this study, we introduce dynamic computation offloading algorithms that concurrently minimize service time and maximize quality of experience. These algorithms take task and resource characteristics into account to determine the optimal execution location based on evaluated metrics. To assess the impact of the proposed algorithms, we employed the EdgeCloudSim simulator, which offers a realistic assessment of a MEC system. Simulation results show the superiority of our dynamic computation offloading algorithm over alternatives, achieving enhanced quality of experience and minimal service time. The findings underscore the effectiveness of the proposed algorithm and its potential to enhance mobile application performance. The comprehensive evaluation provides insights into the robustness and practical applicability of the proposed approach, positioning it as a valuable solution in the context of MEC networks. This research contributes to ongoing efforts to advance computation offloading strategies for improved performance in edge computing environments.
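As a toy illustration of the kind of decision such an offloading algorithm makes, the sketch below picks the execution location (local device, edge server, or cloud) with the lowest estimated service time. The cost model and all parameters are our own simplifications, not the authors' algorithm.

```python
# Hypothetical sketch of a service-time-based offloading decision.

def estimate_service_time(task_cycles, data_bits, cpu_hz, link_bps=None):
    """Estimated service time = optional transfer delay + compute delay."""
    transfer = (data_bits / link_bps) if link_bps else 0.0
    return transfer + task_cycles / cpu_hz

def choose_location(task_cycles, data_bits, device_hz, edge_hz, cloud_hz,
                    edge_bps, cloud_bps):
    candidates = {
        "local": estimate_service_time(task_cycles, data_bits, device_hz),
        "edge": estimate_service_time(task_cycles, data_bits, edge_hz, edge_bps),
        "cloud": estimate_service_time(task_cycles, data_bits, cloud_hz, cloud_bps),
    }
    return min(candidates, key=candidates.get)

# A compute-heavy task with a fast nearby link favors the edge server:
print(choose_location(task_cycles=5e9, data_bits=1e6,
                      device_hz=1e9, edge_hz=10e9, cloud_hz=50e9,
                      edge_bps=100e6, cloud_bps=2e6))
```

A real MEC orchestrator would also fold in queueing delay, energy, and load balancing, which is where the dynamic part of such algorithms comes in.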
Cybersecurity has received significant attention globally with the ever-continuing expansion of internet usage, due to the growing trends and adverse impacts of cybercrimes, which include disrupting businesses, corrupting or altering sensitive data, stealing or exposing information, and illegally accessing a computer network. Firewalls, antivirus systems, and Intrusion Detection Systems (IDS) have been widely introduced to protect networks from such attacks. Recently, Machine Learning (ML), including Deep Learning (DL) based autonomous systems, has become state-of-the-art in cybersecurity, owing to its rapid growth and superior performance. This study aims to develop a novel IDS that pays particular attention to classifying attack cases correctly and categorizes attacks at the subclass level, by proposing a two-step process with a cascaded framework. The proposed framework recognizes attacks using one ML model and classifies them into subclass levels using another ML model in successive operations. The most challenging part is training both models on datasets with imbalanced attack and non-attack cases, which is overcome by a proposed data augmentation technique. Specifically, the limited attack samples of the dataset are augmented in the training set so that attack cases are learned properly. Finally, the proposed framework is implemented with a neural network (NN) and evaluated on the NSL-KDD dataset through a rigorous analysis of each subclass, emphasizing the major attack classes. The proficiency of the proposed cascaded approach with data augmentation is compared with three other models: the cascaded model without data augmentation, and the standard single NN model with and without data augmentation. Experimental results on the NSL-KDD dataset show the proposed method to be an effective IDS that outperforms existing state-of-the-art ML models.
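The two-step cascade described above can be sketched in miniature: a first-stage detector flags attack vs. normal, and only flagged samples are passed to a second-stage classifier that assigns an attack subclass. The stand-in "models" below are trivial threshold rules with made-up feature names, not the paper's neural networks.

```python
# Minimal sketch of a cascaded (two-stage) intrusion classifier.

def cascade_predict(sample, detector, subclass_classifier):
    """Stage 1 decides attack/normal; stage 2 runs only on attacks."""
    if not detector(sample):
        return "normal"
    return subclass_classifier(sample)

# Toy stand-in models over hypothetical traffic features:
detector = lambda s: s["failed_logins"] > 3
subclass_classifier = lambda s: "dos" if s["pkts_per_s"] > 1000 else "probe"

print(cascade_predict({"failed_logins": 9, "pkts_per_s": 5000},
                      detector, subclass_classifier))
print(cascade_predict({"failed_logins": 1, "pkts_per_s": 10},
                      detector, subclass_classifier))
```

In the paper's setting both stages are trained separately, which is what lets the minority attack classes be augmented for the second stage without disturbing the first.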
This article explores the design, modeling, and experimental validation of passive antenna arrays (AAs) tailored for unmanned aerial vehicle (UAV) communication systems. Focusing on the technical composition and functionality of various types of passive antenna arrays, the study examines the antenna elements that comprise these arrays and discusses their integration into complete systems. Through rigorous modeling aimed at predicting performance under diverse operational conditions, backed by experimental studies, the paper provides practical insights for the development and optimization of AAs. These passive systems leverage the collective strength of multiple antennas to form directed beams, enhancing signal clarity and reducing interference, thereby supporting the robust communication links essential for UAV operations.
Information security in cloud computing refers to the protection of data items such as text, images, audio, and video files. In the modern era, data volumes are growing rapidly from gigabytes to terabytes or even petabytes, driven by the generation of significant amounts of real-time data. The majority of this data is stored in cloud computing environments and is sent or received over the internet. Because cloud computing offers internet-based services, attackers and illegitimate users on the internet consistently try to gain access to users' private data without permission, and hackers may replace actual data with fake data. As a result, data security has recently attracted considerable attention. In cloud computing, file access rights should be granted only to authorized users. To counter these security threats, a security model is proposed that enhances the security of cloud data through fingerprint authentication for access control, with a genetic algorithm used for encryption and decryption of cloud data. To search for desired data in the cloud, a fuzzy encrypted keyword search technique is used, with encrypted keywords stored in cloud storage using the SHA-256 hashing technique. The proposed model minimizes computation time and strengthens security against threats over the cloud. The computed results are presented in figures and tables.
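The keyword-hashing step mentioned above can be illustrated in a few lines: keywords are stored as SHA-256 digests, so the cloud can match a search token without ever seeing the plaintext keyword. This sketch covers only that step (the paper's fingerprint authentication and genetic-algorithm cipher are not reproduced), and the normalization rule is our own assumption.

```python
import hashlib

# Keywords are indexed by their SHA-256 digest rather than in plaintext.
def keyword_digest(keyword: str) -> str:
    # Simple normalization (trim + lowercase) before hashing; a real
    # fuzzy search would tolerate larger spelling variations.
    return hashlib.sha256(keyword.strip().lower().encode("utf-8")).hexdigest()

index = {keyword_digest(k) for k in ["invoice", "payroll", "contract"]}

def search(query: str) -> bool:
    """Match a search token against the hashed keyword index."""
    return keyword_digest(query) in index

print(search("Payroll"))   # matches after normalization
print(search("salary"))    # absent keyword does not match
```

Hashing gives one-way protection of the index; confidentiality of the documents themselves still relies on the encryption layer.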
This work focuses on developing a universal onboard neural network system for restoring information when helicopter turboshaft engine sensors fail. A mathematical task was formulated to determine the occurrence and location of these sensor failures using a multi-class Bayesian classification model that incorporates prior knowledge and updates probabilities with new data. The Bayesian approach was employed for identifying and localizing sensor failures, utilizing a Bayesian neural network with a 4-6-3 structure as the core of the developed system. A training algorithm for the Bayesian neural network was created, which estimates the prior distribution of network parameters through variational approximation, maximizes the evidence lower bound instead of the direct likelihood, and updates parameters by calculating gradients of the log-likelihood and the evidence lower bound, adding regularization terms and providing distributions and uncertainty estimates for interpreting results. This approach ensures balanced data handling, effective training (achieving nearly 100% accuracy on both training and validation sets), and improved model understanding (with training losses not exceeding 2.5%). An example is provided that demonstrates solving the information restoration task in the event of a gas-generator rotor r.p.m. sensor failure in the TV3-117 helicopter turboshaft engine. The feasibility of implementing the developed onboard neural network system on a helicopter using the Intel Neural Compute Stick 2 neuro-processor has been demonstrated analytically.
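One ingredient of the variational training objective described above can be shown concretely: the closed-form KL divergence between a diagonal Gaussian posterior N(mu, sigma^2) and a standard normal prior, which is the regularization term that appears alongside the likelihood in the evidence lower bound (ELBO). The variable names and the choice of a standard-normal prior are illustrative, not taken from the paper.

```python
import math

# KL( N(mu, sigma^2) || N(0, 1) ) summed over independent parameters:
#   0.5 * (sigma^2 + mu^2 - 1 - ln(sigma^2)) per parameter.
def kl_gaussian_to_standard_normal(mu, sigma):
    return sum(0.5 * (s * s + m * m - 1.0 - math.log(s * s))
               for m, s in zip(mu, sigma))

# The KL term vanishes when the posterior equals the prior,
# and is strictly positive otherwise:
print(round(kl_gaussian_to_standard_normal([0.0], [1.0]), 6))
print(kl_gaussian_to_standard_normal([1.0], [1.0]) > 0)
```

In ELBO maximization this term pulls the weight posterior toward the prior, which is what yields the distributions and uncertainty estimates the abstract mentions.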
Cloud-fog computing frameworks are innovative frameworks designed to improve present Internet of Things (IoT) infrastructures. A major limitation for IoT applications is the availability of ongoing energy sources for fog computing servers, because transmitting the enormous amount of data generated by IoT devices increases network bandwidth overhead and slows down response time. Therefore, in this paper, the Butterfly Spotted Hyena Optimization algorithm (BSHOA) is proposed as an energy-aware task scheduling technique for IoT requests in a cloud-fog environment. In this hybrid BSHOA algorithm, the Butterfly Optimization Algorithm (BOA) is combined with Spotted Hyena Optimization (SHO) to enhance the global and local search behavior of BOA in the process of finding the optimal solution for the problem under consideration. To show the applicability and efficiency of the presented BSHOA approach, experiments were conducted on real workloads taken from the Parallel Workloads Archive, comprising the NASA Ames iPSC/860 and HPC2N (High-Performance Computing Center North) workloads. The findings indicate that BSHOA has a strong capacity for dealing with the task scheduling issue and outperforms other approaches in terms of performance parameters including throughput, energy usage, and makespan.
Image encryption is an efficient mechanism for securing digital images during transmission, in which key-sequence generation plays a vital role. The proposed system consists of stages such as the generation of four chaotic maps, conversion of the generated maps to binary vectors, rotation of a Linear Feedback Shift Register (LFSR), and selection of binary chaotic key sequences from the generated key pool. The novelty of this implementation is the generation of binary sequences by selecting among all four chaotic maps, viz. the Tent, Logistic, Henon, and Arnold Cat map (ACM). The LFSR selects chaotic maps to produce random key sequences. Five primitive polynomials of degrees 5, 6, 7, and 8 are considered for the generation of key sequences. Each primitive polynomial generates 61 binary key sequences, which are stored in a binary key pool. All 61 generated binary key sequences are submitted to the NIST and FIPS tests, and a performance analysis of the generated binary key sequences is carried out. From the obtained results, it can be concluded that the binary key sequences are random and unpredictable and have a large key space, based on individual and combined key sequences. The generated binary key sequences can therefore be efficiently utilized for the encryption of digital images.
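The two building blocks named above are small enough to sketch directly: a logistic chaotic map thresholded into a binary vector, and a Fibonacci LFSR built from the degree-5 primitive polynomial x^5 + x^2 + 1 (taps at stages 5 and 2). The parameters, seed, and threshold are hypothetical choices for illustration, not the paper's.

```python
# Logistic map x -> r*x*(1-x), thresholded into a binary vector.
def logistic_binary(x0, r=3.99, n=16, threshold=0.5):
    bits, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        bits.append(1 if x > threshold else 0)
    return bits

# Fibonacci LFSR; taps are 1-based stage numbers from the polynomial.
def lfsr(seed_bits, taps=(5, 2), n=16):
    state = list(seed_bits)          # state[0] is stage 1
    out = []
    for _ in range(n):
        out.append(state[-1])        # output the last stage
        feedback = state[taps[0] - 1] ^ state[taps[1] - 1]
        state = [feedback] + state[:-1]
    return out

print(lfsr([1, 0, 0, 1, 0], n=8))
print(logistic_binary(0.3141, n=8))
```

In the proposed system the LFSR output is used to select among the four chaotic maps, so the key pool mixes the statistics of all of them rather than relying on a single map.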
Augmented Reality (AR) and Virtual Reality (VR) are innovative technologies that are experiencing widespread recognition. These technologies have the capability to transform and redefine our interactions with the surrounding environment. However, as they spread, they also introduce new security challenges. In this paper, we discuss the security challenges posed by AR and VR and propose a Machine Learning-based approach to address them. We also discuss how Machine Learning can be used to detect and prevent attacks in AR and VR. By leveraging the power of Machine Learning algorithms, we aim to bolster the security defences of AR and VR systems. To accomplish this, we have conducted a comprehensive evaluation of various Machine Learning algorithms, meticulously analysing their performance and efficacy in enhancing security. Our results show that Machine Learning can be an effective way to improve the security of AR and VR systems.
Multi-access edge computing (MEC) can provide high bandwidth and low latency, ensuring high efficiency in performing network operations, and thus seems promising in the technical field. MEC allows processing and analysis of data at the network edges, but it has a finite number of usable resources. To overcome this restriction, an orchestrator can use a scheduling algorithm to deliver high-quality services by choosing when and where each process should be executed. The scheduling algorithm must meet the expected outcome while utilizing fewer resources. This paper provides a scheduling algorithm containing two cooperative levels, with an orchestrator layer acting at the center. The first level schedules local processes on the MEC servers; the second level represents the orchestrator and allocates processes to nearby stations or the cloud. Depending on latency and throughput, processes are executed according to their priority. A resource optimization algorithm is also proposed for additional performance. This offers a cost-efficient solution with good service availability. The proposed algorithm achieves an average wait time of 2.37 ms and an average blocking percentage of 0.4; the blocking percentage is 1.65 times better than Shortest Job First Scheduling (SJFS) and 1.3 times better than Earliest Deadline First Scheduling (EDFS). The optimization algorithm can also handle many kinds of network traffic models, such as uniformly distributed loads and base stations with unbalanced loads.
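The first cooperative level described above can be sketched as a priority queue on a MEC server: tasks run locally in priority order, and any task whose latency bound cannot be met locally is escalated to the orchestrator layer (here just collected in a list). The field names and the fixed per-task service time are illustrative assumptions, not the paper's model.

```python
import heapq

# Level-one scheduling sketch: serve by priority, escalate deadline misses.
def schedule(tasks, server_time_per_task):
    # Max-heap on priority; the index breaks ties deterministically.
    heap = [(-t["priority"], i, t) for i, t in enumerate(tasks)]
    heapq.heapify(heap)
    local, to_orchestrator, clock = [], [], 0.0
    while heap:
        _, _, task = heapq.heappop(heap)
        if clock + server_time_per_task <= task["deadline"]:
            clock += server_time_per_task
            local.append(task["name"])
        else:
            to_orchestrator.append(task["name"])   # hand off to level two
    return local, to_orchestrator

tasks = [{"name": "video", "priority": 3, "deadline": 2.0},
         {"name": "telemetry", "priority": 1, "deadline": 1.0},
         {"name": "alert", "priority": 5, "deadline": 1.0}]
print(schedule(tasks, server_time_per_task=1.0))
```

In the full scheme the orchestrator would then place the escalated tasks on nearby stations or the cloud using latency and throughput estimates.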
The number of new cybersecurity threats and opportunities is increasing over time, as is the amount of information generated, processed, stored, and transmitted using ICT. Particularly sensitive are the critical infrastructure objects of the state, which include the mining industry, transport, telecommunications, the banking system, etc. From this standpoint, the development of systems for detecting attacks and identifying intruders (including against the critical infrastructure of the state) is an important and relevant scientific task, which determined the objectives of this article. The paper identifies the main factors influencing the choice of the most effective method for calculating importance coefficients, in order to increase the objectivity and simplicity of expert assessment of security events in cyberspace. A methodology for an experimental study was also developed, defining the goals and objectives of the experiment, the input and output parameters, the hypothesis and research criteria, the sufficiency of experimental objects, and the sequence of necessary actions. The experimental study confirmed the adequacy of the proposed models, as well as the ability of the method and system built on them to detect targeted attacks and identify intruders in cyberspace at an early stage, a capability not included in the functionality of modern intrusion detection and prevention systems.
Device-to-device (D2D) communication plays a major role in communication technology, with resource and power allocation being major attributes of the network. Existing methods for D2D communication suffer from several problems, such as slow convergence and low accuracy. To overcome these, D2D communication using distributed deep learning with a coot bird optimization algorithm is proposed. In this work, D2D communication is combined with the Coot Bird Optimization algorithm to enhance the performance of distributed deep learning. Reducing eNB interference with the help of deep learning can achieve near-optimal throughput. Distributed deep learning trains the devices as a group, with each device working independently, which reduces the training time of the devices. The model achieves independent resource allocation with optimized power values and the lowest bit error rate for D2D communication while sustaining quality of service. The model was trained and tested successfully and was found to work for power allocation with an accuracy of 99.34%, a best fitness of 80%, a worst fitness of 46%, a mean value of 6.76, and a standard deviation of 0.55, showing better performance than existing works.
Thanks to recent technological advancements, low-cost sensors with processing and communication capabilities are now feasible. For example, a Wireless Sensor Network (WSN) is a network in which the nodes exchange data with one another over wireless connections rather than relying on a central server. These inexpensive sensor nodes are particularly vulnerable to a clone node or replication attack because of their limited processing power, memory, and battery life and their lack of tamper-resistant hardware. Once an attacker compromises a sensor node, they can create many copies of it elsewhere in the network that share the same ID, giving the attacker extensive internal control over the network and allowing them to mimic the genuine nodes' behavior. This is why researchers are so intent on developing better clone attack detection procedures. This research proposes a machine learning based clone node detection (ML-CND) technique to identify clone nodes in wireless networks. The goal is to identify clones effectively enough to prevent cloning attacks from happening in the first place, using a low-cost identity verification process to identify clones both at specific locations and across the whole network. Using an Optimized Extreme Learning Machine (OELM), with the ELM kernels determined through the Horse Herd Metaheuristic Optimization Algorithm (HHO), the technique safeguards the network from node identity replicas, and the most reliable transmission path can be selected. The procedure is intended for retrieving data from a network node. The simulation results present a performance analysis of several factors, including sensitivity, specificity, recall, and detection rate.
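The intuition behind identity-verification-based clone detection can be shown in a simplified form (this is not the study's OELM/HHO model): if location claims for the same node ID arrive from positions farther apart than a node could plausibly move, the ID is flagged as cloned. The claim format and distance threshold are assumptions for illustration.

```python
import math

# Flag node IDs whose successive location claims are implausibly far apart.
def find_clones(location_claims, max_distance=10.0):
    """location_claims: iterable of (node_id, x, y). Returns flagged IDs."""
    last_seen, clones = {}, set()
    for node_id, x, y in location_claims:
        if node_id in last_seen:
            px, py = last_seen[node_id]
            if math.hypot(x - px, y - py) > max_distance:
                clones.add(node_id)      # same ID, conflicting locations
        last_seen[node_id] = (x, y)
    return clones

claims = [("n1", 0, 0), ("n2", 5, 5), ("n1", 90, 90), ("n2", 6, 5)]
print(find_clones(claims))
```

A deployed scheme would distribute this check among witness nodes so no single point holds all claims; the ML models in the study learn the decision boundary instead of using a fixed threshold.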
The Internet of Things (IoT) is one of the promising technologies of the future. It offers many attractive features that we now depend on, with less effort and faster, real-time operation. However, it is still vulnerable to various threats and attacks due to the obstacles of its heterogeneous ecosystem, adaptive protocols, and self-configuration. In this paper, three different 6LoWPAN attacks are implemented in the IoT via Contiki OS to generate the proposed dataset, which reflects 6LoWPAN features in the IoT. Six scenarios have been implemented for the analyzed attacks: three are free of malicious nodes, and the others include malicious nodes. The benign scenarios serve as a benchmark for the malicious ones, for comparison, extraction, and exploration of the features affected by attackers. These features are used as input criteria to train and test the proposed hybrid Intrusion Detection and Prevention System (IDPS) to detect and prevent 6LoWPAN attacks in the IoT ecosystem. The proposed hybrid IDPS has been trained and tested, with improved accuracy, on both the KoU-6LoWPAN-IoT and Edge IIoT datasets. In the detection phase, the Artificial Neural Network (ANN) classifier achieved the highest accuracy among the models in both the 2-class and N-class settings. On the proposed dataset, the ANN classifier achieved 95.65% and 99.95% accuracy in the 4-class and 2-class modes, respectively, before the accuracy improvement, and 99.84% and 99.97% afterward. On the Edge IIoT dataset, the ANN classifier achieved 95.14% and 99.86% accuracy in the 15-class and 2-class modes, respectively, before the improvement, and 97.64% and 99.94% afterward. In addition, the decision tree-based models yield lightweight models due to their lower computational complexity, making them appropriate for edge computing deployment, whereas the other ML models are heavyweight and require more computation, making them better suited for deployment in cloud or fog computing in IoT networks.
There is no doubt that, even after the development of many other authentication schemes, passwords remain one of the most popular means of authentication. This review of password-based authentication introduces and analyzes different authentication schemes, their respective advantages and disadvantages, and probable causes of the disconnect between users and password mechanisms. The evolution of passwords and how deeply they have become rooted in our lives is remarkable. This paper addresses the gap between user and industry perspectives on password authentication, the state of the art of password authentication, and how the most investigated topics in password authentication have changed over time. The authors distinguish two levels of password-based authentication, a 'User Centric Design Level' and a 'Machine Centric Protocol Level', under one framework. The paper concludes with a special section covering ways in which password-based authentication systems can be strengthened against the issues currently holding them back.
Social engineering is an attack aimed at manipulating victims into divulging sensitive information or taking actions that help the adversary bypass the secure perimeter in front of information-related resources so that the attack goals can be completed. Although there are many security tools, such as firewalls and intrusion detection systems, that are used to protect machines from being attacked, a widely accepted mechanism to protect victims from fraud is lacking. The human element is often the weakest link in an information security chain, especially in a human-centered environment. In this paper, we show that human psychological weaknesses result in the main vulnerabilities that can be exploited by social engineering attacks. We also identify two essential levels, the internal characteristics of human nature and external circumstantial influences, to explore the root causes of these human weaknesses, and we show that the internal characteristics of human nature can be converted into weaknesses by external circumstantial influences. We therefore propose an I-E based model of human weakness for social engineering investigation. Based on this model, we analyze the vulnerabilities exploited by different social engineering techniques and summarize several defense approaches to remedy the human weaknesses. This work can help security researchers gain insights into social engineering from a different perspective and, in particular, enhance current and future research on social engineering defense mechanisms.
To investigate crimes committed on digital materials, the materials have to be copied. Evidence must be copied properly, using valid methods that ensure legal admissibility; otherwise, the material cannot be used as evidence. Acquiring an image of the materials from the crime scene using proper hardware and software tools makes the obtained data legal evidence. The choice of format and verification function during image acquisition affects subsequent steps in the investigation process. For this purpose, investigators use hardware and software tools. Hardware tools ensure the integrity and authenticity of the image through write-blocking; software tools rely on certain write-blocking hardware or acquire disks that are directly attached to a computer. Image acquisition through write-blocking hardware gives the result the status of a forensic copy; image acquisition through software tools alone does not. During the image acquisition process, different formats such as E01, AFF, and DD can be chosen. To establish the integrity and authenticity of the copy, hash values have to be calculated using verification functions from the SHA and MD families. In this study, hardware- and software-based image acquisition processes are shown. A 200 GB hard disk is acquired in hardware using Tableau TD3 and CRU Ditto; images of the same storage are also taken through Tableau, CRU, and RTX USB bridges and through FTK Imager and Forensic Imager, and comparative performance assessment results are presented.
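The verification step described above amounts to hashing the acquired image and comparing it with the hash of the source. The sketch below uses SHA-256 as a stand-in for the SHA/MD families mentioned in the text and reads the file in chunks so that disk-sized images do not need to fit in memory; file paths are placeholders.

```python
import hashlib

# Chunked hash of a (possibly very large) disk image file.
def file_hash(path, algorithm="sha256", chunk_size=1 << 20):
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# The acquisition is verified when source and image digests match.
def images_match(source_path, image_path):
    return file_hash(source_path) == file_hash(image_path)
```

In practice the acquisition tool records the source hash at imaging time, and any later recomputation over the copy must reproduce it exactly for the copy to retain its forensic value.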
This paper is topical because of the year-on-year increase in the quantity and diversity of attacks on computer networks, which causes significant losses for companies. The work addresses several problems: surveying existing methods for locating anomalies and current threats in networks, considering statistical methods as effective methods of anomaly detection, and experimentally evaluating the effectiveness of the chosen method. A method for capturing and analyzing network traffic during passive monitoring of a network segment is considered, and a way of processing numerous network traffic indexes for subsequent evaluation of the network's information security level is proposed. Using the presented methods and concepts increases the reliability of a network segment through prompt capture of network anomalies that could indicate possible threats, information that is very useful for the network administrator. To demonstrate the method's effectiveness, several network attacks from the specialized DARPA dataset were chosen, and the relevant parameters were calculated for every attack type. In this way, the start and end times of an attack can be obtained by this method with insignificant error in some cases.
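The statistical idea behind such anomaly detection can be sketched as a simple baseline-deviation test: a traffic index (packets per second, say) is compared with its baseline mean and standard deviation, and samples more than k deviations away are reported as anomalies. The threshold, index, and numbers below are illustrative choices, not the paper's exact parameters.

```python
import statistics

# Report indices of observed samples that deviate from the baseline
# by more than k standard deviations.
def anomalies(baseline, observed, k=3.0):
    mean = statistics.fmean(baseline)
    stdev = statistics.stdev(baseline)
    return [i for i, v in enumerate(observed)
            if abs(v - mean) > k * stdev]

baseline = [100, 104, 98, 101, 99, 103, 97, 102]   # normal packets/s
observed = [101, 99, 480, 100]                     # index 2 looks like a flood
print(anomalies(baseline, observed))
```

Capturing start and end times of an attack then reduces to finding the first and last indices of a contiguous run of flagged samples.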
These days, cloud computing is booming like no other technology. Every organization, whether small, mid-sized, or big, wants to adopt this cutting-edge technology for its business. As cloud technology becomes immensely popular among these businesses, the question arises: which cloud model should you consider for your business? There are four types of cloud models available in the market: Public, Private, Hybrid, and Community. This review paper answers the question of which model would be most beneficial for a business. All four models are defined, discussed, and compared, with their benefits and pitfalls, giving a clear idea of which model to adopt for your organization.
The present research work describes an advancement of the standard AODV routing protocol for mobile ad-hoc networks. Our mechanism sets up multiple optimal paths based on bandwidth and delay criteria and stores these paths in the network; at the time of a link failure, it switches to the next available path. To set up multiple paths, we use the information available in the RREQ packet and send the RREP packet along more than one path. This reduces the overhead of local route discovery at the time of link failure, and consequently the end-to-end delay and drop ratio decrease. The main features of our mechanism are its simplicity and improved efficiency. We evaluate through simulations the performance of the AODV routing protocol including our scheme and compare it with the HLSMPRA (Hot Link Split Multi-Path Routing Algorithm) algorithm. Our scheme reduces the routing load of the network, the end-to-end delay, the packet drop ratio, and the number of route errors sent. The simulations have been performed using the OPNET network simulator, a discrete event simulation package that simulates not only the sending and receiving of packets but also their forwarding and dropping. The modified algorithm has improved efficiency and greater reliability than the previous algorithm.
Creating and developing wireless network communication that is fast, secure, dependable, and cost-effective enough to suit the needs of the modern world is a difficult undertaking. Channel coding schemes must be chosen carefully to ensure timely and error-free data transfer over a noisy, fading channel. To ensure that the data received matches the data transmitted, channel coding is an essential part of a communication system's architecture. The NR LDPC (New Radio Low-Density Parity-Check) code has been recommended for the fifth generation (5G) to meet the need for more internet traffic capacity in mobile communications and to provide both high coding gain and low energy consumption. This research presents NR-LDPC for data transmission over two different multipath fading channel models, Nakagami-m and Rayleigh, in AWGN. The BER performance of the NR-LDPC code using two kinds of rate-compatible base graphs has been examined for the QAM-OFDM (Quadrature Amplitude Modulation-Orthogonal Frequency Division Multiplexing) system and compared to the uncoded QAM-OFDM system. The BER performance obtained via Monte Carlo simulation demonstrates that the LDPC code works efficiently in both non-fading and fading channel models and achieves significant BER improvements with high coding gain. It makes sense to use LDPC codes in 5G because they are more efficient for long data transmissions, and the key to a good code is an effective decoding algorithm. The results demonstrate a coding gain improvement of up to 15 dB at a BER of 10^-3.
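The Monte Carlo BER methodology mentioned above can be illustrated in a far simpler setting than the paper's coded QAM-OFDM system: uncoded BPSK over AWGN. Random bits are transmitted, Gaussian noise is added, and decision errors are counted; the bit count and seed below are arbitrary choices for a quick estimate.

```python
import math
import random

# Monte Carlo BER estimate for uncoded BPSK over an AWGN channel.
def ber_bpsk_awgn(ebn0_db, n_bits=200_000, seed=1):
    rng = random.Random(seed)
    ebn0 = 10 ** (ebn0_db / 10)
    sigma = math.sqrt(1 / (2 * ebn0))   # noise std for unit-energy symbols
    errors = 0
    for _ in range(n_bits):
        bit = rng.getrandbits(1)
        symbol = 1.0 if bit else -1.0
        received = symbol + rng.gauss(0.0, sigma)
        errors += (received > 0) != bool(bit)
    return errors / n_bits

# Theory predicts BER = Q(sqrt(2*Eb/N0)), roughly 2.4e-3 at 6 dB:
print(ber_bpsk_awgn(6.0))
```

A coded simulation such as the paper's inserts an LDPC encoder before the channel and a decoder after it, and the horizontal gap between the coded and uncoded BER curves at a given BER is the coding gain being reported.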
[...] Read more.Present research work describes advancement in standard routing protocol AODV for mobile ad-hoc networks. Our mechanism sets up multiple optimal paths with the criteria of bandwidth and delay to store multiple optimal paths in the network. At time of link failure, it will switch to next available path. We have used the information that we get in the RREQ packet and also send RREP packet to more than one path, to set up multiple paths, It reduces overhead of local route discovery at the time of link failure and because of this End to End Delay and Drop Ratio decreases. The main feature of our mechanism is its simplicity and improved efficiency. This evaluates through simulations the performance of the AODV routing protocol including our scheme and we compare it with HLSMPRA (Hot Link Split Multi-Path Routing Algorithm) Algorithm. Indeed, our scheme reduces routing load of network, end to end delay, packet drop ratio, and route error sent. The simulations have been performed using network simulator OPNET. The network simulator OPNET is discrete event simulation software for network simulations which means it simulates events not only sending and receiving packets but also forwarding and dropping packets. This modified algorithm has improved efficiency, with more reliability than Previous Algorithm.
[...] Read more.Thanks to recent technological advancements, low-cost sensors with dispensation and communication capabilities are now feasible. As an example, a Wireless Sensor Network (WSN) is a network in which the nodes are mobile computers that exchange data with one another over wireless connections rather than relying on a central server. These inexpensive sensor nodes are particularly vulnerable to a clone node or replication assault because of their limited processing power, memory, battery life, and absence of tamper-resistant hardware. Once an attacker compromises a sensor node, they can create many copies of it elsewhere in the network that share the same ID. This would give the attacker complete internal control of the network, allowing them to mimic the genuine nodes' behavior. This is why scientists are so intent on developing better clone assault detection procedures. This research proposes a machine learning based clone node detection (ML-CND) technique to identify clone nodes in wireless networks. The goal is to identify clones effectively enough to prevent cloning attacks from happening in the first place. Use a low-cost identity verification process to identify clones in specific locations as well as around the globe. Using the Optimized Extreme Learning Machine (OELM), with kernels of ELM ideally determined through the Horse Herd Metaheuristic Optimization Algorithm (HHO), this technique safeguards the network from node identity replicas. Using the node identity replicas, the most reliable transmission path may be selected. The procedure is meant to be used to retrieve data from a network node. The simulation result demonstrates the performance analysis of several factors, including sensitivity, specificity, recall, and detection.
[...] D2D (device-to-device) communication has a major role in communication technology, with resource and power allocation being major attributes of the network. Existing methods for D2D communication suffer from several problems, such as slow convergence and low accuracy. To overcome these, D2D communication using distributed deep learning with a coot bird optimization algorithm is proposed. In this work, D2D communication is combined with the Coot Bird Optimization algorithm to enhance the performance of distributed deep learning. Reducing the interference of the eNB with the use of deep learning can achieve near-optimal throughput. Distributed deep learning trains the devices as a group, with each device working independently, which reduces the training time of the devices. The model ensures independent resource allocation with an optimized power value and the lowest bit error rate for D2D communication while sustaining quality of service. The model is trained and tested successfully and is found to perform power allocation with an accuracy of 99.34%, a best fitness of 80%, a worst fitness value of 46%, a mean value of 6.76, and a standard deviation of 0.55, showing better performance compared to existing works.
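The power-allocation search can be illustrated with a generic population-based optimization loop of the kind the coot bird optimizer belongs to. To be clear about assumptions: the cost function, the move rule, and all constants below are invented placeholders, not the paper's COOT dynamics or its deep-learning model.

```python
# Generic population-based metaheuristic sketch (NOT the COOT update
# rules): candidates for a transmit-power value drift toward the best
# solution found so far, with random exploration noise. The cost is a
# toy interference-vs-error trade-off.
import random

random.seed(1)

def cost(p):
    # Toy trade-off: interference grows with power, error rate shrinks.
    return 0.3 * p + 1.0 / (1.0 + p)

def optimize(pop_size=20, iters=100, lo=0.0, hi=10.0):
    pop = [random.uniform(lo, hi) for _ in range(pop_size)]
    best = min(pop, key=cost)
    for _ in range(iters):
        # Move each candidate toward the current best, plus noise,
        # clamped to the feasible power range.
        pop = [min(hi, max(lo, p + 0.5 * (best - p) + random.gauss(0, 0.2)))
               for p in pop]
        best = min(best, min(pop, key=cost), key=cost)
    return best

best_power = optimize()
```

The actual scheme evaluates fitness through the distributed deep-learning model on each device rather than through a closed-form cost, but the population/best-solution loop structure is the same.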
[...] There is no doubt that, even after the development of many other authentication schemes, passwords remain one of the most popular means of authentication. This review of password-based authentication introduces and analyzes different authentication schemes, their respective advantages and disadvantages, and the probable causes of the disconnect between users and password mechanisms. The evolution of passwords, and how deeply they have become rooted in our lives, is remarkable. The paper addresses the gap between the user and industry perspectives on password authentication, the state of the art of password authentication, and how the most investigated topics in password authentication have changed over time. The authors distinguish two levels of password-based authentication, a 'User Centric Design Level' and a 'Machine Centric Protocol Level', under one framework. The paper concludes with a special section covering the ways in which password-based authentication systems can be strengthened against the issues currently holding them back.
[...] An important task in designing complex computer systems is to ensure high reliability. Many authors investigate this problem and solve it in various ways. Most known methods are based on the use of natural or artificially introduced redundancy, which can be used passively and/or actively, with or without restructuring of the computer system. This article explores new technologies for improving fault tolerance through the natural and artificially introduced redundancy of the applied number system. We consider a non-positional number system of residual classes and use the following properties: the independence, equality, and small width of the residues that define a non-positional code structure. This allows us to parallelize arithmetic calculations at the level of the decomposed residues of numbers; to implement spatial separation of data elements with the possibility of their subsequent asynchronous, independent processing; and to perform tabular execution of the arithmetic operations of the base set and of polynomial functions, with single-cycle sampling of the result of a modular operation. Using specific examples, we present a calculation and comparative analysis of the reliability of computer systems. The conducted studies show that the use of non-positional code structures in the system of residual classes provides high reliability, and that the efficiency of the system of residual classes grows as the bit width of computing devices increases. Our studies also show that, to increase reliability, it is advisable to make small nodes and blocks of a complex system redundant, since the failure rate of individual elements is always lower than the failure rate of the entire computer system.
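The residue-number-system idea underlying this article can be shown concretely. The sketch below uses the illustrative moduli (3, 5, 7), not bases from the paper: a value is stored as independent residues, arithmetic proceeds per residue with no carries between channels (which is what enables the parallel, spatially separated processing described above), and the result is recovered via the Chinese Remainder Theorem.

```python
# Residue number system sketch: encode, add channel-wise, decode by CRT.
# Moduli must be pairwise coprime; (3, 5, 7) covers the range 0..104.
from math import prod

MODULI = (3, 5, 7)

def to_rns(x):
    return tuple(x % m for m in MODULI)

def rns_add(a, b):
    # Each residue channel is added independently: no carries propagate
    # between channels, so the channels can run in parallel.
    return tuple((ai + bi) % m for ai, bi, m in zip(a, b, MODULI))

def from_rns(residues):
    # Chinese Remainder Theorem reconstruction.
    M = prod(MODULI)
    x = 0
    for r, m in zip(residues, MODULI):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)  # pow with -1 gives the inverse mod m
    return x % M

a, b = to_rns(17), to_rns(30)
s = from_rns(rns_add(a, b))  # 47
```

The fault-tolerance angle follows from the same independence: a fault confined to one residue channel corrupts only that channel, and redundant moduli can be added to detect or correct it.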
[...] These days, cloud computing is booming like no other technology, and every organization, whether small, mid-sized, or big, wants to adopt this cutting-edge technology for its business. As cloud technology becomes immensely popular among businesses, the question arises: which cloud model should you consider for your business? There are four types of cloud models available in the market: public, private, hybrid, and community. This review paper answers the question of which model would be most beneficial for a given business. All four models are defined, discussed, and compared, together with their benefits and pitfalls, giving a clear idea of which model to adopt for your organization.
[...] Remote access technologies encrypt data to enforce policies and ensure protection. Attackers leverage such techniques to launch carefully crafted evasion attacks that introduce malware and other unwanted traffic into the internal network. Traditional security controls such as anti-virus software, firewalls, and intrusion detection systems (IDS) decrypt network traffic and employ signature- and heuristic-based approaches for malware inspection. In the past, machine learning (ML) approaches have been proposed for specific malware detection and traffic-type characterization. However, decryption introduces computational overheads and dilutes the privacy goal of encryption, and the existing ML approaches employ limited features and were not objectively developed for remote access security. This paper presents a novel ML-based approach to encrypted remote access attack detection using a weighted random forest (W-RF) algorithm. Key features are determined using feature importance scores. Class weighting is used to address the imbalanced data distribution problem common in remote access network traffic, where attacks comprise only a small proportion of the traffic. Results are presented from an evaluation of the approach on benign virtual private network (VPN) and attack network traffic datasets comprising verified normal hosts and common attacks in real-world network traffic. With recall and precision of 100%, the approach demonstrates effective performance. The results for k-fold cross-validation and the receiver operating characteristic (ROC) mean area under the curve (AUC) demonstrate that the approach effectively detects attacks in encrypted remote access network traffic, successfully averting attackers and network intrusions.
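The class-weighting step can be made concrete. A common "balanced" scheme, which is one way to realize the weighting this abstract describes (whether it is the paper's exact scheme is not stated), assigns each class the weight n_samples / (n_classes * n_class), so samples of the rare attack class count proportionally more. The label counts below are made up for illustration.

```python
# "Balanced" class-weight computation for an imbalanced traffic dataset:
# weight(c) = n_samples / (n_classes * count(c)). Rare classes get large
# weights, so a classifier is not rewarded for ignoring them.
from collections import Counter

def balanced_class_weights(labels):
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * cnt) for c, cnt in counts.items()}

# Hypothetical split: 95 benign VPN flows vs. 5 attack flows
labels = ["benign"] * 95 + ["attack"] * 5
weights = balanced_class_weights(labels)
# attack flows are weighted 10.0, benign flows about 0.53
```

These per-class weights are then applied as per-sample weights when fitting the forest, so misclassifying one attack flow costs as much as misclassifying many benign flows.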
[...] Information security is an important part of the current interactive world, and it is essential for end-users to preserve the confidentiality and integrity of their sensitive data. As such, information encoding is significant in defending against access by non-authorized users. This paper aims to build a system that fuses cryptography and steganography methods, scrambling the input image and embedding it into a carrier medium with an enhanced security level. Elliptic Curve Cryptography (ECC) helps achieve high security with a smaller key size. In this paper, ECC with a modification is used to encrypt and decrypt the input image. The carrier medium is transformed into frequency bands using the Discrete Wavelet Transform (DWT), and the encrypted hash of the input is hidden in the high-frequency bands of the carrier by Least-Significant-Bit (LSB) embedding. This approach achieves data confidentiality along with data integrity, the latter verified using SHA-256. Simulation outcomes of the method have been analyzed by measuring performance metrics: it enhances the security of the obtained images, with a PSNR of 82.7528 dB, an MSE of 0.0012, and an SSIM of 1, compared to other existing scrambling methods.
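The LSB embedding step is easy to illustrate. A caveat on scope: the paper embeds into DWT high-frequency coefficients; in this dependency-free sketch, plain pixel byte values stand in for those coefficients, and the carrier and payload bits are invented.

```python
# Illustrative LSB embed/extract: overwrite the least-significant bit of
# each carrier value with one payload bit, then read the bits back.
# Stand-in for embedding into DWT high-frequency coefficients.

def embed_lsb(carrier_values, bits):
    """Return a copy of the carrier with payload bits in the LSBs."""
    out = list(carrier_values)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit   # clear LSB, then set it to the bit
    return out

def extract_lsb(carrier_values, n_bits):
    return [v & 1 for v in carrier_values[:n_bits]]

carrier = [120, 121, 200, 55, 90, 13, 77, 240]
payload = [1, 0, 1, 1, 0, 1, 0, 0]   # e.g. bits of a SHA-256 digest
stego = embed_lsb(carrier, payload)
recovered = extract_lsb(stego, len(payload))
```

Each carrier value changes by at most 1, which is why LSB embedding, especially in high-frequency bands, is nearly invisible and yields the very high PSNR figures reported.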
[...] This paper is topical because of the year-on-year increase in the quantity and diversity of attacks on computer networks, which causes significant losses for companies. The work addresses the following problems: existing methods for locating anomalies and current hazards in networks; statistical methods as effective means of anomaly detection; and experimental verification of the chosen method's effectiveness. The paper considers a method of network traffic capture and analysis during passive monitoring of a network segment, and proposes a way of processing numerous network traffic indices for subsequent evaluation of the network's information security level. Using the presented methods and concepts increases the reliability of a network segment through the prompt capture of network anomalies that may indicate possible hazards, information that is very useful to the network administrator. To demonstrate the method's effectiveness, several network attacks whose data are stored in the specialised DARPA dataset were chosen, and the relevant parameters were calculated for every attack type. In this way, the start and end times of an attack can be obtained by the method with insignificant error for some attack types.
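A statistical detector of the general kind this paper evaluates can be sketched as a threshold on deviation from a baseline. The specific traffic index, window length, and threshold below are illustrative assumptions, and the packets-per-second series is synthetic, not DARPA data.

```python
# Simple statistical anomaly detector: flag time windows whose traffic
# index deviates from the baseline mean by more than k standard
# deviations. The first windows are assumed attack-free.
from statistics import mean, stdev

def detect_anomalies(series, baseline_len=5, k=3.0):
    base = series[:baseline_len]             # attack-free training window
    mu, sigma = mean(base), stdev(base)
    return [i for i, v in enumerate(series[baseline_len:], start=baseline_len)
            if abs(v - mu) > k * sigma]

# Synthetic packets-per-second series; the spike plays the attack
pps = [100, 104, 98, 101, 97, 99, 103, 450, 470, 102]
alerts = detect_anomalies(pps)  # indices 7 and 8 are flagged
```

The first and last flagged indices approximate the attack's start and end times, which is how a window-based statistical method can localize an attack with only small error.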
[...] In the cloud computing platform, DDoS (distributed denial-of-service) attacks are among the most commonly occurring attacks. Research studies on DDoS mitigation have rarely considered the data-shift problem in real-time implementation, and while existing studies have attempted DDoS attack detection, they have been deficient in detection rate. Hence, the proposed study presents a novel DDoS mitigation scheme using the LCDT-M (Log-Cluster DDoS Tree Mitigation) framework for the hybrid cloud environment. LCDT-M detects and mitigates DDoS attacks in a Software-Defined Network (SDN) based cloud environment. It comprises three algorithms, GFS (Greedy Feature Selection), TLMC (Two Log Mean Clustering), and DM (Detection-Mitigation) based on a DT (Decision Tree), to optimize the detection of DDoS attacks along with their mitigation in SDN. The study simulated the defined cloud environment and considered the data-shift problem during real-time implementation. As a result, the proposed architecture achieved an accuracy of about 99.83%, confirming its superior performance.
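The decision-tree detection stage can be illustrated with a hand-rolled two-level tree over flow features. The feature names, cut-off values, and class labels below are invented for illustration; they are not the trained tree produced by the LCDT-M framework.

```python
# Two-level decision-tree sketch of a DDoS detection stage. Thresholds
# and features (packet rate, source-address entropy) are hypothetical.

def classify_flow(pkts_per_sec, src_entropy):
    # Level 1: a very high packet rate is suspicious on its own.
    if pkts_per_sec > 1000:
        # Level 2: low source-address entropy suggests traffic
        # concentrated in few (possibly spoofed) sources, typical of a
        # volumetric DDoS; high entropy looks more like a flash crowd.
        return "ddos" if src_entropy < 2.0 else "flash_crowd"
    return "benign"

flows = [(50, 4.1), (1500, 0.8), (1200, 3.5)]
labels = [classify_flow(rate, ent) for rate, ent in flows]
```

In the real framework the split features come from GFS and the thresholds are learned from TLMC-clustered logs, and the tree must be periodically refit because the data-shift problem moves those thresholds over time.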