ISSN: 2074-9090 (Print)
ISSN: 2074-9104 (Online)
DOI: https://doi.org/10.5815/ijcnis
Website: https://www.mecs-press.org/ijcnis
Published By: MECS Press
Frequency: 6 issues per year
Number(s) Available: 142
IJCNIS is committed to bridging the theory and practice of computer networks and information security. From innovative ideas to specific algorithms and full system implementations, IJCNIS publishes original, peer-reviewed, high-quality articles in the areas of computer networks and information security. IJCNIS is a well-indexed scholarly journal and an indispensable reference for people working at the cutting edge of computer networks, information security, and their applications.
IJCNIS has been abstracted or indexed by several world-class databases: Scopus, SCImago, Google Scholar, Microsoft Academic Search, CrossRef, Baidu Wenku, IndexCopernicus, IET Inspec, EBSCO, VINITI, JournalSeek, ULRICH's Periodicals Directory, WorldCat, Scirus, Academic Journals Database, Stanford University Libraries, Cornell University Library, UniSA Library, CNKI Scholar, ProQuest, J-Gate, ZDB, BASE, OhioLINK, iThenticate, Open Access Articles, Open Science Directory, National Science Library of Chinese Academy of Sciences, The HKU Scholars Hub, etc.
IJCNIS Vol. 18, No. 2, Apr. 2026
REGULAR PAPERS
The article describes a model of cloud-native AI pipelines designed for continuous optimization of computing infrastructure and real-time anomaly detection. The developed model combines modern approaches to observability, machine learning (ML), and auto-scaling based on load forecasting. The methodology is based on the use of LSTM models, autoencoders, and convolutional neural networks (CNN) integrated into a Kubernetes environment with support for Prometheus, Kafka, and Grafana. Load changes are simulated, and the system's response to critical events is evaluated. The results demonstrate a significant improvement in anomaly detection accuracy (up to 93%) and resource efficiency (up to 26% cost reduction compared to traditional approaches). The proposed model can be used in AIOps systems that require a high level of automation and reliability.
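The detection stage described in the abstract above flags points whose deviation from the model's expected value exceeds a threshold. The paper's LSTM/autoencoder pipeline is not reproduced here; as a minimal illustrative stand-in, the sketch below uses a rolling-window mean as the "expected" value and a k-sigma rule on the deviation (the function name, window size, and threshold are our assumptions, not the paper's):

```python
from statistics import mean, stdev

def detect_anomalies(series, window=10, k=3.0):
    """Flag points whose deviation from the rolling mean exceeds k sigma.

    A simple statistical stand-in for reconstruction-error thresholding:
    the 'expected' value is the rolling mean of the previous `window`
    points, and the 'error' is the deviation from it.
    """
    anomalies = []
    for i in range(window, len(series)):
        hist = series[i - window:i]
        mu, sigma = mean(hist), stdev(hist)
        if sigma > 0 and abs(series[i] - mu) > k * sigma:
            anomalies.append(i)
    return anomalies

# Steady load with one spike injected at index 30.
load = [100.0 + (i % 3) for i in range(60)]
load[30] = 400.0
print(detect_anomalies(load))
```

A learned model replaces the rolling mean with a forecast conditioned on richer context, which is what buys the accuracy gains the abstract reports.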
Mobile Edge Computing (MEC) mitigates cloud computing systems’ latency and limited responsiveness by offloading computationally intensive tasks from user devices to nearby Edge Servers (ESs). However, achieving efficient offloading under dynamic mobility, fluctuating link quality, and constrained resources remains a significant challenge. To address this, we propose MSQ, a lightweight and adaptive three-dimensional offloading decision model that jointly incorporates Mobility, Sociality, and QoS awareness. MSQ employs Kalman filtering for mobility prediction, Rényi entropy to quantify social affinity among mobile users, and Affinity Propagation (AP) clustering to reduce redundant ES candidates while balancing computational load. Comprehensive experiments across small- and medium-scale MEC networks demonstrate that MSQ reduces average task delay by up to 78%, energy consumption by 66%, and load imbalance by 64% compared with a random offloading strategy, while keeping decision latency below one millisecond. Moreover, MSQ lowers the 95th-99th percentile tail delays by 35-45%, ensuring a smoother and more reliable user experience in real-time applications. These results confirm that MSQ offers a scalable, low-latency, and energy-efficient offloading decision model suitable for dynamic and intelligent edge systems.
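The mobility-prediction component named in the abstract above is Kalman-filter based. The paper's state model is not specified here, so the sketch below is a deliberately minimal one-dimensional, constant-position filter (the noise parameters `q` and `r` and the scalar state are our simplifying assumptions):

```python
def kalman_1d(z_measurements, q=1e-3, r=0.5):
    """Minimal 1-D Kalman filter with a constant-position state model.

    q: process noise variance, r: measurement noise variance.
    Returns the filtered position estimate after each measurement.
    """
    x, p = z_measurements[0], 1.0        # initial state estimate and covariance
    estimates = [x]
    for z in z_measurements[1:]:
        p = p + q                        # predict: covariance grows by process noise
        k = p / (p + r)                  # Kalman gain
        x = x + k * (z - x)              # update with the measurement innovation
        p = (1 - k) * p
        estimates.append(x)
    return estimates

# Noisy position readings of a user hovering near coordinate 10.
readings = [10.2, 9.7, 10.4, 9.9, 10.1, 10.3, 9.8]
smoothed = kalman_1d(readings)
print(round(smoothed[-1], 2))
```

A production mobility predictor would track a position-velocity state vector so it can extrapolate the user's next location, but the predict/update cycle is the same.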
To address the growing multi-user interference in dense wireless networks, we propose an interference-aware Deep Quantum Neural Network (DQNN) for channel estimation in Non-Orthogonal Multiple Access (NOMA) systems. The proposed method incorporates a hybrid classical-quantum architecture: a Transformer encoder processes the pilot signals to extract spatiotemporal features, and a parameterized quantum circuit maps the processed features into a high-dimensional Hilbert space. The enhancement hinges on an Adaptive Energy Valley Optimization (AEVO) algorithm, which modifies the optimization trajectory using interference-aware preconditioners derived from the interference covariance structure. With the aid of these preconditioners, the DQNN can navigate NOMA's interference-dominated non-convex optimization landscape to enhance estimation performance. Moreover, interference-aware preconditioning is achieved through a lightweight neural network that adapts to time-varying interference. The successive interference cancellation decoder uses the estimated channel matrix to recover symbols. Analysis of the results shows that the quantum-enhanced model delivers better results than its classical counterparts. The proposed framework advances the state of the art in NOMA channel estimation while also providing a general framework for interference-aware optimization in quantum machine learning. At 10 dB SNR, the AEVO-DQNN method with a 16x16 antenna array obtained a minimum NMSE of 0.012288 and a minimum BER of 0.013023. Further, the proposed method outperforms the competing methods in terms of NMSE/BER means with 95% confidence intervals, interference rejection ratio analysis, sensitivity to estimation error and estimated interference covariance, and paired t-test analysis.
With the rapid increase in malware threats, robust classification methods have become essential to protect digital environments. This study conducts a comparative analysis of machine learning and deep learning methods for malware detection. A variety of models from both paradigms are used to determine their effectiveness in distinguishing malware. To further refine the models, several feature selection techniques are applied to reduce the dimensionality of the data and enhance performance. Performance metrics, including accuracy, precision, recall, and F1-score, are used to evaluate each model. The findings indicate that while deep learning approaches generally provide higher detection accuracy, feature selection methods contribute significantly to improving machine learning models in terms of performance and computational efficiency. This analysis offers valuable insights into the balance between model complexity and effectiveness, providing practical recommendations for implementing malware classification systems in real-world applications.
The increase in cyber attacks poses significant challenges to security in fog-based IoT environments. While existing studies have applied machine learning (ML), ensemble learning (EL), and deep learning (DL) for security, in this study we adopt deep ensemble learning (DEL) for threat detection in fog-based IoT environments. The proposed DEL model is built using Convolutional Neural Networks (CNN), Long Short-Term Memory (LSTM), and Gated Recurrent Units (GRU) as base models, and it is augmented with a meta-learner such as Logistic Regression (LR), Random Forest (RF), AdaBoost, XGBoost, CatBoost, or LightGBM, and also with a Voting Classifier (VC), to find the best model. In our experiments, evaluation is performed on several datasets: DDoS SDN, NSL-KDD, UNSW-NB-15, and IoTID20. DEL with RF achieved better performance than the other models on metrics such as accuracy (Acy), precision (Prn), recall (Rcl), F1-score (F1-S), and AUC. For instance, DEL with RF achieved an Acy of 99.99%, Prn of 100%, Rcl of 99.96%, F1-S of 99.98%, and AUC of 1.00 on the IoTID20 dataset. To analyze the network performance of the DEL models at the fog layer, we also considered metrics such as cost, energy, resource utilization, latency, and service time. This work shows that DEL models can improve the security of fog-assisted IoT systems.
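The ensemble's final Voting Classifier stage described in the abstract above can be sketched without the deep base learners: given each base model's per-sample predictions, a weighted hard vote picks the fused label. The `cnn`/`lstm`/`gru` prediction lists below are hypothetical stand-ins for the trained base models, and a meta-learner variant (LR, RF, XGBoost, ...) would instead be trained on these predictions as features:

```python
def voting_classifier(base_predictions, weights=None):
    """Weighted hard-voting combiner over base-model class predictions.

    base_predictions: one list of predicted labels per base model.
    weights: optional per-model vote weights (default: equal weights).
    """
    n_models = len(base_predictions)
    weights = weights or [1.0] * n_models
    n_samples = len(base_predictions[0])
    fused = []
    for i in range(n_samples):
        votes = {}
        for m in range(n_models):
            label = base_predictions[m][i]
            votes[label] = votes.get(label, 0.0) + weights[m]
        fused.append(max(votes, key=votes.get))   # label with most vote mass
    return fused

# Hypothetical predictions from three base models (0 = benign, 1 = attack).
cnn  = [1, 0, 1, 1]
lstm = [1, 0, 0, 1]
gru  = [0, 0, 1, 1]
print(voting_classifier([cnn, lstm, gru]))   # prints [1, 0, 1, 1]
```

Weighting lets a more accurate base model dominate ties, which is one simple knob the "find the best model" search in the abstract can tune.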
With the rapid development of mobile technologies, analyzing data generated by mobile devices is becoming increasingly important. A wide range of applications, from marketing to healthcare, require effective methods for extracting valuable information from this data. This study develops a methodology for assessing an individual's social capital based on the analysis of mobile communication data. To assess social capital, we propose a two-stage Cascade Model that combines the advantages of the Random Forest (RF) and Logistic Regression (LR) algorithms. In the first stage, RF is used to select the most significant features reflecting various aspects of social capital. In the second stage, LR is used to assess social capital, taking into account nonlinear relationships between features. The results of the study open up new opportunities for studying social phenomena and can be used in fields such as sociology, marketing, and urban planning.
Wireless Sensor Networks consist of energy-constrained sensor nodes that monitor and transmit data to a central base station. These networks are highly susceptible to link and node failures, which degrade performance and reduce overall network reliability. In this paper we address these challenges and propose a reinforcement learning based self-healing routing (RL-SHR) protocol, implemented in the NS2 simulation environment. Each node functions as an autonomous RL agent that learns optimal routing paths by interacting with the network environment and adapting to failure conditions. The protocol enables nodes to dynamically avoid unreliable paths, recover from faults, and optimize performance over time. Simulation results show that the proposed protocol significantly outperforms traditional routing protocols such as AODV and DSR in terms of packet delivery ratio, end-to-end delay, energy consumption, and network lifetime under varying failure scenarios. This work lays the groundwork for integrating learning-based resilience mechanisms into next-generation sensor networks.
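The per-node learning described in the abstract above is, at its core, a tabular Q-learning update over next-hop choices. RL-SHR's exact state, action, and reward design is not given here, so the sketch below shows only the standard update rule with an illustrative encoding: a "state" is the current node, an "action" is a next-hop, and the reward rewards a live link and penalises a failed one (all names and reward values are our assumptions):

```python
def q_update(q, state, action, reward, next_state, next_actions,
             alpha=0.3, gamma=0.9):
    """One tabular Q-learning step:
    Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)).
    `q` maps (state, action) pairs to values; unseen pairs default to 0.
    """
    best_next = max((q.get((next_state, a), 0.0) for a in next_actions),
                    default=0.0)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
    return q[(state, action)]

# Node A repeatedly observes that forwarding via B succeeds (+1) while
# forwarding via C hits a failed link (-1); B and C are treated as terminal.
q = {}
for _ in range(50):
    q_update(q, "A", "via_B", +1.0, "B", [])
    q_update(q, "A", "via_C", -1.0, "C", [])
print(q[("A", "via_B")] > q[("A", "via_C")])   # prints True
```

Once Q-values separate like this, greedy next-hop selection automatically routes around the unreliable path, which is the self-healing behaviour the protocol relies on.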
Due to the dynamic nature of the network architecture, resource constraints, and susceptibility to security attacks, securing data transmission in Mobile Ad-hoc Networks (MANETs) is a significant problem. This work proposes a novel Equivariant Quantum Neural Network with Adaptive Tangent Brakerski-Gentry-Vaikuntanathan Homomorphic Encryption algorithm (EQNN-ATBGVHEA) for secure routing in MANETs. The suggested approach comprises three steps: cluster head (CH) selection, optimal path selection, and secure data transfer. Initially, the Bowerbird Optimization Algorithm chooses the CH and sends the message through the constructed path. Once the clusters are established, data is transferred between the sender and receiver. For optimal route selection, the EQNN technique is developed, which incorporates a neural network for quick route selection. EQNN resolves the issue of local optimality by constructing a new fitness process based on residual energy (RE) and delay. After optimal path selection, data transfer is secured by the novel ATBGVHEA technique. The method is built using NS3, and its evaluation variables are measured. The acquired results are compared with existing approaches to validate the efficiency of the suggested strategy. The developed method achieved a clustering accuracy of 98.5%, a computational time of 55 ms, and a residual energy of 0.44.
An innovative approach to identifying rapidly spreading false information is to construct a targeted graph and then cluster it. A method for detecting rapidly spreading fake messages in social networks has been developed. K-means, Louvain, and Leiden algorithms were applied to identify large communities in graphs, enabling the rapid detection of fake news. A modified fake news detection algorithm based on k-means and Leiden can group fake news clusters, enabling rapid identification of widely spreading news. The combination of Leiden for structural analysis of communities and SVM for classification provides an optimal balance between accuracy (F1-score = 0.87) and completeness of fake detection (Recall = 97%), allowing the system to be used both for analysing large datasets and for monitoring new publications. The Leiden algorithm demonstrated the highest modularity (Q = 0.7212), which is 4.8% better than Louvain (Q = 0.6884), and detected 40 structural communities. The modified method has a lower modularity (Q = 0.5584), since modularity is not optimized by K-means.
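The modularity Q reported throughout the abstract above is Newman's standard measure of how much denser intra-community edges are than a degree-preserving random graph would predict. The sketch below computes it directly from the definition; the two-triangle toy graph is our illustration, not the paper's data:

```python
def modularity(edges, community):
    """Newman modularity Q of a partition of an undirected graph.

    Q = (1/2m) * sum_ij [A_ij - k_i*k_j/(2m)] * delta(c_i, c_j).
    `edges` is a list of (u, v) pairs; `community` maps node -> label.
    """
    m = len(edges)
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    # Observed fraction of edges falling inside communities.
    inside = sum(1 for u, v in edges if community[u] == community[v]) / m
    # Expected such fraction under the degree-preserving null model.
    expected = sum(
        deg[u] * deg[v]
        for u in deg for v in deg
        if community[u] == community[v]
    ) / (2 * m) ** 2
    return inside - expected

# Two triangles joined by a single bridge edge: a clean two-community split.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
community = {0: "a", 1: "a", 2: "a", 3: "b", 4: "b", 5: "b"}
print(round(modularity(edges, community), 4))   # prints 0.3571
```

Louvain and Leiden greedily move nodes between communities to increase exactly this quantity, which is why the abstract can compare methods by their final Q.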
Device-to-Device (D2D) communications in 5G-enabled VANETs offer significant advantages in terms of improved communication efficiency and reduced latency. However, ensuring secure and efficient key agreement among devices remains a critical challenge. In this study, we present a novel lightweight framework for D2D communications that addresses these concerns by employing a signcryption-based key agreement scheme [1]. The proposed scheme is built on Diffie-Hellman Hyperelliptic Curve Cryptography and leverages two one-way cryptographic hash functions to enhance security. By integrating the signcryption technique, our framework achieves a seamless combination of encryption and signing [2], reducing computational overhead and conserving network resources in resource-constrained 5G-enabled devices. Furthermore, we prioritize user location privacy by employing advanced techniques, including the Chinese Remainder Theorem, ensuring that location information is not exposed to unauthorized parties during D2D communication sessions. Through extensive simulations and performance evaluations using ns3, we demonstrate the effectiveness and efficiency of the proposed key agreement scheme for D2D communications in 5G-enabled VANETs. The results show improved communication performance and reduced resource consumption, making our framework a promising solution for secure and efficient D2D interactions in evolving 5G networks.
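The key-agreement core named in the abstract above is Diffie-Hellman. The paper works over a hyperelliptic curve with signcryption on top, which this sketch does not attempt; it shows only the underlying DH exchange over modular arithmetic, with a deliberately small prime chosen for illustration and never for real use:

```python
import secrets

def dh_keypair(p, g):
    """Generate a Diffie-Hellman private/public key pair over Z_p*.

    Toy sketch of the key-agreement core only; real deployments use
    vetted large groups or curves, not a 32-bit prime.
    """
    priv = secrets.randbelow(p - 2) + 1      # secret exponent in [1, p-2]
    return priv, pow(g, priv, p)             # (private, public = g^priv mod p)

p, g = 4294967291, 5                         # small prime and base (illustrative)

a_priv, a_pub = dh_keypair(p, g)             # device A's key pair
b_priv, b_pub = dh_keypair(p, g)             # device B's key pair

# Each side combines its own private key with the peer's public key.
shared_a = pow(b_pub, a_priv, p)
shared_b = pow(a_pub, b_priv, p)
print(shared_a == shared_b)                  # prints True
```

In the paper's scheme this shared secret is not used directly: it feeds the signcryption step, which encrypts and signs in one pass so constrained vehicular devices pay the cryptographic cost only once.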
The rapid expansion of the Internet of Things (IoT) has introduced significant concerns, particularly in ensuring data security and safeguarding sensitive and private data. The integration of Federated Learning into IoT architectures has emerged as a promising solution to address the risks of data breaches, resource efficiency, and the challenges of data privacy and security. This paper presents a novel lightweight framework tailored for resource-constrained IoT devices that integrates Federated Learning and Tiny Machine Learning (TinyML) to deploy lightweight, reliable models on edge devices. Our experimental results show that the proposed approach can improve efficiency, reduce communication overhead, and enhance privacy preservation.
A Wireless Sensor Network (WSN) is an efficient system for monitoring distributed areas and controlling environments; however, such networks are susceptible to malicious node attacks that produce network insecurity and untrustworthy data. WSNs are vulnerable to malicious nodes and cyber attackers that can interfere with data transmission, leading to compromised decision-making systems. Traditional WSN security techniques lack flexibility in real-time detection and data integrity because of constrained processing resources and the vulnerabilities of centralized storage. The general objective of this research is to improve detection accuracy through a multi-stage strategy. The presented model uses the WSN-DS and WSN-BFSF datasets. The data are pre-processed using Localized-Global Depth Normalization for uniformity, followed by feature selection via Boosted Tern-Cat Hunting Optimization, which combines Cat Hunting Optimization and Boosted Sooty Tern techniques to reduce dimensionality. Attack detection is performed by a Parallel Triple Graph Attention-based Convolution Network (Tri-QPdCNet), which employs Quantum Parallel Deep Convolution and Triple Graph Attention Networks. The RMRO optimizes the model's parameters for more accurate classification, and the benign data are safely stored through the Consensus-Aided PoA Decision Blockchain Engine and the InterPlanetary File System. This approach achieved 99.4% accuracy, 99.3% recall, and 99.5% F1-score on the WSN-DS dataset and 99.2% accuracy, 99.1% precision, and 99.3% F1-score on the WSN-BFSF dataset, while showing robustness across different combinations of sensors. Hence, Tri-QPdCNet offers a pioneering approach to securing WSNs against dynamic and persistent attacks by providing an improved framework for anomaly detection with a strong, scalable architecture augmented with blockchain technology. This leads to more robust WSN infrastructures that can be deployed more securely and smoothly in real-time critical environments.
The Internet of Things (IoT) is one of the promising technologies of the future. It offers many attractive features that we rely on nowadays, with less effort and faster real-time response. However, it is still vulnerable to various threats and attacks due to its heterogeneous ecosystem, adaptive protocols, and self-configuration. In this paper, three different 6LoWPAN attacks are implemented in the IoT via Contiki OS to generate a proposed dataset that reflects 6LoWPAN features in the IoT. Six scenarios have been implemented for the analyzed attacks: three are free of malicious nodes, and the other three include malicious nodes. The benign scenarios serve as a benchmark against the malicious ones for comparison, extraction, and exploration of the features affected by attackers. These features are used as input criteria to train and test our proposed hybrid Intrusion Detection and Prevention System (IDPS) to detect and prevent 6LoWPAN attacks in the IoT ecosystem. The proposed hybrid IDPS has been trained and tested, with improved accuracy, on both the KoU-6LoWPAN-IoT and Edge IIoT datasets. In the detection phase, the Artificial Neural Network (ANN) classifier achieved the highest accuracy among the models in both the 2-class and N-class modes. On our proposed dataset, the ANN classifier achieved 95.65% and 99.95% in the 4-class and 2-class modes before accuracy optimization, and 99.84% and 99.97% afterward. On the Edge IIoT dataset, it achieved 95.14% and 99.86% in the 15-class and 2-class modes before optimization, and 97.64% and 99.94% afterward. The decision tree-based models yielded lightweight models due to their lower computational complexity and are therefore appropriate for edge computing deployment, whereas the other ML models yielded heavyweight models requiring more computation and are better deployed in cloud or fog computing in IoT networks.
These days cloud computing is booming like no other technology. Every organization, whether small, mid-sized, or big, wants to adopt this cutting-edge technology for its business. As cloud technology becomes immensely popular among these businesses, the question arises: which cloud model should you consider for your business? There are four types of cloud models available in the market: public, private, hybrid, and community. This review paper answers the question of which model would be most beneficial for a given business. All four models are defined, discussed, and compared, with their benefits and pitfalls, giving you a clear idea of which model to adopt for your organization.
Thanks to recent technological advancements, low-cost sensors with processing and communication capabilities are now feasible. A Wireless Sensor Network (WSN) is a network in which the nodes are mobile computers that exchange data with one another over wireless connections rather than relying on a central server. These inexpensive sensor nodes are particularly vulnerable to clone node, or replication, attacks because of their limited processing power, memory, battery life, and absence of tamper-resistant hardware. Once an attacker compromises a sensor node, they can create many copies of it elsewhere in the network sharing the same ID, giving the attacker extensive internal control of the network and allowing them to mimic the behavior of genuine nodes. This is why researchers are intent on developing better clone attack detection procedures. This research proposes a machine learning based clone node detection (ML-CND) technique to identify clone nodes in wireless networks. The goal is to identify clones effectively enough to prevent cloning attacks from happening in the first place, using a low-cost identity verification process to identify clones both in specific locations and network-wide. Using an Optimized Extreme Learning Machine (OELM), with ELM kernels determined through the Horse Herd Metaheuristic Optimization Algorithm (HHO), the technique safeguards the network from node identity replicas, and the most reliable transmission path may be selected when retrieving data from a network node. The simulation results demonstrate the performance analysis of several factors, including sensitivity, specificity, recall, and detection rate.
To investigate crimes committed on digital materials, the materials must be copied. Evidence must be copied using valid methods that preserve its legal admissibility; otherwise, the material cannot be used as evidence. Acquiring an image of the materials from the crime scene with proper hardware and software tools makes the obtained data legally admissible evidence. The choice of format and verification function during image acquisition affects the subsequent steps of the investigation. For this purpose, investigators use hardware and software tools. Hardware tools assure the integrity and authenticity of the image through write-protection. Software tools, in turn, either drive certain write-protect hardware tools or acquire disks connected directly to a computer. Image acquisition through write-protect hardware tools gives the copy the status of a forensic copy; acquisition through software tools alone does not. During the acquisition process, different formats such as E01, AFF, and DD can be chosen. To verify the integrity and authenticity of the copy, hash values are calculated using verification functions from the SHA and MD families. In this study, hardware- and software-based image acquisition processes are demonstrated. A 200 GB hard disk is acquired in hardware via Tableau TD3 and CRU Ditto. Images of the same storage are also taken through Tableau, CRU, and RTX USB bridges and through FTK Imager and Forensic Imager; comparative performance assessment results are then presented.
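The hash verification step described in the abstract above is straightforward to demonstrate with the standard library: matching digests before and after acquisition show integrity, and any single changed byte changes the digest. A byte string stands in for the disk image here so the sketch is self-contained; in practice the image file would be read and hashed in chunks:

```python
import hashlib

def image_digests(data: bytes):
    """Compute MD5 and SHA-256 digests of an (in-memory) disk image."""
    return {
        "md5": hashlib.md5(data).hexdigest(),
        "sha256": hashlib.sha256(data).hexdigest(),
    }

original = b"\x00" * 512                # pretend 512-byte disk sector
acquired = b"\x00" * 512                # forensic copy of the same sector
tampered = b"\x00" * 511 + b"\x01"      # same sector with one flipped byte

print(image_digests(original) == image_digests(acquired))   # prints True
print(image_digests(original) == image_digests(tampered))   # prints False
```

Recording both an MD and an SHA family digest, as the study does, guards against a collision in either single function.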
Passwords can be used to gain access to specific data, an account, a computer system, or a protected space. A single user may have multiple accounts protected by passwords, and research shows that users tend to keep the same or similar passwords for different accounts, with little variation; once a single password becomes known, a number of accounts can be compromised. This paper deals with password security: a close look at what goes into making a password strong and the difficulty involved in breaking a password. The following sections discuss related work and demonstrate, graphically and mathematically, different aspects of password security, overlooked vulnerabilities, and the widely ignored importance of passwords. This work describes tests carried out to evaluate the resistance of passwords of varying strength against brute-force attacks. It also discusses overlooked parameters such as entropy and how it ties into password strength, examines the password composition rules enforced by popular websites, and then presents a system designed to provide an adaptive and effective measure of password strength. This paper contributes toward minimizing the risk posed by those seeking to expose sensitive digital data, providing solutions for making password breaking more difficult as well as convincing users to choose and set hard-to-break passwords.
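The entropy parameter mentioned in the abstract above is commonly estimated as length times log2 of the guessing pool size, with the pool inferred from which character classes appear. This is our illustrative estimator, not necessarily the paper's exact formula, and it is deliberately optimistic: it ignores dictionary words and predictable substitutions, which real strength meters must also penalise:

```python
import math
import string

def password_entropy_bits(password: str) -> float:
    """Estimate brute-force entropy as length * log2(pool size).

    The pool size is inferred from the character classes present.
    """
    pool = 0
    if any(c in string.ascii_lowercase for c in password):
        pool += 26
    if any(c in string.ascii_uppercase for c in password):
        pool += 26
    if any(c in string.digits for c in password):
        pool += 10
    if any(c in string.punctuation for c in password):
        pool += len(string.punctuation)   # 32 printable ASCII symbols
    return len(password) * math.log2(pool) if pool else 0.0

print(round(password_entropy_bits("password"), 1))    # lowercase only: prints 37.6
print(round(password_entropy_bits("P@ssw0rd!"), 1))   # four classes: prints 59.0
```

The comparison also illustrates the paper's point about overlooked vulnerabilities: "P@ssw0rd!" scores far higher on this metric yet is a well-known dictionary pattern, so entropy alone overstates its real strength.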
There is no doubt that, even after the development of many other authentication schemes, passwords remain one of the most popular means of authentication. This paper presents a review of password-based authentication, introducing and analyzing different authentication schemes, their respective advantages and disadvantages, and probable causes of the 'disconnect' between users and password mechanisms. The evolution of passwords and how deeply they are rooted in our lives is remarkable. The paper addresses the gap between user and industry perspectives on password authentication, the state of the art of password authentication, and how its most investigated topics have changed over time. The author distinguishes password-based authentication into two levels, the 'User Centric Design Level' and the 'Machine Centric Protocol Level', under one framework. The paper concludes with a special section covering the ways in which password-based authentication can be strengthened on the issues currently holding it back.
D2D (Device-to-Device) communication has a major role in communication technology, with resource and power allocation being major attributes of the network. Existing methods for D2D communication have several problems, such as slow convergence and low accuracy. To overcome these, D2D communication using distributed deep learning with a Coot Bird Optimization algorithm is proposed. In this work, D2D communication is combined with the Coot Bird Optimization algorithm to enhance the performance of distributed deep learning. Reducing the interference of the eNB with the use of deep learning can achieve near-optimal throughput. Distributed deep learning trains the devices as a group, and each device works independently, reducing training time. The model provides independent resource allocation with an optimized power value and the lowest bit error rate for D2D communication while sustaining quality of service. The model is trained and tested successfully and is found to work for power allocation with an accuracy of 99.34%, a best fitness of 80%, a worst fitness of 46%, a mean of 6.76, and a standard deviation of 0.55, showing better performance compared to existing works.
Social engineering is an attack aimed at manipulating victims into divulging sensitive information or taking actions that help the adversary bypass the secure perimeter around information resources so that the attack goals can be achieved. Though there are a number of security tools, such as firewalls and intrusion detection systems, used to protect machines from attack, a widely accepted mechanism to prevent victims from being defrauded is lacking. The human element is often the weakest link of an information security chain, especially in a human-centered environment. In this paper, we show that human psychological weaknesses result in the main vulnerabilities that can be exploited by social engineering attacks. We capture two essential levels, internal characteristics of human nature and external circumstantial influences, to explore the root cause of these weaknesses, and we show that internal characteristics of human nature can be converted into weaknesses by external circumstantial influences. We therefore propose the I-E based model of human weakness for social engineering investigation. Based on this model, we analyze the vulnerabilities exploited by different social engineering techniques and outline several defense approaches to address the human weaknesses. This work can help security researchers gain insights into social engineering from a different perspective and, in particular, enhance current and future research on social engineering defense mechanisms.
Classification is the technique of identifying and assigning individual quantities to a group or set. In pattern recognition, the K-Nearest Neighbors (kNN) algorithm is a non-parametric method for classification and regression, widely used in data mining and machine learning because it is simple yet very useful, with distinguished performance. Classification predicts the labels of test data points after training on sample data. Over the past few decades researchers have proposed many classification methods, but kNN remains one of the most popular. The input consists of the k closest examples in the feature space; the neighbors are picked from a set of objects with known properties or values, which can be considered the training dataset. In this paper, we use two normalization techniques to classify the IRIS dataset and measure classification accuracy using cross-validation in R. The two approaches considered are data with Z-score normalization and data with min-max normalization.
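The paper above works in R, but the pipeline it describes, normalize the features and then take a majority vote among the k nearest neighbors, can be sketched compactly in Python. The six-row dataset below is a hypothetical stand-in for IRIS, and the scaler-returning helpers are our design so the query can be scaled with the training statistics:

```python
def min_max_scaler(col):
    """Return a function mapping x into [0, 1] using the column's min/max."""
    lo, hi = min(col), max(col)
    return lambda x: (x - lo) / (hi - lo)

def z_score_scaler(col):
    """Return a function mapping x to (x - mean) / std of the column."""
    mu = sum(col) / len(col)
    sd = (sum((v - mu) ** 2 for v in col) / len(col)) ** 0.5
    return lambda x: (x - mu) / sd

def knn_predict(train_x, train_y, query, k=3):
    """Plain kNN: majority label among the k nearest (Euclidean) points."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(row, query)), y)
        for row, y in zip(train_x, train_y)
    )
    top = [y for _, y in dists[:k]]
    return max(set(top), key=top.count)

# Toy 2-feature stand-in for IRIS: petal length and petal width.
petals = [1.4, 1.3, 4.7, 4.5, 5.1, 5.9]
widths = [0.2, 0.2, 1.4, 1.5, 1.8, 2.1]
labels = ["setosa", "setosa", "versicolor", "versicolor",
          "virginica", "virginica"]

scale_p, scale_w = min_max_scaler(petals), min_max_scaler(widths)
rows = [(scale_p(p), scale_w(w)) for p, w in zip(petals, widths)]
query = (scale_p(1.5), scale_w(0.3))          # scale query with training stats
print(knn_predict(rows, labels, query, k=3))  # prints setosa
```

Swapping `min_max_scaler` for `z_score_scaler` reproduces the paper's other setup; normalization matters because otherwise the feature with the largest raw range dominates the Euclidean distance.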
This paper is topical because of the year-on-year increase in the quantity and diversity of attacks on computer networks, which cause significant losses for companies. The work addresses the following problems: existing methods for locating anomalies and current hazards in networks; consideration of statistical methods as effective methods of anomaly detection; and experimental evaluation of the chosen method's effectiveness. A method of network traffic capture and analysis during passive monitoring of a network segment is considered, and a way of processing numerous network traffic indices for subsequent evaluation of the network's information security level is proposed. Using the presented methods and concepts increases the reliability of a network segment through timely capture of network anomalies that may indicate possible hazards, information that is very useful to the network administrator. To demonstrate the method's effectiveness, several network attacks, whose data are stored in the specialized DARPA dataset, were chosen, and the relevant parameters for every attack type were calculated. In this way, the start and end times of an attack can be obtained by this method with insignificant error for some attack types.
The Internet of Things (IoT) is one of the promising technologies of the future. It offers many attractive features that we rely on nowadays, with less effort and in real time. However, it remains vulnerable to various threats and attacks because of its heterogeneous ecosystem, adaptive protocols, and self-configuration. In this paper, three different 6LoWPAN attacks are implemented in the IoT via Contiki OS to generate the proposed dataset, which reflects the 6LoWPAN features of the IoT. Six scenarios have been implemented for the analyzed attacks: three are free of malicious nodes, and the other three include them. The benign scenarios serve as a benchmark against the malicious ones for comparing, extracting, and exploring the features affected by attackers. These features are used as input criteria to train and test the proposed hybrid Intrusion Detection and Prevention System (IDPS) to detect and prevent 6LoWPAN attacks in the IoT ecosystem. The proposed hybrid IDPS has been trained and tested with improved accuracy on both the KoU-6LoWPAN-IoT and Edge IIoT datasets. In the detection phase, the Artificial Neural Network (ANN) classifier achieved the highest accuracy among the models in both the 2-class and N-class modes. On the proposed dataset, the ANN classifier achieved 95.65% and 99.95% in the 4-class and 2-class modes before accuracy optimization, and 99.84% and 99.97%, respectively, afterwards. On the Edge IIoT dataset, it achieved 95.14% and 99.86% in the 15-class and 2-class modes before optimization, and 97.64% and 99.94%, respectively, afterwards. The decision tree-based models are lightweight due to their lower computational complexity and are therefore appropriate for edge computing deployment, whereas the other ML models are heavyweight, require more computation, and are better deployed in cloud or fog computing in IoT networks.
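The paper's ANN architecture and 6LoWPAN feature set are not reproduced here; a single-neuron perceptron trained on two invented, normalized traffic features (packet rate and control-message rate, both assumptions) is a minimal sketch of the classifier family in its 2-class mode:

```python
def train_perceptron(samples, labels, epochs=25, lr=0.1):
    """Train a single-neuron perceptron, the simplest ANN-style 2-class
    classifier, with the classic mistake-driven update rule."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred                      # -1, 0, or +1
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return lambda x: 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Invented flows: low rates = benign (0), high rates = attack (1).
X = [(0.1, 0.2), (0.2, 0.1), (0.9, 0.8), (0.8, 0.9)]
y = [0, 0, 1, 1]
clf = train_perceptron(X, y)
```

A real deployment would use a multi-layer network over many features, but the per-sample cost of even this tiny model hints at why tree-based alternatives are preferred on constrained edge devices.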
This research work describes an advancement of the standard AODV routing protocol for mobile ad-hoc networks. The proposed mechanism sets up multiple optimal paths using bandwidth and delay criteria and stores them in the network; at the time of a link failure, it switches to the next available path. Information obtained from the RREQ packet is used, and the RREP packet is sent along more than one path, to establish multiple paths. This reduces the overhead of local route discovery at the time of link failure, which in turn decreases end-to-end delay and drop ratio. The main features of the mechanism are its simplicity and improved efficiency. The performance of the AODV routing protocol including the proposed scheme is evaluated through simulations and compared with the HLSMPRA (Hot Link Split Multi-Path Routing Algorithm). Indeed, the scheme reduces the network routing load, end-to-end delay, packet drop ratio, and route errors sent. The simulations have been performed using the OPNET network simulator, discrete event simulation software that simulates not only the sending and receiving of packets but also their forwarding and dropping. The modified algorithm has improved efficiency and greater reliability than the previous algorithm.
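The failover logic described above can be sketched in a few lines. The route records and the exact scoring (bandwidth first, then delay) are illustrative assumptions rather than the paper's precise metric:

```python
def rank_paths(paths):
    """Order candidate routes: higher bandwidth first, lower delay second."""
    return sorted(paths, key=lambda p: (-p["bandwidth"], p["delay"]))

def next_route(paths, failed):
    """Return the best stored path whose links are all still up, avoiding
    a fresh local route discovery after a link failure."""
    for p in rank_paths(paths):
        if not (set(p["links"]) & failed):
            return p
    return None  # all stored paths broken: fall back to route discovery

routes = [
    {"links": ["A-B", "B-D"], "bandwidth": 10, "delay": 4},
    {"links": ["A-C", "C-D"], "bandwidth": 8,  "delay": 3},
]
primary = next_route(routes, failed=set())       # best path via B
backup = next_route(routes, failed={"A-B"})      # switch after A-B fails
```

Because the backup is already stored, the switch costs no RREQ/RREP exchange, which is the source of the reported delay and drop-ratio improvements.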
Thanks to recent technological advancements, low-cost sensors with processing and communication capabilities are now feasible. A Wireless Sensor Network (WSN), for example, is a network in which the nodes are mobile computers that exchange data with one another over wireless connections rather than relying on a central server. These inexpensive sensor nodes are particularly vulnerable to a clone node, or replication, attack because of their limited processing power, memory, battery life, and absence of tamper-resistant hardware. Once an attacker compromises a sensor node, they can create many copies of it elsewhere in the network that share the same ID, giving the attacker complete internal control of the network and allowing them to mimic the genuine nodes' behavior. This is why researchers are so intent on developing better clone attack detection procedures. This research proposes a machine learning based clone node detection (ML-CND) technique to identify clone nodes in wireless networks. The goal is to identify clones effectively enough to prevent cloning attacks from happening in the first place, using a low-cost identity verification process to identify clones both in specific locations and across the network. Using an Optimized Extreme Learning Machine (OELM), with ELM kernels determined through the Horse Herd Metaheuristic Optimization Algorithm (HHO), the technique safeguards the network from node identity replicas, and the most reliable transmission path can be selected for retrieving data from a network node. The simulation results present a performance analysis of several factors, including sensitivity, specificity, recall, and detection rate.
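The OELM/HHO pipeline itself is not reproduced here, but the low-cost identity check that underlies clone detection can be sketched: a node ID observed at two locations farther apart than any node could travel in one reporting interval must have been replicated. The report format, speed bound, and interval are assumptions:

```python
import math
from collections import defaultdict

def detect_clones(reports, max_speed, interval):
    """Flag node IDs observed at two locations farther apart than a node
    could physically travel within one reporting interval."""
    seen = defaultdict(list)
    for node_id, x, y in reports:
        seen[node_id].append((x, y))
    clones = set()
    for node_id, locs in seen.items():
        for i in range(len(locs)):
            for j in range(i + 1, len(locs)):
                if math.dist(locs[i], locs[j]) > max_speed * interval:
                    clones.add(node_id)
    return sorted(clones)

# "n1" claims two positions 700+ units apart; "n2" moved only slightly.
reports = [("n1", 0, 0), ("n1", 500, 500), ("n2", 10, 10), ("n2", 12, 11)]
print(detect_clones(reports, max_speed=5, interval=10))
```

A learned model such as the OELM would replace this fixed threshold with a classifier trained on many such location and behavior features.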
There is no doubt that, even after the development of many other authentication schemes, passwords remain one of the most popular means of authentication. This paper presents a review of password-based authentication, introducing and analyzing different authentication schemes, their respective advantages and disadvantages, and the probable causes of the disconnect between users and password mechanisms. The evolution of passwords, and how deeply they have become rooted in our lives, is remarkable. The paper addresses the gap between the user and industry perspectives on password authentication, the state of the art of password authentication, and how the most investigated topics in password authentication have changed over time. The authors distinguish two levels of password-based authentication, the 'User Centric Design Level' and the 'Machine Centric Protocol Level', under one framework. The paper concludes with a special section covering the ways in which password-based authentication systems can be strengthened on the issues that currently hold them back.
D2D (device-to-device) communication has a major role in communication technology, with resource and power allocation being major attributes of the network. Existing methods for D2D communication suffer from several problems, such as slow convergence and low accuracy. To overcome these, D2D communication using distributed deep learning with a coot bird optimization algorithm has been proposed. In this work, D2D communication is combined with the Coot Bird Optimization algorithm to enhance the performance of distributed deep learning. Reducing eNB interference with the help of deep learning can achieve near-optimal throughput. Distributed deep learning trains the devices as a group, with each device working independently to reduce training time. The model ensures independent resource allocation with an optimized power value and the lowest bit error rate for D2D communication while sustaining quality of service. The model is trained and tested successfully and is found to perform power allocation with an accuracy of 99.34%, a best fitness of 80%, a worst fitness of 46%, a mean of 6.76, and a standard deviation of 0.55, showing better performance than existing works.
These days cloud computing is booming like no other technology. Every organization, whether small, mid-sized, or big, wants to adopt this cutting-edge technology for its business. As cloud technology becomes immensely popular among these businesses, the question arises: which cloud model should you consider for your business? There are four types of cloud models available in the market: public, private, hybrid, and community. This review paper answers the question of which model would be most beneficial for a business. All four models are defined, discussed, and compared along with their benefits and pitfalls, giving a clear idea of which model to adopt for your organization.
Passwords can be used to gain access to specific data, an account, a computer system, or a protected space. A single user may have multiple accounts that are protected by passwords, and research shows that users tend to keep the same or similar passwords across different accounts, with only slight differences. Once a single password becomes known, a number of accounts can be compromised. This paper deals with password security: a close look at what goes into making a password strong and the difficulty involved in breaking one. The following sections discuss related work and demonstrate graphically and mathematically the different aspects of password security, overlooked vulnerabilities, and the widely ignored importance of passwords. The work describes tests carried out to evaluate the resistance of passwords of varying strength against brute force attacks, and discusses overlooked parameters such as entropy and how it ties into password strength. It also examines the password composition rules enforced by different popular websites and presents a system designed to provide an adaptive and effective measure of password strength. The paper contributes toward minimizing the risk posed by those seeking to expose sensitive digital data, providing solutions for making password breaking more difficult as well as convincing users to choose and set hard-to-break passwords.
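The standard entropy estimate the abstract alludes to can be sketched directly: entropy is the password length times log2 of the character-pool size, and worst-case brute-force time is the pool size raised to the length, divided by the attacker's guessing rate. The guessing rate below is an illustrative assumption, not a figure from the paper:

```python
import math
import string

POOLS = [string.ascii_lowercase, string.ascii_uppercase,
         string.digits, string.punctuation]

def pool_size(password):
    """Size of the union of standard character classes the password draws on."""
    return sum(len(p) for p in POOLS if any(c in p for c in password))

def entropy_bits(password):
    """Upper-bound entropy in bits: length * log2(pool size)."""
    return len(password) * math.log2(pool_size(password))

def worst_case_crack_seconds(password, guesses_per_second=1e10):
    """Exhaustive-search time at an assumed offline guessing rate."""
    return pool_size(password) ** len(password) / guesses_per_second

print(entropy_bits("password"))   # lowercase only: 8 * log2(26) ~ 37.6 bits
print(entropy_bits("Passw0rd!"))  # mixed classes: 9 * log2(94) ~ 59 bits
```

Note this is an upper bound: dictionary words and predictable substitutions like "Passw0rd!" are far weaker in practice than their pool-based entropy suggests, which is why adaptive strength meters go beyond this formula.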
Malware detection using machine learning (ML) techniques has gained popularity due to its high accuracy. However, ML models are susceptible to Adversarial Examples (AEs), specifically crafted samples intended to deceive the detectors. This paper presents a novel method for generating evasive AEs by augmenting existing malware with a new section at the end of the PE file, populated with binary data using memetic algorithms. The method hybridizes global and local search techniques to achieve optimized results. The MalConv model, a well-known state-of-the-art deep learning model designed explicitly for detecting malicious PE files, was used to assess the evasion rates. Out of 100 tested samples, 98 successfully evaded the MalConv model. Additionally, the simultaneous evasion of multiple detectors was investigated, with evasion rates of 35% and 44% against KNN and Decision Tree machine learning detectors, respectively, and 26% and 10% against the Kaspersky and ESET commercial detectors. To prove the efficiency of the memetic algorithm in generating evasive adversarial examples, it was compared to the most widely used evolutionary-based attack, the genetic algorithm, and demonstrated significantly superior performance while utilizing fewer generations and a smaller population size.
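The defining move of a memetic algorithm, a genetic outer loop with a local search refining each offspring, can be sketched generically. The toy fitness below stands in for a real evasion score against a detector; the operators, rates, and byte-vector encoding are illustrative assumptions, not the paper's exact configuration:

```python
import random

def memetic_maximize(fitness, length, pop_size=20, generations=30, seed=1):
    """Genetic search (selection + crossover + mutation) hybridized with a
    greedy local search on each offspring: the 'memetic' step."""
    rng = random.Random(seed)

    def local_search(ind):
        # Hill climb: try one random replacement per byte, keep improvements.
        best = list(ind)
        for i in range(length):
            trial = list(best)
            trial[i] = rng.randrange(256)
            if fitness(trial) > fitness(best):
                best = trial
        return best

    pop = [[rng.randrange(256) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, length)
            child = a[:cut] + b[cut:]           # one-point crossover
            if rng.random() < 0.2:              # mutation
                child[rng.randrange(length)] = rng.randrange(256)
            children.append(local_search(child))
        pop = parents + children
    return max(pop, key=fitness)

# Toy stand-in for an evasion score: reward bytes close to 0x41.
score = lambda ind: -sum(abs(b - 0x41) for b in ind)
best = memetic_maximize(score, length=8)
```

The local-search step is what lets the memetic variant match the plain genetic algorithm with fewer generations and a smaller population, at the cost of extra fitness evaluations per offspring.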
Remote access technologies encrypt data to enforce policies and ensure protection, and attackers leverage the same techniques to launch carefully crafted evasion attacks that introduce malware and other unwanted traffic into the internal network. Traditional security controls such as anti-virus software, firewalls, and intrusion detection systems (IDS) decrypt network traffic and employ signature- and heuristic-based approaches for malware inspection. Machine learning (ML) approaches have previously been proposed for specific malware detection and traffic type characterization. However, decryption introduces computational overheads and dilutes the privacy goal of encryption, and the existing ML approaches employ limited features and are not objectively developed for remote access security. This paper presents a novel ML-based approach to encrypted remote access attack detection using a weighted random forest (W-RF) algorithm. Key features are determined using feature importance scores, and class weighting is used to address the imbalanced data distribution common in remote access network traffic, where attacks comprise only a small proportion of the traffic. Results are presented from an evaluation of the approach on benign virtual private network (VPN) and attack network traffic datasets that comprise verified normal hosts and common attacks in real-world network traffic. With recall and precision of 100%, the approach demonstrates effective performance. The k-fold cross-validation and receiver operating characteristic (ROC) mean area under the curve (AUC) results demonstrate that the approach effectively detects attacks in encrypted remote access network traffic, successfully averting attackers and network intrusions.
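The class-weighting idea is simple enough to sketch: give each class a weight of n_samples / (n_classes * class_count), the common "balanced" heuristic, so the rare attack class influences training as much as the abundant benign class. The 95/5 split below is an illustrative assumption, not the paper's dataset:

```python
from collections import Counter

def balanced_class_weights(labels):
    """'Balanced' weights: n_samples / (n_classes * class_count), so a rare
    attack class counts as much as the abundant benign class in training."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {cls: n / (k * c) for cls, c in counts.items()}

# 95% benign VPN flows, 5% attacks: the imbalance typical of remote-access traffic.
y = ["benign"] * 95 + ["attack"] * 5
print(balanced_class_weights(y))  # attack flows weighted ~19x benign flows
```

In a weighted random forest these weights scale each sample's contribution to split impurity and leaf votes, which is what keeps the classifier from trivially predicting "benign" for everything.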
Operating system (OS) security is a key component of computer security. Assessing and improving an OS's strength to resist vulnerabilities and attacks is mandatory given the rate at which new vulnerabilities are discovered and attacks occur. The frequency and number of different kinds of vulnerabilities found in an OS can be considered an index of its information security level. In the present study we assess five widely used OSs: Microsoft Windows (Windows 7, Windows 8, and Windows 10), Apple's Mac, and Linux, for their discovered vulnerabilities and the risk associated with each. Each discovered and reported vulnerability has an Exploitability score assigned in the CVSS [27] of the National Vulnerability Database. We compare the risk from vulnerabilities in each of the five operating systems. The risk indexes used are developed based on a Markov model to evaluate the risk of each vulnerability [11, 21, 22]. The statistical methodology and underlying mathematical approach are described. The analysis includes all the vulnerabilities reported in the National Vulnerability Database [19] up to October 30, 2018, a total of 6838 recorded vulnerabilities. Parametric procedures were conducted initially, but violations of some assumptions were observed, so the authors recognized the need for non-parametric approaches.
According to the risk associated with all the vulnerabilities considered, a statistically significant difference was found among the average risk levels of some operating systems. This indicates that, by this method and given its assumptions and limitations, some operating systems have been more risk-prone than others. The relevant test results revealing a statistically significant difference in the risk levels of different OSs are presented.
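The Markov-based risk indexes are not reproduced here, and the abstract does not name its non-parametric test; the Kruskal-Wallis test is a typical choice for comparing a risk score across several independent groups (such as per-OS vulnerability risk samples), and its H statistic can be sketched from the standard library:

```python
def kruskal_wallis_h(*groups):
    """Kruskal-Wallis H statistic for k independent samples (no tie
    correction): H = 12/(N(N+1)) * sum(R_i^2 / n_i) - 3(N+1),
    where R_i is the rank sum of group i and N the pooled size."""
    pooled = sorted((v, gi) for gi, g in enumerate(groups) for v in g)
    ranks = {}
    i = 0
    while i < len(pooled):           # assign average ranks to tied values
        j = i
        while j < len(pooled) and pooled[j][0] == pooled[i][0]:
            j += 1
        avg = (i + 1 + j) / 2        # average of 1-based ranks i+1 .. j
        for k in range(i, j):
            ranks[k] = avg
        i = j
    n_total = len(pooled)
    rank_sums = [0.0] * len(groups)
    for idx, (_, gi) in enumerate(pooled):
        rank_sums[gi] += ranks[idx]
    return (12 / (n_total * (n_total + 1))
            * sum(r * r / len(g) for r, g in zip(rank_sums, groups))
            - 3 * (n_total + 1))
```

A large H (compared against a chi-squared distribution with k-1 degrees of freedom) indicates a statistically significant difference in average risk level among the operating systems, which is the form of conclusion the study reports.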