ISSN: 2074-9090 (Print)
ISSN: 2074-9104 (Online)
DOI: https://doi.org/10.5815/ijcnis
Website: https://www.mecs-press.org/ijcnis
Published By: MECS Press
Frequency: 6 issues per year
Number(s) Available: 141
IJCNIS is committed to bridging the theory and practice of computer networks and information security. From innovative ideas to specific algorithms and full system implementations, IJCNIS publishes original, peer-reviewed, high-quality articles in the areas of computer networks and information security. IJCNIS is a well-indexed scholarly journal and is indispensable reading and reference for people working at the cutting edge of computer networks, information security, and their applications.
IJCNIS has been abstracted or indexed by several world-class databases: Scopus, SCImago, Google Scholar, Microsoft Academic Search, CrossRef, Baidu Wenku, IndexCopernicus, IET Inspec, EBSCO, VINITI, JournalSeek, ULRICH's Periodicals Directory, WorldCat, Scirus, Academic Journals Database, Stanford University Libraries, Cornell University Library, UniSA Library, CNKI Scholar, ProQuest, J-Gate, ZDB, BASE, OhioLINK, iThenticate, Open Access Articles, Open Science Directory, National Science Library of Chinese Academy of Sciences, The HKU Scholars Hub, etc.
IJCNIS Vol. 18, No. 1, Feb. 2026
REGULAR PAPERS
Malware detection is a significant factor in establishing effective cybersecurity in the face of constantly increasing cyber threats. This research article investigates machine learning (ML) techniques for malware detection, focusing on a Customized K-Nearest Neighbors (C-KNN) classifier and the Firefly Algorithm (FA). The work assesses the effectiveness of C-KNN and of C-KNN with FA (C-KNN/FA) in malware identification using the MalMem-2022 dataset. The novelty of the proposed method lies in the synergistic integration of the C-KNN algorithm with the FA for metaheuristic optimization. Using FA to select the most relevant features enables C-KNN to train on a small, high-quality feature set, thereby improving malware detection performance. We compare the performance of both methods to understand the influence of KNN parameter adjustment and feature selection on malware classification. C-KNN and C-KNN/FA have produced remarkable results in malware identification, reaching an accuracy of 99.98%, which is a highly encouraging accomplishment. In both multiclass and binary classification, C-KNN and C-KNN/FA outperform their alternatives.
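As an illustration of the general idea behind such a pipeline, the following is a minimal sketch of firefly-style feature selection wrapped around a scikit-learn KNN classifier. It assumes a preprocessed feature matrix X and labels y (e.g., from MalMem-2022) and is not the authors' C-KNN/FA implementation; the attractiveness schedule and parameters are illustrative.

```python
# Hypothetical sketch: firefly-style binary feature selection wrapped around KNN.
# X (n_samples x n_features) and y are assumed to already exist as numpy arrays.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)

def brightness(position, X, y, k=5):
    """Fitness of a firefly: CV accuracy of KNN on the selected feature subset."""
    mask = position > 0.5
    if not mask.any():                      # avoid empty feature sets
        return 0.0
    knn = KNeighborsClassifier(n_neighbors=k)
    return cross_val_score(knn, X[:, mask], y, cv=3).mean()

def firefly_feature_selection(X, y, n_fireflies=10, n_iter=20,
                              beta0=1.0, gamma=1.0, alpha=0.2):
    d = X.shape[1]
    pos = rng.random((n_fireflies, d))      # firefly positions in [0, 1]^d
    fit = np.array([brightness(p, X, y) for p in pos])
    for _ in range(n_iter):
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if fit[j] > fit[i]:         # move firefly i toward brighter firefly j
                    r2 = np.sum((pos[i] - pos[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    pos[i] += beta * (pos[j] - pos[i]) + alpha * (rng.random(d) - 0.5)
                    pos[i] = np.clip(pos[i], 0.0, 1.0)
                    fit[i] = brightness(pos[i], X, y)
    return pos[np.argmax(fit)] > 0.5        # boolean feature mask

# Usage: mask = firefly_feature_selection(X, y); train the final KNN on X[:, mask].
```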
Detecting malware on the Internet is essential and unavoidable, given the wide range of online IT services available. Portable Executable (PE) files are the platform most frequently targeted by malware, which must be promptly identified and flagged in real-world environments by establishing a deployable learning system. Previous researchers have applied machine learning to malware datasets and reported strong performance metrics at high computational cost, but were unable to deploy their models in real-world environments. The proposed work achieves a deployable machine learning model using Random Forest (RF), attaining an accuracy of 97.16%, a precision of 95.21%, and an F1 score of 95.24%, and is particularly adept at accurately identifying malware. We have also developed a novel classification model that employs a Support Vector Machine (SVM) to classify preprocessed data into malware and normal instances. Furthermore, the SHAP technique identifies significant features, including SizeOfStackReserve, DllCharacteristics, and MajorImageVersion; SHAP values facilitate an understanding of each feature's contribution to the model's predictions. Applying the SHAP algorithm to the trained SVM model to reduce the features attained an accuracy of 97.16%.
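A minimal sketch of ranking features of a trained SVM with SHAP values is shown below. It assumes X_train, X_test, and y_train are numpy arrays of PE-derived features; the model-agnostic KernelExplainer is one common way to explain non-tree models and is not necessarily the configuration used in the paper.

```python
# Hypothetical sketch: SHAP-based feature ranking for a trained SVM.
# X_train, X_test, y_train are assumed to be preprocessed PE-feature arrays.
import numpy as np
import shap
from sklearn.svm import SVC

svm = SVC(kernel="rbf").fit(X_train, y_train)

# KernelExplainer is model-agnostic; a small background sample keeps it tractable.
background = shap.sample(X_train, 100)
explainer = shap.KernelExplainer(svm.decision_function, background)
shap_values = explainer.shap_values(X_test[:50])     # explain a subset for speed

# Mean absolute SHAP value per feature gives a global importance ranking.
importance = np.abs(shap_values).mean(axis=0)
top = np.argsort(importance)[::-1][:10]
print("Top-10 feature indices by SHAP importance:", top)
```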
Wireless Sensor Networks (WSNs) play a crucial role in various domains such as environmental monitoring, health, and military applications, all of which require secure and efficient communication. A major issue is that routing attacks and data tampering are highly prevalent in such networks: their decentralized architecture and limited computational resources make them susceptible to a wide range of security threats. Existing techniques (WSN-Block, CEMT, TSRP, ORASWSN, POA-DL, AI-WSN, and EOSR) are extensively used for routing but are inefficient at optimizing paths, which drastically increases energy consumption and allows packet loss. In addition, existing blockchain models for WSN security are not scalable and incur high computational overhead. To address these limitations, we propose Secure Blockchain-Based Routing with Narwhal Optimization for WSNs (OpNa-SGCDN). Our approach employs an Optimized Narwhal-Based Metaheuristic to guarantee shortest-path communication and minimum energy consumption in routing. Moreover, we provide a Scalable Permissionless Blockchain Consensus Model (SP-BlockCM) that yields a tamper-proof, decentralized solution with improved scalability. The attack detection function is designed using a Stacked Bi-Tier Convolutional Deep Network (SBT-CDN), which is optimized by the Snow Geese Evolutionary Algorithm (SGEA). Experimental results demonstrate that our method achieves 94.7% energy efficiency, 96.3% detection accuracy, and 95.5% on the packet loss metric, improving both security and performance over the available methods. The proposed framework is thus a comprehensive and scalable solution for secure, energy-efficient WSN communication.
The global shift from 5G to 6G wireless communication networks presents immense challenges in managing resources for ultra-dense, heterogeneous, and latency-sensitive 6G applications such as holographic communications, autonomous systems, and the Internet of Everything (IoE). Traditional resource allocation methods struggle to meet the dynamic and complex demands of 6G, leading to inefficiencies, higher latency, and fairness issues. To address these challenges, we propose a novel framework called Proof-of-Resource enabled 6G Resource Management Using Quaternion-Attentive Cascaded Capsule Networks (Caps-PoR). Our approach integrates Quaternion-Attentive Cascaded Deep Capsule Networks (Q-AtCapsN) to improve the accuracy of predicting resource demands by capturing real-time multi-dimensional dependencies. Additionally, we optimize resource allocation dynamically through an Enhanced Collaborative Learning Algorithm (ECoLA), which supports decentralized decision-making across multiple nodes, significantly reducing latency. The Proof-of-Resource mechanism ensures transparency, fairness, and trust, preventing resource misallocation while ensuring equal access. Performance evaluations show that Caps-PoR outperforms traditional methods in 6G multi-access edge computing (MEC) scenarios, achieving over 98% resource utilization efficiency, a latency reduction exceeding 96%, and a user satisfaction rate of more than 97%. This demonstrates how Caps-PoR effectively enhances efficiency, security, and scalability in next-generation 6G networks, reshaping the future of resource management in decentralized systems.
Mobile Ad-hoc Networks face challenges such as limited scalability and excessive power consumption, which is mostly attributable to battery drain. Node movement increases energy use and delay and shortens node lifetime. As a solution, a resource-efficient City Block K-Means (CB-K Means) clustering method is proposed that prioritizes resource optimization and network lifespan. The approach comprises localization, Cluster Head (CH) selection, and mobile node clustering. Mobile Node (MN) localization is performed using a Crossover and Mutation-based Jaya Optimization Algorithm (CM-JOA), followed by clustering through CB-K Means, where CHs are selected with a linear scaling-based Satin Bowerbird Optimization Algorithm (LS-SBOA). The experimental results indicate a 3.48% improvement in Packet Delivery Ratio (PDR), 7.3% packet loss, a reduced clustering time of 3412 ms, and an enhanced throughput of 889 bps.
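For readers unfamiliar with city-block clustering, the following is a generic sketch of k-means under the city-block (L1) metric, where the coordinate-wise median is the natural centre update. It is an illustration of the clustering step only, not the paper's full CM-JOA/LS-SBOA pipeline; node positions are simulated.

```python
# Minimal sketch of city-block (L1) k-means: assign by Manhattan distance,
# update each centre as the coordinate-wise median of its cluster.
import numpy as np

def city_block_kmeans(points, k, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    centres = points[rng.choice(len(points), k, replace=False)]
    for _ in range(n_iter):
        # L1 distance from every point to every centre
        dists = np.abs(points[:, None, :] - centres[None, :, :]).sum(axis=2)
        labels = dists.argmin(axis=1)
        new_centres = np.array([
            np.median(points[labels == c], axis=0) if np.any(labels == c)
            else centres[c]                          # keep empty clusters in place
            for c in range(k)
        ])
        if np.allclose(new_centres, centres):
            break
        centres = new_centres
    return labels, centres

# Example: cluster 100 simulated 2-D node positions into 5 clusters.
nodes = np.random.default_rng(1).random((100, 2)) * 1000
labels, centres = city_block_kmeans(nodes, k=5)
```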
Precision agriculture relies on wireless sensor networks (WSNs) to support informed decision-making, thereby enhancing crop yields and resource management. A critical challenge in such networks is minimizing the energy consumption of sensor nodes while ensuring reliable data transmission. Sensor nodes are grouped using an optimal multi-objective clustering approach, which also chooses appropriate cluster heads (CH) for effective communication. By combining the exploration power of the Osprey Optimization Algorithm with the exploitation power of the Parrot Optimizer, a hybrid optimization approach improves CH selection. A hybrid deep learning framework, combining a convolutional autoencoder with a dual-key transformer network, is designed to monitor energy utilization and detect constraints affecting consumption. Training and testing performance of this framework is further improved using a metaheuristic based on the cooperative feeding and locomotion behavior of gooseneck barnacles. Experimental evaluation demonstrates superior performance, achieving 99.2% accuracy, 68 kbps throughput, 98% packet delivery ratio, and a network lifetime of 85 ms. With an average delay of 0.23 seconds, energy consumption is decreased to 39 J, demonstrating the effectiveness of the suggested strategy for dependable and sustainable precision agriculture applications.
In this research, we propose an integrated routing protocol termed Bacterial Foraging inspired Mamdani Fuzzy Inference based AODV (BF-MFI-AODV) for Vehicular Ad-hoc Networks (VANETs), which combines Mamdani Fuzzy Inference System (MFIS) and Bacterial Foraging Optimization (BFO) techniques. The protocol aims to address the challenges of dynamic and unpredictable network conditions in VANETs by leveraging fuzzy logic and bio-inspired optimization principles. BF-MFI-AODV enhances route discovery, maintenance, and optimization mechanisms, resulting in improved adaptability, reliability, and efficiency of communication. Through extensive simulations and real-world experiments, the performance of BF-MFI-AODV is evaluated in terms of packet delivery ratio, end-to-end delay, routing overhead, and network lifetime. Our results demonstrate the effectiveness of BF-MFI-AODV in enhancing the overall performance of VANETs compared to existing routing protocols. The proposed protocol shows promise in providing robust and efficient communication solutions for dynamic vehicular environments, thus contributing to the advancement of intelligent transportation systems.
The Internet of Things and cloud computing are expanding at a very rapid rate, which poses a great challenge to maintaining data security and integrity, particularly during forensic investigations. Conventional logging mechanisms are prone to manipulation, unreliable, and make digital evidence difficult to verify. In response, a blockchain-based system is proposed to secure forensic data generated by IoT devices and stored in cloud environments. Decentralized storage is paired with smart contracts to form an immutable record of cloud communications, ensuring that evidence remains unaltered and verifiable. The system also includes secure off-chain storage that enables fast recording and retrieval of large volumes of forensic records. Extensive experimentation demonstrates that the system reduces verification times to about 28 to 39 milliseconds, faster than methods currently in place, while maintaining high data integrity. The framework enhances transaction throughput and provides a scalable solution for preserving forensic evidence. It offers a feasible and reliable platform to improve the security, visibility, and reliability of forensic data within intricate IoT and cloud environments, helping law enforcement agencies and forensic investigators conduct effective and credible investigations.
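The core idea of anchoring forensic records so that tampering is detectable can be illustrated with a toy hash-chained log. This is a stand-in sketch only; it does not reproduce the paper's smart-contract or off-chain design, and the event fields are hypothetical.

```python
# Toy sketch of a tamper-evident, hash-chained forensic log (the core idea behind
# anchoring IoT/cloud evidence on a blockchain); not the paper's full design.
import hashlib, json, time

def _hash(body):
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def append(chain, event):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    entry = {"timestamp": time.time(), "event": event, "prev_hash": prev_hash}
    entry["hash"] = _hash({k: entry[k] for k in ("timestamp", "event", "prev_hash")})
    chain.append(entry)

def verify(chain):
    """Return True if no entry has been altered and the links are intact."""
    prev = "0" * 64
    for e in chain:
        body = {k: e[k] for k in ("timestamp", "event", "prev_hash")}
        if e["prev_hash"] != prev or e["hash"] != _hash(body):
            return False
        prev = e["hash"]
    return True

log = []
append(log, {"device": "sensor-17", "action": "file_upload", "sha256": "placeholder"})
append(log, {"device": "sensor-17", "action": "config_change"})
print(verify(log))   # True; altering any stored field makes verification fail
```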
Advanced intrusion detection systems are required due to the quick uptake of cloud computing and the growing complexity of cyber threats, especially Denial of Service and Distributed Denial of Service attacks. Deep learning architectures are becoming more popular because traditional IDS techniques frequently falter in dynamic, large-scale settings. Using datasets including CICIDS2017, NSL-KDD, and UNSW-NB15, this paper assesses the effectiveness of well-known DL architectures for intrusion detection, including Convolutional Neural Networks, Recurrent Neural Networks, Long Short-Term Memory networks, and others. Key performance indicators such as accuracy, precision, and false positive rate are examined to compare the efficacy of these models. The findings show that some designs, such as ResNet and the Self-Organizing Map, perform well in structured environments but poorly on complicated datasets such as KDDTest-21. Another important gap highlighting the need for further research is that most models do not automatically adapt to unexpected threats. By evaluating the efficacy of DL-based IDS solutions, this work aids the creation of intelligent, scalable systems for changing network environments.
We propose a meta-learning-enhanced BiLSTM autoencoder architecture for robust one-bit error correction coding, designed to dynamically adapt to diverse channel conditions without requiring explicit retraining. The proposed method fuses a channel-aware meta-discriminator into an adversarial training framework, allowing the system to generalize across Rician, Rayleigh, and AWGN channels by adapting its decision boundaries based on temporal signal statistics. The meta-discriminator, realized as a lightweight Transformer-encoder with cross-attention, computes channel-specific embeddings from the received signal, which modulate the adversarial loss and guide the reconstruction process. Furthermore, the BiLSTM encoder-decoder utilizes bidirectional layers with residual connections to capture long-range dependencies, while a learnable one-bit quantizer with adaptive thresholds ensures efficient signal representation. The training objective combines reconstruction loss, adversarial loss, and a meta-regularization term, which stabilizes updates and refines adaptation. The meta-discriminator performs real-time parameter adjustments using a single gradient step during inference to make the system resilient to unseen channel impairments. The experiments demonstrate significant improvements in BER and MSE across various fading channels and data sizes. The Rician channel exhibits the lowest values of BER and MSE of 0.032 and 0.031, respectively, when considering a data size of 2500 symbols. The proposed work shows its dual capability to learn error-correcting codes through BiLSTMs, apart from exploiting meta-learning for channel adaptation.
The Internet of Things (IoT) is one of the promising technologies of the future. It offers many attractive features that we now depend on, with less effort and faster, real-time operation. However, it remains vulnerable to various threats and attacks due to its heterogeneous ecosystem, adaptive protocols, and self-configuration. In this paper, three different 6LoWPAN attacks are implemented in the IoT via Contiki OS to generate a proposed dataset that reflects 6LoWPAN features in the IoT. Six scenarios have been implemented for the analyzed attacks: three are free of malicious nodes, and the other three include malicious nodes. The benign scenarios serve as a benchmark for the malicious ones, allowing comparison, extraction, and exploration of the features affected by attackers. These features are used as input to train and test our proposed hybrid Intrusion Detection and Prevention System (IDPS), which detects and prevents 6LoWPAN attacks in the IoT ecosystem. The proposed hybrid IDPS has been trained and tested, with improved accuracy, on both the KoU-6LoWPAN-IoT and Edge IIoT datasets. In the detection phase, the Artificial Neural Network (ANN) classifier achieved the highest accuracy among the models in both the 2-class and N-class settings. On our proposed dataset, the ANN classifier achieved 95.65% and 99.95% in the 4-class and 2-class modes before accuracy optimization, and 99.84% and 99.97% afterwards. On the Edge IIoT dataset, it achieved 95.14% and 99.86% in the 15-class and 2-class modes before optimization, and 97.64% and 99.94% afterwards. The decision tree-based models are lightweight due to their lower computational complexity and are therefore appropriate for edge computing deployment, whereas the other ML models are heavyweight, require more computation, and are better deployed in cloud or fog computing in IoT networks.
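A minimal sketch of a binary (benign vs. attack) ANN detection stage is given below, assuming feature extraction from the 6LoWPAN traces has already produced arrays X and y. The layer sizes and scaling are illustrative choices, not the paper's configuration.

```python
# Hypothetical sketch of a binary ANN detection stage for IDS-style features.
# X, y are assumed to be features and 0/1 labels extracted from traffic traces.
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

scaler = StandardScaler().fit(X_train)
ann = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300, random_state=0)
ann.fit(scaler.transform(X_train), y_train)

print("Accuracy:", accuracy_score(y_test, ann.predict(scaler.transform(X_test))))
```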
These days cloud computing is booming like no other technology. Every organization, whether small, mid-sized, or large, wants to adopt this cutting-edge technology for its business. As cloud technology becomes immensely popular among these businesses, the question arises: which cloud model should a business consider? There are four types of cloud models available in the market: Public, Private, Hybrid, and Community. This review paper addresses which model would be most beneficial for a given business. All four models are defined, discussed, and compared along with their benefits and pitfalls, giving a clear idea of which model an organization should adopt.
Thanks to recent technological advancements, low-cost sensors with processing and communication capabilities are now feasible. As an example, a Wireless Sensor Network (WSN) is a network in which the nodes are mobile computers that exchange data with one another over wireless connections rather than relying on a central server. These inexpensive sensor nodes are particularly vulnerable to clone node or replication attacks because of their limited processing power, memory, battery life, and absence of tamper-resistant hardware. Once an attacker compromises a sensor node, they can create many copies of it elsewhere in the network that share the same ID, giving the attacker complete internal control of the network and allowing them to mimic the genuine nodes' behavior. This is why researchers are so intent on developing better clone attack detection procedures. This research proposes a machine learning based clone node detection (ML-CND) technique to identify clone nodes in wireless networks. The goal is to identify clones effectively enough to prevent cloning attacks from happening in the first place, using a low-cost identity verification process to identify clones both in specific locations and across the network. Using an Optimized Extreme Learning Machine (OELM), with ELM kernels ideally determined through the Horse Herd Metaheuristic Optimization Algorithm (HHO), the technique safeguards the network from node identity replicas, and the most reliable transmission path may be selected for retrieving data from a network node. The simulation results demonstrate the performance analysis of several factors, including sensitivity, specificity, recall, and detection.
To solve crimes committed using digital materials, the materials have to be copied. Evidence must be copied properly, using valid methods that preserve its legal admissibility; otherwise, the material cannot be used as evidence. Acquiring images of materials from the crime scene with proper hardware and software tools makes the obtained data legal evidence. Choosing the proper format and verification function during image acquisition affects the subsequent steps of the investigation. For this purpose, investigators use hardware and software tools. Hardware tools assure the integrity and authenticity of the image through write-protection, while software tools either rely on write-protect hardware or acquire disks that are directly connected to a computer. Image acquisition through write-protect hardware tools gives the copy the status of a forensic copy; acquisition through software tools alone does not. During the acquisition process, different formats such as E01, AFF, and DD can be chosen. To ensure the integrity and authenticity of the copy, hash values have to be calculated using verification functions such as the SHA and MD series. In this study, image acquisition processes using hardware and software tools are demonstrated. Hardware acquisition of a 200 GB hard disk is performed with Tableau TD3 and CRU Ditto. Images of the same storage are also taken through Tableau, CRU, and RTX USB bridges and with FTK Imager and Forensic Imager; comparative performance assessment results are then presented.
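The verification step described here can be illustrated with a short sketch that computes MD5 and SHA-256 digests of an acquired image so the copy can later be matched against the original. The file path is a placeholder, not a path from the study.

```python
# Minimal sketch of image verification: hash an acquired disk image (e.g. a raw
# DD file) so its integrity can be re-checked later. The path is a placeholder.
import hashlib

def hash_image(path, chunk_size=1 << 20):
    md5, sha256 = hashlib.md5(), hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):   # stream in 1 MiB chunks
            md5.update(chunk)
            sha256.update(chunk)
    return md5.hexdigest(), sha256.hexdigest()

md5_hex, sha256_hex = hash_image("evidence/disk_image.dd")
print("MD5   :", md5_hex)
print("SHA256:", sha256_hex)
```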
D2D (Device-to-Device) communication plays a major role in communication technology, with resource and power allocation being major attributes of the network. Existing methods for D2D communication have several problems, such as slow convergence and low accuracy. To overcome these, D2D communication using distributed deep learning with a Coot Bird Optimization algorithm is proposed. In this work, D2D communication is combined with the Coot Bird Optimization algorithm to enhance the performance of distributed deep learning. Reducing eNB interference with the help of deep learning can achieve near-optimal throughput. Distributed deep learning trains the devices as a group while each works independently, reducing the devices' training time. The model provides independent resource allocation with optimized power values and the lowest Bit Error Rate for D2D communication while sustaining quality of service. The model was trained and tested successfully and was found to perform power allocation with an accuracy of 99.34%, a best fitness of 80%, a worst fitness of 46%, a mean of 6.76, and a standard deviation of 0.55, showing better performance than existing works.
There is no doubt that, even after the development of many other authentication schemes, passwords remain one of the most popular means of authentication. This review of password-based authentication introduces and analyzes different authentication schemes, their respective advantages and disadvantages, and the probable causes of the disconnect between users and password mechanisms. The evolution of passwords and how deeply they have become rooted in our lives is remarkable. The paper addresses the gap between the user and industry perspectives on password authentication, the state of the art of password authentication, and how the most investigated topics in password authentication have changed over time. The authors divide password-based authentication into two levels, the User Centric Design Level and the Machine Centric Protocol Level, under one framework. The paper concludes with a special section covering the ways in which password-based authentication systems can be strengthened with respect to the issues currently holding them back.
Social engineering is an attack aimed at manipulating a victim into divulging sensitive information or taking actions that help the adversary bypass the secure perimeter around information-related resources so that the attacker's goals can be achieved. Though there are a number of security tools, such as firewalls and intrusion detection systems, used to protect machines from being attacked, a widely accepted mechanism to protect victims from fraud is lacking. The human element is often the weakest link in an information security chain, especially in a human-centered environment. In this paper, we show that human psychological weaknesses are the main vulnerabilities exploited by social engineering attacks. We capture two essential levels, the internal characteristics of human nature and external circumstantial influences, to explore the root cause of these weaknesses, and we show that internal characteristics of human nature can be converted into weaknesses by external circumstantial influences. We therefore propose the I-E based model of human weakness for social engineering investigation. Based on this model, we analyze the vulnerabilities exploited by different social engineering techniques and summarize several defense approaches for addressing the human weaknesses. This work can help security researchers gain insights into social engineering from a different perspective and, in particular, enhance current and future research on social engineering defense mechanisms.
Passwords can be used to gain access to specific data, an account, a computer system, or a protected space. A single user may have multiple accounts that are protected by passwords, and research shows that users tend to keep the same or similar passwords for different accounts, with only small differences. Once a single password becomes known, a number of accounts can be compromised. This paper deals with password security: a close look at what goes into making a password strong and the difficulty involved in breaking it. The following sections discuss related work and demonstrate, graphically and mathematically, different aspects of password security, overlooked vulnerabilities, and the widely ignored importance of passwords. This work describes tests carried out to evaluate the resistance of passwords of varying strength against brute force attacks. It also discusses overlooked parameters such as entropy and how entropy ties into password strength, examines the password composition enforcement of popular websites, and then presents a system designed to provide an adaptive and effective measure of password strength. This paper contributes toward minimizing the risk posed by those seeking to expose sensitive digital data by providing solutions that make password breaking more difficult and by convincing users to choose and set hard-to-break passwords.
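The entropy parameter mentioned here is commonly estimated as H = L * log2(N), where L is the password length and N the size of the alphabet implied by the character classes used. The sketch below computes this estimate; the pool sizes and example passwords are illustrative, not the paper's test data.

```python
# Sketch of the usual brute-force entropy estimate H = L * log2(N), where N is
# the size of the alphabet implied by the character classes a password uses.
import math, string

def entropy_bits(password):
    pools = [(string.ascii_lowercase, 26), (string.ascii_uppercase, 26),
             (string.digits, 10), (string.punctuation, len(string.punctuation))]
    alphabet = sum(size for chars, size in pools if any(c in chars for c in password))
    return len(password) * math.log2(alphabet) if alphabet else 0.0

for pw in ["password", "P@ssw0rd!", "correcthorsebatterystaple"]:
    print(f"{pw!r}: {entropy_bits(pw):.1f} bits")
```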
Classification is the technique of identifying and assigning individual quantities to a group or a set. In pattern recognition, the K-Nearest Neighbors (KNN) algorithm is a non-parametric method for classification and regression. KNN has been widely used in data mining and machine learning because it is simple yet very useful, with distinguished performance. Classification is used to predict the labels of test data points after training on sample data. Over the past few decades, researchers have proposed many classification methods, but KNN remains one of the most popular. The input consists of the k closest examples in the feature space; the neighbors are drawn from a set of objects with known properties or values, which can be considered the training dataset. In this paper, we use two normalization techniques to classify the IRIS dataset and measure classification accuracy with cross-validation in R: data with Z-score normalization and data with min-max normalization.
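The paper's experiments are in R; the following is an equivalent sketch in Python comparing the two normalization schemes for KNN on the Iris data with 10-fold cross-validation. The fold count and k value are illustrative.

```python
# Equivalent sketch in Python (the paper uses R): compare Z-score and min-max
# normalisation for KNN on the Iris dataset with 10-fold cross-validation.
from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler, MinMaxScaler
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
for name, scaler in [("Z-score", StandardScaler()), ("Min-Max", MinMaxScaler())]:
    pipe = make_pipeline(scaler, KNeighborsClassifier(n_neighbors=5))
    scores = cross_val_score(pipe, X, y, cv=10)
    print(f"{name} normalisation: mean accuracy = {scores.mean():.3f}")
```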
This paper is topical because of the year-on-year increase in the quantity and diversity of attacks on computer networks, which causes significant losses for companies. The work addresses the following problems: existing methods for locating anomalies and current threats in networks, consideration of statistical methods as effective methods of anomaly detection, and experimental assessment of the chosen method's effectiveness. A method of network traffic capture and analysis during passive monitoring of a network segment is considered, and a way of processing numerous network traffic indexes for subsequent evaluation of the network's information security level is proposed. Using the presented methods and concepts increases the reliability of a network segment through timely capture of network anomalies that may indicate possible threats, information that is very useful to the network administrator. To demonstrate the method's effectiveness, several network attacks recorded in the specialized DARPA dataset were chosen, and the relevant parameters for every attack type were calculated. In this way, the start and end times of an attack can be obtained by this method with insignificant error in some cases.
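The general statistical approach can be illustrated by flagging monitoring intervals whose traffic indexes deviate from a baseline by more than a few standard deviations. The index, baseline length, and threshold below are illustrative assumptions, not the paper's exact parameters, and the series is simulated.

```python
# Illustrative sketch of statistical anomaly detection on per-interval traffic
# indexes (e.g. packets/s): flag intervals that deviate from a baseline window
# by more than `threshold` standard deviations.
import numpy as np

def detect_anomalies(series, baseline_len=60, threshold=3.0):
    baseline = series[:baseline_len]
    mu, sigma = baseline.mean(), baseline.std() + 1e-9
    z = np.abs((series - mu) / sigma)
    return np.where(z > threshold)[0]          # indices of anomalous intervals

# Simulated packets-per-second series with a flood at the end.
packets_per_sec = np.concatenate([np.random.default_rng(0).normal(500, 30, 120),
                                  np.full(10, 5000)])
print("Anomalous intervals:", detect_anomalies(packets_per_sec))
```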
This research work describes an enhancement of the standard AODV routing protocol for mobile ad-hoc networks. Our mechanism sets up multiple optimal paths, using bandwidth and delay as criteria, and stores these paths in the network; at the time of a link failure, it switches to the next available path. We use the information obtained from the RREQ packet and send the RREP packet along more than one path to set up multiple paths. This reduces the overhead of local route discovery at the time of link failure, and as a result the end-to-end delay and drop ratio decrease. The main features of our mechanism are its simplicity and improved efficiency. We evaluate through simulations the performance of the AODV routing protocol with our scheme and compare it with the HLSMPRA (Hot Link Split Multi-Path Routing Algorithm) algorithm. Our scheme reduces the routing load of the network, end-to-end delay, packet drop ratio, and route errors sent. The simulations were performed using the OPNET network simulator, discrete event simulation software that models not only the sending and receiving of packets but also their forwarding and dropping. The modified algorithm has improved efficiency and greater reliability than the previous algorithm.
Remote access technologies encrypt data to enforce policies and ensure protection. Attackers leverage such techniques to launch carefully crafted evasion attacks introducing malware and other unwanted traffic to the internal network. Traditional security controls such as anti-virus software, firewall, and intrusion detection systems (IDS) decrypt network traffic and employ signature and heuristic-based approaches for malware inspection. In the past, machine learning (ML) approaches have been proposed for specific malware detection and traffic type characterization. However, decryption introduces computational overheads and dilutes the privacy goal of encryption. The ML approaches employ limited features and are not objectively developed for remote access security. This paper presents a novel ML-based approach to encrypted remote access attack detection using a weighted random forest (W-RF) algorithm. Key features are determined using feature importance scores. Class weighing is used to address the imbalanced data distribution problem common in remote access network traffic where attacks comprise only a small proportion of network traffic. Results obtained during the evaluation of the approach on benign virtual private network (VPN) and attack network traffic datasets that comprise verified normal hosts and common attacks in real-world network traffic are presented. With recall and precision of 100%, the approach demonstrates effective performance. The results for k-fold cross-validation and receiver operating characteristic (ROC) mean area under the curve (AUC) demonstrate that the approach effectively detects attacks in encrypted remote access network traffic, successfully averting attackers and network intrusions.
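The two elements emphasized here, class weighting for imbalanced traffic and feature importance scores, can be sketched with scikit-learn's random forest. X and y are assumed to be extracted flow features and binary labels; the hyperparameters are illustrative, not the paper's W-RF configuration.

```python
# Hypothetical sketch: a class-weighted random forest for imbalanced VPN/attack
# traffic plus feature-importance ranking. X, y are assumed flow features and
# binary labels (1 = attack).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

wrf = RandomForestClassifier(
    n_estimators=200,
    class_weight="balanced",     # up-weights the rare attack class
    random_state=0,
)
print("CV recall:", cross_val_score(wrf, X, y, cv=5, scoring="recall").mean())

wrf.fit(X, y)
top = np.argsort(wrf.feature_importances_)[::-1][:10]
print("Top-10 feature indices by importance:", top)
```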
Malware detection using Machine Learning techniques has gained popularity due to their high accuracy. However, ML models are susceptible to Adversarial Examples, specifically crafted samples intended to deceive the detectors. This paper presents a novel method for generating evasive AEs by augmenting existing malware with a new section at the end of the PE file, populated with binary data using memetic algorithms. Our method hybridizes global search and local search techniques to achieve optimized results. The Malconv Model, a well-known state-of-the-art deep learning model designed explicitly for detecting malicious PE files, was used to assess the evasion rates. Out of 100 tested samples, 98 successfully evaded the MalConv model. Additionally, we investigated the simultaneous evasion of multiple detectors, observing evasion rates of 35% and 44% against KNN and Decision Tree machine learning detectors, respectively. Furthermore, evasion rates of 26% and 10% were achieved against Kaspersky and ESET commercial detectors. In order to prove the efficiency of our memetic algorithm in generating evasive adversarial examples, we compared it to the most used evolutionary-based attack: the genetic algorithm. Our method demonstrated significantly superior performance while utilizing fewer generations and a smaller population size.
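The memetic idea (a global evolutionary search refined by local search) can be sketched in a toy form that evolves a byte payload to be appended as a new section. The `malware_score` function below is a hypothetical stand-in for querying a detector such as MalConv, and the dummy objective exists only so the sketch runs; this is not the authors' attack implementation.

```python
# Toy sketch of the memetic idea: evolve an appended byte payload against a
# detector score. `malware_score` is a hypothetical stand-in for a real model
# query (lower score = more evasive); the objective here is a dummy.
import random

PAYLOAD_LEN = 1024
rng = random.Random(0)

def malware_score(pe_bytes, payload):            # placeholder detector query
    return sum(payload) / (255 * len(payload))   # dummy objective for illustration

def local_search(pe, payload, tries=20):
    """Memetic refinement: random single-byte tweaks kept only if they help."""
    best, best_s = payload[:], malware_score(pe, payload)
    for _ in range(tries):
        cand = best[:]
        cand[rng.randrange(len(cand))] = rng.randrange(256)
        s = malware_score(pe, cand)
        if s < best_s:
            best, best_s = cand, s
    return best

def memetic_attack(pe, pop_size=10, generations=15):
    pop = [[rng.randrange(256) for _ in range(PAYLOAD_LEN)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: malware_score(pe, p))
        parents = pop[: pop_size // 2]
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(PAYLOAD_LEN)
            children.append(local_search(pe, a[:cut] + b[cut:]))  # crossover + local search
        pop = parents + children
    return min(pop, key=lambda p: malware_score(pe, p))

# In the real attack the winning payload would be appended to the PE as a new section.
best_payload = memetic_attack(pe=b"")
```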
Operating system (OS) security is a key component of computer security. Assessing and improving an OS's strength to resist vulnerabilities and attacks is a mandatory requirement, given the rate at which new vulnerabilities are discovered and attacks occur. The frequency and number of different kinds of vulnerabilities found in an OS can be considered an index of its information security level. In the present study we assess five of the most widely used OSs, Microsoft Windows (Windows 7, Windows 8, and Windows 10), Apple's macOS, and Linux, for their discovered vulnerabilities and the risk associated with each. Each discovered and reported vulnerability has an Exploitability score assigned in the CVSS [27] of the National Vulnerability Database. We compare the risk from vulnerabilities in each of the five operating systems using risk indexes developed from a Markov model that evaluates the risk of each vulnerability [11, 21, 22]. The statistical methodology and underlying mathematical approach are described. The analysis includes all vulnerabilities reported in the National Vulnerability Database [19] up to October 30, 2018, a total of 6838 recorded vulnerabilities. Parametric procedures were conducted first, but violations of some of their assumptions were observed, so the authors turned to non-parametric approaches.
Based on the risk associated with all the vulnerabilities considered, a statistically significant difference was found among the average risk levels of some operating systems, indicating that, under the stated assumptions and limitations, some operating systems have been more risk-prone than others. The relevant test results revealing a statistically significant difference in the risk levels of different OSs are presented.
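The kind of non-parametric comparison described can be illustrated with a Kruskal-Wallis test across per-vulnerability risk scores grouped by operating system. The arrays below are synthetic placeholders, not the study's NVD-derived data, and the OS groups are illustrative.

```python
# Sketch of a non-parametric comparison of per-vulnerability risk scores across
# operating systems using the Kruskal-Wallis H test (synthetic placeholder data).
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(0)
risk = {
    "Windows 7":  rng.gamma(2.0, 2.0, 1500),
    "Windows 10": rng.gamma(2.2, 2.0, 1500),
    "macOS":      rng.gamma(1.8, 2.0, 1200),
    "Linux":      rng.gamma(1.9, 2.0, 1300),
}
stat, p = kruskal(*risk.values())
print(f"H = {stat:.2f}, p = {p:.4g}")   # a small p suggests the risk distributions differ
```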