International Journal of Computer Network and Information Security (IJCNIS)

IJCNIS Vol. 17, No. 6, Dec. 2025

Cover page and Table of Contents: PDF (size: 155KB)

Table Of Contents

REGULAR PAPERS

FED-SCADA: A Trustworthy and Energy-efficient Federated IDS for Smart Grid Edge Gateways Using SNNs and Differential Evolution

By Mohammad Othman Nassar, Feras Fares AL-Mashagba

DOI: https://doi.org/10.5815/ijcnis.2025.06.01, Pub. Date: 8 Dec. 2025

The increasing digitalization of smart grid systems has introduced new cybersecurity challenges, particularly at the supervisory control and data acquisition (SCADA) edge gateways where resource constraints, latency sensitivity, and privacy concerns limit the applicability of centralized security solutions. This paper presents FED-SCADA, a novel federated intrusion detection system (IDS) that integrates Spiking Neural Networks (SNNs) for energy-efficient inference and Differential Evolution (DE) for optimizing model convergence in decentralized, non-independent and identically distributed (non-IID) environments. The proposed architecture enables real-time, privacy-preserving intrusion detection across distributed SCADA subsystems in a smart grid context. FED-SCADA is evaluated using three public IIoT/SCADA datasets: TON_IoT, Edge-IIoTset, and SWaT. FED-SCADA achieves a detection accuracy of 96.4%, inference latency of 28 ms, and energy consumption of 1.1 mJ per sample, demonstrating strong performance under real-time and energy-constrained conditions and outperforming baseline federated learning models such as FedAvg-CNN and FedSVM. A detailed methodology flowchart and pseudocode are included to support reproducibility. To the best of our knowledge, this is the first study to combine neuromorphic computing, evolutionary optimization, and federated learning for trustworthy and efficient smart grid cybersecurity.
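The DE-driven convergence step described above can be pictured as the standard mutation/crossover/selection loop over candidate weight vectors. A minimal sketch only: the population encoding, the fitness function, and the F/CR defaults below are assumptions for illustration, not the authors' FED-SCADA implementation.

```python
import random

def de_step(population, fitness, f=0.5, cr=0.9):
    """One Differential Evolution generation over candidate weight vectors.

    population: list of weight vectors (lists of floats)
    fitness: callable scoring a vector (lower is better, e.g. validation loss)
    f: differential weight; cr: crossover rate (illustrative defaults)
    """
    new_pop = []
    for i, target in enumerate(population):
        # Pick three distinct vectors different from the target
        a, b, c = random.sample([p for j, p in enumerate(population) if j != i], 3)
        # Mutation: donor = a + F * (b - c)
        donor = [ai + f * (bi - ci) for ai, bi, ci in zip(a, b, c)]
        # Binomial crossover; j_rand guarantees at least one donor gene
        j_rand = random.randrange(len(target))
        trial = [d if (random.random() < cr or j == j_rand) else t
                 for j, (t, d) in enumerate(zip(target, donor))]
        # Greedy selection: keep whichever scores better
        new_pop.append(trial if fitness(trial) <= fitness(target) else target)
    return new_pop
```

Because selection is greedy, the best fitness in the population never worsens from one generation to the next.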

Reliable Communication in Delay Tolerant Network by Utilizing the Concept of Acknowledgement based Hop by Hop Retransmission

By S. Dheenathayalan

DOI: https://doi.org/10.5815/ijcnis.2025.06.02, Pub. Date: 8 Dec. 2025

A delay-tolerant network is one that may temporarily hold packets at intermediate nodes while waiting for an end-to-end route to be rebuilt or restored. Because establishing reliable routing in such a network is difficult, we use hop-by-hop retransmission acknowledgement based on the Wavelength Routing Algorithm. It calculates the blocking probability for each connection request and decides whether the connection proceeds. This helps to reduce power consumption and conserve available resources. During retransmission, intermediate nodes may hold duplicate messages; we use Cuckoo search to eliminate these duplicates. The proposed mechanism is implemented in the ONE simulator, which demonstrates the performance of our reliable communication scheme in comparison with existing algorithms.
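The per-request admission decision described above can be illustrated with the classic Erlang B model, a common way to estimate blocking probability in wavelength-routed networks. The abstract does not give the authors' exact formula, and the admission threshold below is a hypothetical parameter.

```python
def erlang_b(traffic_erlangs, channels):
    """Erlang B blocking probability via the numerically stable recursion
    B(0) = 1;  B(k) = a*B(k-1) / (k + a*B(k-1))."""
    b = 1.0
    for k in range(1, channels + 1):
        b = traffic_erlangs * b / (k + traffic_erlangs * b)
    return b

def admit(traffic_erlangs, channels, threshold=0.02):
    # Admit the connection only if predicted blocking stays below the threshold
    return erlang_b(traffic_erlangs, channels) < threshold
```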

Energy-Efficient Traffic Management Scheme for Wireless Sensor Network

By Shailaja S. Halli, Poornima G. Patil

DOI: https://doi.org/10.5815/ijcnis.2025.06.03, Pub. Date: 8 Dec. 2025

Densely distributed nodes and high data flow rates close to sinks can cause serious problems for WSNs, especially concerning energy consumption and network complexity. Although traffic management at nodes and channels is essential for energy efficiency, little research has addressed these problems. This paper presents a novel method that uses a Water Wave Game Theory algorithm to identify and characterize traffic areas that use less energy. Based on different network parameters, the algorithm calculates a fitness function that estimates player stability. Mobile sinks and nearby nodes are notified when the fitness level is low, anticipating energy-efficient traffic patterns and implicitly establishing an alarm threshold. The LAFLC algorithm is then introduced to tackle complex energy-efficient traffic scenarios. This algorithm optimizes system decisions about mobile data collectors, routing, and node mobility by dynamically learning and adapting to the characteristics of energy-efficient traffic. As a result, it eliminates the need for data rerouting and the replacement of multiple traffic nodes when mobile data collectors are in motion. The proposed approach demonstrates a superior Packet Delivery Ratio (PDR) of 99.95%, throughput of 3500 bps, energy consumption of 0.39 J, reliability of 98.8%, and energy efficiency of 99.9% compared to existing techniques.

CLEFIA-based Lightweight Encryption for Resource-Constrained Systems: Design, Algorithms and Security Analysis

By Sergiy Gnatyuk, Berik Akhmetov, Dauriya Zhaxigulova, Yuliia Polishchuk

DOI: https://doi.org/10.5815/ijcnis.2025.06.04, Pub. Date: 8 Dec. 2025

Emerging classes of distributed and embedded systems increasingly require cryptographic mechanisms that provide confidentiality, integrity, and authenticity while operating under strict limitations on computation, energy consumption, memory capacity, and communication bandwidth. Conventional symmetric and asymmetric cryptographic algorithms often fail to meet these stringent requirements. Lightweight cryptography (LWC) offers a promising solution by enabling secure real-time data transmission, command authentication, telemetry encryption, and protection of sensitive information in embedded systems. This paper presents a multicriteria analysis of widely adopted LWC algorithms, identifying the CLEFIA block cipher standardized in ISO/IEC 29192-2 as a balanced choice between security and performance. An enhanced LWC method based on this cipher is proposed, aiming to improve encryption throughput without compromising cryptographic robustness. Experimental results demonstrate that the proposed method achieves an encryption speedup over the baseline CLEFIA implementation. Furthermore, the improved algorithm successfully passes statistical randomness tests and shows increased resistance to linear and differential cryptanalysis. Notably, the cipher begins to exhibit random substitution characteristics from the third round, reinforcing its suitability for secure deployment in resource-limited environments. The results obtained in this study will be valuable for ensuring confidentiality, integrity, and authenticity in low-power and resource-constrained systems, as well as in modern information platforms where low latency is critical.
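The structure CLEFIA builds on is a type-2 generalized Feistel network over four 32-bit branches. Below is a toy sketch of that structure only: the round function is a stand-in (no S-boxes or diffusion matrices), so this is not the standardized cipher and must not be used for real encryption.

```python
MASK = 0xFFFFFFFF

def toy_f(x, rk):
    """Stand-in round function (NOT CLEFIA's F0/F1) -- just key mixing,
    a rotation, and a multiplicative scramble, for structural illustration."""
    x = (x ^ rk) & MASK
    x = ((x << 7) | (x >> 25)) & MASK
    return (x * 0x9E3779B1) & MASK

def gfn4_encrypt(block, round_keys):
    """Type-2 generalized Feistel over four 32-bit branches: two F outputs
    are XORed into the odd branches, then the branches rotate left."""
    x0, x1, x2, x3 = block
    for rk0, rk1 in round_keys:
        x1 ^= toy_f(x0, rk0)
        x3 ^= toy_f(x2, rk1)
        x0, x1, x2, x3 = x1, x2, x3, x0   # rotate branches left
    return (x0, x1, x2, x3)

def gfn4_decrypt(block, round_keys):
    """Inverse: undo the branch rotation, then cancel the XORed F outputs,
    walking the round keys in reverse order."""
    x0, x1, x2, x3 = block
    for rk0, rk1 in reversed(round_keys):
        x0, x1, x2, x3 = x3, x0, x1, x2   # undo the left rotation
        x1 ^= toy_f(x0, rk0)
        x3 ^= toy_f(x2, rk1)
    return (x0, x1, x2, x3)
```

The decryption loop works because each round's XOR is self-inverse once the branch rotation is undone.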

A4C: A Novel Hybrid Algorithm for Resource-Aware Scheduling in Cloud Environment

By Santhosh Kumar Medishetti, Bigul Sunitha Devi, Maheswari Bandi, Rani Sailaja Velamakanni, Rameshwaraiah Kurupati, Ganesh Reddy Karri

DOI: https://doi.org/10.5815/ijcnis.2025.06.05, Pub. Date: 8 Dec. 2025

Scheduling in cloud computing is an NP-hard problem, where traditional metaheuristic algorithms often fail to deliver approximate solutions within a feasible time frame. As cloud infrastructures become increasingly dynamic, efficient Task Scheduling (TS) remains a major challenge, especially when minimizing makespan and execution time while improving resource utilization. To address this, we propose the Ant Colony Asynchronous Advantage Actor-Critic (A4C) algorithm, which synergistically combines the exploratory strengths of Ant Colony Optimization (ACO) with the adaptive learning capabilities of the Asynchronous Advantage Actor-Critic (A3C) model. While ACO efficiently explores task allocation paths, it is prone to getting trapped in local optima. The integration with A3C overcomes this limitation by leveraging deep reinforcement learning for real-time policy and value estimation, enabling adaptive and informed scheduling decisions. Extensive simulations show that the A4C algorithm improves throughput by 18.7%, reduces makespan by 16%, execution time by 14.60%, and response time by 21.4% compared to conventional approaches. These results validate the practical effectiveness of A4C in handling dynamic workloads, reducing computational overhead, and ensuring timely task completion. The proposed model not only enhances scheduling efficiency but also supports quality-driven service delivery in cloud environments, making it well-suited for managing complex and time-sensitive cloud applications.
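The ACO half of the hybrid can be sketched as the usual probabilistic task-to-VM selection plus pheromone evaporation and deposit. This is an illustrative sketch: the ETC matrix, the alpha/beta/rho values, and the deposit rule are assumptions, not the authors' A4C design.

```python
import random

def aco_assign(etc, pheromone, alpha=1.0, beta=2.0):
    """Assign each task to a VM by the ACO selection rule.

    etc[t][v]: expected time to compute task t on VM v
    pheromone[t][v]: pheromone level on the (task, VM) edge
    Selection probability is proportional to pheromone^alpha * (1/etc)^beta.
    """
    assignment = []
    for t in range(len(etc)):
        weights = [(pheromone[t][v] ** alpha) * ((1.0 / etc[t][v]) ** beta)
                   for v in range(len(etc[t]))]
        r, acc = random.random() * sum(weights), 0.0
        for v, w in enumerate(weights):   # roulette-wheel choice
            acc += w
            if r <= acc:
                assignment.append(v)
                break
    return assignment

def update_pheromone(pheromone, assignment, makespan, rho=0.1, q=1.0):
    """Evaporate all trails, then reinforce the edges of the found schedule
    inversely to its makespan (shorter schedules deposit more)."""
    for t in range(len(pheromone)):
        for v in range(len(pheromone[t])):
            pheromone[t][v] *= (1.0 - rho)
    for t, v in enumerate(assignment):
        pheromone[t][v] += q / makespan
```

In the paper's hybrid, the A3C critic would additionally steer these probabilities; here only the classical ACO loop is shown.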

Scalable-PoS: Towards Decentralized and Efficient Energy-Saving Consensus in Blockchain

By Anupama B. S., Sunitha N. R., G. S. Thejas

DOI: https://doi.org/10.5815/ijcnis.2025.06.06, Pub. Date: 8 Dec. 2025

Blockchain has become a peer-to-peer, immutable, distributed-ledger network technology, and its consensus protocol is essential to the management of decentralized data. The consensus algorithm, at the core of blockchain technology (BCT), has a direct impact on a blockchain's security, stability, decentralization, and many other crucial features. A key problem in the development of blockchain applications is selecting the right consensus algorithm for various scenarios, and ensuring scalability is the most significant drawback of BCT. The usage of consensus protocols for blockchains (BC) has rejuvenated the industry and sparked new architectures. Researchers analyzed the shortcomings of the proof-of-work (PoW) consensus process, and alternative protocols such as proof of stake (PoS) subsequently arose. PoS, together with other improvements, lowers the extremely high energy usage of PoW. However, in PoS validators are selected according to the quantity of cryptocurrency they stake, so only the user with the highest stake becomes the validator; over time, this concentrates wealth and power, reducing fairness and decentralization. To overcome this, we propose Scalable Proof of Stake (SPoS), a novel consensus protocol that enhances PoS through clustering-based validator assignment. In the proposed algorithm, every stakeholder gets a chance, based on their stake, to become a validator and mine blocks. Stakeholders are clustered using the mean shift algorithm, and each cluster is assigned a different number of blocks to mine: the cluster with the highest total stake receives the most blocks, and the cluster with the least stake receives the fewest. A validator is chosen according to the cluster to which it belongs, ensuring fair mining for all stakeholders in proportion to their stakes and distributing mining among them. Since validators are chosen quickly, the transaction rate in the network is high, and more stakeholders get the chance to validate blocks and receive rewards.
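The clustering-and-quota idea can be sketched with a simplified flat-kernel mean shift over stake values plus a stake-proportional block allocation. Illustrative only: the bandwidth, the mode-grouping tolerance, and the rounding rule are assumptions, not the SPoS specification.

```python
def mean_shift_1d(stakes, bandwidth, iters=50):
    """Flat-kernel 1-D mean shift: each point repeatedly moves to the mean
    of all stakes within `bandwidth`; points converging to the same mode
    form one cluster. Returns (labels, cluster centers)."""
    modes = list(stakes)
    for _ in range(iters):
        new_modes = []
        for m in modes:
            nbrs = [s for s in stakes if abs(s - m) <= bandwidth]
            new_modes.append(sum(nbrs) / len(nbrs) if nbrs else m)
        modes = new_modes
    labels, centers = [], []
    for m in modes:
        for i, c in enumerate(centers):
            if abs(m - c) <= bandwidth / 2:   # same mode up to tolerance
                labels.append(i)
                break
        else:
            centers.append(m)
            labels.append(len(centers) - 1)
    return labels, centers

def blocks_per_cluster(stakes, labels, total_blocks):
    """Allocate each cluster's mining quota proportionally to its total stake."""
    cluster_stake = {}
    for s, l in zip(stakes, labels):
        cluster_stake[l] = cluster_stake.get(l, 0) + s
    total = sum(cluster_stake.values())
    return {l: round(total_blocks * cs / total) for l, cs in cluster_stake.items()}
```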

Abstractive Text Summarization: A Hybrid Evaluation of Integrating Flan-T5 (Dual Framework) with Pegasus Reveals Conciseness Advantages across Diverse Datasets

By Abdulrahman Mohsen Ahmed Zeyad, Arun Biradar

DOI: https://doi.org/10.5815/ijcnis.2025.06.07, Pub. Date: 8 Dec. 2025

Abstractive summarization plays a critical role in managing large volumes of textual data, yet it faces persistent challenges in consistency and evaluation. Our study compares two state-of-the-art models, PEGASUS and Flan-T5, across a diverse range of benchmark datasets using both ROUGE and BARTScore metrics. Findings reveal that PEGASUS excels in generating detailed, coherent summaries for large-scale texts, evidenced by an R-1 score of 0.5874 on Gigaword, while Flan-T5, enhanced by our novel T5 Dual Summary Framework, produces concise outputs that closely align with reference lengths. Although ROUGE effectively measures lexical overlap, its moderate correlation with BARTScore indicates that it may overlook deeper semantic quality. This underscores the need for hybrid evaluation approaches that integrate semantic analysis with human judgment to more accurately capture summary meaning. By introducing a robust benchmark and the pioneering T5 Dual Framework, our research advocates for task-specific optimization and more comprehensive evaluation methods. Moreover, current dataset limitations point to the necessity for broader, more inclusive training sets in future studies.
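The lexical-overlap side of the evaluation can be illustrated with a minimal ROUGE-1 F-score. This is a simplified sketch without stemming or stopword handling, unlike standard ROUGE toolkits.

```python
from collections import Counter

def rouge_1_f(candidate, reference):
    """Unigram ROUGE-1: clipped overlap of candidate and reference token
    counts, reported as the precision/recall-balanced F1 score."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())   # per-token clipped match count
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)
```

A short candidate that matches the reference exactly scores perfect precision but low recall, which is exactly the length sensitivity the conciseness comparison above relies on.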

ITD-GMJN: Insider Threat Detection in Cloud Computing using Golf Optimized MICE based Jordan Neural Network

By B. Gayathri

DOI: https://doi.org/10.5815/ijcnis.2025.06.08, Pub. Date: 8 Dec. 2025

Cloud computing refers to a high-level network architecture that allows consumers, authorized users, and data owners to swiftly access and store their data. These days, internal user risks have a significant impact on the cloud. An intruder establishes itself as a network member and poses as a legitimate user; once inside the network, it attempts to attack or steal confidential information while others are exchanging information or communicating. Numerous options exist for the cloud network's external security, but internal or insider threats must also be addressed. Thus, in the proposed work, an advanced deep learning approach with optimized missing-value imputation is developed to mitigate insider threats in the cloud system. Behavioral log files were collected from an organization and split into sequential data and standalone data based on the login process. Because of improper data samples, this data was not ready for the detection process, so it was pre-processed using Multivariate Imputation by Chained Equations (MICE). In this imputation process, the estimation parameter was optimally chosen using the Golf Optimization Algorithm (GOA). After the missing values were filled, the data proceeded to feature extraction: the sequential data pass through a domain extractor, and the standalone data pass through a Long Short-Term Memory Autoencoder (LS-AE). Both feature sets are fused into a single representation, which is then given to the detection stage using a Jordan Neural Network (JNN). The proposed method offers 96% accuracy, 92% recall, 91.6% specificity, 8.39% fall-out, and 8% miss rate. The results show that the recommended JNN detection model successfully detects insider threats in a cloud system.
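The chained-equations idea behind MICE can be sketched for just two columns, alternating simple linear-regression imputations until the fills stabilise. Illustrative only: real MICE handles many variables and draws from predictive distributions, and the GOA-tuned estimation parameter is not modelled here.

```python
def mice_two_column(xs, ys, n_iters=20):
    """Two-column MICE sketch: initialise missing entries (None) with the
    column mean, then alternately re-impute each column from an ordinary
    least-squares regression on the other."""
    def mean(vs):
        return sum(vs) / len(vs)

    def fit(pred, resp):
        # Single-predictor OLS; returns (slope, intercept)
        mp, mr = mean(pred), mean(resp)
        var = sum((p - mp) ** 2 for p in pred)
        if var == 0:
            return 0.0, mr
        slope = sum((p - mp) * (r - mr) for p, r in zip(pred, resp)) / var
        return slope, mr - slope * mp

    miss_x = [i for i, v in enumerate(xs) if v is None]
    miss_y = [i for i, v in enumerate(ys) if v is None]
    xs = [mean([v for v in xs if v is not None]) if v is None else v for v in xs]
    ys = [mean([v for v in ys if v is not None]) if v is None else v for v in ys]
    for _ in range(n_iters):
        # Fit x ~ y on rows where x was actually observed, then fill x
        obs_x = [i for i in range(len(xs)) if i not in miss_x]
        b, a = fit([ys[i] for i in obs_x], [xs[i] for i in obs_x])
        for i in miss_x:
            xs[i] = a + b * ys[i]
        # Then fit y ~ x on rows where y was observed, and fill y
        obs_y = [i for i in range(len(ys)) if i not in miss_y]
        b, a = fit([xs[i] for i in obs_y], [ys[i] for i in obs_y])
        for i in miss_y:
            ys[i] = a + b * xs[i]
    return xs, ys
```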

Hybrid LSTM-Attention Model for DDoS Attack Detection in Software-defined Networking

By Rikie Kartadie, Danny Kriestanto, Muhammad Agung Nugroho, Chuan-Ming Liu

DOI: https://doi.org/10.5815/ijcnis.2025.06.09, Pub. Date: 8 Dec. 2025

Distributed Denial of Service (DDoS) attacks threaten Software-Defined Networking (SDN) environments, requiring effective real-time detection. This study introduces a hybrid LSTM-Attention model to improve DDoS detection in SDN, combining Long Short-Term Memory (LSTM) networks for temporal pattern recognition with an attention mechanism to prioritize key traffic features like packet and byte counts per second. Trained on 15,000 balanced samples from the SDN DDoS dataset, the model achieved 96.90% accuracy, 100% recall for DDoS instances, and a 0.97 F1-score, outperforming statistical (88.5%), machine learning (94.0%), and other deep learning (95.0%) methods. Attention weight visualization confirmed its focus on critical features. With a two-hour training time on modest hardware (Google Colab, 12 GB RAM) and an AUC of 0.99, the model is efficient and robust for real-time use. It offers a scalable, interpretable framework for network security, providing actionable insights for administrators and supporting future detection of slow-rate attacks and insider breaches. As a proof-of-concept, a subsampled slow-rate DDoS simulation (10% of volumetric spikes) achieved 89.5% accuracy with tuned attention weights, suggesting potential for rate adjustments. Preliminary tests on UNSW-NB15 subsets, focusing on behavioral features, yielded 85.2% recall, indicating that integrating user profiling could enhance real-world detection.
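The attention mechanism's feature weighting can be illustrated with a plain softmax over per-feature scores. A minimal sketch: the learned weights and the way attention interacts with the LSTM states are assumptions, not the paper's architecture.

```python
import math

def attention_score(features, weights):
    """Score one traffic flow: softmax over learned per-feature weights
    gives an attention distribution (sums to 1); the flow score is the
    attention-weighted sum of the feature values."""
    exps = [math.exp(w) for w in weights]
    total = sum(exps)
    attn = [e / total for e in exps]
    score = sum(a * f for a, f in zip(attn, features))
    return score, attn
```

Visualizing `attn` over features such as packets/s and bytes/s is what the attention-weight analysis in the abstract refers to: a larger learned weight yields a larger share of attention.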

Assault Type Detection in WSN Based on Modified DBSCAN with Osprey Optimization Using Hybrid Classifier LSTM with XGBOOST for Military Sector

By R. Preethi

DOI: https://doi.org/10.5815/ijcnis.2025.06.10, Pub. Date: 8 Dec. 2025

Military tasks constitute some of the most important applications of WSNs. In the military, sensor node deployment increases operational activity and efficiency, saves lives, and helps protect national sovereignty. The main difficulties in military missions are usually energy consumption and network security; other major security issues are hacking and masquerade attacks. To overcome these limitations, the proposed method combines a modified DBSCAN with the Osprey Optimization Algorithm (OOA) and a hybrid Long Short-Term Memory (LSTM) with Extreme Gradient Boosting (XGBoost) classifier to detect attack types in the WSN military sector and enhance security. First, nodes are deployed and the modified DBSCAN algorithm clusters them to reduce energy consumption. The cluster head is then selected optimally using the OOA, based on short distance and high residual energy, for data transfer between the nodes and the base station. The hybrid LSTM-XGBoost classifier learns the parameters and predicts four assault types: scheduling, flooding, blackhole, and grayhole attacks. Classification and network metrics including Packet Delivery Ratio (PDR), throughput, Average Residual Energy (ARE), Packet Loss Ratio (PLR), accuracy, and F1-score are used to evaluate the performance of the model. Performance results show a PDR of 94.12%, throughput of 3.2 Mbps at 100 nodes, ARE of 8.94 J, PLR of 5.88%, accuracy of 96.14%, and F1-score of 95.04%. Hence, the designed model for predicting assault types in WSNs based on modified DBSCAN clustering with a hybrid classifier yields better results.
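The energy-and-distance criterion for cluster-head selection can be sketched as a simple fitness ranking. Illustrative only: the actual OOA search and its exact fitness form are not specified in the abstract, and the `energy / (1 + distance)` rule below is an assumption.

```python
def select_cluster_head(nodes):
    """Pick the node combining high residual energy with a short distance
    to the base station: fitness = energy / (1 + distance), highest wins.

    nodes: list of (node_id, residual_energy_J, dist_to_bs_m) tuples
    """
    def fitness(node):
        _, energy, dist = node
        return energy / (1.0 + dist)
    return max(nodes, key=fitness)[0]
```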
