Last updated: 2025-12-06 05:01 UTC
All documents
Number of pages: 152
| Author(s) | Title | Year | Publication | Keywords | Abstract | DOI |
|---|---|---|---|---|---|---|
| Ruslan Bondaruc, Nicolas Schnepf, Rémi Badonnel, Claudio A. Ardagna, Marco Anisetti | Vulnerability-Aware Secure Service Deployment in Cloud-Edge Continuum | 2025 | Early Access | Security Cloud computing Quality of service Heuristic algorithms Software Edge computing Internet of Things Resource management Computational modeling Real-time systems Service Deployment Non-Functional Properties Edge-Cloud Continuum Vulnerability Assessment | Software weaknesses and vulnerabilities are continuously discovered and rapidly evolving. Their direct and indirect interference with business process workflow execution is neither fully understood nor addressed by the current literature. Strict control of the vulnerability footprint of the landing platform before cloud/web service workflow execution is now widely used as a preventive measure to improve execution trustworthiness. Vulnerability footprint governance is further complicated in the cloud, where a common execution platform hosting (vulnerable) services is shared between different tenants. This paper proposes a service workflow deployment solution tailored for the Edge-Cloud Continuum, which is made of different landing platforms with different characteristics. The proposed solution finds a suitable deployment recipe for a given workflow by i) evaluating the vulnerability footprint of each platform, ii) computing the set of candidate deployment platforms, iii) finding the optimal deployment solution, and iv) migrating already deployed workflows in case the vulnerability requirement is no longer satisfied. Each workflow can be associated with a set of requirements to be satisfied by our deployment solution, such as the maximum accepted level of vulnerability footprint. Each workflow deployment contributes to the vulnerability footprint of the landing platform involved. | 10.1109/TNSM.2025.3606624 |
| Anna Volkova, Julian Schmidhuber, Hermann de Meer, Jacek Rak | Design of Weather-Resilient Satellite-Terrestrial ICT Networks for Power Grid Communications | 2025 | Early Access | Power grids Satellites Meteorology Low earth orbit satellites Routing Space-air-ground integrated networks Power system dynamics Network topology Delays Topology Resilience satellite-terrestrial network power grid communication LEO satellite network | Hybrid satellite-terrestrial communication networks can enhance the resilience of power grid communications. Recent advancements in low-Earth orbit (LEO) satellite technologies have improved their ability to meet the communication requirements of power grid applications. However, the dynamic nature of LEO networks necessitates frequent routing updates, which can potentially disrupt the transmission of critical power grid monitoring and control data. Additionally, extreme weather events, such as severe rainfall, can impair both terrestrial and satellite communication links, posing risks to the operation of the power grid. This paper presents a two-phase methodology for reducing the need for frequent routing updates by identifying stable low-latency configurations of hybrid satellite-terrestrial communication networks for power grid applications. In the proactive phase, the deterministic dynamics of LEO satellite constellations are considered to generate a sequence of stable network configurations using fine-grained temporal snapshots and graph aggregation. The adaptive phase incorporates a dynamic regional weather model to update link capacities. A minimum-delay multi-commodity flow problem is solved to determine the best traffic distribution under given conditions. Simulation results show that hybrid networks with stable configurations can reduce network reconfiguration frequency by 92%. Compared to terrestrial-only networks, the hybrid network improves end-to-end delay by 65.5% and maintains approximately 80% connectivity even under extreme rainfall conditions. | 10.1109/TNSM.2025.3608855 |
| Anna Karanika, Rui Yang, Xiaojuan Ma, Jiangran Wang, Shalni Sundram, Indranil Gupta | There is More Control in Egalitarian Edge IoT Meshes | 2025 | Early Access | Internet of Things Smart devices Intelligent sensors Smart agriculture Smart buildings Monitoring Mesh networks Clouds Costs Thermostats mesh IoT edge control plane routines fault-tolerance | While mesh networking for edge settings (e.g., smart buildings, farms, battlefields, etc.) has received much attention, the layer of control over such meshes remains largely centralized and cloud-based. This paper focuses on applications with commonplace sense-trigger-actuate (STA) workloads—like the abstraction of routines popular now in smart homes, but applied to larger-scale edge IoT deployments. We present CoMesh, which tackles the challenge of building a decentralized mesh-based control plane for local, non-cloud, and hubless management of sense-trigger-actuate applications. CoMesh builds atop an abstraction called the coterie, which spreads STA load in a fine-grained way both across space and across time. A coterie uses a novel combination of techniques such as zero-message-exchange protocols (for fast proactive member selection), quorum-based agreement, and locality-sensitive hashing. We analyze and theoretically prove safety and liveness properties of CoMesh. Our evaluation with both a Raspberry Pi-4 deployment and larger-scale simulations, using real building maps and real routine workloads, shows that CoMesh is load-balanced, fast, fault-tolerant, and scalable. | 10.1109/TNSM.2025.3608796 |
| Fabian Graf, David Pauli, Michael Villnow, Thomas Watteyne | Management of 6TiSCH Networks Using CORECONF: A Clustering Use Case | 2025 | Early Access | Protocols IEEE 802.15 Standard Reliability Wireless sensor networks Runtime Wireless communication Interference Wireless fidelity Monitoring Job shop scheduling 6TiSCH CORECONF IEEE 802.15.4 Clustering | Industrial low-power wireless sensor networks demand high reliability and adaptability to cope with dynamic environments and evolving network requirements. While the 6TiSCH protocol stack provides reliable low-power communication, the CoAP Management Interface (CORECONF) for runtime management remains underutilized. In this work, we implement CORECONF and introduce clustering as a practical use case. We implement a cluster formation mechanism aligned with the Routing Protocol for Low-Power and Lossy Networks (RPL) and adjust the TSCH channel-hopping sequence within the established clusters. Two use cases are presented. First, CORECONF is used to mitigate external Wi-Fi interference by forming a cluster with a modified channel set that excludes the affected frequencies. Second, CORECONF is employed to create a priority cluster of sensor nodes that require higher reliability and reduced latency, such as those monitoring critical infrastructure in industrial settings. Simulation results show significant improvements in latency, while practical experiments demonstrate a reduction in overall network charge consumption from approximately 50 mC per hour to 23 mC per hour, by adapting the channel set within the interference-affected cluster. | 10.1109/TNSM.2025.3627112 |
| Andrea Detti, Alessandro Favale | Cost-Effective Cloud-Edge Elasticity for Microservice Applications | 2025 | Early Access | Microservice architectures Cloud computing Data centers Load management Costs Frequency modulation Delays Analytical models Edge computing Telemetry Edge Computing Microservices Applications Service Meshes | Microservice applications, composed of independent containerized components, are well-suited for hybrid cloud–edge deployments. In such environments, placing microservices at the edge can reduce latency but incurs significantly higher resource costs compared to the cloud. This paper addresses the problem of selectively replicating microservices at the edge to ensure that the average user-perceived delay remains below a configurable threshold, while minimizing total deployment cost under a pay-per-use model for CPU, memory, and network traffic. We propose a greedy placement strategy based on a novel analytical model of delay and cost, tailored to synchronous request/response applications in cloud–edge topologies with elastic resource availability. The algorithm leverages telemetry and load balancing capabilities provided by service mesh frameworks to guide edge replication decisions. The proposed approach is implemented in an open-source Kubernetes controller, the Geographical Microservice Autoplacer (GMA), which integrates seamlessly with Istio and Horizontal Pod Autoscalers. GMA automates telemetry collection, cost-aware decision making, and geographically distributed placement. Its effectiveness is demonstrated through simulation and real testbed deployment. | 10.1109/TNSM.2025.3627155 |
| Meihui Liu, Fangmin Xu, Shihui Duan, Jinyu Zhu, Wenlong Ma, Ruoyu Ji, Chenglin Zhao | Efficient SRv6 Based Multi-Path Transmission Strategy for Resilient Communication in Deterministic Computing Power Network | 2025 | Early Access | Reliability Processor scheduling Computer architecture Routing Load management Job shop scheduling Bandwidth Dynamic scheduling Ubiquitous computing Indexes Computing Power Network Deterministic Networks Multi-path Forwarding Load Balancing | The computing power network (CPN) serves as a key infrastructure for future networks, facilitating the connection of ubiquitous computing resources distributed across various locations. The continuous emergence of computation-intensive and delay-sensitive applications highlights the crucial need to fully utilize limited computing resources and the importance of building a resilient communication network. Primary-backup (PB) based transmission is a commonly used technique to enhance network reliability. However, implementing this approach in CPN with a consideration of load balancing introduces significant complexity and has received limited research attention. In this paper, we designed a deterministic computing power network (Det-CPN) architecture based on segment routing over IPv6 (SRv6). On top of the above architecture, we proposed a best computing node selection method based on a comprehensive index calculation and ranking (CICR) algorithm to determine the optimal computing node for task transmission. Subsequently, we developed a bandwidth sharing-based multi-path transmission (BSMT) algorithm to realize the maximization of the system efficiency. Simulation results demonstrate that in adverse network conditions (overloaded with a failure rate of 0.02), compared to the traditional dual-path redundant forwarding mechanism, the proposed solution achieves an average reduction of 22.3% in transmission latency, an average improvement of 39.94% in task success rate, a decrease of 19.4% in bandwidth occupation rate, and an increase of 31.05% in computing resource utilization rate. | 10.1109/TNSM.2025.3608200 |
| Rifat Al Mamun Rudro, Sultanul Arifieen Hamim, Md. Hamid Uddin, Md Masuduzzaman, Md. Manzurul Hasan | Waris-Chain: The Blockchain Driven Transformation of Inheritance Solutions | 2025 | Early Access | Blockchains Smart contracts Fraud Security Training Surveys Artificial intelligence Accuracy Servers Nonfungible tokens Smart contracts Inheritance Management Blockchain Automation Digital Inheritance Fraud Prevention | In the current digital era, managing inheritance presents a critical challenge, necessitating a balance of effectiveness, security, and transparency. Traditional processes are often complex, time-consuming, and susceptible to fraud and disputes. This paper introduces the successor chain model called Waris-Chain, a blockchain-based solution designed to streamline and secure inheritance management. Waris-Chain integrates smart contracts and Non-Fungible Tokens (NFTs) to automate and verify inheritance processes, ensuring accuracy and reducing manual intervention. Developed using the Ethereum blockchain, ERC-1155 tokens, and MetaMask for authentication, Waris-Chain offers a comprehensive, adaptable, and secure platform. Performance evaluation shows that Waris-Chain achieves a high throughput of 477.36 transactions per hour, a low transaction latency of 7.54 seconds, with a 99.42% accuracy rate and a 0.58% error rate. Despite these advancements, challenges such as blockchain adoption, legal integration, and system scalability remain, suggesting avenues for future research to fully realize blockchain’s potential in inheritance management. | 10.1109/TNSM.2025.3608088 |
| Shaocong Feng, Baojiang Cui, Junsong Fu, Meiyi Jiang, Shengjia Chang | Adaptive Target Device Model Identification Attack in 5G Mobile Network | 2025 | Early Access | Object recognition Adaptation models 5G mobile communication Atmospheric modeling Security Communication channels Mobile handsets Radio access networks Feature extraction Baseband 5G device model GUTI EPSFB UE capability | Enhanced system capacity is one of the goals of 5G. This will lead to massive numbers of heterogeneous devices in mobile networks. Mobile devices that lack basic security capabilities may carry chipset, operating system, or software vulnerabilities. Attackers can perform Advanced Persistent Threat (APT) attacks against specific device models. In this paper, we propose an Adaptive Target Device Model Identification Attack (ATDMIA) that provides the prior knowledge for exploiting baseband vulnerabilities to perform targeted attacks. We discovered a Globally Unique Temporary Identity (GUTI) Reuse in Evolved Packet Switching Fallback (EPSFB) vulnerability and a Leakage of User Equipment (UE) Capability vulnerability. Utilizing silent calls, an attacker can capture and correlate the signaling traces of the target subscriber from the air interface within a specific geographic area. In addition, we design an adaptive identification algorithm which utilizes both invisible and explicit features of UE capability information to efficiently identify device models. We conducted an empirical study using 105 commercial devices, including network configuration, attack efficiency, time overhead and open-world evaluation experiments. The experimental results showed that ATDMIA can accurately correlate the EPSFB signaling traces of the target victim and effectively identify the device model or manufacturer. | 10.1109/TNSM.2025.3626804 |
| Zhengge Yi, Tengyao Li, Meng Zhang, Xiaoyun Yuan, Shaoyong Du, Xiangyang Luo | An Efficient Website Fingerprinting for New Websites Emerging Based on Incremental Learning | 2025 | Early Access | Incremental learning Fingerprint recognition Data models Monitoring Accuracy Deep learning Adaptation models Training Telecommunication traffic Feature extraction Website fingerprinting Tor anonymous network traffic analysis incremental learning | Website fingerprinting attacks leverage encrypted traffic features to identify specific services accessed by users within anonymity networks such as Tor. Although existing WF methods achieve high accuracy on static datasets using deep learning techniques, they struggle in dynamic environments where anonymous websites continually evolve. These methods typically require full retraining on composite datasets, resulting in substantial computational and storage burdens, and are particularly vulnerable to classification bias caused by data imbalance and concept drift. To address these challenges, we propose EIL-WF, a dynamic WF framework based on incremental learning that enables efficient adaptation to newly emerging websites without the need for full retraining. EIL-WF incrementally trains lightweight, independent classifiers for new website classes and integrates them through classifier normalization and energy alignment strategies grounded in energy-based model theory, thereby constructing a unified and robust classification model. Comprehensive experiments on two public Tor traffic datasets demonstrate that EIL-WF outperforms existing incremental learning methods by 6.2%–20.2% in identifying new websites and reduces catastrophic forgetting by 5.4%–20%. Notably, EIL-WF exhibits strong resilience against data imbalance and concept drift, maintaining stable classification performance across evolving distributions. Furthermore, EIL-WF decreases training time during model updates by 2–3 orders of magnitude, demonstrating substantial advantages over conventional full retraining paradigms. | 10.1109/TNSM.2025.3627441 |
| Xinhan Liu, Robert Kooij, Piet Van Mieghem | Node-Reliability: Monte Carlo, Laplace, and Stochastic Approximations and a Greedy Link-Augmentation Strategy | 2025 | Early Access | Reliability Polynomials Computer network reliability Monte Carlo methods Telecommunication network reliability Robustness Accuracy Probabilistic logic Training Reliability theory network robustness node failure probabilistic graph reliability polynomial | The node-reliability polynomial nRelG(p) measures the probability that a connected network remains connected given that each node functions independently with probability p. Computing node-reliability polynomials nRelG(p) exactly is NP-hard. Here we propose efficient approximations. First, we develop an accurate Monte Carlo simulation, which is accelerated by incorporating a Laplace approximation that captures the polynomial’s main behavior. We also introduce three degree-based stochastic approximations (Laplace, arithmetic, and geometric), which leverage the degree distribution to estimate nRelG(p) with low complexity. Beyond approximations, our framework addresses the reliability-based Global Robustness Improvement Problem (k-GRIP) by selecting exactly k links to add to a given graph so as to maximize its node reliability. A Greedy Lowest-Degree Pairing Link Addition (Greedy-LD) algorithm is proposed, which offers a computationally efficient and practically effective heuristic, particularly suitable for large-scale networks. | 10.1109/TNSM.2025.3607004 |
| Leyla Sadighi, Stefan Karlsson, Carlos Natalino, Marija Furdek | ML-Based State of Polarization Analysis to Detect Emerging Threats to Optical Fiber Security | 2025 | Early Access | Optical fiber networks Optical fiber cables Optical fiber polarization Optical polarization Optical transmitters Vibrations Optical receivers Eavesdropping Anomaly detection Monitoring State of Polarization (SOP) variations Machine Learning (ML) anomaly detection Semi-Supervised Learning (SSL) Unsupervised Learning (USL) One-Class Support Vector Machine (OCSVM) Density-Based Spatial Clustering of Applications with Noise (DBSCAN) | As the foundation of global communication networks, optical fibers are vulnerable to various disruptive events, including mechanical damage, such as cuts, and malicious physical layer breaches, such as eavesdropping via fiber bending. Traditional monitoring methods often fail to identify subtle or novel anomalies, stimulating the proliferation of ML techniques for detection of threats before they cause significant harm. In this paper, we evaluate the performance of SSL and USL approaches for detecting various abnormal events, such as fiber bending and vibrations, by analyzing polarization signatures with minimal reliance on labeled data. We experimentally collect thirteen polarization signatures on three different types of fiber cable and process them using OCSVM as an SSL, and DBSCAN as a USL algorithm for anomaly detection. We introduce tailored evaluation metrics designed to guide hyper-parameter tuning and capture generalization over different anomaly types, detection consistency, and robustness to false positives, enabling practical deployment of OCSVM and DBSCAN in optical fiber security. Our findings demonstrate DBSCAN as a strong contender to detect previously unseen threats in scenarios where labeled data are not available, despite some variability in performance between different scenarios, with F1 score values between 0.615 and 0.995. In contrast, OCSVM, trained on normal operating conditions, maintains high F1 scores of 0.98 to 0.998, demonstrating accurate detection of complex anomalies in optical networks. | 10.1109/TNSM.2025.3607022 |
| Aruna Malik, Sandeep Verma, Samayveer Singh, Rajeev Kumar, Neeraj Kumar | Greylag Goose-Based Optimized Cluster Routing for IoT-Based Heterogeneous Wireless Sensor Networks | 2025 | Early Access | Wireless sensor networks Energy consumption Clustering algorithms Energy efficiency Routing Internet of Things Heuristic algorithms Sensors Genetic algorithms Throughput Internet of Things Energy efficiency Greylag Goose Optimization Cluster Head Network-lifetime | Optimization algorithms are crucial for energy-efficient routing in Internet of Things (IoT)-based Wireless Sensor Networks (WSNs) because they help minimize energy consumption, reduce communication overhead, and improve overall network performance. By optimizing the routing paths and scheduling data transmission, these algorithms can prolong network lifetime by efficiently managing the limited energy resources of sensor nodes, ensuring reliable data delivery while conserving energy. In this work, we present Greylag Goose-based Optimized Clustering (GGOC), which aids in selecting the Cluster Head (CH) using the proposed critical fitness parameters. These parameters include residual energy, sensor sensing range, distance of a candidate node from the sink, number of neighboring nodes, and energy consumption rate. Simulation analysis shows that the proposed approach improves various performance metrics, namely network lifetime, stability period, throughput, the network’s remaining energy, and the number of clusters formed. | 10.1109/TNSM.2025.3627535 |
| Samayveer Singh, Aruna Malik, Vikas Tyagi, Rajeev Kumar, Neeraj Kumar, Shakir Khan, Mohd Fazil | Dynamic Energy Management in Heterogeneous Sensor Networks Using Hippopotamus-Inspired Clustering | 2025 | Early Access | Wireless sensor networks Clustering algorithms Optimization Heuristic algorithms Routing Energy efficiency Protocols Scalability Genetic algorithms Batteries Internet of Things Energy efficiency Cluster Head Network-lifetime | The rapid expansion of smart technologies and IoT has made Wireless Sensor Networks (WSNs) essential for real-time applications such as industrial automation, environmental monitoring, and healthcare. Despite advances in sensor node technology, energy efficiency remains a key challenge due to the limited battery life of nodes, which often operate in remote environments. Effective clustering, where Cluster Heads (CHs) manage data aggregation and transmission, is crucial for optimizing energy use. Motivated by the above, in this paper, we introduce a novel metaheuristic approach called Hippopotamus Optimization-Based Cluster Head Selection (HO-CHS), designed to enhance CH selection by dynamically considering factors such as residual energy, node location, and network topology. Inspired by natural behaviors, HO-CHS effectively balances energy loads, reduces communication distances, and boosts network scalability and reliability. The proposed scheme achieves a 35% increase in network lifetime and a 40% improvement in stability period in comparison to other existing schemes in the literature. Simulation results demonstrate that HO-CHS significantly reduces energy consumption and enhances data transmission efficiency, making it ideal for IoT-enabled consumer electronics networks requiring consistent performance and energy conservation. | 10.1109/TNSM.2025.3618766 |
| Ke Gu, Jiaqi Lei, Jingjing Tan, Xiong Li | A Verifiable Federated Learning Scheme With Privacy-Preserving in MCS | 2025 | Early Access | Federated learning Sensors Servers Security Training Protocols Privacy Homomorphic encryption Computational modeling Mobile computing Mobile crowd sensing Verifiable federated learning Privacy-preserving Sampling verification | The popularity of edge smart devices and the explosive growth of generated data have driven the development of mobile crowd sensing (MCS). Also, federated learning (FL), as a new paradigm of privacy-preserving distributed machine learning, integrates with MCS to offer a novel approach for processing large-scale edge device data. However, it also brings about many security risks. In this paper, we propose a verifiable federated learning scheme with privacy-preserving for mobile crowd sensing. In our federated learning scheme, the double-layer random mask partition method combined with homomorphic encryption is constructed to protect the local gradients and enhance system security (strong anti-collusion ability) based on the multi-cluster structure of federated learning. Also, a sampling verification mechanism is proposed to allow the mobile sensing clients to quickly and efficiently verify the correctness of their received gradient aggregation results. Further, a dropout handling mechanism is constructed to improve the robustness of mobile crowd sensing-based federated learning. Related experimental results demonstrate that our verifiable federated learning scheme is effective and efficient in mobile crowd sensing environments. | 10.1109/TNSM.2025.3627581 |
| Livia Elena Chatzieleftheriou, Jesús Pérez-Valero, Jorge Martín-Pérez, Pablo Serrano | Optimal Scaling and Offloading for Sustainable Provision of Reliable V2N Services in Dynamic and Static Scenarios | 2025 | Early Access | Ultra reliable low latency communication Delays Servers Costs Videos Reliability Vehicle dynamics Computational modeling Central Processing Unit Artificial intelligence Vehicle-to-Network V2N Ultra-reliable Low-Latency Communications URLLC Queueing Theory Algorithm design Optimization problem Asymptotic optimality | The rising popularity of Vehicle-to-Network (V2N) applications is driven by the Ultra-Reliable Low-Latency Communications (URLLC) service offered by 5G. Distributed resources can help manage heavy traffic from these applications, but complicate traffic routing under URLLC's strict delay requirements. In this paper, we introduce the V2N Computation Offloading and CPU Activation (V2N-COCA) problem, aiming at the monetary/energetic cost minimization via computation offloading and edge/cloud CPU activation decisions, under stringent latency constraints. Key challenges include the proven non-monotonicity of the objective function and the non-existence of closed-form formulas for the sojourn time of tasks. We present a provably tight approximation for the latter, and we design BiQui, a provably asymptotically optimal and computationally efficient algorithm for the V2N-COCA problem. We then study dynamic scenarios, introducing the Swap-Prevention problem, to account for changes in the traffic load and minimize the switching on/off of CPUs without incurring extra costs. We prove the problem's structural properties and exploit them to design Min-Swap, a provably correct and computationally effective algorithm for the Swap-Prevention problem. We assess both BiQui and Min-Swap over real-world vehicular traffic traces, performing a sensitivity analysis and a stress-test. Results show that (i) BiQui is near-optimal and significantly outperforms existing solutions; and (ii) Min-Swap reduces CPU swapping by ≥90% while incurring only ≤0.14% extra cost. | 10.1109/TNSM.2025.3605408 |
| Guiyun Liu, Hao Li, Lihao Xiong, Zhongwei Liang, Xiaojing Zhong | Attention-Model-Based Multiagent Reinforcement Learning for Combating Malware Propagation in Internet of Underwater Things | 2025 | Early Access | Malware Mathematical models Predictive models Optimal control Prediction algorithms Adaptation models Wireless communication Optimization Network topology Vehicle dynamics Internet of Underwater Things (IoUT) Malware Fractional-order model Model-Based Reinforcement Learning (MBRL) | Malware propagation in the Internet of Underwater Things (IoUT) can disrupt stable communications among wireless devices. Timely control over its spread is beneficial for the stable operation of IoUT. Notably, the instability of the underwater environment causes the propagation effects of malware to vary continuously. Traditional control methods cannot quickly adapt to these abrupt changes. In recent years, the rapid development of reinforcement learning (RL) has significantly advanced control schemes. However, previous RL methods relied on long-term interactions to obtain large amounts of interaction data in order to form effective strategies. Given the particularity of underwater communication media, data collection for RL in IoUT is challenging. Therefore, improving sample efficiency has become a critical issue that current RL methods urgently need to address. An Attention-Model-Based Multiagent Policy Optimization (AMBMPO) algorithm is proposed to achieve efficient use of data samples in this study. First, the algorithm employs an explicit prediction model to reduce the dependence on a precise model. Secondly, an attention mechanism network is designed to capture high-dimensional state sequences, thereby reducing the compound errors during policy training. Finally, the proposed method is validated on optimal control problems and compared with verified benchmarks. The experimental results show that, compared with existing advanced RL algorithms, AMBMPO demonstrates significant advantages in sample efficiency and stability. This work effectively controls the spread of malware in underwater systems through an interactive-evolution-based approach. It provides a new implementation approach for ensuring the safety of underwater systems in deep-sea exploration and environmental monitoring applications. | 10.1109/TNSM.2025.3628881 |
| Kai Cheng, Weidong Tang, Lintao Tan, Jing Yang, Jia Chen | SLNALog: A Log Anomaly Detection Scheme Based on Swift Layer Normalization Attention Mechanism for Next-Generation Power Communication Networks | 2025 | Early Access | Anomaly detection Semantics Smart grids Feature extraction Next generation networking Data models Maintenance Computational modeling Vectors Power system stability Log anomaly detection deep learning binary classification smart grid security | Log anomaly detection is a critical first line of defense for securing next-generation power communication networks against malicious attacks. However, in industrial settings, limited computational resources on edge devices result in long inference times for anomaly detection models, hindering the timely detection of anomalous log activities. To address these challenges, we propose SLNALog, an anomaly detection workflow centered around a Swift Layer Normalization Attention module. This module leverages linear attention to optimize the key-value interactions found in traditional attention mechanisms, thereby reducing the computational complexity of the detection process. As a result, the model’s receptive field for log data is expanded, and the efficiency of log anomaly detection is improved. Experimental results on the HDFS and BGL datasets demonstrate the superiority of our approach: SLNALog achieves higher accuracy, with F1-scores increasing by 0.08 and 0.04, respectively, while reducing detection time by 5.7% and 28.3%. Furthermore, the workflow incorporates an LLM-based log template analysis module and an Adapter-based model tuning module to enhance the model’s generalization in real-world scenarios. The proposed model provides an effective solution for enhancing the cybersecurity of smart grids. | 10.1109/TNSM.2025.3605764 |
| Hojjat Navidan, Cristian Martín, Vasilis Maglogiannis, Dries Naudts, Manuel Díaz, Ingrid Moerman, Adnan Shahid | An End-to-End Digital Twin Framework for Dynamic Traffic Analytics in O-RAN | 2025 | Early Access | Open RAN Adaptation models Real-time systems Biological system modeling 5G mobile communication Predictive models Traffic control Incremental learning Anomaly detection Data models Digital Twin Generative AI Open Radio Access Networks Incremental Learning Traffic Analytics Traffic Prediction Anomaly Detection | Dynamic traffic patterns and shifts in traffic distribution in Open Radio Access Networks (O-RAN) pose a significant challenge for real-time network optimization in 5G and beyond. Traditional traffic analytics methods struggle to remain accurate under such non-stationary conditions, where models trained on historical data quickly degrade as traffic evolves. This paper introduces AIDITA, an AI-driven Digital Twin for Traffic Analytics framework designed to solve this problem through autonomous model adaptation. AIDITA creates a digital replica of the live analytics models running in the RAN Intelligent Controller (RIC) and continuously updates them within the digital twin using incremental learning. These updates use real-time Key Performance Metrics (KPMs) from the live network, augmented with synthetic data from a Generative AI (GenAI) component to simulate diverse network scenarios. Combining GenAI-driven augmentation with incremental learning enables traffic analytics models, such as prediction or anomaly detection, to adapt continuously without the need for full retraining, preserving accuracy and efficiency in dynamic environments. Implemented and validated on a real-world 5G testbed, our AIDITA framework demonstrates significant improvements in traffic prediction and anomaly detection use cases under distribution shifts, showcasing its practical effectiveness and adaptability for real-time network optimization in O-RAN deployments. | 10.1109/TNSM.2025.3628756 |
| Leonardo Lo Schiavo, Genoveva Garcia, Marco Gramaglia, Marco Fiore, Albert Banchs, Xavier Costa-Perez | The TES Framework: Joint Statistical Modeling and Machine Learning for Network KPI Forecasting | 2025 | Early Access | Predictive models Forecasting Time series analysis Adaptation models Load modeling Deep learning Autonomous networks Context modeling Accuracy Transformers Forecasting prediction mobile traffic network KPI network management neural networks statistical modeling | The vision of intelligent networks capable of automatically configuring crucial parameters for tasks such as resource provisioning, anomaly detection or load balancing largely hinges upon efficient AI-based algorithms. Time series forecasting is a fundamental building block for network-oriented AI and current trends lean towards the systematic adoption of models based on deep learning approaches. In this paper, we pave the way for a different strategy for the design of predictors for mobile network environments, and we propose the Thresholded Exponential Smoothing (TES) framework, a hybrid Statistical Modeling and Deep Learning tool that allows for improving the performance of network Key Performance Indicator (KPI) forecasting. We adapt our framework to two state-of-the-art deep learning tools for time series forecasting, based on Recurrent Neural Networks and Transformer architectures. We experiment with TES by showcasing its superior support for three practical network management use cases, i.e. (i) anticipatory allocation of network resources, (ii) mobile traffic anomaly prediction, and (iii) mobile traffic load balancing. Our results, derived from traffic measurements collected in operational mobile networks, demonstrate that the TES framework can yield substantial performance gains over current state-of-the-art predictors in the applications considered. | 10.1109/TNSM.2025.3628788 |
| F. Busacca, L. Galluccio, S. Palazzo, A. Panebianco, R. Raftopoulos | Bandits Under the Waves: A Fully-Distributed Multi-Armed Bandit Framework for Modulation Adaptation in the Internet of Underwater Things | 2025 | Early Access | Throughput Scalability Training Propagation losses Mathematical models Energy consumption Adaptation models Absorption Support vector machines Internet Underwater communications Underwater Modulation Adaptation Reinforcement Learning Multi-Player Multi-Armed Bandit | Acoustic communications are the most widely exploited technology in the so-called Internet of Underwater Things (IoUT). UnderWater (UW) environments are often characterized by harsh propagation conditions, limited bandwidth, fast-varying channels, and long propagation delays. At the same time, IoUT nodes are usually battery-powered devices with limited processing capabilities. It is therefore necessary to design optimization algorithms that address these challenging propagation features while respecting the limited device capabilities. To cope with the nodes' energy and processing constraints, it is crucial to adjust the transmission parameters based on the channel conditions while also developing communication procedures that are both lightweight and energy-efficient. In this work, we introduce a novel Multi-Player Multi-Armed Bandit (MP-MAB) framework for modulation adaptation in multi-hop IoUT acoustic networks. As opposed to widely used, computation-demanding Deep Reinforcement Learning (DRL) techniques, MP-MAB algorithms are simple and lightweight and make decisions iteratively by selecting one among multiple choices, or arms. The framework is fully distributed and dynamically selects the best modulation technique at each IoUT node by leveraging high-level statistics (e.g., network throughput), without the need to extract hard-to-obtain channel features (e.g., channel state). We evaluate the performance of the proposed framework using the DESERT UW simulator and compare it with state-of-the-art centralized DRL-based solutions for cognitive and heterogeneous networks, namely DRL-MCS, DRL-AM, PPO, and SAC, as well as with a multi-agent, distributed version of PPO. The results highlight that, despite its simplicity and fully-distributed nature, the proposed framework achieves superior performance in UW networks in terms of throughput, convergence speed, and energy efficiency. Compared to DRL-MCS and DRL-AM, our approach improves network throughput by up to 33% and 20%, respectively, and reduces energy consumption by up to 18% and 16%. When compared to PPO, SAC, and Multi-PPO, the proposed solution achieves up to 11%, 34%, and 38% higher throughput, and up to 7%, 17%, and 33% lower energy consumption, respectively. | 10.1109/TNSM.2025.3629240 |