Last updated: 2025-12-10 05:01 UTC
All documents
Number of pages: 152
| Author(s) | Title | Year | Publication | Keywords | Abstract | DOI |
|---|---|---|---|---|---|---|
| Andrea Detti, Alessandro Favale | Cost-Effective Cloud-Edge Elasticity for Microservice Applications | 2025 | Early Access | Microservice architectures Cloud computing Data centers Load management Costs Frequency modulation Delays Analytical models Edge computing Telemetry Edge Computing Microservices Applications Service Meshes | Microservice applications, composed of independent containerized components, are well-suited for hybrid cloud–edge deployments. In such environments, placing microservices at the edge can reduce latency but incurs significantly higher resource costs compared to the cloud. This paper addresses the problem of selectively replicating microservices at the edge to ensure that the average user-perceived delay remains below a configurable threshold, while minimizing total deployment cost under a pay-per-use model for CPU, memory, and network traffic. We propose a greedy placement strategy based on a novel analytical model of delay and cost, tailored to synchronous request/response applications in cloud–edge topologies with elastic resource availability. The algorithm leverages telemetry and load balancing capabilities provided by service mesh frameworks to guide edge replication decisions. The proposed approach is implemented in an open-source Kubernetes controller, the Geographical Microservice Autoplacer (GMA), which integrates seamlessly with Istio and Horizontal Pod Autoscalers. GMA automates telemetry collection, cost-aware decision making, and geographically distributed placement. Its effectiveness is demonstrated through simulation and real testbed deployment. | 10.1109/TNSM.2025.3627155 |
| Hesam Tajbakhsh, Ricardo Parizotto, Alberto Schaeffer-Filho, Israat Haque | Reinforcement Learning-Based In-Network Load Balancing | 2025 | Early Access | Load management Servers Load modeling Prediction algorithms Data centers Q-learning Predictive models Mathematical models Data models Computational modeling Load Balancing Data Plane Programmability Reinforcement Learning | Ensuring consistent performance becomes increasingly challenging with the growing complexity of applications in data centers. This is where load balancing emerges as a vital component. A load balancer distributes network or application traffic across various servers, resources, or pathways. In this article, we present P4WISE, a load balancer designed for software-defined networks. Operating on both the data and control planes, it employs reinforcement learning to distribute computational loads at both inter- and intra-server granularity. Evaluation results demonstrate that P4WISE predicts the optimal load balancing strategy with a remarkable 90% accuracy in dynamic scenarios. Notably, unlike supervised or unsupervised methods, it eliminates the need for retraining when the environment undergoes minor or major changes. Instead, P4WISE autonomously adjusts and retrains itself based on observed states within the data center. | 10.1109/TNSM.2025.3621126 |
| Zheng Gao, Danfeng Sun, Jianyong Zhao, Huifeng Wu, Jia Wu | Cost-Minimized Data Edge Access Model for Digital Twin Using Cloud-Edge Collaboration | 2025 | Early Access | Data acquisition Cloud computing Digital twins Costs Edge computing Accuracy Optimization Data models Computational modeling Protocols Data edge access digital twin cloud-edge collaboration edge cost minimization | Industrial applications involving digital twins (e.g., behavior simulation) demand highly accurate, low-latency data, making real-time data acquisition critical. To meet performance demands, devices that do not support asynchronous communication need to acquire data at high frequency. In cloud-edge collaboration schemes, edge computing nodes typically acquire the data. However, high-frequency data acquisition and processing impose considerable costs, posing significant challenges for these resource-constrained nodes. To address this problem, we propose a model called Cost-minimized Data Edge Access (CDEA) that can dynamically minimize the edge costs while satisfying long-term performance requirements. CDEA quantifies data performance by decomposing the workflow of industrial systems into basic action units. These units are used to model data acquisition, data processing, data transmission, and cloud computing. Then, a cost minimization problem is formulated based on these components. To address irregular data changes and the general lack of available statistics on the system’s network status, the framework incorporates Lyapunov optimization to transform the long-term guarantee over data performance into a series of instantaneous decision problems. Finally, a heuristic algorithm identifies the optimal data acquisition strategy. To validate CDEA’s effectiveness, we implemented it in two representative digital twin scenarios: cathode plate stripping and AGV transportation. Experimental results demonstrate that CDEA can indeed reduce both edge costs and cloud resource consumption while still ensuring high data performance. | 10.1109/TNSM.2025.3621548 |
| Seyed Soheil Johari, Massimo Tornatore, Nashid Shahriar, Raouf Boutaba, Aladdin Saleh | Active Learning for Transformer-Based Fault Diagnosis in 5G and Beyond Mobile Networks | 2025 | Early Access | Transformers Fault diagnosis Data models Labeling Training 5G mobile communication Costs Computer architecture Active learning Complexity theory Fault Diagnosis Active Learning Transformers | As 5G and beyond mobile networks evolve, their increasing complexity necessitates advanced, automated, and data-driven fault diagnosis methods. While traditional data-driven methods falter with modern network complexities, Transformer models have proven highly effective for fault diagnosis through their efficient processing of sequential and time-series data. However, these Transformer-based methods demand substantial labeled data, which is costly to obtain. To address the lack of labeled data, we propose a novel active learning (AL) approach designed for Transformer-based fault diagnosis, tailored to the time-series nature of network data. AL reduces the need for extensive labeled datasets by iteratively selecting the most informative samples for labeling. Our AL method exploits the interpretability of Transformers, using their attention weights to create dependency graphs that represent processing patterns of data points. By formulating a one-class novelty detection problem on these graphs, we identify whether an unlabeled sample is processed differently from labeled ones in the previous training cycle and designate novel samples for expert annotation. Extensive experiments on real-world datasets show that our AL method achieves higher F1-scores than state-of-the-art AL algorithms with 50% fewer labeled samples and surpasses existing methods by up to 150% in identifying samples related to unseen fault types. | 10.1109/TNSM.2025.3622149 |
| Jingchao Tan, Tiancheng Zhang, Cheng Zhang, Chenyang Wang, Chao Qiu, Xiaofei Wang, Mohsen Guizani | Delay-Aware and Energy-Efficient Integrated Optimization System for 5G Networks | 2025 | Early Access | 5G mobile communication Delays Energy efficiency Energy consumption Optimization Quality of service Resource management Heuristic algorithms Spatiotemporal phenomena Base stations Energy Efficient Delay Aware Deep Reinforcement Learning 5G Networks | To meet the demands of high-capacity and low-delay services, Fifth Generation (5G) Base Stations (BSs) are typically deployed in ultra-dense configurations, especially in urban areas. While this densification enhances coverage and service quality, it also leads to substantially increased energy consumption. However, the dense deployment pattern makes BS workloads more responsive to the spatiotemporal variations in user behavior, offering opportunities for energy-saving strategies that dynamically adjust BS operation states. In this context, we propose a Delay-aware and Energy-efficient Integrated Optimization System (DEIS) based on Deep Reinforcement Learning (DRL), which jointly optimizes energy consumption and network delay while maintaining user satisfaction. DEIS leverages a real-world dataset collected from operational 5G BSs provided by partner network operators, containing both BS deployment data and high-volume user request logs. Extensive simulations demonstrate that DEIS can achieve a 41% reduction in energy consumption while ensuring reliable delay performance. | 10.1109/TNSM.2025.3623778 |
| Akhila Rao, Magnus Boman | Self-Supervised Pretraining for User Performance Prediction Under Scarce Data Conditions | 2025 | Early Access | Generators Training Self-supervised learning Predictive models Noise Data models Data augmentation Base stations Vectors Adaptation models user performance prediction telecom networks mobile networks machine learning self-supervised learning structured data tabular data generalizability sample efficiency | Predicting user performance at the base station in telecom networks is a critical task that can significantly benefit from advanced machine learning techniques. However, labeled data for user performance are scarce and costly to collect, while unlabeled data, consisting of base station metrics, are more readily accessible. Self-supervised learning provides a means to leverage this unlabeled data and has seen remarkable success in the domains of computer vision and natural language processing with unstructured data. Recently, these methods have been adapted to structured data as well, making them particularly relevant to the telecom domain. We apply self-supervised learning to predict user performance in telecom networks. Our results demonstrate that even with simple self-supervised approaches, the percentage of variance in the target values explained by the model in low-labeled scenarios (e.g., only 100 labeled samples) can be improved fourfold, from 15% to 60%. Moreover, to promote reproducibility and further research in the domain, we open-source a dataset creation framework and a specific dataset created from it that captures scenarios that have been deemed to be challenging for future networks. | 10.1109/TNSM.2025.3622892 |
| Abdurrahman Elmaghbub, Bechir Hamdaoui | HEEDFUL: Leveraging Sequential Transfer Learning for Robust WiFi Device Fingerprinting Amid Hardware Warm-Up Effects | 2025 | Early Access | Fingerprint recognition Radio frequency Hardware Wireless fidelity Accuracy Performance evaluation Training Wireless communication Estimation Transfer learning WiFi Device Fingerprinting Hardware Warm-up Consideration Hardware Impairment Estimation Sequential Transfer Learning Temporal-Domain Adaptation | Deep Learning-based RF fingerprinting approaches struggle to perform well in cross-domain scenarios, particularly during hardware warm-up. This often-overlooked vulnerability has been jeopardizing their reliability and their adoption in practical settings. To address this critical gap, in this work, we first dive deep into the anatomy of RF fingerprints, revealing insights into the temporal fingerprinting variations during and after hardware stabilization. Introducing HEEDFUL, a novel framework harnessing sequential transfer learning and targeted impairment estimation, we then address these challenges with remarkable consistency, eliminating blind spots even during challenging warm-up phases. Our evaluation showcases HEEDFUL’s efficacy, achieving remarkable classification accuracies of up to 96% during the initial device operation intervals, far surpassing traditional models. Furthermore, cross-day and cross-protocol assessments confirm HEEDFUL’s superiority, achieving and maintaining high accuracy during both the stable and initial warm-up phases when tested on WiFi signals. Additionally, we release WiFi type B and N RF fingerprint datasets that, for the first time, incorporate both the time-domain representation and real hardware impairments of the frames. This underscores the importance of leveraging hardware impairment data, enabling a deeper understanding of fingerprints and facilitating the development of more robust RF fingerprinting solutions. | 10.1109/TNSM.2025.3624126 |
| Giovanni Simone Sticca, Memedhe Ibrahimi, Francesco Musumeci, Nicola Di Cicco, Massimo Tornatore | Hollow-Core Fibers for Latency-Constrained and Low-Cost Edge Data Center Networks | 2025 | Early Access | Optical fiber networks Costs Optical fiber communication Data centers Optical fiber devices Optical fibers Optical attenuators Network topology Fiber nonlinear optics Throughput Hollow Core Fiber edge Data Centers Network Cost Minimization Latency-Constrained Networks | Recent advancements in Hollow-Core Fiber (HCF) production are paving the way toward ground-breaking opportunities for HCF in 6G-and-beyond applications. While Standard Single-Mode Fibers (SSMF) have been the go-to solution in optical communications for the past 50 years, HCF is expected to be a turning point in how next-generation optical networks are planned and designed. Compared to SSMF, in which the optical signal is transmitted in a silica core, in HCF the optical signal is transmitted in a hollow (i.e., air) core, significantly reducing latency (by 30%), while also decreasing attenuation (as low as 0.11 dB/km) and non-linearities. In this study, we investigate the optimal placement of HCF in latency-constrained optical networks to minimize the number of edge Data Centers (edgeDCs), while also ensuring physical-layer validation. Given the optimized placement of HCF and edgeDCs, we minimize the overall network cost in terms of transponders (TXPs) and Wavelength Selective Switches (WSSes) by optimizing the type, number, and transmission mode of TXPs, and the type and number of WSSes. We develop a Mixed Integer Nonlinear Programming (MINLP) model and a Genetic Algorithm (GA) to solve these problems. We validate the GA against the MINLP model in four synthetically generated topologies and perform extensive numerical evaluations in a realistic 25-node metro aggregation topology and a 22-node national topology. We show that by upgrading 25% of the links to HCF, we can significantly reduce the number of edgeDCs by up to 40%, while also reducing network equipment cost by up to 38%, compared to an SSMF-only network. | 10.1109/TNSM.2025.3625391 |
| Manjuluri Anil Kumar, Balaprakasa Rao Killi, Eiji Oki | Generative Adversarial Networks Based Low-Rate Denial of Service Attack Detection and Mitigation in Software-Defined Networks | 2025 | Early Access | Protocols Prevention and mitigation Real-time systems Software defined networking Generative adversarial networks Anomaly detection Denial-of-service attack TCP Routing Training LDoS SDN GAN attack detection and mitigation OpenFlow | Low-rate Denial of Service (LDoS) attacks use short, regular bursts of traffic to exploit vulnerabilities in network protocols. They are a major threat to network security, especially in Software-Defined Networking (SDN) frameworks. These attacks are challenging to detect and mitigate because of their low traffic volume, which makes them difficult to distinguish from normal traffic. We propose a real-time LDoS attack detection and mitigation framework that can protect SDN. The framework incorporates a detection module that uses a deep learning model, such as a Generative Adversarial Network (GAN), to identify the attack. An efficient mitigation module follows detection, employing mechanisms to identify and filter harmful flows in real time. Deploying the framework in SDN controllers guarantees compliance with OpenFlow standards, thereby avoiding the necessity for additional hardware. Experimental results demonstrate that the proposed system achieves a detection accuracy of over 99.98% with an average response time of 8.58 s, significantly outperforming traditional LDoS detection approaches. This study presents a scalable, real-time methodology to enhance SDN resilience against LDoS attacks. | 10.1109/TNSM.2025.3625278 |
| Anurag Dutta, Sangita Roy, Rajat Subhra Chakraborty | RISK-4-Auto: Residually Interconnected and Superimposed Kolmogorov-Arnold Networks for Automotive Network Traffic Classification | 2025 | Early Access | Telecommunication traffic Accuracy Visualization Controller area networks Intrusion detection Histograms Generative adversarial networks Convolutional neural networks Automobiles Training Controller Area Network (CAN) In-Vehicle Security Kolmogorov-Arnold Network (KAN) Network Forensics Network Traffic classification | In modern automobiles, a Controller Area Network (CAN) bus facilitates communication among all electronic control units for critical safety functions, including steering, braking, and fuel injection. However, due to the lack of security features, it may be vulnerable to malicious bus traffic-based attacks that cause the automobile to malfunction. Such malicious bus traffic can be the result of either external fabricated messages or direct injection through the on-board diagnostic port, highlighting the need for an effective intrusion detection system to efficiently identify suspicious network flows and potential intrusions. This work introduces Residually Interconnected and Superimposed Kolmogorov-Arnold Networks (RISK-4-Auto), a set of four deep neural network architectures for intrusion detection targeting in-vehicle network traffic classification. RISK-4-Auto models, when applied to three hexadecimally identifiable sequence-based open-source datasets (collected through direct injection in the on-board diagnostic port), outperform six state-of-the-art vehicular network intrusion detection systems in terms of accuracy by ≈1.0163% for all-class classification and ≈2.5535% on focused (single-class) malicious flow detection. Additionally, RISK-4-Auto enjoys a significantly lower overhead than existing state-of-the-art models, and is suitable for real-time deployment in resource-constrained automotive environments. | 10.1109/TNSM.2025.3625404 |
| Ning Zhao, Dongke Zhao, Huiyan Zhang, Yongchao Liu, Liang Zhang | Resilient Dynamic Event-Triggered Fuzzy Tracking Control for Nonlinear Systems Under Hybrid Attacks | 2025 | Early Access | Event detection Fuzzy systems Denial-of-service attack Stability analysis Nonlinear systems Communication channels Wireless networks Resists Multi-agent systems Fuzzy sets Takagi–Sugeno fuzzy systems deception attacks denial-of-service attacks tracking control resilient event-triggered strategy | This article investigates the issue of event-triggered tracking control for Takagi–Sugeno fuzzy systems subject to hybrid attacks. First, the deception attacks occurring on the feedback channel are considered using a Bernoulli process, in which an attacker injects state-dependent malicious signals. Next, the minimal ‘silent’ and maximal ‘active’ periods are defined to describe the duration of aperiodic denial-of-service (DoS) attacks. To make efficient use of communication bandwidth and resist DoS attacks, a sampled data-based resilient dynamic event-triggered strategy is designed. Then, an event-based fuzzy tracking controller is designed to guarantee the stability of the error system under hybrid attacks. Subsequently, sufficient conditions for the stability analysis are proposed by utilizing a fuzzy-basis-dependent Lyapunov-Krasovskii functional. Meanwhile, the control gains and event-triggering parameters are co-designed by applying linear matrix inequalities. Furthermore, the proposed method is extended to address the tracking control problem of multi-agent systems. Finally, the feasibility of the presented approach is validated by two examples. | 10.1109/TNSM.2025.3625395 |
| Samayveer Singh, Aruna Malik, Vikas Tyagi, Rajeev Kumar, Neeraj Kumar, Shakir Khan, Mohd Fazil | Dynamic Energy Management in Heterogeneous Sensor Networks Using Hippopotamus-Inspired Clustering | 2025 | Early Access | Wireless sensor networks Clustering algorithms Optimization Heuristic algorithms Routing Energy efficiency Protocols Scalability Genetic algorithms Batteries Internet of Things Energy efficiency Cluster Head Network-lifetime | The rapid expansion of smart technologies and IoT has made Wireless Sensor Networks (WSNs) essential for real-time applications such as industrial automation, environmental monitoring, and healthcare. Despite advances in sensor node technology, energy efficiency remains a key challenge due to the limited battery life of nodes, which often operate in remote environments. Effective clustering, where Cluster Heads (CHs) manage data aggregation and transmission, is crucial for optimizing energy use. Motivated by the above, in this paper we introduce a novel metaheuristic approach called Hippopotamus Optimization-Based Cluster Head Selection (HO-CHS), designed to enhance CH selection by dynamically considering factors such as residual energy, node location, and network topology. Inspired by natural behaviors, HO-CHS effectively balances energy loads, reduces communication distances, and boosts network scalability and reliability. The proposed scheme achieves a 35% increase in network lifetime and a 40% improvement in stability period in comparison to other existing schemes in the literature. Simulation results demonstrate that HO-CHS significantly reduces energy consumption and enhances data transmission efficiency, making it ideal for IoT-enabled consumer electronics networks requiring consistent performance and energy conservation. | 10.1109/TNSM.2025.3618766 |
| Fabian Graf, David Pauli, Michael Villnow, Thomas Watteyne | Management of 6TiSCH Networks Using CORECONF: A Clustering Use Case | 2025 | Early Access | Protocols IEEE 802.15 Standard Reliability Wireless sensor networks Runtime Wireless communication Interference Wireless fidelity Monitoring Job shop scheduling 6TiSCH CORECONF IEEE 802.15.4 Clustering | Industrial low-power wireless sensor networks demand high reliability and adaptability to cope with dynamic environments and evolving network requirements. While the 6TiSCH protocol stack provides reliable low-power communication, the CoAP Management Interface (CORECONF) for runtime management remains underutilized. In this work, we implement CORECONF and introduce clustering as a practical use case. We implement a cluster formation mechanism aligned with the Routing Protocol for Low-Power and Lossy Networks (RPL) and adjust the TSCH channel-hopping sequence within the established clusters. Two use cases are presented. First, CORECONF is used to mitigate external Wi-Fi interference by forming a cluster with a modified channel set that excludes the affected frequencies. Second, CORECONF is employed to create a priority cluster of sensor nodes that require higher reliability and reduced latency, such as those monitoring critical infrastructure in industrial settings. Simulation results show significant improvements in latency, while practical experiments demonstrate a reduction in overall network charge consumption from approximately 50 mC per hour to 23 mC per hour, by adapting the channel set within the interference-affected cluster. | 10.1109/TNSM.2025.3627112 |
| Jiaen Lv, Yifang Zhang, Shaowei Wang | VIDTRA: An Efficient and Resilient Video Preloading System | 2025 | Early Access | Videos Trajectory Codecs Predictive models Computational modeling Data models Bit rate Real-time systems Memory Atmospheric modeling Network situation map trajectory prediction video preloading | With the increasing demand for video-based applications, the clarity and fluidity of videos have garnered widespread attention. Current video playback mechanisms either allocate resources to clients with good channel quality or maintain video playback continuity at low bit rates, affecting the user viewing experience. In this work, we propose a video preloading system, namely VIDTRA, which departs from the current paradigm and explores a resilient design scheme. The central insight in VIDTRA is to utilize network situation maps and user trajectory prediction to forecast the serving cell and received signal strength, thereby determining the duration of video preloading. Considering the predictability of public transport route trajectories, our system is primarily designed to enhance the video-watching experience for users on public transport, and it encompasses functionalities such as route clustering and the identification of users’ boarding and alighting. Using real-world data collected from user equipment, we thoroughly evaluate and demonstrate the efficacy of VIDTRA. Results from the experimental evaluations show that VIDTRA can precisely estimate the future signal strength received by users and initiate video preloading before they enter areas with weak signal quality, thus reducing video interruptions while maintaining high definition. | 10.1109/TNSM.2025.3620295 |
| Shaocong Feng, Baojiang Cui, Junsong Fu, Meiyi Jiang, Shengjia Chang | Adaptive Target Device Model Identification Attack in 5G Mobile Network | 2025 | Early Access | Object recognition Adaptation models 5G mobile communication Atmospheric modeling Security Communication channels Mobile handsets Radio access networks Feature extraction Baseband 5G device model GUTI EPSFB UE capability | Enhanced system capacity is one of the goals of 5G. This will lead to a massive number of heterogeneous devices in mobile networks. Mobile devices that lack basic security capabilities have chipset, operating system, or software vulnerabilities. Attackers can perform Advanced Persistent Threat (APT) attacks against specific device models. In this paper, we propose an Adaptive Target Device Model Identification Attack (ATDMIA) that provides the prior knowledge needed to exploit baseband vulnerabilities and perform targeted attacks. We discovered a Globally Unique Temporary Identity (GUTI) reuse vulnerability in Evolved Packet Switching Fallback (EPSFB) and a User Equipment (UE) capability leakage vulnerability. Utilizing silent calls, an attacker can capture and correlate the signaling traces of the target subscriber from the air interface within a specific geographic area. In addition, we design an adaptive identification algorithm that utilizes both invisible and explicit features of UE capability information to efficiently identify device models. We conducted an empirical study using 105 commercial devices, covering network configuration, attack efficiency, time overhead, and open-world evaluation experiments. The experimental results showed that ATDMIA can accurately correlate the EPSFB signaling traces of the target victim and effectively identify the device model or manufacturer. | 10.1109/TNSM.2025.3626804 |
| Zhengge Yi, Tengyao Li, Meng Zhang, Xiaoyun Yuan, Shaoyong Du, Xiangyang Luo | An Efficient Website Fingerprinting for New Websites Emerging Based on Incremental Learning | 2025 | Early Access | Incremental learning Fingerprint recognition Data models Monitoring Accuracy Deep learning Adaptation models Training Telecommunication traffic Feature extraction Website fingerprinting Tor anonymous network traffic analysis incremental learning | Website fingerprinting (WF) attacks leverage encrypted traffic features to identify specific services accessed by users within anonymity networks such as Tor. Although existing WF methods achieve high accuracy on static datasets using deep learning techniques, they struggle in dynamic environments where anonymous websites continually evolve. These methods typically require full retraining on composite datasets, resulting in substantial computational and storage burdens, and are particularly vulnerable to classification bias caused by data imbalance and concept drift. To address these challenges, we propose EIL-WF, a dynamic WF framework based on incremental learning that enables efficient adaptation to newly emerging websites without the need for full retraining. EIL-WF incrementally trains lightweight, independent classifiers for new website classes and integrates them through classifier normalization and energy alignment strategies grounded in energy-based model theory, thereby constructing a unified and robust classification model. Comprehensive experiments on two public Tor traffic datasets demonstrate that EIL-WF outperforms existing incremental learning methods by 6.2%–20.2% in identifying new websites and reduces catastrophic forgetting by 5.4%–20%. Notably, EIL-WF exhibits strong resilience against data imbalance and concept drift, maintaining stable classification performance across evolving distributions. Furthermore, EIL-WF decreases training time during model updates by 2–3 orders of magnitude, demonstrating substantial advantages over conventional full retraining paradigms. | 10.1109/TNSM.2025.3627441 |
| Aruna Malik, Sandeep Verma, Samayveer Singh, Rajeev Kumar, Neeraj Kumar | Greylag Goose-Based Optimized Cluster Routing for IoT-Based Heterogeneous Wireless Sensor Networks | 2025 | Early Access | Wireless sensor networks Energy consumption Clustering algorithms Energy efficiency Routing Internet of Things Heuristic algorithms Sensors Genetic algorithms Throughput Internet of Things Energy efficiency Greylag Goose Optimization Cluster Head Network-lifetime | Optimization algorithms are crucial for energy-efficient routing in Internet of Things (IoT)-based Wireless Sensor Networks (WSNs) because they help minimize energy consumption, reduce communication overhead, and improve overall network performance. By optimizing the routing paths and scheduling data transmission, these algorithms can prolong network lifetime by efficiently managing the limited energy resources of sensor nodes, ensuring reliable data delivery while conserving energy. In this work, we present Greylag Goose-based Optimized Clustering (GGOC), which aids in selecting the Cluster Head (CH) using the proposed critical fitness parameters. These parameters include residual energy, sensor sensing range, distance of a candidate node from the sink, number of neighboring nodes, and energy consumption rate. Simulation analysis shows that the proposed approach improves various performance metrics, namely network lifetime, stability period, throughput, the network’s remaining energy, and the number of clusters formed. | 10.1109/TNSM.2025.3627535 |
| Ke Gu, Jiaqi Lei, Jingjing Tan, Xiong Li | A Verifiable Federated Learning Scheme With Privacy-Preserving in MCS | 2025 | Early Access | Federated learning Sensors Servers Security Training Protocols Privacy Homomorphic encryption Computational modeling Mobile computing Mobile crowd sensing Verifiable federated learning Privacy-preserving Sampling verification | The popularity of edge smart devices and the explosive growth of generated data have driven the development of mobile crowd sensing (MCS). Also, federated learning (FL), as a new paradigm of privacy-preserving distributed machine learning, integrates with MCS to offer a novel approach for processing large-scale edge device data. However, it also brings about many security risks. In this paper, we propose a verifiable, privacy-preserving federated learning scheme for mobile crowd sensing. In our federated learning scheme, a double-layer random mask partition method combined with homomorphic encryption is constructed, based on the multi-cluster structure of federated learning, to protect the local gradients and enhance system security (providing strong anti-collusion ability). Also, a sampling verification mechanism is proposed to allow the mobile sensing clients to quickly and efficiently verify the correctness of their received gradient aggregation results. Further, a dropout handling mechanism is constructed to improve the robustness of mobile crowd sensing-based federated learning. Experimental results demonstrate that our verifiable federated learning scheme is effective and efficient in mobile crowd sensing environments. | 10.1109/TNSM.2025.3627581 |
| Guiyun Liu, Hao Li, Lihao Xiong, Zhongwei Liang, Xiaojing Zhong | Attention-Model-Based Multiagent Reinforcement Learning for Combating Malware Propagation in Internet of Underwater Things | 2025 | Early Access | Malware Mathematical models Predictive models Optimal control Prediction algorithms Adaptation models Wireless communication Optimization Network topology Vehicle dynamics Internet of Underwater Things (IoUT) Malware Fractional-order model Model-Based Reinforcement Learning (MBRL) | Malware propagation in the Internet of Underwater Things (IoUT) can disrupt stable communications among wireless devices. Timely control over its spread is beneficial for the stable operation of the IoUT. Notably, the instability of the underwater environment causes the propagation effects of malware to vary continuously. Traditional control methods cannot quickly adapt to these abrupt changes. In recent years, the rapid development of reinforcement learning (RL) has significantly advanced control schemes. However, previous RL methods relied on long-term interactions to obtain a large amount of interaction data in order to form effective strategies. Given the particularity of underwater communication media, data collection for RL in the IoUT is challenging. Therefore, improving sample efficiency has become a critical issue that current RL methods need to address urgently. In this study, the Attention-Model-Based Multiagent Policy Optimization (AMBMPO) algorithm is proposed to make efficient use of data samples. First, the algorithm employs an explicit prediction model to reduce the dependence on a precise model. Second, an attention mechanism network is designed to capture high-dimensional state sequences, thereby reducing compounding errors during policy training. Finally, the proposed method is validated for optimal control problems and compared with verified benchmarks. The experimental results show that, compared with existing advanced RL algorithms, AMBMPO demonstrates significant advantages in sample efficiency and stability. This work effectively controls the spread of malware in underwater systems through an interactive, evolution-based approach. It provides a new implementation approach for ensuring the safety of underwater systems in deep-sea exploration and environmental monitoring applications. | 10.1109/TNSM.2025.3628881 |
| Hojjat Navidan, Cristian Martín, Vasilis Maglogiannis, Dries Naudts, Manuel Díaz, Ingrid Moerman, Adnan Shahid | An End-to-End Digital Twin Framework for Dynamic Traffic Analytics in O-RAN | 2025 | Early Access | Open RAN Adaptation models Real-time systems Biological system modeling 5G mobile communication Predictive models Traffic control Incremental learning Anomaly detection Data models Digital Twin Generative AI Open Radio Access Networks Incremental Learning Traffic Analytics Traffic Prediction Anomaly Detection | Dynamic traffic patterns and shifts in traffic distribution in Open Radio Access Networks (O-RAN) pose a significant challenge for real-time network optimization in 5G and beyond. Traditional traffic analytics methods struggle to remain accurate under such non-stationary conditions, where models trained on historical data quickly degrade as traffic evolves. This paper introduces AIDITA, an AI-driven Digital Twin for Traffic Analytics framework designed to solve this problem through autonomous model adaptation. AIDITA creates a digital replica of the live analytics models running in the RAN Intelligent Controller (RIC) and continuously updates them within the digital twin using incremental learning. These updates use real-time Key Performance Metrics (KPMs) from the live network, augmented with synthetic data from a Generative AI (GenAI) component to simulate diverse network scenarios. Combining GenAI-driven augmentation with incremental learning enables traffic analytics models, such as prediction or anomaly detection, to adapt continuously without the need for full retraining, preserving accuracy and efficiency in dynamic environments. Implemented and validated on a real-world 5G testbed, our AIDITA framework demonstrates significant improvements in traffic prediction and anomaly detection use cases under distribution shifts, showcasing its practical effectiveness and adaptability for real-time network optimization in O-RAN deployments. | 10.1109/TNSM.2025.3628756 |