Last updated: 2025-11-22 05:01 UTC
All documents
Number of pages: 151
| Author(s) | Title | Year | Publication | Keywords | Abstract | DOI |
|---|---|---|---|---|---|---|
| Frkei Saleh, Abraham O. Fapojuwo, Diwakar Krishnamurthy | eSlice: Elastic Inter-Slice Resource Allocation for Smart City Applications | 2025 | Early Access | Resource management Smart cities 5G mobile communication Ultra reliable low latency communication Dynamic scheduling Substrates Network slicing Heuristic algorithms Real-time systems Vehicle dynamics Inter-slice Network slicing Smart city applications Dynamic Resource allocation | Network slicing is a fundamental enabler for the advancement of fifth generation (5G) and beyond 5G (B5G) networks, offering customized service-level agreements (SLAs) for distinct slices such as enhanced mobile broadband (eMBB), massive machine-type communications (mMTC), and ultra-reliable low-latency communication (URLLC). However, smart city applications often require multiple slices concurrently, posing significant challenges in resource allocation, service isolation, and maintaining performance guarantees. This paper presents eSlice, an elastic inter-slice resource allocation mechanism specifically designed to address the dynamic requirements of smart city applications. eSlice organizes applications into hierarchical slices, leveraging cloud-native resource scaling to dynamically adapt to real-time demands. It integrates two novel algorithms: the Proactive eSlice Allocation Algorithm (PeSAA), which ensures the fair distribution of resources across the substrate network, and the Reactive eSlice Allocation Algorithm (ReSAA), which employs Multi-Agent Reinforcement Learning (MARL) to dynamically coordinate, reallocate, and recover unused resources as network conditions evolve. Experimental results demonstrate that eSlice significantly outperforms existing methods, achieving 94.3% resource utilization in simulation-based experiments under constrained urban-scale scenarios, providing a robust solution for dynamic resource management in 5G-enabled smart city networks. | 10.1109/TNSM.2025.3604352 |
| Chongxiang Yao, Chen Guo, Weiguang Zhang, Shengbo Chen | Efficient Optimization Algorithm for Virtual Backbone in Wireless Sensor Networks by Removing Redundant Dominators | 2025 | Early Access | Approximation algorithms Wireless sensor networks Optimization Upper bound Classification algorithms Redundancy Energy consumption Electronic mail Storms Simulation Virtual backbone connected dominating set redundant dominator wireless sensor network approximation algorithm | Wireless sensor networks (WSNs) often utilize virtual backbones (VBs) to optimize routing and reduce energy consumption. The effectiveness of this optimization largely depends on the size of the VB, with smaller VBs offering better performance. In WSNs, VBs are typically modeled as connected dominating sets (CDSs) within unit disk graphs (UDGs). However, existing approximation algorithms for constructing the minimum connected dominating set (MCDS) often introduce redundant dominators, leading to inflated CDSs. To tackle this issue, in this paper, we propose a general CDS optimization algorithm named OP-CDS, designed specifically to minimize redundancies. Theoretical analysis shows that the size of the optimized CDS is bounded by α∙opt+δ-k+1, where α∙opt+δ represents the upper bound of the unoptimized CDS, and k denotes the number of OP-CDS iterations. Additionally, extensive simulations demonstrate that OP-CDS can effectively optimize the CDS generated by state-of-the-art algorithms with minimal time consumption. | 10.1109/TNSM.2025.3606864 |
| Ruslan Bondaruc, Nicolas Schnepf, Rémi Badonnel, Claudio A. Ardagna, Marco Anisetti | Vulnerability-Aware Secure Service Deployment in Cloud-Edge Continuum | 2025 | Early Access | Security Cloud computing Quality of service Heuristic algorithms Software Edge computing Internet of Things Resource management Computational modeling Real-time systems Service Deployment Non-Functional Properties Edge-Cloud Continuum Vulnerability Assessment | Software weaknesses and vulnerabilities are continuously discovered and rapidly evolving. Their direct and indirect interference with business process workflow execution is neither fully understood nor addressed by the current literature. Strict control of the vulnerability footprint of the landing platform before cloud/web service workflow execution is nowadays widely used as a prevention measure to improve execution trustworthiness. Vulnerability footprint governance is exacerbated in the cloud, where a common execution platform hosting (vulnerable) services is shared between different tenants. The paper proposes a service workflow deployment solution tailored for the Edge-Cloud Continuum, which is made of different landing platforms with different peculiarities. The proposed solution is capable of finding a suitable deployment recipe for a given workflow by i) evaluating the vulnerability footprint of each platform, ii) computing the set of candidate deployment platforms, iii) finding the optimal deployment solution, and iv) migrating already deployed workflows in case the vulnerability requirement is no longer satisfied. Each workflow can be associated with a set of requirements to be satisfied by our deployment solution, such as the maximum accepted level of vulnerability footprint. Each workflow deployment contributes to the vulnerability footprint of the landing platform involved. | 10.1109/TNSM.2025.3606624 |
| Kang Liu, Yaru Fu, Guangping Xu, Wenguang Zheng, Mingyuan Ding, Yulei Wu, Tony Q.S. Quek | Backhaul Traffic-Aware Edge Caching for Recommended Content With Personalized Privacy | 2025 | Early Access | Privacy Noise Backhaul networks Servers Differential privacy Protection Recommender systems Optimization Mathematical models Hidden Markov models Differential privacy edge caching edge computing recommendation systems | Caching recommended contents at the network edge can effectively alleviate the traffic pressure of the backbone network and significantly improve user experience. However, highly personalized and precise recommendations often rely on leveraging more user request records, raising serious privacy concerns. Existing recommendation-aware edge caching mechanisms typically apply a fixed level of privacy protection, without considering the personalized privacy of users. This one-size-fits-all approach often introduces significant noise, adversely impacting the cache hit ratio (CHR). In this work, we propose a differential privacy-based edge caching framework supporting personalized privacy preservation to address these challenges. We formulate a CHR maximization problem under personalized privacy constraints and reveal the NP-completeness of the problem with a rigorous mathematical proof. Subsequently, we mathematically model the relationship between personalized privacy and user preference distortion, analyzing its impact on recommendations and user requests. To solve this problem, we introduce an efficient heuristic algorithm named the Backhaul Traffic-Aware Caching Algorithm. This algorithm utilizes backhaul traffic as a feedback signal to make accurate caching decisions, adaptively optimizing them by perceiving the impact of noise and low-quality recommendations. Extensive experiments on two typical real-world datasets validate the effectiveness of our framework, demonstrating its ability to enhance privacy protection while simultaneously improving CHR. | 10.1109/TNSM.2025.3606544 |
| Livia Elena Chatzieleftheriou, Jesús Pérez-Valero, Jorge Martín-Pérez, Pablo Serrano | Optimal Scaling and Offloading for Sustainable Provision of Reliable V2N Services in Dynamic and Static Scenarios | 2025 | Early Access | Ultra reliable low latency communication Delays Servers Costs Videos Reliability Vehicle dynamics Computational modeling Central Processing Unit Artificial intelligence Vehicle-to-Network V2N Ultra-reliable Low-Latency Communications URLLC Queueing Theory Algorithm design Optimization problem Asymptotic optimality | The rising popularity of Vehicle-to-Network (V2N) applications is driven by the Ultra-Reliable Low-Latency Communications (URLLC) service offered by 5G. Distributed resources can help manage heavy traffic from these applications, but complicate traffic routing under URLLC's strict delay requirements. In this paper, we introduce the V2N Computation Offloading and CPU Activation (V2N-COCA) problem, aiming at monetary/energetic cost minimization via computation offloading and edge/cloud CPU activation decisions, under stringent latency constraints. Some challenges are the proven non-monotonicity of the objective function and the non-existence of closed-form expressions for the sojourn time of tasks. We present a provably tight approximation for the latter, and we design BiQui, a provably asymptotically optimal and computationally efficient algorithm for the V2N-COCA problem. We then study dynamic scenarios, introducing the Swap-Prevention problem, to account for changes in the traffic load and minimize the switching on/off of CPUs without incurring extra costs. We prove the problem's structural properties and exploit them to design Min-Swap, a provably correct and computationally effective algorithm for the Swap-Prevention problem. We assess both BiQui and Min-Swap over real-world vehicular traffic traces, performing a sensitivity analysis and a stress test. Results show that (i) BiQui is near-optimal and significantly outperforms existing solutions; and (ii) Min-Swap reduces CPU swapping by ≥90% while incurring just ≤0.14% extra cost. | 10.1109/TNSM.2025.3605408 |
| Saif Eddine Khelifa, Miloud Bagaa, Oussama Bekkouche, Messaoud Ahmed Ouameur, Adlen Ksentini | Extending WebAssembly for Deep-Learning Inference Across the Cloud Continuum | 2025 | Early Access | Webassembly Runtime Hardware Performance evaluation Biological system modeling Serverless computing Pipelines Computational modeling Training Optimization Cloud Edge Continuum (CECC) WebAssembly (WASM) Cloud Computing Edge Computing Edge DL | Recent advancements in serverless computing and the cloud-edge continuum have increased interest in WebAssembly (WASM). This technology enables portability and interoperability across diverse computing environments while achieving near-native execution speeds. Currently, WASM supports Single Instruction Multiple Data (SIMD), which allows for data-level parallelism that is particularly beneficial for vectorizable operations such as general matrix-matrix multiplication (GEMM) and convolutional layers. However, WASM lacks native integration with specialized hardware accelerators like GPUs, TPUs, and NPUs, as well as the ability to benefit from multi-core processing capabilities, which are critical for efficiently running Deep-Learning (DL) workloads and for meeting the latency and throughput requirements of modern DL inference services. To bridge this gap, WASI-NN was developed, enabling WASM to integrate with external runtimes such as OpenVINO and ONNX Runtime, which leverage hardware acceleration. However, these current integrations often introduce performance overhead on certain devices, restricting their usability across the cloud-edge continuum (CECC). To address these challenges, we propose a new integration focusing on TVM as an external runtime for WASI-NN to enhance WASM’s performance and expand support to a broader range of devices. Additionally, we integrate this solution into Knative, a serverless framework, to provide a scalable and flexible platform for DL deployment. Using WASM technology, we evaluate our TVM-based solution through comparative studies. Results on AMD CPUs demonstrate the effectiveness of our approach, achieving a 58% overall gain over other WASI-NN integrations (e.g., ONNX Runtime and OpenVINO) for CNN-based models while also achieving optimal performance on different platforms, such as Intel GPUs. These findings highlight the effectiveness of our solution. | 10.1109/TNSM.2025.3606343 |
| Kai Cheng, Weidong Tang, Lintao Tan, Jing Yang, Jia Chen | SLNALog: A Log Anomaly Detection Scheme Based on Swift Layer Normalization Attention Mechanism for Next-Generation Power Communication Networks | 2025 | Early Access | Anomaly detection Semantics Smart grids Feature extraction Next generation networking Data models Maintenance Computational modeling Vectors Power system stability Log anomaly detection deep learning binary classification smart grid security | Log anomaly detection is a critical first line of defense for securing next-generation power communication networks against malicious attacks. However, in industrial settings, limited computational resources on edge devices result in long inference times for anomaly detection models, hindering the timely detection of anomalous log activities. To address these challenges, we propose SLNALog, an anomaly detection workflow centered around a Swift Layer Normalization Attention module. This module leverages linear attention to optimize the key-value interactions found in traditional attention mechanisms, thereby reducing the computational complexity of log anomaly detection, expanding the model’s receptive field for log data, and improving detection efficiency. Experimental results on the HDFS and BGL datasets demonstrate the superiority of our approach: SLNALog achieves higher accuracy, with F1-scores increasing by 0.08 and 0.04, respectively, while reducing detection time by 5.7% and 28.3%. Furthermore, the workflow incorporates an LLM-based log template analysis module and an Adapter-based model tuning module to enhance the model’s generalization in real-world scenarios. The proposed model provides an effective solution for enhancing the cybersecurity of smart grids. | 10.1109/TNSM.2025.3605764 |
| Runze Wu, Kai Wang, Haobo Guo, Yuxin Liu, Wenting Wang, Jing Liu | Joint Optimization Algorithm for Multi-Dimensional Wireless Communication Resources Adapt to Deterministic Monitoring of Power Distribution Areas | 2025 | Early Access | Ultra reliable low latency communication Resource management Wireless communication Monitoring Quality of service Optimization Power distribution Power systems Data communication Radio spectrum management progressive superposition QoS resources allocation wireless communication new power system | With the growing penetration of distributed renewable energy, it is becoming increasingly evident that monitoring services in power distribution areas are characterized by high concurrency and delay sensitivity. Correspondingly, massive amounts of differentiated data need to be collected, which dramatically increases the demand for technologies like ultra-reliable low-latency communication (URLLC) and enhanced mobile broadband (eMBB). However, due to limited wireless communication resources in power distribution areas, the major difficulty of improving monitoring certainty from the delay perspective lies in the efficient and reasonable joint allocation of multi-dimensional resources. To this end, a joint optimization algorithm for multi-dimensional wireless communication resources adapted to deterministic monitoring of power distribution areas is proposed. Firstly, the joint optimization problem for spectrum resource blocks (RBs) and power scheduling is constructed; it is then decoupled into an eMBB-RB scheduling layer, a URLLC-RB scheduling layer, and a URLLC-power allocation layer, and solved using a greedy strategy, an improved plant growth simulation algorithm, and rigorous theoretical analysis combined with block coordinate descent. The simulation results validate that the proposed algorithm swiftly and rationally allocates multi-dimensional wireless communication resources to massive and concurrent services, which satisfies the diverse QoS demands and improves the monitoring certainty of the power distribution areas. | 10.1109/TNSM.2025.3605601 |
| Bo Mi, Hangcheng Zou, Darong Huang | FedPP: Privacy-Enhanced Federated Learning for Parameter Aggregation in Heterogeneous Intelligent Connected Vehicles | 2025 | Early Access | Federated learning Training Data models Computational modeling Accuracy Homomorphic encryption Privacy Differential privacy Autonomous vehicles Reliability ICVs Federated Learning Heterogeneity Privacy Preserving Poisoning Attack Resistant | With the popularization of intelligent connected vehicles (ICVs), traffic information sources are becoming ubiquitous and diverse. Given the inherent conflict between data value extraction and privacy protection, federated learning (FL) has emerged as a powerful tool for developing application models with certain generalization capability. Although FL ensures that data remains local, the parameters used for aggregation are still vulnerable to attacks, such as reverse engineering or membership inference. Methods based on homomorphic encryption or differential privacy can alleviate this issue to some extent; however, they also lead to a reduction in training performance. Furthermore, since the data collected by ICVs generally exhibit non-independent and identically distributed (non-IID) characteristics, ensuring model reliability becomes quite challenging. This paper presents a private-parameter-based federated learning method, FedPP, which integrates a Gaussian mechanism with multi-key homomorphic encryption to prevent parameter leakage while eliminating noise disturbance. By sorting and selecting the parameters to be aggregated, this approach not only demonstrates improved generalization capability under heterogeneous conditions but also effectively resists poisoning attacks. To evaluate the model, we constructed two non-IID traffic datasets using the Dirichlet distribution, which comprises a traffic sign dataset and a vehicle image dataset generated through the DALL-E model. Theoretical analysis and experimental results demonstrate that FedPP not only meets provable security under collaborative attacks but also exhibits higher model accuracy in heterogeneous vehicular network environments. | 10.1109/TNSM.2025.3605336 |
| Lu Cao, Lin Yao, Weizhe Zhang, Yao Wang | HeavyFinder: A Lightweight Network Measurement Framework for Detecting High-Frequency Elements in Skewed Data Streams | 2025 | Early Access | Accuracy Memory management Resource management Frequency measurement Real-time systems Arrays Radio spectrum management Optimization Hash functions Data mining Network measurements highfrequency elements skewed data streams sketch heavy entries | Skewed data streams are characterized by uneven distributions in which a small fraction of elements occur with much higher frequency than others. The detection of these high-frequency elements presents significant practical challenges, particularly under stringent memory constraints, as existing detection techniques have typically relied on predefined thresholds that require significant memory usage. However, this approach is highly inefficient since not all elements require equal storage space. To address these limitations, we introduce HeavyFinder (HF), a novel lightweight network measurement architecture designed to detect high-frequency elements in skewed data. HF employs a threshold-free update strategy that enables dynamic adaptation to variable data, thereby providing greater flexibility for tracking high-frequency elements without requiring fixed thresholds. Furthermore, an included memory-light strategy enables high accuracy for non-uniform distributions, even with limited memory allocation. Experimental results showed that HF significantly improved performance in four query tasks, producing an accuracy of 99.81% when identifying the top-k elements. The average absolute error (AAE) was also reduced to 10^-4 using only 100KB of memory, which was significantly lower than that of conventional methods. | 10.1109/TNSM.2025.3604523 |
| Leyla Sadighi, Stefan Karlsson, Carlos Natalino, Marija Furdek | ML-Based State of Polarization Analysis to Detect Emerging Threats to Optical Fiber Security | 2025 | Early Access | Optical fiber networks Optical fiber cables Optical fiber polarization Optical polarization Optical transmitters Vibrations Optical receivers Eavesdropping Anomaly detection Monitoring State of Polarization (SOP) variations Machine Learning (ML) anomaly detection Semi-Supervised Learning (SSL) Unsupervised Learning (USL) One-Class Support Vector Machine (OCSVM) Density-Based Spatial Clustering of Applications with Noise (DBSCAN) | As the foundation of global communication networks, optical fibers are vulnerable to various disruptive events, including mechanical damage, such as cuts, and malicious physical layer breaches, such as eavesdropping via fiber bending. Traditional monitoring methods often fail to identify subtle or novel anomalies, stimulating the proliferation of ML techniques for detection of threats before they cause significant harm. In this paper, we evaluate the performance of SSL and USL approaches for detecting various abnormal events, such as fiber bending and vibrations, by analyzing polarization signatures with minimal reliance on labeled data. We experimentally collect thirteen polarization signatures on three different types of fiber cable and process them using OCSVM as an SSL, and DBSCAN as a USL algorithm for anomaly detection. We introduce tailored evaluation metrics designed to guide hyper-parameter tuning and capture generalization over different anomaly types, detection consistency, and robustness to false positives, enabling practical deployment of OCSVM and DBSCAN in optical fiber security. Our findings demonstrate DBSCAN as a strong contender to detect previously unseen threats in scenarios where labeled data are not available, despite some variability in performance between different scenarios, with F1 score values between 0.615 and 0.995. In contrast, OCSVM, trained on normal operating conditions, maintains high F1 scores of 0.98 to 0.998, demonstrating accurate detection of complex anomalies in optical networks. | 10.1109/TNSM.2025.3607022 |
| Cheng Long, Haoming Zhang, Zixiao Wang, Yiming Zheng, Zonghui Li | FastScheduler: Polynomial-Time Scheduling for Time-Triggered Flows in TSN | 2025 | Early Access | Job shop scheduling Network topology Dynamic scheduling Real-time systems Heuristic algorithms Delays Ethernet Schedules Deep reinforcement learning Training Time-Sensitive Network Online Scheduling Algorithm Industrial Control | Time-Sensitive Networking (TSN) has emerged as a promising network paradigm for time-critical applications, such as industrial control, where flow scheduling is crucial to ensure low latency and determinism. As production flexibility demands increase, network topology and flow requirements may change, necessitating more efficient TSN scheduling algorithms to guarantee real-time and deterministic data transmission. In this work, we present FastScheduler, a polynomial-time, deterministic TSN scheduler, which can schedule thousands of Time-Triggered (TT) flows within arbitrary network topologies. The key innovations of FastScheduler include an Equivalent Reduction Technique to simplify the generic model while preserving the feasible scheduling space, a Deterministic Heuristic Strategy to ensure a consistent and reproducible scheduling process, and a Polynomial-Time Scheduling Algorithm to perform dynamic and real-time scheduling of periodic TT flows. Extensive experiments on various topologies show that FastScheduler can effectively simplify the model, reducing variables/constraints by 35%/62%, and schedule 1,000 TT flows in subsecond time. Furthermore, it runs 2/3 orders of magnitude faster and improves the schedulability by 12%/20% compared to heuristic/deep reinforcement learning-based methods. FastScheduler is well-suited for the dynamic requirements of industrial control networks. | 10.1109/TNSM.2025.3603844 |
| Wenjun Fan, Na Fan, Junhui Zhang, Jia Liu, Yifan Dai | Securing VNDN With Multi-Indicator Intrusion Detection Approach Against the IFA Threat | 2025 | Early Access | Monitoring Prevention and mitigation Electronic mail Threat modeling Telecommunication traffic Fans Blocklists Security Road side unit Intrusion detection Interest Flooding Attack Named Data Network Network Traffic Monitoring Denial of Service Road Side Unit | On a vehicular named data network (VNDN), an Interest Flooding Attack (IFA) can exhaust computing resources by sending a large number of malicious Interest packets, which leads to the failure to satisfy legitimate requests and seriously endangers the operation of the Internet of Vehicles (IoV). To solve this problem, this paper proposes a distributed network traffic monitoring-enabled multi-indicator detection and prevention approach for VNDN to detect and resist IFA attacks. To facilitate this approach, a distributed network traffic monitoring layer based on road side units (RSUs) is constructed. With such a monitoring layer, a multi-indicator detection approach is designed, which consists of three indicators: information entropy, self-similarity, and singularity, whose thresholds are tuned according to the real-time density of traffic flow. In addition to detection, a blacklisting-based prevention approach is realized to mitigate the attack impact. We validate the proposed approach by prototyping it on our VNDN experimental platform using realistic parameter settings and leveraging the original NDN packet structure to corroborate the usage of the required Source ID for identifying the source of the Interest packet, which consolidates the practicability of the approach. The experimental results show that our multi-indicator detection approach achieves substantially higher detection performance than using the indicators individually, and the blacklisting-based prevention can effectively mitigate the attack impact as well. | 10.1109/TNSM.2025.3603630 |
| Huaide Liu, Fanqin Zhou, Yikun Zhao, Lei Feng, Zhixiang Yang, Yijing Lin, Wenjing Li | Autonomous Deployment of Aerial Base Station without Network-Side Assistance in Emergency Scenarios Based on Multi-Agent Deep Reinforcement Learning | 2025 | Early Access | Heuristic algorithms Disasters Optimization Estimation Wireless communication Collaboration Base stations Autonomous aerial vehicles Adaptation models Sensors Aerial base station deep reinforcement learning autonomous deployment emergency scenarios multi-agent systems | Aerial base stations (AeBSs) are a promising technology for providing wireless coverage to ground user equipment. Traditional methods of optimizing AeBS networks often rely on pre-known distribution models of ground user equipment. However, in practical scenarios such as natural disasters or temporary large-scale public events, the distribution of user clusters is often unknown, posing challenges for the deployment and application of AeBSs. To adapt to complex and unknown user environments, this paper studies a method of estimating global information from local observations and proposes a multi-agent autonomous AeBS deployment algorithm based on deep reinforcement learning (DRL). This method dynamically deploys AeBSs to autonomously identify hotspots by sensing user equipment signals without network-side assistance, providing a more comprehensive and intelligent solution for AeBS deployment. Simulation results indicate that our method effectively guides the autonomous deployment of AeBSs in emergency scenarios, addressing the challenge of the lack of network-side assistance. | 10.1109/TNSM.2025.3603875 |
| Menna Helmy, Alaa Awad Abdellatif, Naram Mhaisen, Amr Mohamed, Aiman Erbad | Slicing for AI: An Online Learning Framework for Network Slicing Supporting AI Services | 2025 | Early Access | Artificial intelligence Training Resource management Network slicing Computational modeling Optimization Quality of service Ultra reliable low latency communication 6G mobile communication Heuristic algorithms Network slicing online learning resource allocation 6G networks optimization | The forthcoming 6G networks will embrace a new realm of AI-driven services that requires innovative network slicing strategies, namely slicing for AI, which involves the creation of customized network slices to meet Quality of Service (QoS) requirements of diverse AI services. This poses challenges due to time-varying dynamics of users’ behavior and mobile networks. Thus, this paper proposes an online learning framework to determine the allocation of computational and communication resources to AI services, to optimize their accuracy as one of their unique key performance indicators (KPIs), while abiding by resources, learning latency, and cost constraints. We define a problem of optimizing the total accuracy while balancing conflicting KPIs, prove its NP-hardness, and propose an online learning framework for solving it in dynamic environments. We present a basic online solution and two variations employing a pre-learning elimination method for reducing the decision space to expedite the learning. Furthermore, we propose a biased decision space subset selection by incorporating prior knowledge to enhance the learning speed without compromising performance and present two alternatives of handling the selected subset. Our results depict the efficiency of the proposed solutions in converging to the optimal decisions, while reducing decision space and improving time complexity. Additionally, our solution outperforms State-of-the-Art techniques in adapting to diverse environmental dynamics and excels under varying levels of resource availability. | 10.1109/TNSM.2025.3603391 |
| Erhe Yang, Zhiwen Yu, Yao Zhang, Helei Cui, Zhaoxiang Huang, Hui Wang, Jiaju Ren, Bin Guo | Joint Semantic Extraction and Resource Optimization in Communication-Efficient UAV Crowd Sensing | 2025 | Early Access | Sensors Autonomous aerial vehicles Optimization Semantic communication Data mining Feature extraction Resource management Accuracy Data models Data communication UAV crowd sensing semantic communication multi-scale dilated fusion attention reinforcement learning | With the integration of IoT and 5G technologies, UAV crowd sensing has emerged as a promising solution to overcome the limitations of traditional Mobile Crowd Sensing (MCS) in terms of sensing coverage. As a result, UAV crowd sensing has been widely adopted across various domains. However, existing UAV crowd sensing methods often overlook the semantic information within sensing data, leading to low transmission efficiency. To address the challenges of semantic extraction and transmission optimization in UAV crowd sensing, this paper decomposes the problem into two sub-problems: semantic feature extraction and task-oriented sensing data transmission optimization. To tackle the semantic feature extraction problem, we propose a semantic communication module based on Multi-Scale Dilated Fusion Attention (MDFA), which aims to balance data compression, classification accuracy, and feature reconstruction under noisy channel conditions. For transmission optimization, we develop a reinforcement learning-based joint optimization strategy that effectively manages UAV mobility, bandwidth allocation, and semantic compression, thereby enhancing transmission efficiency and task performance. Extensive experiments conducted on real-world datasets and simulated environments demonstrate the effectiveness of the proposed method, showing significant improvements in communication efficiency and sensing performance under various conditions. | 10.1109/TNSM.2025.3603194 |
| José Santos, Bibin V. Ninan, Bruno Volckaert, Filip De Turck, Mays Al-Naday | A Comprehensive Benchmark of Flannel CNI in SDN/non-SDN Enabled Cloud-Native Environments | 2025 | Early Access | Containers Benchmark testing IP networks Microservice architectures Encapsulation Complexity theory Software defined networking Packet loss Overlay networks Network interfaces Containers Container Network Interfaces Network Function Virtualization Benchmark Cloud-Native Software-Defined Networking | The emergence of cloud computing has driven advancements in software virtualization, particularly microservice containerization. This in turn led to the development of Container Network Interfaces (CNIs) such as Flannel to connect microservices over a network. Despite their objective to provide connectivity, CNIs have not been adequately benchmarked when containers are connected over an external network. This creates uncertainty about the operational reliability of CNIs in distributed edge-cloud ecosystems. Given the multitude of available CNIs and the complexity of comparing different ones, this paper focuses on the widely adopted CNI, Flannel. It proposes the design of novel benchmarks of Flannel across external networks, Software Defined Networking (SDN)-based and non-SDN, characterizing two of the key backend types of Flannel: User Datagram Protocol (UDP) and Virtual Extensible LAN (VXLAN). Unlike existing benchmarks, this study analyses the overhead introduced by the external network and the impact of network disruptions. The paper outlines a systematic approach to benchmarking a set of Key Performance Indicators (KPIs), including speed, latency, and throughput. A variety of network disruptions have been induced to analyse their impact on these KPIs, including delay, packet loss, and packet corruption. The results show that VXLAN consistently outperforms UDP, offering superior bandwidth with efficient resource consumption, making it more suitable for production environments. In contrast, the UDP backend is suitable for real-time video streaming applications due to its higher data rate and lower jitter, though it requires higher resource utilization. Moreover, the results show less variation in KPIs over SDN, compared to non-SDN. The benchmark data are made publicly available in an open-source repository, enabling researchers to replicate the experiments and potentially extend the study to other CNIs. This work contributes to the network management domain by providing an extensive benchmark study on container networking, highlighting the main advantages and disadvantages of current technologies. | 10.1109/TNSM.2025.3602607 |
| Dániel Unyi, Ernő Rigó, Bálint Gyires-Tóth, Róbert Lovas | Explainable GNN-Based Approach to Fault Forecasting in Cloud Service Debugging | 2025 | Early Access | Debugging Microservice architectures Cloud computing Reliability Observability Computer architecture Graph neural networks Monitoring Probabilistic logic Fault diagnosis Cloud computing Software debugging Microservice architectures Deep learning Graph neural networks Explainable AI Fault prediction | Debugging cloud services is increasingly challenging due to their distributed, dynamic, and scalable nature. Traditional methods struggle to handle large state spaces and the complex interactions between microservices, making it difficult to diagnose failures and identify critical components. This paper presents a Graph Neural Network (GNN)-based approach that enhances cloud service debugging by predicting system-level fault probabilities and providing interpretable insights into failure propagation. Our method models microservice interactions as graphs, where failures propagate probabilistically. Using Markov Decision Processes (MDPs), we simulate failure behaviors, capturing the probabilistic dependencies that influence system reliability. The trained GNN not only predicts fault probabilities but also identifies the most failure-prone microservices and explains their impact. We evaluate our approach on various service mesh structures, including feature-enriched, tree-structured, and general directed acyclic graph (DAG) architectures. Results indicate that our method is effective in the operational phase of cloud services, enabling proactive debugging and targeted optimization. This work represents a step toward more interpretable, reliable, and maintainable cloud infrastructures. | 10.1109/TNSM.2025.3602223 |
| Ahan Kak, Van-Quan Pham, Huu-Trung Thieu, Nakjung Choi | HexRAN: A Programmable Approach to Open RAN Base Station System Design | 2025 | Early Access | Open RAN Base stations Protocols 3GPP Computer architecture Telemetry Cellular networks Network slicing Wireless networks Prototypes Network Architecture Cellular Systems Radio Access Networks O-RAN Network Slicing Network Programmability | In recent years, the radio access network (RAN) domain has seen significant changes with increased virtualization and softwarization, driven by the Open RAN (O-RAN) movement. However, the fundamental building block of the cellular network, i.e., the base station, remains unchanged and ill-equipped to handle this architectural evolution. In particular, there exists a general lack of programmability and composability, along with a protocol stack that grapples with the intricacies of the 3GPP and O-RAN specifications. Recognizing the need for an “O-RAN-native” approach to base station design, this paper introduces HexRAN, a novel base station architecture characterized by key features relating to RAN disaggregation and composability, 3GPP and O-RAN protocol integration and programmability, robust controller interactions, and customizable RAN slicing. Furthermore, the paper also includes a concrete systems-level prototype and comprehensive experimental evaluation of HexRAN on an over-the-air testbed. The results demonstrate that HexRAN uses only 8% more computing resources compared to the baseline, while managing twice the user plane traffic, delivering control plane processing latency of under 120μs, and achieving 100% processing reliability. This underscores the scalability and performance advantages of the proposed architecture. | 10.1109/TNSM.2025.3600587 |
| Anna Volkova, Julian Schmidhuber, Hermann de Meer, Jacek Rak | Design of Weather-Resilient Satellite-Terrestrial ICT Networks for Power Grid Communications | 2025 | Early Access | Power grids Satellites Meteorology Low earth orbit satellites Routing Space-air-ground integrated networks Power system dynamics Network topology Delays Topology Resilience satellite-terrestrial network power grid communication LEO satellite network | Hybrid satellite-terrestrial communication networks can enhance the resilience of power grid communications. Recent advancements in low-Earth orbit (LEO) satellite technologies have improved their ability to meet the communication requirements of power grid applications. However, the dynamic nature of LEO networks necessitates frequent routing updates, which can potentially disrupt the transmission of critical power grid monitoring and control data. Additionally, extreme weather events, such as severe rainfall, can impair both terrestrial and satellite communication links, posing risks to the operation of the power grid. This paper presents a two-phase methodology for reducing the need for frequent routing updates by identifying stable low-latency configurations of hybrid satellite-terrestrial communication networks for power grid applications. In the proactive phase, the deterministic dynamics of LEO satellite constellations are considered to generate a sequence of stable network configurations using fine-grained temporal snapshots and graph aggregation. The adaptive phase incorporates a dynamic regional weather model to update link capacities. A minimum-delay multi-commodity flow problem is solved to determine the best traffic distribution under given conditions. Simulation results show that hybrid networks with stable configurations can reduce network reconfiguration frequency by 92%. Compared to terrestrial-only networks, the hybrid network improves end-to-end delay by 65.5% and maintains approximately 80% connectivity even under extreme rainfall conditions. | 10.1109/TNSM.2025.3608855 |