Last updated: 2026-01-19 05:01 UTC
All documents
Number of pages: 154
| Author(s) | Title | Year | Publication | Keywords | Abstract | DOI |
|---|---|---|---|---|---|---|
| M. Gharbaoui, F. Sciarrone, M. Fontana, P. Castoldi, B. Martini | Assurance and Conflict Detection in Intent-Based Networking: A Comprehensive Survey and Insights on Standards and Open-Source Tools | 2026 | Early Access | Surveys Translation Bandwidth Real-time systems Runtime Robustness Systematic literature review Monitoring Heuristic algorithms Engines IBN Intent Assurance Conflict detection Standards Open-source IBN | Intent-Based Networking (IBN) enables operators to specify high-level outcomes while the system translates these intents into concrete policies and configurations. As IBN deployments grow in scale, heterogeneity and dynamicity, ensuring continuous alignment between network behavior and user objectives becomes both essential and increasingly difficult. This paper provides a technical survey of assurance and conflict detection techniques in IBN, with the goal of improving reliability, robustness, and policy compliance. We first position our survey with respect to existing work. We then review current assurance mechanisms, including the use of AI, machine learning, and real-time monitoring for validating intent fulfillment. We also examine conflict detection methods across the intent lifecycle, from capture to implementation. In addition, we outline relevant standardization efforts and open-source tools that support IBN adoption. Finally, we discuss key challenges, such as AI/ML integration, generalization, and scalability, and present a roadmap for future research aimed at strengthening robustness of IBN frameworks. | 10.1109/TNSM.2026.3651896 |
| Marco Polverini, Andrés García-López, Juan Luis Herrera, Santiago García-Gil, Francesco G. Lavacca, Antonio Cianfrani, Jaime Galán-Jiménez | Avoiding SDN Application Conflicts With Digital Twins: Design, Models and Proof of Concept | 2026 | Early Access | Digital twins Analytical models Routing Delays Data models Reliability Switches Software defined networking Routing protocols Reviews Network Digital Twin SDN Data Plane SLA | Software-Defined Networking (SDN) enables flexible and programmable control over network behavior through the deployment of multiple control applications. However, when these applications operate simultaneously, each pursuing different and potentially conflicting objectives, unexpected interactions may arise, leading to policy violations, performance degradation, or inefficient resource usage. This paper presents a Digital Twin (DT)-based framework for the early detection of such application-level conflicts. The proposed framework is lightweight, modular, and designed to be seamlessly integrated into real SDN controllers. It includes multiple DT models capturing different network aspects, including end-to-end delay, link congestion, reliability, and carbon emissions. A case study in a smart factory scenario demonstrates the framework’s ability to identify conflicts arising from coexisting applications with heterogeneous goals. The solution is validated through both simulation and proof-of-concept implementation tested in an emulated environment using Mininet. The performance evaluation shows that three out of four DT models achieve a precision above 90%, while the minimum recall across all models exceeds 84%. Moreover, the proof of concept confirms that what-if analyses can be executed in a few milliseconds, enabling timely and proactive conflict detection. These results demonstrate that the framework can accurately detect conflicts and deliver feedback fast enough to support timely network adaptation. | 10.1109/TNSM.2026.3652800 |
| Jian Ye, Lisi Mo, Gaolei Fei, Yunpeng Zhou, Ming Xian, Xuemeng Zhai, Guangmin Hu, Ming Liang | TopoKG: Infer Internet AS-Level Topology From Global Perspective | 2026 | Early Access | Business Topology Routing Internet Knowledge graphs Accuracy Network topology Probabilistic logic Inference algorithms Border Gateway Protocol AS-level topology business relationship hierarchical structure knowledge graph global perspective | The Internet Autonomous System (AS)-level topology comprises the AS topology structure and AS business relationships; it describes the essence of Internet inter-domain routing and is the basis for research on Internet operation and management. Although the latest topology inference methods have made significant progress, those relying solely on local information struggle to eliminate inference errors caused by observation bias and data noise due to their lack of a global perspective. In contrast, we not only leverage local AS link features but also re-examine the hierarchical structure of the Internet AS-level topology, proposing a novel inference method called TopoKG. TopoKG introduces a knowledge graph to represent the relationships between different elements on a global scale and the business routing strategies of ASes at various tiers, which effectively reduces inference errors resulting from observation bias and data noise by incorporating a global perspective. First, we construct an Internet AS-level topology knowledge graph to represent relevant data, enabling us to better leverage the global perspective and uncover the complex relationships among multiple elements. Next, we employ knowledge graph meta paths to measure the similarity of AS business routing strategies and introduce this global perspective constraint to infer the AS business relationships and hierarchical structure iteratively. Additionally, we embed the entire knowledge graph upon completing the iteration and conduct knowledge inference to derive AS business relationships. This approach captures global features and more intricate relational patterns within the knowledge graph, further enhancing the accuracy of AS-level topology inference. Compared to the state-of-the-art methods, our approach achieves more accurate AS-level topology inference, reducing the average inference error across various AS link types by a factor of 1.2 to 4.4. | 10.1109/TNSM.2026.3652956 |
| Jack Wilkie, Hanan Hindy, Craig Michie, Christos Tachtatzis, James Irvine, Robert Atkinson | A Novel Contrastive Loss for Zero-Day Network Intrusion Detection | 2026 | Early Access | Contrastive learning Anomaly detection Training Autoencoders Training data Detectors Data models Vectors Telecommunication traffic Network intrusion detection Internet of Things Network Intrusion Detection Machine Learning Contrastive Learning | Machine learning has achieved state-of-the-art results in network intrusion detection; however, its performance significantly degrades when confronted with a new attack class: a zero-day attack. In simple terms, classical machine learning-based approaches are adept at identifying attack classes on which they have been previously trained, but struggle with those not included in their training data. One approach to addressing this shortcoming is to utilise anomaly detectors which train exclusively on benign data with the goal of generalising to all attack classes, both known and zero-day. However, this comes at the expense of a prohibitively high false positive rate. This work proposes a novel contrastive loss function which is able to maintain the advantages of other contrastive learning-based approaches (robustness to imbalanced data) but can also generalise to zero-day attacks. Unlike anomaly detectors, this model learns the distributions of benign traffic using both benign and known malign samples, i.e. other well-known attack classes (not including the zero-day class), and consequently achieves significant performance improvements. The proposed approach is experimentally verified on the Lycos2017 dataset, where it achieves an AUROC improvement of 0.000065 and 0.060883 over previous models in known and zero-day attack detection, respectively. Finally, the proposed method is extended to open-set recognition, achieving OpenAUC improvements of 0.170883 over existing approaches. The implementation and experiments are open-sourced and available at: https://github.com/jackwilkie/CLOSR (An illustrative, generic contrastive-loss sketch follows the table.) | 10.1109/TNSM.2026.3652529 |
| Shagufta Henna, Upaka Rathnayake | Hypergraph Representation Learning-Based xApp for Traffic Steering in 6G O-RAN Closed-Loop Control | 2026 | Early Access | Open RAN Resource management Ultra reliable low latency communication Throughput Heuristic algorithms Computer architecture Accuracy 6G mobile communication Seals Real-time systems Open Radio Access Network (O-RAN) Intelligent Traffic Steering Link Prediction for Traffic Management | This paper addresses the challenges in resource allocation within disaggregated Radio Access Networks (RAN), particularly when dealing with Ultra-Reliable Low-Latency Communications (uRLLC), enhanced Mobile Broadband (eMBB), and Massive Machine-Type Communications (mMTC). Traditional traffic steering methods often overlook individual user demands and dynamic network conditions, while multi-connectivity further complicates resource management. To improve traffic steering, we introduce Tri-GNN-Sketch, a novel graph-based deep learning approach employing Tri-subgraph sampling to enhance link prediction in Open RAN (O-RAN) environments. Link prediction refers to accurately forecasting optimal connections between users and network resources using current and historical measurements. Tri-GNN-Sketch is trained on real-world 4G/5G RAN monitoring data. The model demonstrates robust performance across multiple metrics, including precision, recall, F1 score, and ROC-AUC, effectively modeling interfering nodes for accurate traffic steering. We further propose Tri-HyperGNN-Sketch, which extends the approach to hypergraph modeling, capturing higher-order multi-node relationships. Using link-level simulations based on Channel Quality Indicator (CQI)-to-modulation mappings and LTE transport block size specifications, we evaluate throughput and packet delay for Tri-HyperGNN-Sketch. Tri-HyperGNN-Sketch achieves an exceptional link prediction accuracy of 99.99% and improved network-level performance, including higher effective throughput and lower packet delay compared to Tri-GNN-Sketch (95.1%) and other hypergraph-based models such as HyperSAGE (91.6%) and HyperGCN (92.31%) for traffic steering in complex O-RAN deployments. | 10.1109/TNSM.2026.3654534 |
| Apurba Adhikary, Avi Deb Raha, Yu Qiao, Md. Shirajum Munir, Mrityunjoy Gain, Zhu Han, Choong Seon Hong | Age of Sensing Empowered Holographic ISAC Framework for NextG Wireless Networks: A VAE and DRL Approach | 2026 | Early Access | Array signal processing Resource management Integrated sensing and communication Wireless networks Phased arrays Hardware Arrays Real-time systems Metamaterials 6G mobile communication Integrated sensing and communication age of sensing holographic MIMO deep reinforcement learning artificial intelligence framework | This paper proposes an AI framework that leverages integrated sensing and communication (ISAC), aided by the age of sensing (AoS), to ensure timely location updates of the users in a holographic MIMO (HMIMO)-assisted base station (BS)-enabled wireless network. The AI-driven framework aims to achieve optimized power allocation for efficient beamforming by activating the minimal number of grids of the HMIMO BS for serving the users. An optimization problem is formulated to maximize the sensing utility function, aiming to maximize the communication signal-to-interference-plus-noise ratio (SINRc) of the received signals and the beam-pattern gains to improve the sensing SINR of reflected echo signals, which in turn maximizes the achievable rate of the users. A novel AI-driven framework is presented to tackle the formulated NP-hard problem by dividing it into two subproblems: a sensing problem and a power allocation problem. The sensing problem is solved by employing a variational autoencoder (VAE)-based mechanism that obtains the sensing information leveraging AoS, which is used for the location update. Subsequently, a deep deterministic policy gradient-based deep reinforcement learning scheme is devised to allocate the desired power by activating the required grids based on the sensing information obtained with the VAE-based mechanism. Simulation results demonstrate the superior performance of the proposed AI framework compared to advantage actor-critic and deep Q-network-based methods, achieving a cumulative average SINRc improvement of 8.5 dB and 10.27 dB, and a cumulative average achievable rate improvement of 21.59 bps/Hz and 4.22 bps/Hz, respectively. Therefore, our proposed AI-driven framework guarantees efficient power allocation for holographic beamforming through ISAC schemes leveraging AoS. | 10.1109/TNSM.2026.3654889 |
| Jing Zhang, Chao Luo, Rui Shao | MTG-GAN: A Masked Temporal Graph Generative Adversarial Network for Cross-Domain System Log Anomaly Detection | 2026 | Early Access | Anomaly detection Adaptation models Generative adversarial networks Feature extraction Data models Load modeling Accuracy Robustness Contrastive learning Chaos Log Anomaly Detection Generative Adversarial Networks (GANs) Temporal Data Analysis | Anomaly detection of system logs is crucial for the service management of large-scale information systems. Nowadays, log anomaly detection faces two main challenges: 1) capturing evolving temporal dependencies between log events to adaptively handle emerging anomaly patterns, and 2) maintaining high detection capability across varied data distributions. Existing methods rely heavily on domain-specific data features, making it challenging to handle the heterogeneity and temporal dynamics of log data. This limitation restricts the deployment of anomaly detection systems in practical environments. In this article, a novel framework, the Masked Temporal Graph Generative Adversarial Network (MTG-GAN), is proposed for both conventional and cross-domain log anomaly detection. The model enhances the detection capability for emerging abnormal patterns in system log data by introducing an adaptive masking mechanism that combines generative adversarial networks with graph contrastive learning. Additionally, MTG-GAN reduces dependency on specific data distributions and improves model generalization by using diffused graph adjacency information derived from the temporal relevance of event sequences, which helps improve cross-domain detection performance. Experimental results demonstrate that MTG-GAN outperforms existing methods on multiple real-world datasets in both conventional and cross-domain log anomaly detection. | 10.1109/TNSM.2026.3654642 |
| Haoran Hu, Huazhi Lun, Ya Wang, Zhifeng Deng, Jiahao Li, Yuexiang Cao, Ying Liu, Heng Zhang, Jie Tang, Huicun Yu, Jiahua Wei, Xingyu Wang, Lei Shi | Effective Resource Scheduling Design for Concurrent Competing Requests in Quantum Networks | 2026 | Early Access | Purification Quantum networks Quantum entanglement Throughput Damping Scheduling Routing Resource management Qubit Noise Quantum networks resource scheduling concurrent competing requests entanglement fidelity | Quantum networks, as a pivotal platform to support numerous quantum applications, have the potential to far exceed traditional communication networks. Establishing end-to-end entanglement connections with guaranteed fidelity is a key prerequisite for realizing the functionality of quantum networks. Entanglement purification techniques are commonly used in the entanglement distribution process to provide end-to-end entanglement connections that meet the fidelity requirements. Since the purification operation sacrifices a certain amount of entanglement resources, it is critical and challenging to efficiently utilize the scarce entanglement resources in quantum networks with concurrent competing requests. To address this problem, we propose a novel demand-oriented resource scheduling (DRS) algorithm. Considering the overall network demand, DRS introduces a congestion factor to evaluate the resource demand of each link, and performs purification operations sequentially based on the congestion level of the links, thus avoiding the excessive consumption of entanglement resources of bottleneck links. Extensive simulation results show that the DRS algorithm can achieve higher network throughput with similar resource conversion rates compared to traditional resource allocation schemes. Our work provides a new scheme for the resource scheduling problem under concurrent competing requests, which can promote the further development of existing entanglement routing techniques. | 10.1109/TNSM.2026.3651862 |
| Yeryeong Cho, Sungwon Yi, Soohyun Park | Joint Multi-Agent Reinforcement Learning and Message-Passing for Resilient Multi-UAV Networks | 2026 | Early Access | Servers Heuristic algorithms Autonomous aerial vehicles Training Surveillance Reliability Training data Reinforcement learning Resource management Resilience Multi-Agent System (MAS) Reinforcement Learning (RL) Communication Graph Message Passing Resilient Communication Network Unmanned Aerial Vehicle (UAV) UAVs Networks | This paper introduces a novel resilient algorithm designed for distributed unmanned aerial vehicles (UAVs) in dynamic and unreliable network environments. The UAVs are first trained via multi-agent reinforcement learning (MARL) for autonomous mission-critical operations, following the centralized training and decentralized execution (CTDE) paradigm with a centralized MARL server. In this situation, it is crucial to consider the case where several UAVs cannot receive the CTDE-based MARL learning parameters, so that operation remains resilient under unreliable network conditions. To tackle this issue, a communication graph is used whose edges are established when two UAVs/nodes are communicable. Then, the edge-connected UAVs can share their training data if one of the UAVs cannot be connected to the CTDE-based MARL server under unreliable network conditions. Additionally, the edge cost accounts for power efficiency. Based on this communication graph, message passing is used to elect the UAVs that provide their MARL learning parameters to their edge-connected peers. Lastly, performance evaluations demonstrate the superiority of our proposed algorithm in terms of power efficiency and resilient UAV task management, outperforming existing benchmark algorithms. | 10.1109/TNSM.2025.3650697 |
| Yilu Chen, Ye Wang, Ruonan Li, Yujia Xiao, Lichen Liu, Jinlong Li, Yan Jia, Zhaoquan Gu | TrafficAudio: Audio Representation for Lightweight Encrypted Traffic Classification in IoT | 2026 | Early Access | Feature extraction Cryptography Telecommunication traffic Accuracy Malware Vectors Spatiotemporal phenomena Security Intrusion detection Computational efficiency Encrypted traffic classification Malicious traffic detection Mel-frequency cepstral coefficients Traffic representation | Encrypted traffic classification has become a crucial task for network management and security with the widespread adoption of encrypted protocols across the Internet and the Internet of Things. However, existing methods often rely on discrete representations and complex models, which leads to incomplete feature extraction, limited fine-grained classification accuracy, and high computational costs. To this end, we propose TrafficAudio, a novel encrypted traffic classification method based on audio representation. TrafficAudio comprises three modules: audio representation generation (ARG), audio feature extraction (AFE), and spatiotemporal traffic classification (STC). Specifically, the ARG module first represents raw network traffic as audio to preserve the temporal continuity of the traffic. Then, the audio is processed by the AFE module to compute low-dimensional Mel-frequency cepstral coefficients (MFCC), encoding both temporal and spectral characteristics. Finally, spatiotemporal features are extracted from the MFCC through a parallel architecture of one-dimensional convolutional neural network and bidirectional gated recurrent unit layers, enabling fine-grained traffic classification. Experiments on five public datasets across six classification tasks demonstrate that TrafficAudio consistently outperforms ten state-of-the-art baselines, achieving accuracies of 99.74%, 98.40%, 99.76%, 99.25%, 99.77%, and 99.74%. Furthermore, TrafficAudio significantly reduces computational complexity, achieving reductions of 86.88% in floating-point operations and 43.15% in model parameters relative to the best-performing baseline. (An illustrative MFCC-extraction sketch follows the table.) | 10.1109/TNSM.2026.3651599 |
| Haiyuan Li, Yuelin Liu, Hari Madhukumar, Amin Emami, Xueqing Zhou, Yulei Wu, Xenofon Vasilakos, Shuangyi Yan, Dimitra Simeonidou | Incremental DRL-Based Resource Management for Dynamic Network Slicing in an Urban-Wide Testbed | 2026 | Vol. 23, Issue | Resource management Energy consumption Servers Network slicing Heuristic algorithms Load modeling 5G mobile communication Training Dynamic scheduling Quality of service Multi-access edge computing network slicing incremental learning MADDPG testbed deployment | Multi-access edge computing provides localized resources within mobile networks to address the requirements of emerging latency-sensitive and computing-intensive applications. At the edge, dynamic requests necessitate sophisticated resource management for adaptive network slicing. This involves optimizing resource allocations, scaling functions, and load balancing to utilize only essential resources under constrained network scenarios. However, existing solutions largely assume static slice counts, ignoring the re-optimization overhead associated with management algorithms when slices fluctuate. Moreover, many approaches rely on simplified energy models that overlook intertemporal resource scheduling and are predominantly evaluated through simulations, neglecting critical practical considerations. This paper presents an incremental cooperative Multi-Agent Deep Deterministic Policy Gradient (MADDPG) algorithm for resource management in dynamic edge slicing. The proposed approach optimizes long-term slicing benefits by reducing delay and energy consumption while minimizing retraining overhead in response to slice variations. Furthermore, we implement an urban-wide edge computing testbed based on OpenStack and Kubernetes to validate the algorithm’s performance. Experimental results demonstrate that our incremental MADDPG method outperforms benchmark strategies in aggregated slicing utility and reduces training energy consumption by up to 50% compared to the re-optimization approach. | 10.1109/TNSM.2025.3633927 |
| Giovanni Pettorru, Marco Martalò | A Persistent and Secure Publish-Subscriber Architecture for Low-Latency IoT Communications | 2026 | Vol. 23, Issue | Internet of Things Protocols Low latency communication Security HTTP Servers Telemetry TCP Standards Logic gates Internet of Things (IoT) security low latency computational complexity QUIC WebSocket (WS) Message Queuing Telemetry Transport (MQTT) | Secure and low-latency data exchange is gaining more and more attention in Internet of Things (IoT) applications. To achieve such stringent requirements, we propose to combine persistent connections and TLS session ticket resumption, as in WebSocket (WS) and QUIC, respectively. Considering the nodes of an IoT cluster as a single virtual entity, we propose to integrate an innovative network management strategy, which employs a publish-subscribe (Pub/Sub) architecture based on the Message Queuing Telemetry Transport (MQTT) protocol, for TLS session tickets sharing between cluster nodes to mitigate the session initialization latency. The proposed system is referred to as WS over QUIC and MQTT (WSQM) and its performance is experimentally assessed with IoT-compliant devices. Our results show that WSQM reduces the latency if compared with similar alternatives that rely on Transmission Control Protocol (TCP) and Transport Layer Security (TLS), as well as other QUIC-based protocols such as the HyperText Transfer Protocol version 3 (HTTP/3). Moreover, WSQM achieves minimal resource utilization in terms of percentage of RAM and CPU usage, thus highlighting its ability to meet the critical requirements of IoT applications. | 10.1109/TNSM.2025.3635212 |
| Xin Guo, Lisheng Ma, Wei Su, Xiaohong Jiang | Service Request Recovery Against Satellite Failure in Space-Terrestrial Integrated Networks | 2026 | Vol. 23, Issue | Satellites Low earth orbit satellites Orbits Relays Downlink Uplink Logic gates Satellite constellations Delays Routing Space-terrestrial integrated networks (STINs) single satellite failure service request recovery integrates recovery method | Space-Terrestrial Integrated Networks (STINs) serve as the crucial infrastructure for future 6G networks, while network failures in STINs pose serious threats to the services provided by such networks. This article focuses on service request recovery in STINs against a single satellite failure. For the uplink requests, downlink requests, and relay requests disrupted by a satellite failure, we explore both independent recovery and joint recovery. In independent recovery, where each type of request is recovered independently and sequentially, an Integer Linear Programming (ILP) model and a related heuristic are proposed to identify the optimal recovery solution for each type of request. We further explore joint recovery, where all requests are recovered jointly and simultaneously to achieve high recovery efficiency. The ILP formulation and a time-efficient heuristic are developed for the joint recovery as well. Finally, extensive numerical results are provided to demonstrate the effectiveness of the joint recovery and the proposed heuristics in service recovery under a satellite failure. | 10.1109/TNSM.2025.3617571 |
| Livia Elena Chatzieleftheriou, Jesús Pérez-Valero, Jorge Martín-Pérez, Pablo Serrano | Optimal Scaling and Offloading for Sustainable Provision of Reliable V2N Services in Dynamic and Static Scenarios | 2026 | Vol. 23, Issue | Ultra reliable low latency communication Delays Servers Costs Videos Reliability Vehicle dynamics Computational modeling Central Processing Unit Artificial intelligence Vehicle-to-network V2N ultra-reliable low-latency communications URLLC queueing theory algorithm design optimization problem asymptotic optimality | The rising popularity of Vehicle-to-Network (V2N) applications is driven by the Ultra-Reliable Low-Latency Communications (URLLC) service offered by 5G. Distributed resources can help manage heavy traffic from these applications, but complicate traffic routing under URLLC’s strict delay requirements. In this article, we introduce the V2N Computation Offloading and CPU Activation (V2N-COCA) problem, aiming at monetary/energetic cost minimization via computation offloading and edge/cloud CPU activation decisions, under stringent latency constraints. Among the challenges are the proven non-monotonicity of the objective function and the non-existence of closed-form expressions for the sojourn time of tasks. We present a provably tight approximation for the latter, and we design BiQui, a provably asymptotically optimal and computationally efficient algorithm for the V2N-COCA problem. We then study dynamic scenarios, introducing the Swap-Prevention problem to account for changes in the traffic load and to minimize the switching of CPUs on and off without incurring extra costs. We prove the problem’s structural properties and exploit them to design Min-Swap, a provably correct and computationally effective algorithm for the Swap-Prevention problem. We assess both BiQui and Min-Swap over real-world vehicular traffic traces, performing a sensitivity analysis and a stress test. Results show that (i) BiQui is near-optimal and significantly outperforms existing solutions; and (ii) Min-Swap reduces CPU swapping by $\geq 90\%$ while incurring only $\leq 0.14\%$ extra cost. | 10.1109/TNSM.2025.3605408 |
| Keke Zheng, Mai Zhang, Mimi Qian, Waiming Lau, Lin Cui | sketchPro: Identifying Top-k Items Based on Probabilistic Update on Programmable Data Plane | 2026 | Vol. 23, Issue | Accuracy Pipeline processing Hardware Telecommunication traffic Switches Probability Probabilistic logic Memory management Random access memory Pipelines Top-k items network measurement P4 programmable data plane | Detecting the top-k heaviest items in network traffic is fundamental to traffic engineering, congestion control, and security analytics. Controller-side solutions suffer from high communication latency and heavy resource overhead, motivating the migration of this task to programmable data planes (PDP). However, PDP hardware (e.g., Tofino ASIC) offers only a few megabytes of on-chip SRAM per pipeline stage and supports neither loops nor complex arithmetic, making accurate top-k detection highly challenging. This paper proposes sketchPro, a novel sketch-based solution that employs a probabilistic update scheme to retain large items, enabling accurate top-k identification on PDP with minimal memory. sketchPro dynamically adjusts the update probability based on the current statistical size of the items and the frequency of hash collisions, thus allowing it to effectively detect top-k items. We have implemented sketchPro on PDP, including a P4 software switch (i.e., BMv2) and a hardware switch (Intel Tofino ASIC). Extensive evaluation results demonstrate that sketchPro can achieve more than 95% precision with only 10KB of memory. (A toy probabilistic-update sketch follows the table.) | 10.1109/TNSM.2025.3634742 |
| Kaiyi Zhang, Changgang Zheng, Nancy Samaan, Ahmed Karmouch, Noa Zilberman | Design, Implementation, and Deployment of Multi-Task Neural Networks in Programmable Data-Planes | 2026 | Vol. 23, Issue | Multitasking Artificial neural networks Pipelines Hardware Accuracy Trees (botanical) Software Computational modeling Scalability Machine learning In-network computing P4 multi-task learning neural networks programmable data-planes | The increasing demand for real-time inference on high-volume network traffic has led to the rise of in-network machine learning, where programmable switches execute various models directly in the data-plane at line rate. Effective network management often involves multiple prediction tasks, such as predicting bit rate, flow size, or traffic class; however, existing solutions deploy separate models for each task, placing a significant burden on the data-plane and leading to substantial resource consumption when deploying multiple tasks. To address this limitation, we introduce MUTA, a novel in-network multi-task learning framework that enables concurrent inference of multiple tasks in the data-plane, without exhausting available resources. MUTA builds a multi-task neural network to share feature representations across tasks and introduces a data-plane mapping methodology to fit it within network switches. Additionally, MUTA enhances scalability by supporting distributed deployment, where different layers of a multi-task model can be offloaded across multiple switches. An orchestrator employs multi-objective optimization to determine optimal model placement in multi-path networks. MUTA is deployed on P4 hardware switches, and is shown to reduce memory requirements by $10.5\times$, while at the same time improving accuracy by up to 9.14% using limited training data, compared with state-of-the-art single-task learning solutions. | 10.1109/TNSM.2025.3629642 |
| Junbin Liang, Wenkang Li, Victor C. M. Leung | Stateful Virtual Network Function Decomposition and Deployment With Reliability Guarantee in Edge Networks | 2026 | Vol. 23, Issue | Reliability Costs Synchronization Routing Heuristic algorithms Computer network reliability Bandwidth Terminology Software reliability Servers Edge networks stateful VNFs VNF decomposition reliability cost minimization DRL | Edge Networks (ENs) are emerging networks that enable deploying multiple virtual network functions (VNFs) on resource-limited edge servers to provide users with tailored virtual network services. Decomposing a single VNF into multiple thinner replicas can enhance service reliability while inevitably incurring additional computing capacity consumption (e.g., operating system overhead caused by instantiating more replicas), which increases with the number of decomposed replicas. Moreover, redundant backup replicas can be deployed near the replicas to enhance the reliability further. However, the stateful nature of VNFs requires state synchronization among replicas and between replicas and backup replicas, resulting in additional communication traffic. In this paper, we consider a joint strategy for the decomposition and deployment of stateful VNFs with the goal of minimizing total cost while meeting users’ reliability requirements. The total cost includes the computing cost for instantiating replicas and backup replicas, the additional consumption of computing capacity due to VNF decomposition, and the communication cost for routing traffic among users, replicas, and backup replicas. We first formulate the cost minimization problem as an integer nonlinear program and prove that it is NP-hard. Then, we propose an online two-stage scheme to solve this problem, where the first stage is a VNF decomposition algorithm, and the second stage is a deployment algorithm based on deep reinforcement learning (DRL). The former effectively reduces computing cost by iteratively adjusting the number of replicas and backup replicas, while aiding the latter to adaptively minimize communication cost. Extensive experiments demonstrate that our scheme is promising compared to existing state-of-the-art methods. | 10.1109/TNSM.2025.3616185 |
| Nguyen Phuc Tran, Oscar Delgado, Brigitte Jaumard | Proactive Service Assurance in 5G and B5G Networks: A Closed-Loop Algorithm for End-to-End Network Slices | 2026 | Vol. 23, Issue | Resource management 5G mobile communication Quality of service Real-time systems Heuristic algorithms Optimization Dynamic scheduling Security Radio access networks Network slicing 5G network slice resource allocation virtualized network functions (VNFs) quality of service (QoS) proactive resource management closed-loop control dynamic scaling machine learning in 5G and B5G networks | Ensuring the highest levels of performance and reliability for customized services in fifth-generation (5G) and beyond (B5G) networks requires the automation of resource management within network slices. In this paper, we propose PCLANSA, a proactive closed-loop algorithm that dynamically allocates and scales resources to meet the demands of diverse applications in real time for an end-to-end (E2E) network slice. In our experiments, PCLANSA was shown to allocate to each virtual network function the resources it requires, thereby maximizing efficiency and minimizing waste. This goal is achieved through the intelligent scaling of virtual network functions. The benefits of PCLANSA have been demonstrated across various network slice types, including eMBB, mMTC, uRLLC, and VoIP. These findings indicate the potential for substantial gains in resource utilization and cost savings, with the possibility of reducing over-provisioning by up to 54.85%. | 10.1109/TNSM.2025.3635028 |
| Yuanpeng Zheng, Tiankui Zhang, Rong Huang, Yapeng Wang | Joint Computing Offloading and Resource Allocation for Classification Intelligence Tasks in MEC Systems | 2026 | Vol. 23, Issue | Resource management Accuracy Computational modeling Parallel processing Servers Optimization Image edge detection Delays Costs Wireless communication Computing offloading classification intelligence tasks mobile edge computing resource allocation | Mobile edge computing (MEC) facilitates high-reliability and low-latency applications by bringing computation and data storage closer to end-users. Intelligent computing is an important application of MEC, where computing resources are used to solve intelligent task-related problems based on task requirements. However, efficiently offloading computing and allocating resources for intelligent tasks in MEC systems is a challenging problem due to complex interactions between task requirements and MEC resources. To address this challenge, we investigate joint computing offloading and resource allocation for classification intelligence tasks (CITs) in MEC systems. Our goal is to maximize system utility by jointly considering computing accuracy and task delay. We focus on CITs and formulate an optimization problem that considers task characteristics, including the accuracy requirements and the parallel computing capabilities in MEC systems. To solve the proposed problem, we decompose it into three subproblems: subcarrier allocation, computing capacity allocation, and compression offloading. We use successive convex approximation and convex optimization methods to derive optimized feasible solutions for the subcarrier allocation, offloading variable, computing capacity allocation, and compression ratio. Based on our solutions, we design an efficient joint computing offloading and resource allocation algorithm for CITs in MEC systems. Our simulation demonstrates that the proposed algorithm significantly improves performance, by 16.4% on average, and achieves a flexible trade-off between system revenue and cost for CITs compared with benchmarks. | 10.1109/TNSM.2025.3632162 |
| Yan Dong, Bin Cao, Zhiyu Wang, Menglan Hu, Chao Cai, Tianyue Zheng, Kai Peng | A Joint Game-Theoretic Approach for Multicast Routing and Load Balancing in LEO Satellite Networks | 2026 | Vol. 23, Issue | Satellites Low earth orbit satellites Multicast algorithms Routing Heuristic algorithms Videos Trees (botanical) Optimization Bandwidth Steiner trees Low earth orbit satellites software defined multicast multicast routing game theory Nash equilibrium | Low Earth Orbit (LEO) satellite networks, with their low latency, high bandwidth, and global coverage, are becoming key technologies for applications like real-time video transmission. As satellite networks expand, effectively managing multicast traffic and optimizing bandwidth utilization have become major challenges for efficient video distribution. Although Software-Defined Multicast (SDM) technology has made progress in bandwidth optimization, existing SDM methods are still focused on constructing Steiner trees, making it difficult to address the dynamic changes and high-load issues in LEO satellite networks. This paper frames the multicast tree construction problem as a Joint Path Optimization Game (JPOG). We propose a Cooperative Game-theoretic Multicast Routing (CGMR) algorithm, which optimizes multicast path selection and achieves load balancing by introducing a link cost-sharing mechanism. Additionally, we propose a two-stage A* path generation algorithm to improve path search efficiency. Theoretically, this paper proves that JPOG is a potential game and can converge to a pure strategy Nash equilibrium (PSNE) within a finite number of iterations. The results show that CGMR outperforms other algorithms, achieving lower link load, lower path cost, and superior load balancing, demonstrating its effectiveness in optimizing multicast routing and resource management in large-scale LEO satellite networks. | 10.1109/TNSM.2025.3632925 |
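
The three sketches below are editorial illustrations of techniques described in abstracts above; none reproduces an author's actual implementation, and all identifiers, parameters, and design choices in them are assumptions. First, for the zero-day intrusion detection entry (Wilkie et al.): a minimal PyTorch sketch of a benign-anchored contrastive loss that pulls benign embeddings toward their centroid and pushes known-attack embeddings beyond a margin, one plausible way to realise the abstract's idea of learning benign distributions from both benign and known malign samples. The paper's actual loss is in the authors' linked repository.

```python
# Editorial illustration only: a generic benign-anchored contrastive loss,
# NOT the loss proposed in the paper (see the authors' repository for that).
import torch
import torch.nn.functional as F

def benign_anchored_contrastive_loss(embeddings, labels, margin=1.0):
    """Pull benign embeddings (label 0) toward their batch centroid; push
    known-attack embeddings (label 1) at least `margin` away from it.
    Assumes each batch contains samples of both classes."""
    benign = embeddings[labels == 0]
    malign = embeddings[labels == 1]
    center = benign.mean(dim=0)                    # benign batch centroid
    d_benign = (benign - center).norm(dim=1)       # benign: shrink distances
    d_malign = (malign - center).norm(dim=1)       # attacks: enforce margin
    loss = d_benign.pow(2).mean() + F.relu(margin - d_malign).pow(2).mean()
    return loss, center

if __name__ == "__main__":
    torch.manual_seed(0)
    z = torch.randn(64, 16)                        # embeddings from any encoder
    y = torch.randint(0, 2, (64,))                 # 0 = benign, 1 = known attack
    loss, center = benign_anchored_contrastive_loss(z, y)
    scores = (z - center).norm(dim=1)              # distance = anomaly score
    print(f"loss={loss.item():.4f}  mean score={scores.mean().item():.4f}")
```

At test time, distance to the benign centroid serves as an anomaly score, so unseen (zero-day) attacks can still be flagged if they land far from the benign cluster.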
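
Second, for the TrafficAudio entry (Chen et al.): a minimal sketch of rendering raw traffic bytes as a waveform and extracting MFCCs with librosa, in the spirit of the ARG and AFE modules. The byte-to-waveform mapping, sample rate, frame sizes, and function names are assumptions; the paper's pipeline may differ.

```python
# Editorial illustration only: one plausible byte-to-audio + MFCC pipeline in
# the spirit of TrafficAudio's ARG/AFE modules; all parameters are assumptions.
import numpy as np
import librosa

def bytes_to_waveform(payload: bytes) -> np.ndarray:
    """Map bytes (0..255) to a float waveform in [-1, 1], preserving order."""
    x = np.frombuffer(payload, dtype=np.uint8).astype(np.float32)
    return (x - 127.5) / 127.5

def traffic_mfcc(payload: bytes, sr: int = 8000, n_mfcc: int = 13) -> np.ndarray:
    """Return an (n_mfcc, n_frames) MFCC matrix for a flow's raw bytes."""
    y = bytes_to_waveform(payload)
    # Short frames keep fine-grained byte structure visible; the frame and hop
    # sizes here are illustrative, not taken from the paper.
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc,
                                n_fft=256, hop_length=64)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fake_flow = rng.integers(0, 256, 2048, dtype=np.uint8).tobytes()
    print(traffic_mfcc(fake_flow).shape)   # e.g., (13, 33)
```

The resulting (n_mfcc, n_frames) matrix is the kind of compact spatiotemporal input the abstract describes feeding into parallel 1D-CNN and BiGRU layers.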
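
Third, for the sketchPro entry (Zheng et al.): a toy Python model of probabilistic updates that retain heavy items, using a HeavyKeeper-style exponential decay in which the eviction probability shrinks as the stored count grows. It ignores all P4/Tofino constraints (no loops, no floating point on the ASIC); the decay base, table width, and class names are assumptions, not sketchPro's actual logic.

```python
# Editorial illustration only: a toy probabilistic-update counter, not
# sketchPro's actual P4 scheme.
import random

class ToyProbSketch:
    """One bucket array; each bucket stores (key, count). On a hash collision,
    the incumbent's count decays with probability decay**(-count), so heavy
    hitters are very unlikely to be evicted."""
    def __init__(self, width=1024, decay=1.08, seed=0):
        self.buckets = [(None, 0)] * width
        self.width = width
        self.decay = decay
        self.rng = random.Random(seed)

    def update(self, key):
        i = hash(key) % self.width
        k, c = self.buckets[i]
        if k is None or k == key:
            self.buckets[i] = (key, c + 1)
        elif self.rng.random() < self.decay ** (-c):
            # Probabilistic decay: the larger the stored count, the less
            # likely the incumbent shrinks; replace it once it reaches zero.
            c -= 1
            self.buckets[i] = (key, 1) if c == 0 else (k, c)

    def topk(self, k=10):
        live = [b for b in self.buckets if b[0] is not None]
        return sorted(live, key=lambda kv: kv[1], reverse=True)[:k]

if __name__ == "__main__":
    s = ToyProbSketch()
    stream = ["flowA"] * 5000 + ["flowB"] * 3000 + [f"f{i}" for i in range(4000)]
    random.Random(1).shuffle(stream)
    for item in stream:
        s.update(item)
    print(s.topk(5))  # flowA and flowB should top the list
```

On real PDP hardware, the random draw and exponential decay would likely be approximated with precomputed match-action tables rather than computed inline.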