Last updated: 2026-04-04 05:01 UTC
Number of pages: 160
| Author(s) | Title | Year | Publication | Keywords | Abstract | DOI |
|---|---|---|---|---|---|---|
| Yaru Zhao, Yuanting Yan, Man He, Yuanwei Zhu, Yi Yue, Yakun Huang | EcoPath: Energy-Efficient Multi-Path Data Aggregation for Ubiquitous Connectivity Services | 2026 | Early Access | | Ubiquitous connectivity is a key 6G usage scenario, in which large-scale sensing systems deployed in remote and underserved regions must deliver heterogeneous sensing data under stringent energy budgets and deadline constraints. This paper presents EcoPath, a two-tier data aggregation framework for clustered large-scale sensor networks. EcoPath separates low-power intra-cluster collection from a high-rate multi-interface backhaul operated by cluster heads, where Multipath QUIC (MPQUIC) can be practically deployed to exploit path diversity. At the cluster head, EcoPath jointly integrates (i) a deadline-aware bundling controller that aggregates sensor frames into MTU-bounded bundles to amortize protocol overhead while bounding additional waiting time, and (ii) a robust multi-path scheduler that prioritizes packets using Weighted Earliest-Deadline-First (W-EDF) with fairness protection and selects backhaul paths via a stability-aware quality metric with hysteresis to avoid flapping under time-varying links. We further formulate an explicit energy–timeliness optimization and show how its outputs parameterize the online bundling and scheduling policies. Extensive simulations with realistic wireless effects, together with baselines and ablations, demonstrate that EcoPath improves energy efficiency and deadline satisfaction for large-scale aggregation. | 10.1109/TNSM.2026.3680781 |
| Wangqing Luo, Jinbin Hu, Hua Sun, Pradip Kumar Sharma, Jin Wang | SALB: Security-Aware Load Balancing for Large Language Model Training in Datacenter Networks | 2026 | Early Access | Training Load management Packet loss Throughput Delays Topology Scheduling Telecommunication traffic Fluctuations Switches Datacenter Networks Load Balancing Data Security Deep Reinforcement Learning | To meet the massive compute and high-speed communication demands of Large Language Model (LLM) training, modern datacenters typically adopt multipath topologies such as Fat-Tree and Clos to host parallel jobs across hundreds to thousands of GPUs. However, LLM training exhibits periodic, high-bandwidth communication patterns. Existing load-balancing schemes become misaligned under dynamic congestion and anomalous surges: they struggle to promptly mitigate iteration-peak congestion and lack effective isolation of anomalous traffic. To address this, we propose Security-Aware Load Balancing (SALB) for LLM training. SALB leverages a Deep Reinforcement Learning (DRL) controller with queue and delay signals for packet-level multipath load balancing and employs path binding to confine suspicious flows. By integrating data security into load balancing, SALB simultaneously achieves high throughput and robust traffic isolation. NS-3 simulation results show that, compared with CONGA, Hermes, and ConWeave, SALB reduces the 99th-percentile flow completion time (FCT) of short flows by an average of 65% and increases the throughput of long flows by an average of 54%. It further outperforms the baselines in aggregate throughput, path utilization, and packet loss rate, thereby significantly enhancing system stability, robustness, and data security. | 10.1109/TNSM.2026.3678979 |
| Kang Liu, Jianchen Hu, Donglai Ma, Xiaoyu Cao, Yuzhou Zhou, Lei Zhu, Li Su, Wenli Zhou, Xueqi Wu, Feng Gao | Topology-Aware Virtual Machine Placement through the Buffer Migration Mechanism | 2026 | Early Access | Central Processing Unit Filtering Filters Electronic circuits Circuits Circuits and systems Feedback Cloud computing Radio access networks Regional area networks Buffer management Optimization Topology-aware VM Placement | Virtual machine (VM) placement under topology constraints is difficult because topology-constrained VMs impose additional structural requirements (including affinity, anti-affinity, and fault-domain constraints) on the resource pool. Thus, the service level agreement (SLA) can be violated even when the occupancy of the resource pool is quite modest. To solve this problem, we propose an efficient buffer-migration-based heuristic online algorithm. First, we build an integer programming model for the topology-aware VM placement problem. Second, we propose a hierarchical resource-preserving online approach, where the Rack and physical machine (PM) nodes are selected in the upper and lower layers respectively. Finally, we utilize the buffer to place and migrate the unfitted VMs to enhance the capacity of the resource pool. The proposed approach is tested with a high proportion of topological VM requests (nearly 60%) in resource pools of 500, 1000, and 1500 PMs. The results show that our online approach (with unknown upcoming VM information) can achieve more than 85% of the performance of the offline approach (with complete upcoming VM information). The latency is lower than 5 ms per VM. | 10.1109/TNSM.2026.3678976 |
| Jing Zhang, Chao Luo, Rui Shao | MTG-GAN: A Masked Temporal Graph Generative Adversarial Network for Cross-Domain System Log Anomaly Detection | 2026 | Early Access | Anomaly detection Adaptation models Generative adversarial networks Feature extraction Data models Load modeling Accuracy Robustness Contrastive learning Chaos Log Anomaly Detection Generative Adversarial Networks (GANs) Temporal Data Analysis | Anomaly detection of system logs is crucial for the service management of large-scale information systems. Nowadays, log anomaly detection faces two main challenges: 1) capturing evolving temporal dependencies between log events to adaptively tackle emerging anomaly patterns, and 2) maintaining high detection capability across various data distributions. Existing methods rely heavily on domain-specific data features, making it challenging to handle the heterogeneity and temporal dynamics of log data. This limitation restricts the deployment of anomaly detection systems in practical environments. In this article, a novel framework, Masked Temporal Graph Generative Adversarial Network (MTG-GAN), is proposed for both conventional and cross-domain log anomaly detection. The model enhances the detection capability for emerging abnormal patterns in system log data by introducing an adaptive masking mechanism that combines generative adversarial networks with graph contrastive learning. Additionally, MTG-GAN reduces dependency on specific data distributions and improves model generalization by using diffused graph adjacency information derived from the temporal relevance of event sequences, which improves cross-domain detection performance. Experimental results demonstrate that MTG-GAN outperforms existing methods on multiple real-world datasets in both conventional and cross-domain log anomaly detection. | 10.1109/TNSM.2026.3654642 |
| Basharat Ali, Guihai Chen | MIRAGE-DoH: Metamorphic Intelligence and Resilient AI Grid for Autonomous Governance of Encrypted DNS | 2026 | Early Access | Cryptography Domain Name System Fingerprint recognition Accuracy Metadata Artificial intelligence Software Perturbation methods Network security Monitoring Network Security Network Protocol Enhancing Encrypted Network Security Cyber Threats Detection Anomaly Detection Attack Detection Traffic Classification Quantum ML in Encrypted DNS | Existing DNS over HTTPS defenses have demonstrated limited resilience against polymorphic traffic shaping, staged tunneling, and adaptive mimicry, largely because they rely on static learning pipelines and rigid cryptographic configurations. MIRAGE-DoH was designed to examine whether adaptive inference, persistent structural encoding, and calibrated cryptographic agility could be integrated into a deployable and measurable encrypted DNS control architecture. The framework combined flow-level Cognitive MetaAgents capable of internal reconfiguration, Topological Memory Networks that preserved stable geometric irregularities across temporal windows, and Metamorphic Cryptographic Shards that adjusted key encapsulation policies according to empirically calibrated threat severity. A Causal Counterfactual Environment modeled constrained attacker decision pathways, while Spectral Game Intelligence analyzed flow interaction graphs to anticipate structural attack transitions. Evaluation on extended CIC-DoH2023 and Gen-C-DDD-2022 datasets was conducted under fixed flow-level decision intervals with explicit accounting for synchronization overhead, spectral graph construction cost, and cryptographic rotation latency. Cross-dataset experiments yielded a mean detection accuracy of 97.8% with a 0.41% false positive rate, sustaining median inference latency of 62 μs and cryptographic morph latency of 3.7 ms under load. Quantum-assisted inference was assessed through bounded simulations, indicating constrained information gain within the adopted lattice-based configuration, without asserting unconditional post-quantum immunity. These results demonstrated that adaptive encrypted DNS governance can be empirically grounded, operationally bounded, and stress-evaluated without reliance on unqualified claims of perfect security. | 10.1109/TNSM.2026.3677474 |
| Quang-Trung Luu, Minh-Thanh Nguyen, Tuan-Anh Do, Michel Kieffer, Tai Hung Nguyen, Huu Thanh Nguyen, Van-Dinh Nguyen | Network Slicing with Flexible VNF Order: A Branch-and-Bound Approach | 2026 | Early Access | | Network slicing is a critical feature in 5G and beyond communication systems, enabling the creation of multiple virtual networks (i.e., slices) on a shared physical network infrastructure. This involves efficiently mapping each slice component, including virtual network functions (VNFs) and their interconnections (virtual links), onto the physical network. This paper considers the slice embedding problem in which the order of VNFs can be adjusted. This provides increased flexibility for service deployment, but the selection of the best order of VNFs also complicates embedding. We propose an optimization framework to tackle the challenges of jointly optimizing slice admission control and embedding with flexible VNF ordering. Additionally, we introduce a near-optimal branch-and-bound (BnB) algorithm, combined with the A* search algorithm, to generate embedding solutions efficiently. Extensive simulations on both small and large-scale multi-tiered 5G networks demonstrate that flexible VNF ordering increases the number of deployable slices within a network infrastructure, thereby improving resource utilization and better meeting diverse demands across varied network topologies. | 10.1109/TNSM.2026.3680673 |
| Zaifeng Lin, Xia Gong, Yongqing Zhu, Bing Yang, Ke Ruan | PoD-BNG for Mega-Metropolitan Networks: Reliable and Scalable Broadband Network Gateway Evolution | 2026 | Early Access | | Given the wide range of growing online commerce, reliable broadband access has become essential to connecting households and businesses. In the past twenty years, several solutions have been developed for this purpose, from BNG to vBNG and, most recently, to BNG-CUPS. However, existing solutions suffer from constrained scalability, inefficient hardware utilization, and an absence of robust fault tolerance. Consequently, these deficiencies have significantly hindered the deployment of carrier-grade access networks in densely populated areas. In response, we propose the Pooled Disaggregated Broadband Network Gateway (PoD-BNG), an architecture featuring a shared user plane resource pool and an integrated warm standby topology for seamless failover. We validate its performance using a probabilistic model parameterized with real-world operational data. Our analysis shows that a "3+1" standby topology offers the optimal balance of cost and resilience, increasing hardware utilization by 45% and reducing operational and energy costs by 70% compared to legacy BNGs. Furthermore, the architecture is fundamentally fault-resistant, with the probability of a catastrophic outage being less than one in a billion. The success of PoD-BNG is validated by its large-scale deployment since 2020, serving over 46 million households by the end of 2024, establishing it as a superior solution for next-generation access networks. | 10.1109/TNSM.2026.3680529 |
| Hasanin Harkous, Ahan Kak, Alistair Urie, Heiko Straulino, Huanzhuo Wu, Huu-Trung Thieu, Nakjung Choi | Flat UP: A Converged RAN-Core Architecture for the 6G User Plane | 2026 | Early Access | | The ongoing industry shift toward Radio Access Network (RAN) disaggregation, virtualization, and cloudification has disrupted the conventional hierarchical design of cellular networks and opened the door to greater convergence between the RAN and core domains. Despite this progress, implementing such converged architectures in practice presents numerous challenges, including those related to protocol and architectural design, quality-of-service (QoS) assurance, control plane configuration, and support for emerging 6G-specific use cases. To address these challenges, this article presents the flat User Plane (UP) architecture, a novel framework for RAN-core convergence centered around a new 6G-native component: the Access User Plane Function (AUPF). The article outlines the key innovations in the newly proposed flat user plane architecture, including protocol- and feature-level design evolutions as well as enhancements to QoS provisioning. It then explores various counterpart Control Plane (CP) architectures, analyzing the impact of the new design on different 3GPP CP procedures. A concrete, system-level prototype implementation of the AUPF is developed, accompanied by a comprehensive over-the-air evaluation to assess both fundamental network performance metrics and user plane Quality of Experience (QoE). Additionally, multiple deployment models are examined to quantify the CP signaling overhead associated with different architectural options. The results demonstrate that the proposed flat UP architecture not only improves throughput, latency performance, and QoE but also reduces overall compute resource utilization when compared to the conventional hierarchical 5G user plane. The CP evaluation further provides practical insights and guidelines for real-world deployment scenarios. | 10.1109/TNSM.2026.3680720 |
| Amin Mohajer, Abbas Mirzaei, Mostafa Darabi, Xavier Fernando | Joint SLA-Aware Task Offloading and Adaptive Service Orchestration with Graph-Attentive Multi-Agent Reinforcement Learning | 2026 | Early Access | Quality of service Resource management Observability Training Delays Job shop scheduling Dynamic scheduling Bandwidth Vehicle dynamics Thermal stability Edge intelligence network slicing QoS-aware scheduling graph attention networks adaptive resource allocation | Coordinated service offloading is essential to meet Quality-of-Service (QoS) targets under non-stationary edge traffic. Yet conventional schedulers lack dynamic prioritization, causing deadline violations for delay-sensitive, lower-priority flows. We present PRONTO, a multi-agent framework with centralized training and decentralized execution (CTDE) that jointly optimizes SLA-aware offloading and adaptive service orchestration. PRONTO builds on Twin Delayed Deep Deterministic Policy Gradient (TD3) and incorporates spatiotemporal, topology-aware graph attention with top-K masking and temperature scaling to encode neighborhood influence at linear coordination cost. Gated Recurrent Units (GRUs) filter temporal features, while a hybrid reward couples task urgency, SLA satisfaction, and utilization costs. A priority-aware slicing policy divides bandwidth and compute between latency-critical and throughput-oriented flows. To improve robustness, we employ stability regularizers (temporal smoothing and confidence-weighted neighbor alignment), mitigating action jitter under bursts. Extensive evaluations show superior QoS and channel utilization, with up to 27.4% lower service delay and over 18% higher SLA Satisfaction Rate (SSR) compared with strong baselines. | 10.1109/TNSM.2026.3673188 |
| Yang Lu, Yuhong Zhang, Yan Zheng, Ruifeng Zhu, Wei Xiang | Adaptive Gossip-Enhanced SIR Models for Real-Time Routing Optimization and Fault Tolerance in Distributed Networks | 2026 | Early Access | | Large-scale distributed communication networks have been widely recognized as the backbone for efficient data dissemination in cloud and IoT systems, serving as the foundation for dynamic routing, load balancing, and synchronization. In the specific scenario of real-time routing, traditional static approaches fail to adapt to variable communication delays, congestion, and node failures, resulting in high average end-to-end delay and suboptimal performance. In this paper, we propose the adaptive routing and fault tolerance protocol (ARFTP), a unified framework that leverages adaptive Susceptible-Infectious-Recovered (SIR) model applications and dynamic gossip algorithms to minimize the average end-to-end delay while ensuring load balancing and fault tolerance in large-scale distributed networks. Our approach employs an adaptive weight mechanism that continuously updates routing decisions based on real-time congestion and delay feedback, integrating recursive load feedback and a backup routing strategy to achieve efficient information dissemination. Experimental results demonstrate that the ARFTP reduces the average end-to-end delay to 0.15 s, increases throughput to 0.92, and significantly improves load balancing and fault tolerance compared to conventional static routing methods. | 10.1109/TNSM.2026.3680838 |
| Yuchen Wu, Muyu Mei, Li Feng, Jiangtao Wang, Chunhui Feng, Xu Bao, Mingwu Yao | Deep Reinforcement Learning-based Cluster Selection for Network-layer Performance Guarantee in Federated Learning | 2026 | Early Access | | Federated learning (FL) is a privacy-preserving technique that enables local model training on devices without raw data sharing. However, a critical challenge in FL lies in the communication requirement of uploading the trained models to servers, which can be hindered by interference from ambient devices, particularly in unreliable wireless environments. To address this, hierarchical FL (HFL) introduces an additional intermediate layer where the edge server performs model aggregation from nearby devices, aiming to reduce the communication load and improve the efficiency of model training. However, existing approaches suffer from two critical limitations. First, they fail to fully quantify the impact of device competition-induced interference on transmission performance, which leads to unacceptably high upload latency and low success upload probability (SUP). Second, they lack a targeted optimization strategy to balance model accuracy and transmission efficiency under dynamic interference conditions. To address these critical limitations and mitigate their adverse impacts on FL performance, we take these gaps as the core motivation of our work and propose a targeted solution. Specifically, we first model the network as a two-layer binomial point process (BPP), which allows us to analyze the network-layer performance and calculate the SUP for the trained model. Based on this model, we propose optimizing cluster selection to balance accuracy and latency, thereby enhancing overall FL performance. We formulate this optimization as a Markov decision process (MDP) and solve it using a twin-delayed deep deterministic policy gradient (TD3)-based cluster selection algorithm (CS-TD3). In addition, to guarantee network-layer performance and enhance the efficiency of HFL, we employ an experimental exhaustive search algorithm to find the best solution within a limited range. The experimental results show that our algorithm outperforms other commonly used algorithms in terms of HFL accuracy and model transmission latency, achieving a 10.95% improvement over the other methods. | 10.1109/TNSM.2026.3680350 |
| Yifei Xie, Zhi Lin, Kefeng Guo, Ruiqian Ma, Hussam Al Hamadi, Fatima Asiri, Ahlam Almusharraf | Lightweight Learning for Symbiotic Secure and Efficient ISAC in RIS-assisted Intelligent Transportation Networks | 2026 | Early Access | | Achieving real-time processing in integrated sensing and communication (ISAC) systems presents significant challenges due to the high computational burden of conventional optimization methods, particularly within intelligent transportation networks (ITN). This paper addresses these challenges by proposing lightweight supervised and unsupervised deep learning (DL) algorithms, respectively for quasi-static and dynamic environments, aiming to improve the secrecy energy efficiency (SEE) of ITN under the constraints of the Cramér-Rao bound (CRB) for direction-of-arrival (DOA) estimation and the transmission rate of each user. By jointly optimizing power allocation and reconfigurable intelligent surface (RIS) phase shifts, the framework ensures robust physical layer security (PLS) alongside communication efficiency, aligning with defense-in-depth strategies for securing next-generation ITN. For quasi-static environments, a supervised deep neural network (DNN) algorithm leverages offline codebook-generated labels to achieve near-optimal channel state information (CSI) mapping, explicitly minimizing signal leakage to eavesdroppers. In dynamic scenarios, an unsupervised channel attention mechanism-based residual network (CAM-ResNet) eliminates labeling overhead through direct physics-informed SEE optimization with adaptive constraint enforcement, enabling real-time adaptation to rapidly varying channels and evolving security threats. Simulation results demonstrate that both algorithms achieve SEE performance comparable to the zero-forcing (ZF) method while significantly reducing computational complexity, with the CAM-ResNet demonstrating superior resilience to dynamic security threats. This work contributes to advancing secure and efficient ISAC solutions, reinforcing multi-layered defense mechanisms critical for future ITN. | 10.1109/TNSM.2026.3679370 |
| Raffaele Carillo, Francesco Cerasuolo, Giampaolo Bovenzi, Domenico Ciuonzo, Antonio Pescapé | A Federated and Incremental Network Intrusion Detection System for IoT Emerging Threats | 2026 | Early Access | Training Incremental learning Adaptation models Internet of Things Convolutional neural networks Reviews Payloads Network intrusion detection Long short term memory Federated learning Network Intrusion Detection Systems Internet of Things Federated Learning Class Incremental Learning 0-day attacks | Ensuring network security is increasingly challenging, especially in the Internet of Things (IoT) domain, where threats are diverse, rapidly evolving, and often device-specific. Hence, Network Intrusion Detection Systems (NIDSs) require (i) being trained on network traffic gathered at different collection points to cover the attack traffic heterogeneity, (ii) continuously learning emerging threats (viz., 0-day attacks), and (iii) being able to take attack countermeasures as soon as possible. In this work, we aim to improve Artificial Intelligence (AI)-based NIDS design and maintenance by integrating Federated Learning (FL) and Class Incremental Learning (CIL). Specifically, we devise a Federated Class Incremental Learning (FCIL) framework, suited for early-detection settings, that supports decentralized and continual model updates, investigating the non-trivial intersection of FL algorithms with state-of-the-art CIL techniques to enable scalable, privacy-preserving training in highly non-IID environments. We evaluate FCIL on three IoT datasets across different client scenarios to assess its ability to learn new threats and retain prior knowledge. The experiments assess potential key challenges in generalization and few-sample training, and compare NIDS performance to monolithic and centralized baselines. | 10.1109/TNSM.2026.3675031 |
| Xiaolong Wang, Haipeng Yao, Lin Zhu, Wenji He, Wei Zhang, Mohsen Guizani | Joint Optimization of Routing and Scheduling in Cross-Domain Deterministic Networks | 2026 | Early Access | | Industrial Internet applications require networks to guarantee deterministic end-to-end latency and zero packet loss at both the data link and network layers. Traditional best-effort communication models in consumer networks are insufficient for these requirements. To meet these stringent demands, the IEEE 802.1 standards introduce Time-Sensitive Networking (TSN) at the data link layer, while the IETF proposes Deterministic Networking (DetNet) for the network layer. However, enabling seamless cross-domain communication between TSN and DetNet remains a significant challenge. This paper proposes a unified cross-domain network architecture and a time-slot alignment strategy that compensates for synchronization errors between the TSN and DetNet layers. We further develop a Joint Routing and Scheduling algorithm for Deterministic Cross-Domain Transmission (JRS-DCT), which simultaneously addresses routing and scheduling under cross-domain constraints. The algorithm leverages Cycle-Specified Queuing and Forwarding (CSQF) in DetNet and Cycle Queuing and Forwarding (CQF) in TSN to ensure bounded latency and deterministic transmission. Extensive simulations demonstrate that the proposed JRS-DCT algorithm significantly improves the scheduling success rate and effectively reduces network resource utilization compared to two baseline algorithms. These results validate the effectiveness and robustness of the proposed framework in supporting time-sensitive communication across heterogeneous network environments. | 10.1109/TNSM.2026.3679810 |
| Julien Ali El Amine, Nour El Houda Nouar, Olivier Brun | Online Network Slice Deployment across Multiple Domains under Trust Constraints | 2026 | Early Access | | Network slicing across multiple administrative domains raises two coupled challenges: enforcing slice-specific trust constraints while enabling fast online admission and placement decisions. This paper considers a multi-domain infrastructure where each slice request specifies a VNF chain, resource demands, and a set of (un)trusted operators, and formulates the problem as a Node–Link (NL) integer program to obtain an optimal benchmark, before proposing a Path–Link (PL) formulation that pre-generates trust and order-compliant candidate paths to enable real-time operation. To mitigate congestion, resource prices are made dynamic using a Kleinrock congestion function, which inflates marginal costs as utilization approaches capacity, steering traffic away from hotspots. Extensive simulations across different congestion levels and slice types show that: (i) PL closely tracks NL with negligible gaps at low load and moderate gaps otherwise, (ii) dynamic pricing significantly reduces blocking under scarce resources, and (iii) PL reduces computation time by about 3×–6× compared to NL, remaining within a few seconds even at high load. These results demonstrate that the proposed PL and dynamic pricing framework achieves near-optimal performance with practical runtime for online multi-domain slicing under trust constraints. | 10.1109/TNSM.2026.3679794 |
| Luyao Jiang, Xinguo Ming, Mengli Wei | Privacy Decentralized Online Federated Learning for Smart Healthcare Service Systems | 2026 | Early Access | | The Smart Healthcare Service Systems (SHSS) aim to integrate decentralized healthcare institutions, intelligent technologies, and end users into a cyber-physical system that enables high-quality medical decision-making. However, the sensitive nature of healthcare data presents significant privacy and security challenges, which hinder effective collaboration among healthcare providers. Moreover, existing research lacks a comprehensive theoretical framework that spans the full pipeline from data acquisition to intelligent decision services. To address these challenges, we propose a theoretical framework for SHSS that systematically analyzes data processing and user demand to establish the goals of secure, stable, and adaptive collaborative learning. Guided by these goals, a Decentralized Online Federated Learning (DOFL) network model is tailored for SHSS, where participating institutions interact through a decentralized federated learning structure. Building on this model, we design DP-DOOR (Differentially Private Decentralized Online Federated Learning with One-Point Residual Feedback), a fully decentralized algorithm that supports row-stochastic communication topologies, accommodating practical limitations where bidirectional synchronization is often infeasible. DP-DOOR ensures data privacy through differential privacy (DP) mechanisms and achieves efficient gradient estimation using a one-point residual feedback (OPRF) approach. Theoretical analysis shows that DP-DOOR provides ϵ-DP guarantees and achieves sub-linear regret. Experimental evaluations on diverse real-world medical datasets under both IID and non-IID settings demonstrate the algorithm's robustness and effectiveness in enabling secure, decentralized collaboration and enhancing adaptability in dynamic healthcare environments. | 10.1109/TNSM.2026.3680310 |
| Deemah H. Tashman, Soumaya Cherkaoui | Trustworthy AI-Driven Dynamic Hybrid RIS: Joint Optimization and Reward Poisoning-Resilient Control in Cognitive MISO Networks | 2026 | Early Access | Reconfigurable intelligent surfaces Reliability Optimization Security MISO Array signal processing Vectors Satellites Reflection Interference Beamforming cascaded channels cognitive radio networks deep reinforcement learning dynamic hybrid reconfigurable intelligent surfaces energy harvesting poisoning attacks | Cognitive radio networks (CRNs) are a key mechanism for alleviating spectrum scarcity by enabling secondary users (SUs) to opportunistically access licensed frequency bands without harmful interference to primary users (PUs). To address unreliable direct SU links and energy constraints common in next-generation wireless networks, this work introduces an adaptive, energy-aware hybrid reconfigurable intelligent surface (RIS) for underlay multiple-input single-output (MISO) CRNs. Distinct from prior approaches relying on static RIS architectures, our proposed RIS dynamically alternates between passive and active operation modes in real time according to harvested energy availability. We also model our scenario under practical hardware impairments and cascaded fading channels. We formulate and solve a joint transmit beamforming and RIS phase optimization problem via the soft actor-critic (SAC) deep reinforcement learning (DRL) method, leveraging its robustness in continuous and highly dynamic environments. Notably, we conduct the first systematic study of reward poisoning attacks on DRL agents in RIS-enhanced CRNs, and propose a lightweight, real-time defense based on reward clipping and statistical anomaly filtering. Numerical results demonstrate that the SAC-based approach consistently outperforms established DRL baselines, and that the dynamic hybrid RIS strikes a superior trade-off between throughput and energy consumption compared to fully passive and fully active alternatives. We further show the effectiveness of our defense in maintaining SU performance even under adversarial conditions. Our results advance the practical and secure deployment of RIS-assisted CRNs, and highlight crucial design insights for energy-constrained wireless systems. | 10.1109/TNSM.2026.3660728 |
| Xinyue Zhang, Xuan Zhou, Jie Ma, Zeqi Li, Feng He | Interference-Aware Multi-Metric Delay Evaluation and Optimization for Switched Networks | 2026 | Early Access | | Switched networks are essential to modern real-time systems, where packet delays must be tightly bounded with minimal variation. Traditional delay analysis often focuses on worst-case bounds, but may overlook delay jitter induced by fine-grained inter-flow interference, which can degrade real-time performance and stability. Existing routing schemes typically rely on proxy indicators such as link load or path length, offering limited explicit control over delay and jitter behavior. To address these limitations, we propose an interference-aware delay evaluation and optimization framework that models the encounter interval and magnitude of flow interference at the packet level. From this, we derive worst-case delay, average delay, and delay jitter, and integrate these metrics into a unified, tunable optimization objective. We design a K-shortest-path genetic algorithm to jointly reduce them. Experimental results over multiple traffic loads demonstrate consistent improvements in delay and jitter performance, indicating that the proposed approach is scalable and practical for delay-sensitive and stability-critical switched networks. | 10.1109/TNSM.2026.3680250 |
| Yang Liu, Wenjun Zhu, Harry Chang, Yang Hong, Geoff Langdale, Kun Qiu, Jin Zhao | Hyperflex: A SIMD-Based DFA Model for Deep Packet Inspection | 2026 | Vol. 23, Issue | Single instruction multiple data Vectors Engines Automata Inspection Payloads Throughput Memory management Compression algorithms Software algorithms Deep packet inspection regular expression deterministic finite automata | Deep Packet Inspection (DPI) has been extensively employed for network security. It examines traffic payloads by searching for regular expressions (regexes) with the Deterministic Finite Automaton (DFA) model. However, as network bandwidth and ruleset sizes increase rapidly, the conventional DFA model has emerged as a significant performance bottleneck for DPI. Leveraging Single-Instruction-Multiple-Data (SIMD) instructions to perform state transitions can substantially boost the efficiency of the DFA model. In this paper, we propose Hyperflex, a novel SIMD-based DFA model designed for high-performance regex matching. Hyperflex incorporates a region detection algorithm to identify regions across the whole DFA graph that are suitable for acceleration by SIMD instructions. We also design a hybrid state transition algorithm that enables state transitions in both SIMD-accelerated and normal regions, and ensures seamless state transitions across the two types of regions. We have implemented Hyperflex on a commodity CPU and evaluated it with real network traffic and DPI regexes. Our evaluation results indicate that Hyperflex reaches a throughput of 8.89 Gbit/s, an improvement of up to 2.27 times over McClellan, the default DFA model of the prominent multi-pattern regex matching engine Hyperscan. As a result, Hyperflex has been successfully deployed in Hyperscan, significantly enhancing its performance. | 10.1109/TNSM.2025.3636946 |
| Sheng-Wei Wang, Show-Shiow Tzeng | An Accurate and Efficient Analytical Model for Security Evaluation of PoW Blockchains With Multiple Independent Selfish Miners | 2026 | Vol. 23, Issue | Blockchains Analytical models Accuracy Security Computational modeling Numerical models Bitcoin Consensus protocol Proof of Work Closed-form solutions Blockchain selfish mining attack Markov chain rewards analysis | Selfish mining poses significant security challenges to Proof-of-Work (PoW) blockchains by allowing strategic miners to gain disproportionate rewards through protocol deviation. While the impact of a single selfish miner has been extensively studied, the security implications of multiple independent selfish miners remain insufficiently understood. This paper presents an accurate and efficient analytical model for security evaluation of PoW blockchains under multiple independent selfish mining behaviors. The blockchain dynamics are modeled as a Markov chain with a novel state aggregation approximation, enabling closed-form estimation of miner rewards. Numerical results show that the proposed model achieves high accuracy, with deviations typically less than 5.09% compared to simulations in a blockchain with two selfish miners. In a blockchain with more than two selfish miners, the proposed analytical model yields an even more accurate approximation, with less than 2% error. We also propose a truncation mechanism to reduce the number of states in the proposed Markov chain. Numerical results show that the analytical model with truncation significantly reduces computation time while maintaining accuracy. Two use cases are presented: determining the profitable threshold of total selfish mining power and analyzing reward disproportionality between strong and weak selfish miners. The proposed model provides a practical framework for quantifying incentive-driven security risks and evaluating their impact on blockchain fairness and decentralization. | 10.1109/TNSM.2025.3637840 |