Last updated: 2026-05-12 05:01 UTC
Number of pages: 163
| Author(s) | Title | Year | Publication | Keywords | Abstract | DOI |
|---|---|---|---|---|---|---|
| Deemah H. Tashman, Soumaya Cherkaoui | Trustworthy AI-Driven Dynamic Hybrid RIS: Joint Optimization and Reward Poisoning-Resilient Control in Cognitive MISO Networks | 2026 | Early Access | Reconfigurable intelligent surfaces Reliability Optimization Security MISO Array signal processing Vectors Satellites Reflection Interference Beamforming cascaded channels cognitive radio networks deep reinforcement learning dynamic hybrid reconfigurable intelligent surfaces energy harvesting poisoning attacks | Cognitive radio networks (CRNs) are a key mechanism for alleviating spectrum scarcity by enabling secondary users (SUs) to opportunistically access licensed frequency bands without harmful interference to primary users (PUs). To address unreliable direct SU links and energy constraints common in next-generation wireless networks, this work introduces an adaptive, energy-aware hybrid reconfigurable intelligent surface (RIS) for underlay multiple-input single-output (MISO) CRNs. Distinct from prior approaches relying on static RIS architectures, our proposed RIS dynamically alternates between passive and active operation modes in real time according to harvested energy availability. We also model our scenario under practical hardware impairments and cascaded fading channels. We formulate and solve a joint transmit beamforming and RIS phase optimization problem via the soft actor-critic (SAC) deep reinforcement learning (DRL) method, leveraging its robustness in continuous and highly dynamic environments. Notably, we conduct the first systematic study of reward poisoning attacks on DRL agents in RIS-enhanced CRNs, and propose a lightweight, real-time defense based on reward clipping and statistical anomaly filtering. Numerical results demonstrate that the SAC-based approach consistently outperforms established DRL baselines, and that the dynamic hybrid RIS strikes a superior trade-off between throughput and energy consumption compared to fully passive and fully active alternatives. We further show the effectiveness of our defense in maintaining SU performance even under adversarial conditions. Our results advance the practical and secure deployment of RIS-assisted CRNs, and highlight crucial design insights for energy-constrained wireless systems. | 10.1109/TNSM.2026.3660728 |
| Shahid Mahmood, Moneeb Gohar, Seok Joo Koh | Globally Integrated Trust Authority (GITA) for Resource-Constrained Edge Devices in IoT and 6G | 2026 | Early Access | Payloads Filtering Central Processing Unit Filters Feedback Circuits Electronic circuits Microcontrollers Circuits and systems Microprocessors GITA Globally Integrated Trust Authority Network PKDL TSL LMS Security Trust Management Resource Constrained Edge Device Internet of Things and Cyber-Attack | The rapid growth of the Internet and the increasing number of edge devices have expanded the cyber-attack surface at the edge layer. Hackers exploit vulnerabilities at various levels of a network by either directly connecting to it or accessing it over the Internet. In both scenarios, edge devices remain a primary target due to their widespread use, limited resources, and critical impact. Therefore, securing edge devices is essential to counter both local and global cyber threats. Trust is a key factor in determining the level of protection required for edge devices. It can be used to assess the reliability of other devices before offering or requesting services. Since edge devices are often globally interconnected, trust levels should be verifiable across the Internet and intranet. In this paper, we propose the Globally Integrated Trust Authority (GITA), a framework that distributes verifiable trust values across networks and the Internet while minimizing communication overhead. Experimental results demonstrate that GITA improves the efficiency of trust value distribution and verification among nodes compared to digital certificates, while maintaining the same level of protection. This approach enables effective identification of malicious and benign nodes, enhancing the precision of malicious node detection locally and globally. | 10.1109/TNSM.2026.3687967 |
| Xinshuo Wang, Baihua Chen, Lei Liu, Yifei Li | Pisces: Fast Loss Recovery for Multipath Transmission in RDMA | 2026 | Early Access | Payloads Military aircraft Space technology Feeds System-on-chip Field programmable gate arrays Circuits Application specific integrated circuits Integrated circuits Feedback RDMA Loss Recovery Multipath Transmission Programmable Switch Programmable NIC FPGA | Conventional Remote Direct Memory Access (RDMA) relies on Priority Flow Control (PFC) to operate on lossless networks. However, as data centers scale, PFC's drawbacks, such as head-of-line blocking and congestion spreading, become increasingly problematic. This study proposes Pisces, a fast packet loss recovery scheme that leverages terminal–network collaboration. Instead of targeting lossless RDMA networks, Pisces enables high-throughput RDMA by efficiently handling loss recovery. To address the inefficient retransmission problems of PFC+Go-Back-N and the challenges of configuring appropriate timeouts for Selective Repeat (SR) in multipath transmission scenarios, Pisces implements Quick Drop Notification (QDN) of packet loss on switches, avoiding bandwidth waste and timeouts. In addition, Pisces RDMA NICs feature on-chip packet buffers to cache in-flight packets, supporting the scalability demands of RDMA in modern data centers. Upon receiving a QDN, lost packets are quickly retrieved from the buffer for retransmission, significantly improving retransmission efficiency and reducing PCIe bandwidth waste caused by cache replacements. This study overcame numerous challenges to implement the Pisces prototype, which demonstrated excellent performance. Testbed experiments show that Pisces improves the 99th-percentile FCT by 130× compared to Mellanox CX-6. Large-scale simulations demonstrate that Pisces achieves a maximum reduction of 82.8% in the 99.9th-percentile FCT compared to SR and other state-of-the-art technologies. | 10.1109/TNSM.2026.3688038 |
| Songshou Dong, Yanqing Yao, Huaxiong Wang, Yining Liu | LCMS: Efficient Lattice-based Conditional Privacy-preserving Multi-receiver Signcryption Scheme for Internet of Vehicles | 2026 | Early Access | Optical waveguides Optical fibers Broadcasting Broadcast technology Oscillators Circuits Feedback Circuits and systems Internet of Vehicles Communication systems Internet of Vehicles signcryption weak unlinkable certificateless revocable multi-receiver distributed decryption | The Internet of Vehicles (IoV) requires robust security and privacy protection mechanisms to enable trusted traffic information exchange, while also requiring low communication and computing overhead to meet the real-time requirements of IoV. Existing signcryption schemes suffer from quantum vulnerability, inadequate unlinkability/vehicle anonymity, absence of revocability, poor scalability, inadequate management of malicious entities, and high communication and computational overhead. Therefore, we propose an efficient lattice-based conditional privacy-preserving multi-receiver signcryption scheme (LCMS) that systematically addresses these gaps through three core innovations: 1) Privacy preservation is achieved via a pseudonym mechanism integrated with certificateless key generation, which ensures vehicle anonymity and weak unlinkability while preventing a malicious key generation center and key escrow; 2) Malicious entities are managed through dynamic revocability and distributed decryption among roadside units, preventing unilateral message access; and 3) Post-quantum efficiency is achieved by leveraging the Learning With Rounding problem to eliminate expensive Gaussian sampling, combined with ciphertext packing techniques. This reduces time overhead, the size of signcryptexts, and communication overhead, while lowering the overall storage overhead of the scheme through the MP12 trapdoor. Security proofs show LCMS achieves Existential Unforgeability under Adaptive Identity Chosen-Message Attack and Indistinguishability under Adaptive Identity Chosen-Ciphertext Attack in the Random Oracle Model, with rigorously validated resistance against multiple IoV-specific attacks. Experimental results via a SageMath implementation demonstrate that our scheme exhibits a smaller signcryptext size and lower signcryption/unsigncryption time compared to existing random lattice-based signcryption schemes. Scalability tests with 300 vehicles and 300 roadside units (RSUs) were completed within 230 seconds. Communication overhead analysis confirms practical feasibility for the IEEE 802.11p vehicle communication protocol, and RSU serving capability evaluation under realistic vehicle density (100–200/km²) and speed (40–60 km/h) further validates system practicality. LCMS provides a quantum-resistant, privacy-preserving, and efficient solution for production IoV. | 10.1109/TNSM.2026.3688507 |
| Abdeltif Azzizi, Mohamad Al Adraa, Chadi Assi, Michael Y. Frankel, Vladimir Pelekhaty | Experimental Topological Analysis in Next-Generation Data Center Networks: STRAT and Clos Topologies | 2026 | Early Access | Telemetry Aerospace and electronic systems Payloads Optical waveguides Optical fibers Broadcasting Broadcast technology Application specific integrated circuits Circuits Feedback Data Center Topologies Clos Topology STRAT Topology Scalability Challenges Network Architecture Performance Evaluation | This paper presents an experimental and simulation-based evaluation of two data center network (DCN) topologies: the widely adopted hierarchical Clos architecture and STRAT, a flat, expander-based topology designed around passive optical interconnects. While Clos offers proven scalability and performance, it incurs hardware complexity and suffers from congestion in oversubscribed scenarios. STRAT eliminates aggregation and spine layers entirely, using only Top-of-Rack (ToR) switches interconnected via static optical patch panels, to reduce cost, simplify deployment, and enhance path diversity. Our goal is to assess these topologies based on their inherent architectural properties, namely throughput, congestion resilience, scalability, and cost, without relying on congestion control protocols or centralized traffic engineering. To this end, we adopt simple forwarding schemes based purely on local information: ECMP for Clos, and ECMP with Dynamic Group Multipath (DGM) for STRAT. We evaluate both topologies on a physical testbed built from commercial Ethernet switches and further validate scalability through packet-level simulations of networks with up to 256 switches and 1,024 hosts using OMNeT++. We also introduce DEALER, a lightweight routing algorithm tailored to STRAT's topology, and evaluate its effectiveness in dynamic conditions. Our results show that STRAT achieves up to 43% higher throughput and requires approximately 40% fewer switches than a comparable Clos topology. These gains are further supported by Load Area Under Curve (LAUC) analysis and congestion hotspot visualizations. Overall, our study highlights STRAT as a compelling and practical alternative to conventional DCN architectures, offering deployable scalability, improved performance under load, and reduced infrastructure cost. | 10.1109/TNSM.2026.3685175 |
| Qin Zeng, Dan Qu, Hao Zhang, Yaqi Chen | Neural Collapse-Based Class-Incremental Learning for Encrypted Traffic Classification | 2026 | Early Access | Payloads Military aircraft Space technology Feeds Frequency modulation Radio broadcasting Filtering Filters Memory modules Virtual private networks Encrypted traffic classification Class incremental learning Neural collapse | The rapid evolution of internet technologies has intensified network traffic dynamics due to the emergence of novel encryption protocols, posing significant challenges to traffic classification. Incremental learning, which enables continuous adaptation to emerging tasks, has emerged as a promising approach to enhance the sustainability of encrypted traffic classification. However, existing methods fail to address the substantial feature representation disparities across incremental tasks, resulting in suboptimal model adaptability. Inspired by the Neural Collapse (NC) phenomenon, which reveals that deep neural networks' final-layer features collapse to class-mean vectors forming a Simplex Equiangular Tight Frame (ETF) with classifier weights, thereby constituting an optimal geometric structure for classification tasks, we propose NCIL-ETC, a Neural Collapse-based Incremental Learning framework for Encrypted Traffic Classification. Our approach employs a pretrained Mamba as the feature extraction backbone, leveraging its linear-complexity computational properties to significantly reduce resource overhead. Simultaneously, we introduce a preallocated ETF classifier that establishes an optimal classification structure covering observed classes. Through feature-classifier alignment constraints during incremental learning, our method promotes both new and historical class features to converge toward ETF vertices, thereby preserving globally optimal category relationships. Extensive experimental evaluations on four public benchmarks demonstrate that NCIL-ETC achieves state-of-the-art performance, surpassing baseline methods in both classification accuracy and incremental learning capability. | 10.1109/TNSM.2026.3688767 |
| Jing Zhang, Chao Luo, Rui Shao | MTG-GAN: A Masked Temporal Graph Generative Adversarial Network for Cross-Domain System Log Anomaly Detection | 2026 | Early Access | Anomaly detection Adaptation models Generative adversarial networks Feature extraction Data models Load modeling Accuracy Robustness Contrastive learning Chaos Log Anomaly Detection Generative Adversarial Networks (GANs) Temporal Data Analysis | Anomaly detection of system logs is crucial for the service management of large-scale information systems. Nowadays, log anomaly detection faces two main challenges: 1) capturing evolving temporal dependencies between log events to adaptively tackle emerging anomaly patterns, and 2) maintaining high detection capabilities across varying data distributions. Existing methods rely heavily on domain-specific data features, making it challenging to handle the heterogeneity and temporal dynamics of log data. This limitation restricts the deployment of anomaly detection systems in practical environments. In this article, a novel framework, the Masked Temporal Graph Generative Adversarial Network (MTG-GAN), is proposed for both conventional and cross-domain log anomaly detection. The model enhances the detection capability for emerging abnormal patterns in system log data by introducing an adaptive masking mechanism that combines generative adversarial networks with graph contrastive learning. Additionally, MTG-GAN reduces dependency on specific data distributions and improves model generalization by using diffused graph adjacency information derived from the temporal relevance of event sequences, which improves cross-domain detection performance. Experimental results demonstrate that MTG-GAN outperforms existing methods on multiple real-world datasets in both conventional and cross-domain log anomaly detection. | 10.1109/TNSM.2026.3654642 |
| Willie Kouam, Yezekael Hayel, Gabriel Deugoué, Charles Kamhoua | Decoy Allocation against Lateral Movement - A Network Centrality Game Approach | 2026 | Early Access | Circuits Feedback Network topology Reconnaissance Communication systems Radio access networks Regional area networks Routing Military communication Computer networks Lateral movement Cyber deception Centrality measure One-sided POSGs | Targeted incidents increasingly threaten internal security, with a rise in data breaches and service disruptions. Attackers now employ sophisticated approaches, infiltrating systems for ongoing access to critical information through lateral movement. Detecting and defending against such intrusions is challenging due to commonly exploited specific vulnerabilities. Consequently, various deception techniques have emerged over time, aiming to divert attackers' attention. In our scenario, attackers use lateral movement within the network to reach a specific target, while defenders strategically deploy decoys to counteract them. Such a dynamic and adversarial interaction is modeled as a one-sided partially observable stochastic game (OS-POSG). Several solutions have been proposed to address this challenge, particularly when the attacker possesses complete knowledge of the network's topology through the reconnaissance stage. Simultaneously, recent years have seen the development of approaches to obscure the attackers' reconnaissance phase, compelling them to operate without a full comprehension of the network's structure. We therefore introduce an innovative methodology involving intelligent players who take into account the importance of network devices, assessed by centrality measures, during the decision-making process. This approach aims to improve the effectiveness of the defender's strategy to counter the attacker's lateral movement in the network, in the context of step-by-step optimization. | 10.1109/TNSM.2026.3689344 |
| Lal Verda Cakir, Mehmet Ali Erturk, Mehmet Ozdem, Berk Canberk | Digital Twin-assisted Handover Scheme for Mobile Networks using Generative AI | 2026 | Early Access | Electromagnetic propagation Propagation constant Radio broadcasting Radio networks Handover Communication systems Avatars Communication switching Data transfer Cellular networks digital twin 5G/6G handover management generative artificial intelligence | Handover management in mobile networks is challenged by high latency and reduced reliability in dense deployments and under user mobility. Existing schemes improve handover initiation by optimising the candidate handover at the decision time. However, these decisions are applied only after a non-negligible delay due to control-plane signalling; by the time a decision is applied, it may have become invalid or may degrade performance. To address this, we propose a Digital Twin (DT)-assisted handover scheme that performs predictive execution-time validation prior to the preparation of the Next Generation (NG)-based handover. To this end, the DT-What-If Generator (DT-WIG) is used to emulate short-horizon future network states under uncertainty. The DT-WIG is a spatiotemporal graph generative model that uses variational latent sampling to generate counterfactual post-handover trajectories for the candidate handover decision. The AMF then estimates the failure and QoS risks associated with the candidate handover and approves/rejects it via standard-compliant signalling. With this, we form a policy-agnostic mechanism that runs on top of the underlying handover policy. We evaluate performance using ns-3/5G-LENA trace generation and replay-based policy analysis, with OpenAirInterface-based signalling evaluation. The results show that the proposed method reduces the handover failure rate and handover interruption time while improving latency, jitter, throughput, and packet loss. | 10.1109/TNSM.2026.3690572 |
| Awaneesh Kumar Yadav, Madhusanka Liyanage, An Braeken | An Improved and Provably Secure EDHOC Protocol Supporting the Extended Canetti–Krawczyk (eCK) Security Model | 2026 | Early Access | Aerospace and electronic systems Telemetry Central Processing Unit Microcontrollers Microprocessors MIMICs Millimeter wave integrated circuits Monolithic integrated circuits Communication systems Internet of Things EDHOC OSCORE Key agreement Authentication extended Canetti–Krawczyk (eCK) attack model | Transport Layer Security (TLS) is considered to be the most widely used standard security protocol for the Internet of Things (IoT). However, as TLS was originally designed for computer networks, it is not optimal with respect to efficiency. Therefore, a new protocol called Object Security for Constrained RESTful Environments (OSCORE) has been standardized for securing constrained devices. Currently, the Ephemeral Diffie-Hellman Over COSE (EDHOC) protocol, a key exchange protocol that defines the session key used in OSCORE, is also in the process of being standardized. This paper shows that the four authentication modes of the EDHOC protocol are vulnerable in the extended Canetti–Krawczyk (eCK) security model, which is a common security model used in IoT. In addition, resistance to Distributed Denial of Service (DDoS) attacks is weak. Taking this into account, we propose two new variants of EDHOC. The first variant, EDHOC2, overcomes both issues but has a slightly higher cost for communication, computation, storage, and energy consumption. The second variant, EDHOC3, offers only additional protection in the eCK security model and has, on average, similar or even better performance in one authentication mode compared to EDHOC. Additionally, the Real-Or-Random (ROR) logic and the Scyther validation tool are employed to ensure the security of the designed variants. Furthermore, a prototype implementation is conducted to demonstrate the real-time deployment of the designed versions. | 10.1109/TNSM.2026.3690530 |
| Arad Kotzer, Tom Azoulay, Yoad Abels, Aviv Yaish, Ori Rottenstreich | SoK: DeFi Lending and Yield Aggregation Protocol Taxonomy, Empirical Measurements, and Security Challenges | 2026 | Early Access | Filtering Application specific integrated circuits Filters Protocols Smart contracts Communication systems Proof of stake Proof of Work Internet Amplitude shift keying Blockchain Decentralized Finance (DeFi) Lending Yield Aggregation | Decentralized Finance (DeFi) lending protocols implement programmable credit markets without intermediaries. This paper systematizes the DeFi lending ecosystem, spanning collateralized lending (including over- and under-collateralized designs, and zero-liquidation loans), uncollateralized primitives (e.g., flashloans), and yield aggregation protocols which allocate capital across underlying lending platforms. Beyond a taxonomy of mechanisms and a comparison of protocols, we provide empirical on-chain measurements of lending activity and user behavior, using Compound V2 and AAVE V2 as case studies, and connect empirical observations to protocol design choices (e.g., interest-rate models and liquidation incentives). We then characterize vulnerabilities that arise due to notable designs, focusing on interest-rate setting mechanisms and time-measurement approaches. Finally, we outline open questions at the intersection of mechanism design, empirical measurement, and security for future research. | 10.1109/TNSM.2026.3682174 |
| Jiale Zhu, Xiaoyao Zheng, Shukai Ye, Ming Zheng, Liping Sun, Liangmin Guo, Qingying Yu, Yonglong Luo | Federated Recommendation Model Based on Personalized Attention and Privacy-Preserving Dynamic Graph | 2026 | Early Access | Modeling Federated learning Privacy Recommender systems Training Educational institutions Servers Algorithms Conferences Graph neural networks Graph Neural Networks Federated Learning Personalized Recommendation Privacy Protection | Graph Neural Networks (GNNs) have been widely adopted in recommendation systems. When integrated into a federated learning framework, GNNs can enhance the model's expressive capability. However, challenges arise in personalized representation and graph expansion due to the heterogeneity and locality of user data in federated recommendation systems. To address these challenges, we propose a federated recommendation model based on personalized attention and privacy-preserving dynamic graphs. The method first matches neighbor users for each selected client. Subsequently, it counts the interaction frequencies of items for both local and neighbor users to construct personalized weights, which capture the unique characteristics of different users. Additionally, we design a method for constructing privacy-preserving dynamic graphs. In each round of federated training, the selected client adds pseudo-interaction items to its own interaction subgraph, perturbing the real interactions. After completing local training, the noisy interaction subgraph is incorporated into the global graph to capture higher-order connectivity information among users while safeguarding their interaction privacy. We conduct extensive experiments on three benchmark datasets, and the results demonstrate that the proposed PADG method achieves superior performance while effectively protecting privacy. | 10.1109/TNSM.2026.3691659 |
| Qian Guo, Chunyu Zhang, Xue Xiao, Min Zhang, Zhuo Liu, Danshi Wang | Knowledge-Distilled Time-Series LLM for General Performance Parameter Prediction in Optical Transport Networks | 2026 | Early Access | Optical fibers Optical waveguides Feeds Network-on-chip Communication systems Internet of Things Optical fiber communication Optical fiber networks Telecommunications Quality of transmission Optical transport networks (OTNs) general performance parameter prediction time-series large language models knowledge distillation | In optical transport networks (OTNs), proactive and accurate prediction of key performance parameters plays a crucial role in identifying potential failures of OTN equipment and guiding timely operational interventions, reducing downtime and improving overall system performance. However, the performance parameters in OTNs are complex and diverse. The reliance of existing models' structural design on specific configurations limits generalizability across diverse equipment types. Moreover, the high computational resource consumption and memory footprints of these models may lead to inefficiency while hindering practical application and large-scale deployment. To address these challenges, this paper presents a general model, KD-TimeLLM, a cross-application of TimeLLM to OTN failure management, for performance parameter prediction of multiple equipment types in OTNs. By learning from its teacher model TimeLLM via a knowledge distillation strategy, KD-TimeLLM can achieve generalizability in performance parameter prediction while enhancing efficiency. We conducted evaluations across multiple metrics using data sets from different operators and various board types. Results show that KD-TimeLLM outperforms other models in predictive effects, including the lowest MSE and MAE across all types of board data along with a scaled_RMSE value below 0.5, support for a varying number of performance parameters, and zero-shot prediction capability, highlighting its generalizability. Moreover, compared to its teacher model, KD-TimeLLM achieves comparable predictive effects with a significant reduction of 99.99% in model parameters and an average reduction of 99.23% in inference time across eight different types of board data. Furthermore, compared to a multiple-model system, the total inference time and memory footprint of KD-TimeLLM decreased by 94.79% and 89.65%, respectively, highlighting its effectiveness and efficiency. | 10.1109/TNSM.2026.3686811 |
| Atri Mukhopadhyay, Dinesh Korukonda, Goutam Das | Design of Passive Optical Network Based O-RAN X-haul: A Systematic Approach | 2026 | Early Access | | The development of high data rate communication technologies has resulted in cell densification, which in turn has led to the development of centralized radio access networks (C-RANs) followed by open radio access networks (O-RANs). The O-RAN segregates the base station into three logical entities: the central unit (CU), the distributed unit (DU), and the radio unit (RU). The CU, DU, and RU require low-latency, low-jitter, and high-data-rate connections for seamless operation, collectively known as the X-haul. A passive optical network (PON) is a potential solution for X-haul design. However, conventional PON uplink protocols are not inherently suitable for X-haul requirements. The packetization procedure of PON introduces jitter to the X-haul bit stream. Further, the delay requirements of the X-haul limit the number of sources that can be connected to it. Advanced features like coordinated multipoint require synchronization among the different X-haul bit streams as well. Therefore, in this paper, we develop an optimal uplink system that allows PON to be used as an X-haul connection technology. The proposal maximizes the throughput of the PON while conforming to the delay and synchronization requirements. Moreover, the proposal nullifies the jitter introduced by the PON scheduler. We have performed extensive simulations for verifying our results. | 10.1109/TNSM.2026.3692242 |
| Jack Wilkie, Hanan Hindy, Craig Michie, Christos Tachtatzis, James Irvine, Robert Atkinson | A Novel Contrastive Loss for Zero-Day Network Intrusion Detection | 2026 | Vol. 23, Issue | Contrastive learning Anomaly detection Training Autoencoders Training data Detectors Data models Vectors Telecommunication traffic Network intrusion detection Internet of Things network intrusion detection machine learning contrastive learning | Machine learning has achieved state-of-the-art results in network intrusion detection; however, its performance significantly degrades when confronted by a new attack class: a zero-day attack. In simple terms, classical machine learning-based approaches are adept at identifying attack classes on which they have been previously trained, but struggle with those not included in their training data. One approach to addressing this shortcoming is to utilise anomaly detectors which train exclusively on benign data with the goal of generalising to all attack classes, both known and zero-day. However, this comes at the expense of a prohibitively high false positive rate. This work proposes a novel contrastive loss function which is able to maintain the advantages of other contrastive learning-based approaches (robustness to imbalanced data) but can also generalise to zero-day attacks. Unlike anomaly detectors, this model learns the distributions of benign traffic using both benign and known malign samples, i.e., other well-known attack classes (not including the zero-day class), and consequently achieves significant performance improvements. The proposed approach is experimentally verified on the Lycos2017 dataset, where it achieves an AUROC improvement of 0.000065 and 0.060883 over previous models in known and zero-day attack detection, respectively. Finally, the proposed method is extended to open-set recognition, achieving OpenAUC improvements of 0.170883 over existing approaches. | 10.1109/TNSM.2026.3652529 |
| Ze Wei, Rongxi He, Chengzhi Song, Xiaojing Chen | Differentiated Offloading and Resource Allocation With Energy Anxiety Level Consideration in Heterogeneous Maritime Internet of Things | 2026 | Vol. 23, Issue | Internet of Things Resource management Carbon footprint Servers Reviews Packet loss Heterogeneous networks Green energy Delays Anxiety disorders Mobile edge computing task offloading resource allocation carbon footprint minimization | The popularity of maritime activities not only exacerbates the carbon footprint (CF) but also places higher demands on Maritime Internet of Things (MIoTs) to support heterogeneous MIoT devices (MIoTDs) with different prioritized tasks. High-priority tasks can be processed cooperatively via local computation, offloading to nearby MIoTDs (helpers), or offloading to edge servers to ensure their timely and successful completion. Due to differences in energy availability and rechargeability, MIoTDs exhibit distinct energy states, impacting their operational behaviors. We propose the Energy Anxiety Level (EAL) to quantify these states: a higher EAL tends to lead to increased packet dropping and earlier shutdown. Although low-EAL MIoTDs seem preferable as helpers, their scarce residual computational resources after local task completion may cause offloaded high-priority tasks to be dropped or time out. Therefore, helper selection should jointly consider candidate MIoTDs' EALs and loads to evaluate their unsuitability. This paper addresses the problem of differentiated task offloading and resource allocation in MIoTs by formulating it as a mixed integer nonlinear programming model. The objective is to minimize system-wide CF, packet loss, helper unsuitability risk, and high-priority task latency. To solve this complex problem, we decompose it into two subproblems. We then design algorithms to determine optimal offloading patterns, task partitioning factors, MIoTD transmission powers, and computation resource allocation for MIoTDs and edge servers. Simulation results demonstrate that our proposal outperforms benchmarks in reducing CF and EAL, lowering high-priority task latency, and improving task completion ratio. | 10.1109/TNSM.2026.3655385 |
| Apurba Adhikary, Avi Deb Raha, Yu Qiao, Md. Shirajum Munir, Mrityunjoy Gain, Zhu Han, Choong Seon Hong | Age of Sensing Empowered Holographic ISAC Framework for nextG Wireless Networks: A VAE and DRL Approach | 2026 | Vol. 23, Issue | Array signal processing Resource management Integrated sensing and communication Wireless networks Phased arrays Hardware Arrays Real-time systems Metamaterials 6G mobile communication Integrated sensing and communication age of sensing holographic MIMO deep reinforcement learning artificial intelligence framework | This paper proposes an AI framework that leverages integrated sensing and communication (ISAC), aided by the age of sensing (AoS) to ensure the timely location updates of the users for a holographic MIMO (HMIMO)-assisted base station (BS)-enabled wireless network. The AI-driven framework aims to achieve optimized power allocation for efficient beamforming by activating the minimal number of grids from the HMIMO BS for serving the users. An optimization problem is formulated to maximize the sensing utility function, aiming to maximize the communication signal-to-interference-plus-noise ratio (SINR$_{c}$) of the received signals and beam-pattern gains to improve the sensing SINR of reflected echo signals, which in turn maximizes the achievable rate of users. A novel AI-driven framework is presented to tackle the formulated NP-hard problem that divides it into two problems: a sensing problem and a power allocation problem. The sensing problem is solved by employing a variational autoencoder (VAE)-based mechanism that obtains the sensing information leveraging AoS, which is used for the location update. Subsequently, a deep deterministic policy gradient-based deep reinforcement learning scheme is devised to allocate the desired power by activating the required grids based on the sensing information achieved with the VAE-based mechanism. 
Simulation results demonstrate the superior performance of the proposed AI framework compared to advantage actor-critic and deep Q-network-based methods, achieving a cumulative average SINR$_{c}$ improvement of 8.5 dB and 10.27 dB, and a cumulative average achievable rate improvement of 21.59 bps/Hz and 4.22 bps/Hz, respectively. Therefore, our proposed AI-driven framework guarantees efficient power allocation for holographic beamforming through ISAC schemes leveraging AoS. | 10.1109/TNSM.2026.3654889 |
| Shagufta Henna, Upaka Rathnayake | Hypergraph Representation Learning-Based xApp for Traffic Steering in 6G O-RAN Closed-Loop Control | 2026 | Vol. 23, Issue | Open RAN Resource management Ultra reliable low latency communication Throughput Heuristic algorithms Computer architecture Accuracy 6G mobile communication Seals Real-time systems Open radio access network (O-RAN) intelligent traffic steering link prediction for traffic management | This paper addresses the challenges in resource allocation within disaggregated Radio Access Networks (RAN), particularly when dealing with Ultra-Reliable Low-Latency Communications (uRLLC), enhanced Mobile Broadband (eMBB), and Massive Machine-Type Communications (mMTC). Traditional traffic steering methods often overlook individual user demands and dynamic network conditions, while multi-connectivity further complicates resource management. To improve traffic steering, we introduce Tri-GNN-Sketch, a novel graph-based deep learning approach employing Tri-subgraph sampling to enhance link prediction in Open RAN (O-RAN) environments. Link prediction refers to accurately forecasting optimal connections between users and network resources using current and historical measurements. Tri-GNN-Sketch is trained on real-world 4G/5G RAN monitoring data. The model demonstrates robust performance across multiple metrics, including precision, recall, F1 score, and ROC-AUC, effectively modeling interfering nodes for accurate traffic steering. We further propose Tri-HyperGNN-Sketch, which extends the approach to hypergraph modeling, capturing higher-order multi-node relationships. Using link-level simulations based on Channel Quality Indicator (CQI)-to-modulation mappings and LTE transport block size specifications, we evaluate throughput and packet delay for Tri-HyperGNN-Sketch. 
Tri-HyperGNN-Sketch achieves an exceptional link prediction accuracy of 99.99% and improved network-level performance, including higher effective throughput and lower packet delay compared to Tri-GNN-Sketch (95.1%) and other hypergraph-based models such as HyperSAGE (91.6%) and HyperGCN (92.31%) for traffic steering in complex O-RAN deployments. | 10.1109/TNSM.2026.3654534 |
| Jian Ye, Lisi Mo, Gaolei Fei, Yunpeng Zhou, Ming Xian, Xuemeng Zhai, Guangmin Hu, Ming Liang | TopoKG: Infer Internet AS-Level Topology From Global Perspective | 2026 | Vol. 23, Issue | Business Topology Routing Internet Knowledge graphs Accuracy Network topology Probabilistic logic Inference algorithms Border Gateway Protocol AS-level topology business relationship hierarchical structure knowledge graph global perspective | Internet Autonomous System (AS)-level topology comprises the AS topology structure and AS business relationships; it captures the essence of Internet inter-domain routing and is the basis for Internet operation and management research. Although the latest topology inference methods have made significant progress, those relying solely on local information struggle to eliminate inference errors caused by observation bias and data noise due to their lack of a global perspective. In contrast, we not only leverage local AS link features but also re-examine the hierarchical structure of Internet AS-level topology, proposing a novel inference method called TopoKG. TopoKG introduces a knowledge graph to represent the relationships between different elements on a global scale and the business routing strategies of ASes at various tiers, which effectively reduces inference errors resulting from observation bias and data noise by incorporating a global perspective. First, we construct an Internet AS-level topology knowledge graph to represent relevant data, enabling us to better leverage the global perspective and uncover the complex relationships among multiple elements. Next, we employ knowledge graph meta paths to measure the similarity of AS business routing strategies and introduce this global perspective constraint to infer the AS business relationships and hierarchical structure iteratively. Additionally, we embed the entire knowledge graph upon completing the iteration and conduct knowledge inference to derive AS business relationships. 
This approach captures global features and more intricate relational patterns within the knowledge graph, further enhancing the accuracy of AS-level topology inference. Compared to the state-of-the-art methods, our approach achieves more accurate AS-level topology inference, reducing the average inference error across various AS link types by a factor of 1.2 to 4.4. | 10.1109/TNSM.2026.3652956 |
| Xiaofeng Liu, Naigong Zheng, Fuliang Li | Don’t Let SDN Obsolete: Interpreting Software-Defined Networks With Network Calculus | 2026 | Vol. 23, Issue | Delays Calculus Analytical models Optimization Kernel Queueing analysis Table lookup Quality of service Mathematical models Data centers Software-defined networking network calculus delay analysis performance optimization | Although Software-Defined Networking (SDN) has gained popularity in real-world deployments for its flexible management paradigm, its centralized control principle leads to various known performance issues. In this paper, we propose SDN-Mirror, a novel generalized delay analytical model based on network calculus, to interpret how the performance is affected and to illustrate how to accelerate the performance as well. We first analyze the impact of parameters on packet forwarding delay in SDN, including device capacity, flow features, and cache size. Then, building upon the analysis, we establish SDN-Mirror, which acts like a mirror, capable of not only precisely representing the relation between packet forwarding delay and each parameter but also verifying the effectiveness of optimization policies. Finally, we evaluate SDN-Mirror by quantifying how each parameter affects the forwarding delay under different table matching states. We also verify a performance improvement policy with the optimized SDN-Mirror, and experimental results show that packet forwarding delays of kernel-space matching flows, userspace matching flows, and unmatched flows can be reduced by 39.8%, 20.7%, and 13.2%, respectively. | 10.1109/TNSM.2026.3655704 |