Last updated: 2026-04-27 05:01 UTC
All documents
Number of pages: 162
| Author(s) | Title | Year | Publication | Keywords | Abstract | DOI |
|---|---|---|---|---|---|---|
| Guisong Yang, Yechao Huang, Panxing Huang, Xingyu He | A Distributed SDN Controller-Based Computing Framework for Effective in-orbit Computing | 2026 | Early Access | Low earth orbit satellites Artificial satellites Aerospace and electronic systems Telemetry Antennas Antennas and propagation Central Processing Unit Software defined networking Computer networks Communication systems Task Scheduling Software Defined Network Satellite Network Placement of SDN Controller | The rapid development of Low Earth Orbit (LEO) satellite networks has made in-orbit computing more feasible, offering a solution for processing real-time, diverse user tasks. Compared with traditional cloud computing in ground-based cloud computing centers, computing directly on LEO satellites can significantly reduce task-processing delay. However, challenges remain, including the limited sensing and computing capabilities of satellites, high delays in processing task requests, and frequent switching of control domains due to the relative movement between LEO satellites and nodes in other orbits. To address these challenges and improve task management, computing is treated as a Virtual Network Function (VNF), managed by Software-Defined Networking (SDN) controllers. This paper proposes a distributed SDN controller-based computing framework, where task information is forwarded to SDN controllers, which then use a task scheduling strategy to allocate tasks to suitable computing nodes for processing. To support the implementation of this framework, we first propose a heuristic SDN controller placement strategy that uses a tiling method to divide the LEO satellite network into SDN control domains and places the controller at the midpoint of each domain. Then, we propose a Double Deep Q-Network (DDQN) algorithm for in-orbit task scheduling, which adaptively optimizes the task scheduling strategy to minimize task-processing delay and ensure a high task completion rate. Finally, simulations are conducted in two parts to evaluate the framework. The first part validates the DDQN-based task scheduling strategy, achieving significant reductions in task-processing delay and improved task completion rates compared to conventional strategies. The second part assesses the impact of SDN control domain shape and size on task-processing delay, confirming domain size as the dominant factor influencing delay. | 10.1109/TNSM.2026.3685308 |
| Deemah H. Tashman, Soumaya Cherkaoui | Trustworthy AI-Driven Dynamic Hybrid RIS: Joint Optimization and Reward Poisoning-Resilient Control in Cognitive MISO Networks | 2026 | Early Access | Reconfigurable intelligent surfaces Reliability Optimization Security MISO Array signal processing Vectors Satellites Reflection Interference Beamforming cascaded channels cognitive radio networks deep reinforcement learning dynamic hybrid reconfigurable intelligent surfaces energy harvesting poisoning attacks | Cognitive radio networks (CRNs) are a key mechanism for alleviating spectrum scarcity by enabling secondary users (SUs) to opportunistically access licensed frequency bands without harmful interference to primary users (PUs). To address unreliable direct SU links and energy constraints common in next-generation wireless networks, this work introduces an adaptive, energy-aware hybrid reconfigurable intelligent surface (RIS) for underlay multiple-input single-output (MISO) CRNs. Distinct from prior approaches relying on static RIS architectures, our proposed RIS dynamically alternates between passive and active operation modes in real time according to harvested energy availability. We also model our scenario under practical hardware impairments and cascaded fading channels. We formulate and solve a joint transmit beamforming and RIS phase optimization problem via the soft actor-critic (SAC) deep reinforcement learning (DRL) method, leveraging its robustness in continuous and highly dynamic environments. Notably, we conduct the first systematic study of reward poisoning attacks on DRL agents in RIS-enhanced CRNs, and propose a lightweight, real-time defense based on reward clipping and statistical anomaly filtering. Numerical results demonstrate that the SAC-based approach consistently outperforms established DRL baselines, and that the dynamic hybrid RIS strikes a superior trade-off between throughput and energy consumption compared to fully passive and fully active alternatives. We further show the effectiveness of our defense in maintaining SU performance even under adversarial conditions. Our results advance the practical and secure deployment of RIS-assisted CRNs, and highlight crucial design insights for energy-constrained wireless systems. | 10.1109/TNSM.2026.3660728 |
| Zhenzhen Yan, Lizhi Peng, Peiqiang Liu, Yingshuo Bao, Bo Yang | NT-Transformer: A Non-Pretrained Encrypted Network Traffic Classification Model | 2026 | Early Access | Payloads Military aircraft Space technology Feeds Antennas Motion pictures Communication systems Internet of Things Telecommunication traffic Computer networks encrypted network traffic classification Transformers byte representation uni-gram pre-training deep learning | Network traffic classification plays an indispensable role in network management, Quality of Service (QoS), and cybersecurity. With the widespread encryption techniques applied to network traffic, it has become increasingly challenging to classify network traffic into different management groups accurately. In recent years, pretrained Transformer-based models have been successfully applied to Natural Language Processing (NLP), and researchers have also introduced such models into encrypted network traffic analysis. However, despite the similarities between words in NLP and byte codes in network traffic, essential differences remain between them, which may render a pretrained model ineffective when applied to new traffic data. In this paper, we propose a non-pretrained encrypted network traffic classification model based on Transformer called NT-Transformer, which can directly learn labeled network traffic features at two levels of granularity, namely, byte level (uni-gram or bi-gram) and flow level (packet size and packet inter-arrival time), without the relatively expensive pre-training procedure of unlabeled data. This method is validated on three public datasets and three sets of recently collected network traffic data. Experimental results indicate that in some scenarios, pretrained models offer limited performance gains when applied to new encrypted network traffic data not encountered during pretraining, and NT-Transformer with uni-gram byte representation outperforms state-of-the-art models, raising the F1 score by 0.25%–2.24%. | 10.1109/TNSM.2026.3683410 |
| Md Arif Hassan, Bui Duc Manh, Cong T. Nguyen, Chi-Hieu Nguyen, Dinh Thai Hoang, Diep N. Nguyen, Nguyen Van Huynh, Dusit Niyato | SBW 3.0: A Blockchain-Enabled Framework for Secure and Efficient Information Management in Web 3.0 | 2026 | Early Access | Jamming Protocols Semantic Web Smart contracts Consensus protocol Internet Communication systems Internet of Things Computer networks Web 2.0 Web 3.0 blockchain delegated proof-of-stake smart contract game theory non-cooperative game | In this paper, we propose an effective blockchain-enabled information management framework, named Smart Blockchain-based Web 3.0 (SBW 3.0). Our framework aims to handle information within Web 3.0 efficiently, enhance data security and privacy, create new revenue streams, and encourage users to contribute valuable information to websites. To this end, SBW 3.0 employs blockchain technology and smart contracts to manage the decentralized data collection in Web 3.0. Moreover, we introduce a robust consensus mechanism grounded in Delegated Proof-of-Stake (DPoS) to reward user contributions. Furthermore, we develop a non-cooperative game model to examine user behavior in this context and conduct a thorough analysis to prove the uniqueness of the Nash equilibrium in our proposed system. Through simulations, we evaluate the performance of SBW 3.0 and analyze the effects of various critical parameters on information contribution. Our results validate the theoretical analysis, showing that the proposed consensus mechanism successfully encourages nodes and users to provide more information, thus overcoming the current limitations of Web 3.0 regarding data decentralization and management. | 10.1109/TNSM.2026.3683881 |
| Arad Kotzer, Tom Azoulay, Yoad Abels, Aviv Yaish, Ori Rottenstreich | SoK: DeFi Lending and Yield Aggregation Protocol Taxonomy, Empirical Measurements, and Security Challenges | 2026 | Early Access | Filtering Application specific integrated circuits Filters Protocols Smart contracts Communication systems Proof of stake Proof of Work Internet Amplitude shift keying Blockchain Decentralized Finance (DeFi) Lending Yield Aggregation | Decentralized Finance (DeFi) lending protocols implement programmable credit markets without intermediaries. This paper systematizes the DeFi lending ecosystem, spanning collateralized lending (including over- and under-collateralized designs, and zero-liquidation loans), uncollateralized primitives (e.g., flashloans), and yield aggregation protocols which allocate capital across underlying lending platforms. Beyond a taxonomy of mechanisms and a comparison of protocols, we provide empirical on-chain measurements of lending activity and user behavior, using Compound V2 and AAVE V2 as case studies, and connect empirical observations to protocol design choices (e.g., interest-rate models and liquidation incentives). We then characterize vulnerabilities that arise due to notable designs, focusing on interest-rate setting mechanisms and time-measurement approaches. Finally, we outline open questions at the intersection of mechanism design, empirical measurement and security for future research. | 10.1109/TNSM.2026.3682174 |
| Wangqing Luo, Jinbin Hu, Hua Sun, Pradip Kumar Sharma, Jin Wang | SALB: Security-Aware Load Balancing for Large Language Model Training in Datacenter Networks | 2026 | Early Access | Training Load management Packet loss Throughput Delays Topology Scheduling Telecommunication traffic Fluctuations Switches Datacenter Networks Load Balancing Data Security Deep Reinforcement Learning | To meet the massive compute and high-speed communication demands of Large Language Model (LLM) training, modern datacenters typically adopt multipath topologies such as Fat-Tree and Clos to host parallel jobs across hundreds to thousands of GPUs. However, LLM training exhibits periodic, high-bandwidth communication patterns. Existing load-balancing schemes become misaligned under dynamic congestion and anomalous surges: they struggle to promptly mitigate iteration-peak congestion and lack effective isolation of anomalous traffic. To address this, we propose Security-Aware Load Balancing (SALB) for LLM training. SALB leverages a Deep Reinforcement Learning (DRL) controller with queue and delay signals for packet-level multipath load balancing and employs path binding to confine suspicious flows. By integrating data security into load balancing, SALB simultaneously achieves high throughput and robust traffic isolation. NS-3 simulation results show that, compared with CONGA, Hermes, and ConWeave, SALB reduces the 99th-percentile flow completion time (FCT) of short flows by an average of 65% and increases the throughput of long flows by an average of 54%. It further outperforms the baselines in aggregate throughput, path utilization, and packet loss rate, thereby significantly enhancing system stability, robustness, and data security. | 10.1109/TNSM.2026.3678979 |
| Abdeltif Azzizi, Mohamad Al Adraa, Chadi Assi, Michael Y. Frankel, Vladimir Pelekhaty | Experimental Topological Analysis in Next-Generation Data Center Networks: STRAT and Clos Topologies | 2026 | Early Access | Telemetry Aerospace and electronic systems Payloads Optical waveguides Optical fibers Broadcasting Broadcast technology Application specific integrated circuits Circuits Feedback Data Center Topologies Clos Topology STRAT Topology Scalability Challenges Network Architecture Performance Evaluation | This paper presents an experimental and simulation-based evaluation of two data center network (DCN) topologies: the widely adopted hierarchical Clos architecture and STRAT, a flat, expander-based topology designed around passive optical interconnects. While Clos offers proven scalability and performance, it incurs hardware complexity and suffers from congestion in oversubscribed scenarios. STRAT eliminates aggregation and spine layers entirely—using only Top-of-Rack (ToR) switches interconnected via static optical patch panels—to reduce cost, simplify deployment, and enhance path diversity. Our goal is to assess these topologies based on their inherent architectural properties—namely throughput, congestion resilience, scalability, and cost—without relying on congestion control protocols or centralized traffic engineering. To this end, we adopt simple forwarding schemes based purely on local information: ECMP for Clos, and ECMP with Dynamic Group Multipath (DGM) for STRAT. We evaluate both topologies on a physical testbed built from commercial Ethernet switches and further validate scalability through packet-level simulations of networks with up to 256 switches and 1,024 hosts using OMNeT++. We also introduce DEALER, a lightweight routing algorithm tailored to STRAT’s topology, and evaluate its effectiveness in dynamic conditions. Our results show that STRAT achieves up to 43% higher throughput and requires approximately 40% fewer switches than a comparable Clos topology. These gains are further supported by Load Area Under Curve (LAUC) analysis and congestion hotspot visualizations. Overall, our study highlights STRAT as a compelling and practical alternative to conventional DCN architectures, offering deployable scalability, improved performance under load, and reduced infrastructure cost. | 10.1109/TNSM.2026.3685175 |
| Yu Gu, Le Zhang, Yunyi Zhang, Ye Du | SatFedGuard: Semi-Supervised Federated Contrastive Learning with RL-Assisted Bidirectional Distillation for Anomaly Traffic Detection in Satellite Networks | 2026 | Early Access | Low earth orbit satellites Artificial satellites Payloads Jamming Electronic warfare Feeds Broadcasting Broadcast technology Filtering Filters Federated Learning Satellite Network Intrusion Detection Semi-Supervised Learning Edge-Cloud Collaboration | Federated learning-based intrusion detection methods for satellite networks enable model training without sharing local data, thereby ensuring network security while significantly reducing communication overhead. However, due to the difficulty of obtaining large-scale high-quality labeled data in satellite environments, a key challenge lies in how to train intrusion detection models using abundant unlabeled traffic data. We propose SatFedGuard, a semi-supervised federated contrastive learning approach for anomaly traffic detection in satellite networks. SatFedGuard effectively integrates unlabeled in-orbit data with labeled data from ground stations for model training. First, it models the unlabeled satellite traffic data using a contrastive learning framework. To address the challenge of non-IID data distribution, an attention-based dual-path aggregation strategy is designed to generate personalized models for each satellite by leveraging model similarities. Then, a bidirectional multi-granularity distillation method between larger and smaller models is implemented, where reinforcement learning is employed to optimize the weights of different loss terms dynamically. Experiments on two satellite network traffic datasets under non-IID settings demonstrate that the proposed method significantly improves anomaly detection performance while reducing dependence on in-orbit labeled data, achieving F1-Scores of 93.38% (↑11.63%) and 99.80% (↑8.72%), respectively. | 10.1109/TNSM.2026.3685416 |
| Alba Jano, Serkut Ayvaşik, Yash Deshpande, Wolfgang Kellerer | QUEST: User-Based Quality of Service Aware Uplink Resource Scheduling | 2026 | Early Access | Payloads Military aircraft Space technology Omnidirectional antennas Broadcasting Feedback Circuits Semiconductor lasers Central Processing Unit Semiconductor optical amplifiers Radio resource management quality of service user context user satisfaction energy efficiency IoTs | Efficient radio resource management (RRM) in 5G networks is increasingly challenged by the diverse quality of service (QoS) requirements of emerging applications and the growing uplink (UL) traffic from resource-constrained devices. Existing scheduling approaches often lack user and service-specific context, limiting their ability to guarantee timely and energy-efficient data transmission, which is particularly critical for the internet of things (IoT) and mission-critical services. In this work, we introduce QUEST, a QoS-aware UL scheduling framework that exploits the 5G QoS model alongside network and device context to efficiently allocate radio resources. Designed and evaluated in an indoor factory environment, QUEST supports users with heterogeneous 5QI services under dynamic multi-user conditions. Evaluation results, validated through both real-world measurements and 3GPP-compliant simulations, show that QUEST consistently outperforms traditional channel- and QoS-aware schedulers. It improves QoS compliance, reduces packet drops and serving time, and enhances energy efficiency. For users with stringent QoS demands, measurements show a 13% increase in successfully transmitted packets and a 6.2% reduction in delay for 50% of transmissions, compared to the best-performing baseline. Benchmarking against an optimal scheduler shows that QUEST achieves the closest performance among baselines, while maintaining low complexity, making it a practical and scalable solution for 5G and beyond UL RRM. | 10.1109/TNSM.2026.3685537 |
| Jing Zhang, Chao Luo, Rui Shao | MTG-GAN: A Masked Temporal Graph Generative Adversarial Network for Cross-Domain System Log Anomaly Detection | 2026 | Early Access | Anomaly detection Adaptation models Generative adversarial networks Feature extraction Data models Load modeling Accuracy Robustness Contrastive learning Chaos Log Anomaly Detection Generative Adversarial Networks (GANs) Temporal Data Analysis | Anomaly detection of system logs is crucial for the service management of large-scale information systems. Nowadays, log anomaly detection faces two main challenges: 1) capturing evolving temporal dependencies between log events to adaptively address emerging anomaly patterns, and 2) maintaining high detection capability across varied data distributions. Existing methods rely heavily on domain-specific data features, making it challenging to handle the heterogeneity and temporal dynamics of log data. This limitation restricts the deployment of anomaly detection systems in practical environments. In this article, a novel framework, Masked Temporal Graph Generative Adversarial Network (MTG-GAN), is proposed for both conventional and cross-domain log anomaly detection. The model enhances the detection capability for emerging abnormal patterns in system log data by introducing an adaptive masking mechanism that combines generative adversarial networks with graph contrastive learning. Additionally, MTG-GAN reduces dependency on specific data distribution and improves model generalization by using diffused graph adjacency information derived from the temporal relevance of event sequences, which helps improve cross-domain detection performance. Experimental results demonstrate that MTG-GAN outperforms existing methods on multiple real-world datasets in both conventional and cross-domain log anomaly detection. | 10.1109/TNSM.2026.3654642 |
| Li-Chin Siang, Wen-Hsing Kuo, Pei-Chieh Lin, Chih-Wei Huang, De-Nian Yang | FoV Prediction-Based Adaptive Streaming Mechanism for 6DoF Volumetric MR Applications in Multi-Base-Station Networks | 2026 | Early Access | Payloads Antennas Feeds Antennas and propagation Broadcasting Broadcast technology Kalman filters Filters Central Processing Unit Circuits and systems femto-cells resource allocation layer encoding 360-degree video streaming | The emergence of mixed reality (MR) as a significant application in mobile networks has garnered considerable attention. Wireless headsets enable unrestricted user movement within femtocell networks comprising numerous small base stations, offering a promising solution for MR applications. However, the complexity of these systems poses challenges in optimizing resource allocation across base stations. This paper proposes a novel resource allocation method for volumetric MR streaming in multi-base-station environments. The method consists of two phases. Firstly, the method uses neural networks to model and forecast users’ viewing directions. Secondly, leveraging these predictions, their confidence levels, and layer characteristics, the algorithm adjusts video quality for each user and allocates transmission resources across base stations to optimize overall performance. Through comprehensive analysis, we prove that this novel problem is NP-hard and show that our approach achieves a performance within a bounded gap from the optimal solution. Simulation results reveal that our proposed algorithm outperforms existing techniques, enhancing aggregate performance across diverse scenarios. | 10.1109/TNSM.2026.3685670 |
| Jingyu Gan, Chen Guo, Chongxiang Yao | Construction and Post-Failure Reconstruction of Virtual Backbone Based on Regional Risk Difference in Wireless Sensor Networks | 2026 | Early Access | Broadcasting Broadcast technology Radio broadcasting Radio networks Communication systems Wireless sensor networks Computer networks Routing Wide area networks Network topology Wireless sensor network virtual backbone connected dominating set regional risk difference | In wireless sensor networks (WSNs), virtual backbones (VBs) are widely employed to address issues such as energy constraints and broadcast storms. WSNs are typically modeled as unit disk graphs (UDGs); a VB for data transmission is determined based on the construction of a connected dominating set (CDS) in the graph. Since sensor nodes may fail due to accidental damage or energy depletion, it is necessary to construct a CDS with fault tolerance. In practice, complex terrain, large altitude differences, and environmental perturbations caused by multiple factors mean that nodes in different regions of an application scenario frequently face significantly different failure risks. Based on this observation, we optimize the network structure by constructing different CDS types in regions with varying risk factors, introducing the concept of a regional risk difference connected dominating set (RRD-CDS) tailored for heterogeneous hazard levels. In this paper, we enhance network robustness by constructing a (k,m)-CDS in high-risk regions, while reducing the number of CDS nodes by building a global (1, 1)-CDS for other regions, thereby designing the RRD-CDS algorithm. When failures cause the RRD-CDS to lose its properties as a CDS, we design a reconstruction algorithm to restore the fault tolerance of RRD-CDS. Simulation results verify the effectiveness of both the RRD-CDS construction algorithm and the RRD-CDS reconstruction algorithm. | 10.1109/TNSM.2026.3686606 |
| Faissal Ahmadou, Boubakr Nour, Makan Pourzandi, Mourad Debbabi, Chadi Assi | Automating Threat-Aligned Testflows Generation using Ontology-Grounded RAG from CTI Reports | 2026 | Early Access | Radio broadcasting Frequency modulation System-on-chip Filtering Circuits Feedback Filters Integrated circuits MIMICs Millimeter wave integrated circuits Cybersecurity Security Automation Testflow Generation Retrieval-Augmented Generation | The increasing sophistication and complexity of Advanced Persistent Threats (APTs) pose significant challenges to security practitioners. To proactively protect against these threats, security practitioners rely on the generation of testflows, structured sequences of actions designed to verify whether the tactics and behaviors of an APT are present within their organization. However, manually creating such testflows is time-consuming, error-prone, and highly dependent on expert knowledge. Moreover, existing automated approaches suffer from several limitations, including validity, efficiency, and insufficient domain adaptation. To address these challenges, this paper introduces CTI-RAGFlow to automate the generation of relevant, valid, and effective testflows from unstructured threat reports tailored to specific organizational environments. CTI-RAGFlow introduces three key contributions: (i) a dual-ontology approach, that integrates both a system ontology representing the operational environment and a cybersecurity ontology capturing adversary tactics, techniques, and procedures, improving the precision and accuracy of generated testflows; (ii) a fact-based context retrieval mechanism that combines a hypergraph structured knowledge base with a Retrieval-Augmented Generation pipeline using Large Language Models; and (iii) a fully automated testflow generation process that minimizes manual effort, reduces human error, and facilitates the generation of valid testflows. We evaluate CTI-RAGFlow against three widely used LLMs (both base and fine-tuned models) using publicly available CTI reports for three well-known APTs (APT41, APT29, and APT28). The results show that CTI-RAGFlow outperforms the baselines in terms of semantic relevance, coverage, validity, and effectiveness in verifying multi-stage cyberattack scenarios. | 10.1109/TNSM.2026.3684808 |
| Qian Guo, Chunyu Zhang, Xue Xiao, Min Zhang, Zhuo Liu, Danshi Wang | Knowledge-Distilled Time-Series LLM for General Performance Parameter Prediction in Optical Transport Networks | 2026 | Early Access | Optical fibers Optical waveguides Feeds Network-on-chip Communication systems Internet of Things Optical fiber communication Optical fiber networks Telecommunications Quality of transmission Optical transport networks (OTNs) general performance parameter prediction time-series large language models knowledge distillation | In optical transport networks (OTNs), proactive and accurate prediction of key performance parameters plays a crucial role in identifying potential failure of OTN equipment and guiding timely operational interventions, reducing downtime and improving overall system performance. However, the performance parameters in OTNs are complex and diverse. The reliance of existing models’ structure design on specific configurations limits their generalizability across diverse equipment types. Moreover, the high computational resource consumption and memory footprints of these models may lead to inefficiency while hindering practical application and large-scale deployment. To address these challenges, this paper presents a general model, KD-TimeLLM, which applies TimeLLM to OTN failure management for performance parameter prediction across multiple equipment types in OTNs. By learning from its teacher model TimeLLM via a knowledge distillation strategy, KD-TimeLLM can achieve generalizability in performance parameter prediction while enhancing efficiency. We conducted evaluations across multiple metrics using data sets from different operators and various board types. Results show that KD-TimeLLM outperforms other models in predictive performance, achieving the lowest MSE and MAE across all types of board data along with a scaled_RMSE value below 0.5, handling varying numbers of performance parameters, and providing zero-shot prediction capability, highlighting its generalizability. Moreover, compared to its teacher model, KD-TimeLLM achieves comparable predictive performance with a 99.99% reduction in model parameters and an average reduction of 99.23% in inference time across eight different types of board data. Furthermore, compared to a multiple-model system, the total inference time and memory footprint of KD-TimeLLM decreased by 94.79% and 89.65%, respectively, highlighting its effectiveness and efficiency. | 10.1109/TNSM.2026.3686811 |
| Xiujun Xu, Qi Wang, Qingshan Wang, Yinlong Xu | Contract-Based Incentive Mechanism for Long-Term Participation in Federated Learning | 2026 | Vol. 23, Issue | Contracts Data models Computational modeling Costs Training Optimization Games Artificial intelligence Accuracy Privacy Federated learning long-term contract reputation incentive mechanism contract theory | Federated learning (FL), as an emerging technique, brings the advantage of organizing multiple participants to learn together while avoiding the leakage of their private information. Contract theory provides an effective incentive mechanism to encourage participants to participate in FL. Existing contract-based incentive mechanisms consider participants’ types but ignore the different contributions of participants within the same type during the training. This paper first introduces a metric, reputation, to evaluate the contribution of participants in each iteration, and then proposes a hybrid contract mechanism consisting of a short-term contract and a long-term contract. Only the participants with reputations higher than a pre-defined threshold can sign the long-term contract. We formulate the solution of the long-term contract mechanism as an optimization problem with constraints. We further simplify the constraints of the long-term contract optimization problem, and theoretically analyze the correctness of the simplification to greatly reduce its computational complexity. We prove that the model owner achieves more profit with the hybrid contract mechanism. Simulations with the MNIST dataset show that the long-term contract improves the model accuracy by at least 5% compared with the existing contracts. Furthermore, compared with the short-term contract, participants signing the long-term contract are granted more rewards. | 10.1109/TNSM.2026.3657419 |
| Haoran Hu, Huazhi Lun, Ya Wang, Zhifeng Deng, Jiahao Li, Yuexiang Cao, Ying Liu, Heng Zhang, Jie Tang, Huicun Yu, Jiahua Wei, Xingyu Wang, Lei Shi | Effective Resource Scheduling Design for Concurrent Competing Requests in Quantum Networks | 2026 | Vol. 23, Issue | Purification Quantum networks Quantum entanglement Throughput Damping Scheduling Routing Resource management Qubit Noise Quantum networks resource scheduling concurrent competing requests entanglement fidelity | Quantum networks, as a pivotal platform to support numerous quantum applications, have the potential to far exceed traditional communication networks. Establishing end-to-end entanglement connections with guaranteed fidelity is a key prerequisite for realizing the functionality of quantum networks. Entanglement purification techniques are commonly used in the entanglement distribution process to provide end-to-end entanglement connections that meet the fidelity requirements. Since the purification operation sacrifices a certain amount of entanglement resources, it is critical and challenging to efficiently utilize the scarce entanglement resources in quantum networks with concurrent competing requests. To address this problem, we propose a novel demand-oriented resource scheduling (DRS) algorithm. Considering the overall network demand, DRS introduces a congestion factor to evaluate the resource demand of each link, and performs purification operations sequentially based on the congestion level of the links, thus avoiding the excessive consumption of entanglement resources of bottleneck links. Extensive simulation results show that the DRS algorithm can achieve higher network throughput with similar resource conversion rates compared to traditional resource allocation schemes. Our work provides a new scheme for the resource scheduling problem under concurrent competing requests, which can promote the further development of existing entanglement routing techniques. | 10.1109/TNSM.2026.3651862 |
| Jack Wilkie, Hanan Hindy, Craig Michie, Christos Tachtatzis, James Irvine, Robert Atkinson | A Novel Contrastive Loss for Zero-Day Network Intrusion Detection | 2026 | Vol. 23, Issue | Contrastive learning Anomaly detection Training Autoencoders Training data Detectors Data models Vectors Telecommunication traffic Network intrusion detection Internet of Things network intrusion detection machine learning contrastive learning | Machine learning has achieved state-of-the-art results in network intrusion detection; however, its performance significantly degrades when confronted by a new attack class—a zero-day attack. In simple terms, classical machine learning-based approaches are adept at identifying attack classes on which they have been previously trained, but struggle with those not included in their training data. One approach to addressing this shortcoming is to utilise anomaly detectors which train exclusively on benign data with the goal of generalising to all attack classes—both known and zero-day. However, this comes at the expense of a prohibitively high false positive rate. This work proposes a novel contrastive loss function which is able to maintain the advantages of other contrastive learning-based approaches (robustness to imbalanced data) but can also generalise to zero-day attacks. Unlike anomaly detectors, this model learns the distributions of benign traffic using both benign and known malign samples, i.e., other well-known attack classes (not including the zero-day class), and consequently, achieves significant performance improvements. The proposed approach is experimentally verified on the Lycos2017 dataset where it achieves an AUROC improvement of 0.000065 and 0.060883 over previous models in known and zero-day attack detection, respectively. Finally, the proposed method is extended to open-set recognition achieving OpenAUC improvements of 0.170883 over existing approaches. | 10.1109/TNSM.2026.3652529 |
| Xinshuo Wang, Lei Liu, Baihua Chen, Yifei Li | ENCC: Explicit Notification Congestion Control in RDMA | 2026 | Vol. 23, Issue | Bandwidth Data centers Heuristic algorithms Accuracy Throughput Hardware Switches Internet Convergence Artificial intelligence Congestion control RDMA programmable switch FPGA | Congestion control (CC) is essential for achieving ultra-low latency, high bandwidth, and network stability in high-speed networks. However, modern high-performance RDMA networks, crucial for distributed applications, face significant performance degradation due to limitations of existing CC schemes. Most conventional approaches rely on congestion notification signals that must traverse the queuing data path before they can be sent back to the sender, causing delayed responses and severe performance collapse. This study proposes Explicit Notification Congestion Control (ENCC), a novel high-speed CC mechanism that achieves low latency, high throughput, and strong network stability. ENCC employs switches to directly notify the sender of precise link load information and avoid notification signal queuing. This allows precise sender-side rate control and queue regulation. ENCC also ensures fairness and easy deployment in hardware. We implement ENCC based on FPGA network interface cards and programmable switches. Evaluation results show that ENCC achieves substantial throughput improvements over representative baseline algorithms, with gains of up to 16.6× in representative scenarios, while incurring minimal additional latency. | 10.1109/TNSM.2026.3656015 |
| Pieter Moens, Bram Steenwinckel, Femke Ongenae, Bruno Volckaert, Sofie Van Hoecke | Toward Context-Aware Anomaly Detection for AIOps in Microservices Using Dynamic Knowledge Graphs | 2026 | Vol. 23, Issue | Microservice architectures Monitoring Anomaly detection Benchmark testing Knowledge graphs Costs Topology Real-time systems Scalability Observability Anomaly detection microservices knowledge graph dynamic graphs knowledge graph embedding AIOps context | Microservice applications are omnipresent due to their advantages, such as scalability, flexibility, and, consequently, resource cost efficiency. The loosely-coupled microservices can be easily added, replicated, updated and/or removed to address the changing workload. However, the distributed and dynamic nature of microservice architectures introduces complexity with regard to monitoring and observability, which are paramount to ensuring reliability, especially in critical domains. Anomaly detection has become an important tool to automate microservice monitoring and detect system failures. Nevertheless, state-of-the-art solutions assume the topology of the monitored application to remain static over time and fail to account for the dynamic changes the application, and the infrastructure it is deployed on, undergoes. This paper tackles these shortcomings by introducing a context-aware anomaly detection methodology using dynamic knowledge graphs to capture contextual features which describe the evolving state of the monitored system. Our methodology leverages resource and network monitoring to capture dependencies between microservices, and the infrastructure they are running on. In addition to the methodology for anomaly detection, this paper presents an open-source benchmark framework for context-aware anomaly detection that includes monitoring, fault injection and data collection. The evaluation on this benchmark shows that our methodology consistently outperforms the non-contextual baselines. These results underscore the importance of contextual awareness for robust anomaly detection in complex, topology-driven systems. Beyond these achieved improvements, our benchmark establishes a reproducible and extensible foundation for future research, facilitating the experimentation with broader ranges of models and a continued advancement in context-aware anomaly detection. | 10.1109/TNSM.2026.3652304 |
| Xiaofeng Liu, Naigong Zheng, Fuliang Li | Don’t Let SDN Obsolete: Interpreting Software-Defined Networks With Network Calculus | 2026 | Vol. 23, Issue | Delays Calculus Analytical models Optimization Kernel Queueing analysis Table lookup Quality of service Mathematical models Data centers Software-defined networking network calculus delay analysis performance optimization | Although Software-Defined Network (SDN) has gained popularity in real-world deployments for its flexible management paradigm, its centralized control principle leads to various known performance issues. In this paper, we propose SDN-Mirror, a novel generalized delay analytical model based on network calculus, to interpret how the performance is affected and to illustrate how to accelerate the performance as well. We first elaborate on the impact of parameters on packet forwarding delay in SDN, including device capacity, flow features and cache size. Then, building upon the analysis, we establish SDN-Mirror, which acts like a mirror, capable of not only precisely representing the relation between packet forwarding delay and each parameter but also verifying the effectiveness of optimization policies. Finally, we evaluate SDN-Mirror by quantifying how each parameter affects the forwarding delay under different table matching states. We also verify a performance improvement policy with the optimized SDN-Mirror, and experimental results show that the packet forwarding delays of kernel-space matching flows, userspace matching flows, and unmatched flows can be reduced by 39.8%, 20.7%, and 13.2%, respectively. | 10.1109/TNSM.2026.3655704 |