Last updated: 2026-05-16 05:01 UTC
| Author(s) | Title | Year | Publication | Keywords | Abstract | DOI |
|---|---|---|---|---|---|---|
| Deemah H. Tashman, Soumaya Cherkaoui | Trustworthy AI-Driven Dynamic Hybrid RIS: Joint Optimization and Reward Poisoning-Resilient Control in Cognitive MISO Networks | 2026 | Early Access | Reconfigurable intelligent surfaces Reliability Optimization Security MISO Array signal processing Vectors Satellites Reflection Interference Beamforming cascaded channels cognitive radio networks deep reinforcement learning dynamic hybrid reconfigurable intelligent surfaces energy harvesting poisoning attacks | Cognitive radio networks (CRNs) are a key mechanism for alleviating spectrum scarcity by enabling secondary users (SUs) to opportunistically access licensed frequency bands without harmful interference to primary users (PUs). To address unreliable direct SU links and energy constraints common in next-generation wireless networks, this work introduces an adaptive, energy-aware hybrid reconfigurable intelligent surface (RIS) for underlay multiple-input single-output (MISO) CRNs. Distinct from prior approaches relying on static RIS architectures, our proposed RIS dynamically alternates between passive and active operation modes in real time according to harvested energy availability. We also model our scenario under practical hardware impairments and cascaded fading channels. We formulate and solve a joint transmit beamforming and RIS phase optimization problem via the soft actor-critic (SAC) deep reinforcement learning (DRL) method, leveraging its robustness in continuous and highly dynamic environments. Notably, we conduct the first systematic study of reward poisoning attacks on DRL agents in RIS-enhanced CRNs, and propose a lightweight, real-time defense based on reward clipping and statistical anomaly filtering. Numerical results demonstrate that the SAC-based approach consistently outperforms established DRL baselines, and that the dynamic hybrid RIS strikes a superior trade-off between throughput and energy consumption compared to fully passive and fully active alternatives. We further show the effectiveness of our defense in maintaining SU performance even under adversarial conditions. Our results advance the practical and secure deployment of RIS-assisted CRNs, and highlight crucial design insights for energy-constrained wireless systems. | 10.1109/TNSM.2026.3660728 |
| Minh-Thuyen Thi, Mohan Gurusamy | Multi-dimensional Cross-granularity Open-set Network Intrusion Detection | 2026 | Early Access | Modeling Labeling Distance measurement Signal detection Optimization Fluid flow Training Intrusion detection Magnesium Tensors Network intrusion detection out-of-distribution detection optimal transport multi-granularity analysis | Network intrusion detection systems (NIDSs) face critical challenges from continuously evolving cyber-attacks. Traditional machine learning methods, while requiring extensive labeled training data, still often fail against unknown and out-of-distribution (OOD) attacks. Furthermore, new sophisticated adversaries are exploiting the detection blind spots inherent in traditional feature representation approaches that do not provide adequately comprehensive traffic analysis. In this paper, we propose MDCG-IDS, an NIDS framework that introduces multi-dimensional cross-granularity (MDCG) feature representation for open-set detection, in which network traffic is analyzed thoroughly across three complementary dimensions (traffic statistics, temporal, spatial), each at multiple granularity levels. These dimensions and granularities jointly capture the structures of sophisticated attacks that may be invisible from single analytical perspectives. We design a tensor structure that provides a unified encoding for the MDCG features while supporting the use of optimal transport theory to measure the distance between benign traffic and known or unknown attacks. MDCG-IDS uses a semi-supervised learning model that is trained exclusively on benign traffic and validated on a small set of labeled data, significantly reducing the effort of data labeling. Experiments on various datasets achieve AUC-ROC scores of more than 0.948, exceeding the best competing state-of-the-art methods by up to 7%. Regarding the amount of labeled validation data, MDCG-IDS obtains an AUC-ROC score of over 0.94 with only 3% of the validation samples, outperforming the baseline models. | 10.1109/TNSM.2026.3693141 |
| Md Facklasur Rahaman, Makhduma F. Saiyed, Irfan Al-Anbagi, Ramakrishna Gokaraju | A Domain-informed Hierarchical Federated Learning Framework for DDoS Detection in WSN for Critical Infrastructure | 2026 | Early Access | Modeling Internet of Things Signal detection Federated learning Accuracy Inductors Image sensors Timing Training Architecture Wireless Sensor Networks (WSN) Small Modular Reactor (SMR) Distributed IoT sensors Federated Learning LSTM Hierarchical Aggregation DDoS Attack Detection Domain-Informed LSTM Trust-Aware Systems | The deployment of Wireless Sensor Networks (WSN) in critical infrastructure, such as Small Modular Reactors (SMRs), faces cybersecurity threats like Distributed Denial of Service (DDoS) attacks that can overload these networks and disrupt monitoring and control functions. Current DDoS detection systems often suffer from high false positive rates, neglect domain-specific operational constraints, and rely on centralized architectures that pose privacy risks, making them less suitable for distributed Internet of Things (IoT) environments. To address these issues, we propose a novel Domain-informed Hierarchical Federated Learning (DHFL) framework for WSN used in SMR monitoring and control applications. Our framework features a dual-branch bidirectional Long Short-Term Memory (LSTM) architecture comprising two parallel processing branches with network-specific constraints, facilitating precise detection of DDoS attacks. It includes differentiable penalty functions to enforce domain-aligned behaviour and employs adaptive trust scoring to evaluate the reliability of individual nodes. These elements operate within a hierarchical Federated Learning (FL) structure organized into three tiers: sensor nodes, local aggregators, and a global coordinator, allowing collaborative training that preserves privacy. Unlike earlier approaches, our method not only maintains privacy by ensuring that raw sensor data never leaves the local nodes and only model updates are shared, but also considers the operational importance and trustworthiness of each node through tier-weighted aggregation. Tested on the CICIoT2023 dataset, our system achieved 93.4% accuracy, 94.5% precision, 97.5% recall, 95.5% F1-score, and 98.9% AUC, surpassing state-of-the-art FL methods in both performance and efficiency. Furthermore, it converged in fewer communication rounds (30–50) with reduced communication costs (from 45 MB to 30 MB per round). Our framework can differentiate between normal reactor transients and actual attacks, making it suitable for mission-critical SMR cybersecurity. | 10.1109/TNSM.2026.3693112 |
| Atri Mukhopadhyay, Dinesh Korukonda, Goutam Das | Design of Passive Optical Network Based O-RAN X-haul: A Systematic Approach | 2026 | Early Access | Timing Passive optical networks Optimization Delays Optical network units Ethernet Jitter Loading Copper Synchronization C-RAN Delay Jitter QCQP O-RAN PON | The development of high data rate communication technologies has resulted in cell densification, which in turn has led to the development of centralized radio access networks (C-RANs) followed by open radio access networks (O-RANs). The O-RAN segregates the base station into three logical entities: the central unit (CU), the distributed unit (DU) and the radio unit (RU). The CU, DU and RU require low latency, low jitter and high data rate connections for seamless operation; these connections are known as the X-haul. A passive optical network (PON) is a potential solution for X-haul design. However, conventional PON uplink protocols are not inherently suitable for X-haul requirements. The packetization procedure of PON introduces jitter to the X-haul bit stream. Further, the delay requirements of the X-haul limit the number of sources that can be connected to the X-haul. Advanced features like coordinated multipoint require synchronization among the different X-haul bit streams as well. Therefore, in this paper, we develop an optimal uplink system that allows PON to be used as an X-haul connection technology. The proposal maximizes the throughput of the PON while conforming to the delay and synchronization requirements. Moreover, the proposal nullifies the jitter introduced by the PON scheduler. We have performed extensive simulations to verify our results. | 10.1109/TNSM.2026.3692242 |
| Jiale Zhu, Xiaoyao Zheng, Shukai Ye, Ming Zheng, Liping Sun, Liangmin Guo, Qingying Yu, Yonglong Luo | Federated Recommendation Model Based on Personalized Attention and Privacy-Preserving Dynamic Graph | 2026 | Early Access | Modeling Federated learning Privacy Recommender systems Training Educational institutions Servers Algorithms Conferences Graph neural networks Graph Neural Networks Federated Learning Personalized Recommendation Privacy Protection | Graph Neural Networks (GNNs) have been widely adopted in recommendation systems. When integrated into a federated learning framework, GNNs can enhance the model’s expressive capability. However, challenges arise in personalized representation and graph expansion due to the heterogeneity and locality of user data in federated recommendation systems. To address these challenges, we propose a federated recommendation model based on personalized attention and privacy-preserving dynamic graphs. The method first matches neighbor users for each selected client. Subsequently, it counts the interaction frequencies of items for both local and neighbor users to construct personalized weights, which capture the unique characteristics of different users. Additionally, we design a method for constructing privacy-preserving dynamic graphs. In each round of federated training, the selected client adds pseudo-interaction items to its own interaction subgraph, perturbing the real interactions. After completing local training, the noisy interaction subgraph is incorporated into the global graph to capture higher-order connectivity information among users while safeguarding their interaction privacy. We conduct extensive experiments on three benchmark datasets, and the results demonstrate that the proposed PADG method achieves superior performance while effectively protecting privacy. | 10.1109/TNSM.2026.3691659 |
| Jing Zhang, Chao Luo, Rui Shao | MTG-GAN: A Masked Temporal Graph Generative Adversarial Network for Cross-Domain System Log Anomaly Detection | 2026 | Early Access | Anomaly detection Adaptation models Generative adversarial networks Feature extraction Data models Load modeling Accuracy Robustness Contrastive learning Chaos Log Anomaly Detection Generative Adversarial Networks (GANs) Temporal Data Analysis | Anomaly detection of system logs is crucial for the service management of large-scale information systems. Nowadays, log anomaly detection faces two main challenges: 1) capturing evolving temporal dependencies between log events to adaptively tackle emerging anomaly patterns, and 2) maintaining high detection capabilities across various data distributions. Existing methods rely heavily on domain-specific data features, making it challenging to handle the heterogeneity and temporal dynamics of log data. This limitation restricts the deployment of anomaly detection systems in practical environments. In this article, a novel framework, Masked Temporal Graph Generative Adversarial Network (MTG-GAN), is proposed for both conventional and cross-domain log anomaly detection. The model enhances the detection capability for emerging abnormal patterns in system log data by introducing an adaptive masking mechanism that combines generative adversarial networks with graph contrastive learning. Additionally, MTG-GAN reduces dependency on specific data distributions and improves model generalization by using diffused graph adjacency information derived from the temporal relevance of event sequences, which helps improve cross-domain detection performance. Experimental results demonstrate that MTG-GAN outperforms existing methods on multiple real-world datasets in both conventional and cross-domain log anomaly detection. | 10.1109/TNSM.2026.3654642 |
| Awaneesh Kumar Yadav, Madhusanka Liyanage, An Braeken | An Improved and Provably Secure EDHOC Protocol Supporting the Extended Canetti–Krawczyk (eCK) Security Model | 2026 | Early Access | Aerospace and electronic systems Telemetry Central Processing Unit Microcontrollers Microprocessors MIMICs Millimeter wave integrated circuits Monolithic integrated circuits Communication systems Internet of Things EDHOC OSCORE Key agreement Authentication extended Canetti–Krawczyk (eCK) attack model | Transport Layer Security (TLS) is considered to be the most used standard security protocol for the Internet of Things (IoT). However, as TLS was originally designed for computer networks, it is not optimal with respect to efficiency. Therefore, a new protocol called Object Security for Constrained RESTful Environments (OSCORE) has been standardized for securing constrained devices. Currently, the Ephemeral Diffie Hellman Over COSE (EDHOC) protocol, which is a key exchange protocol to define a session key used in OSCORE, is also in the process of being standardized. This paper shows that the four authentication modes of the EDHOC protocol are vulnerable in the extended Canetti–Krawczyk (eCK) security model, which is a common security model used in IoT. In addition, resistance to Distributed Denial of Service (DDoS) attacks is also weak. Taking this into account, we propose two new variants of EDHOC. The first variant, EDHOC2, is able to overcome both issues but has a slightly higher cost for communication, computation, storage, and energy consumption. The second variant, EDHOC3, offers only additional protection in the eCK security model and has, on average, similar performance compared to EDHOC, and even better performance in one authentication mode. Additionally, the Real-Or-Random (ROR) logic and Scyther validation tool are employed to ensure the security of the designed variants. Furthermore, a prototype implementation is conducted to demonstrate the real-time deployment of the designed versions. | 10.1109/TNSM.2026.3690530 |
| Dinghao Zeng, Fagui Liu, Runbin Chen, Jingwei Tan, Dishi Xu, Qingbo Wu, C.L. Philip Chen | CoreScaler: A Resource-Efficient Hybrid Scaling Framework for Dynamic Workloads in Cloud | 2026 | Early Access | Resource management Central Processing Unit Memory Optimization Modeling Timing Clouds Conferences Algorithms Loading Cloud computing microservices hybrid autoscaling resource management | Containerized microservices face significant challenges in balancing service quality and resource efficiency under dynamic workloads. Existing approaches suffer from horizontal scaling’s cold start latency, vertical scaling’s resource ceilings, and hybrid methods’ limited adaptability. We present CoreScaler, a resource-efficient hybrid scaling framework based on analysis of CPU usage patterns revealing substantial consumption differences between working mode and waiting mode instances. This insight drives our dual-mode instance management model that distinguishes between working instances actively handling requests and waiting instances maintaining hot standby with minimal resource allocation. CoreScaler employs a master-subordinate distributed architecture where the master node performs capacity planning using multi-confidence interval predictions and contextual multi-armed bandit optimization, while subordinate nodes execute mode-aware CPU quota adjustments. Comprehensive evaluation on a Kubernetes cluster with a typical microservice system under four representative production workloads demonstrates that CoreScaler maintains SLO compliance while reducing CPU and memory allocation by 22.53% and 30.83% respectively compared to state-of-the-art solutions. The framework achieves substantially higher resource utilization than single-dimension scaling approaches, validating the effectiveness of coordinated hybrid scaling for dynamic cloud environments. | 10.1109/TNSM.2026.3692955 |
| Jiahang Pu, Hongyu Ye, Jing Cheng, Feng Shan, Runqun Xiong | Balancing Timeliness and Accuracy: A Hybrid Data-Control Plane Framework for Volumetric DDoS Defense in IoT | 2026 | Early Access | Modeling Internet of Things Planing Signal detection Fluid flow Timing Denial-of-service attack IP networks Distributed denial-of-service attack Switches Distributed denial-of-service attack Attack detection Attack defense P4 Deep Learning | Resource-constrained IoT devices in Industrial Internet environments are highly vulnerable to DDoS attacks due to infrequent security updates and insufficient built-in protection mechanisms. Existing defense solutions primarily rely on external filtering servers or programmable switches, but these approaches fail to simultaneously meet the stringent real-time performance and high accuracy requirements of industrial applications. To address these limitations, we propose a novel cross-plane defense framework that exploits the temporal invariance characteristics of attack traffic patterns. In the data plane, an adaptive variance threshold mechanism immediately mitigates high-volume, low-variance traffic flows, while a bidirectional dual-hash table captures low-collision flow features for efficient export to the control plane. The control plane constructs temporally-enhanced flow sequences that enable deep learning models to perform accurate attack detection, subsequently directing the data plane to block identified malicious sources. We implemented and evaluated a prototype of this framework on a software switch platform using both real-world attack datasets and custom-generated traffic patterns. Experimental results demonstrate that our framework successfully mitigates 86% of attack traffic within milliseconds and achieves complete source blocking within 52 seconds. Compared to baseline methods, our framework can effectively counter both DoS and DDoS attacks without generating false positives on benign traffic. | 10.1109/TNSM.2026.3693266 |
| Xingyu He, Nianci Li, Panxing Huang, Chunhua Gu, Guisong Yang, Yunhuai Liu | Dynamic Spatiotemporal Dual-Encoder Transformer for Long-Term Traffic Prediction in LEO Satellite Networks | 2026 | Early Access | Satellites Modeling Low earth orbit satellites Timing Topology Matrices Sequences Sequential analysis Transformers Design methodology LEO Satellite Networks Traffic Prediction Spatiotemporal Modeling Long-term Prediction Transformer | Accurate long-term traffic prediction in Low Earth Orbit (LEO) satellite networks is essential for proactive resource allocation and congestion avoidance, yet remains challenging due to highly dynamic topologies, intermittent connectivity, and scarce real traffic data. Existing approaches are largely limited to short-term prediction or assume static spatial dependencies, making them inadequate for non-stationary LEO environments. To address these challenges, this paper proposes DST-DEformer, a dynamic spatial–temporal Transformer framework that jointly models evolving inter-satellite topology and multi-scale temporal dependencies. Specifically, a topology-adaptive graph convolution module captures time-varying spatial correlations, while a dual temporal encoder decouples long-term global trend modeling from short-term local fluctuation learning. In addition, a hybrid simulation–calibration framework is developed to generate realistic satellite traffic by incorporating orbital dynamics, demographic information, and real-world traffic trends. Extensive experiments on simulated LEO satellite traffic and the PEMS08 benchmark show that DST-DEformer consistently outperforms state-of-the-art methods in long-term prediction, achieving 4%-13% reductions in MSE and MAE and significantly slower error accumulation as the prediction horizon increases. These results demonstrate the effectiveness and robustness of DST-DEformer for long-term traffic prediction under dynamic network topologies. | 10.1109/TNSM.2026.3693648 |
| Arad Kotzer, Tom Azoulay, Yoad Abels, Aviv Yaish, Ori Rottenstreich | SoK: DeFi Lending and Yield Aggregation Protocol Taxonomy, Empirical Measurements, and Security Challenges | 2026 | Early Access | Filtering Application specific integrated circuits Filters Protocols Smart contracts Communication systems Proof of stake Proof of Work Internet Amplitude shift keying Blockchain Decentralized Finance (DeFi) Lending Yield Aggregation | Decentralized Finance (DeFi) lending protocols implement programmable credit markets without intermediaries. This paper systematizes the DeFi lending ecosystem, spanning collateralized lending (including over- and under-collateralized designs, and zero-liquidation loans), uncollateralized primitives (e.g., flashloans), and yield aggregation protocols which allocate capital across underlying lending platforms. Beyond a taxonomy of mechanisms and a comparison of protocols, we provide empirical on-chain measurements of lending activity and user behavior, using Compound V2 and AAVE V2 as case studies, and connect empirical observations to protocol design choices (e.g., interest-rate models and liquidation incentives). We then characterize vulnerabilities that arise due to notable designs, focusing on interest-rate setting mechanisms and time-measurement approaches. Finally, we outline open questions at the intersection of mechanism design, empirical measurement, and security for future research. | 10.1109/TNSM.2026.3682174 |
| Songshou Dong, Yanqing Yao, Huaxiong Wang, Yining Liu | LCMS: Efficient Lattice-based Conditional Privacy-preserving Multi-receiver Signcryption Scheme for Internet of Vehicles | 2026 | Early Access | Optical waveguides Optical fibers Broadcasting Broadcast technology Oscillators Circuits Feedback Circuits and systems Internet of Vehicles Communication systems Internet of Vehicles signcryption weak unlinkable certificateless revocable multi-receiver distributed decryption | Internet of Vehicles (IoV) requires robust security and privacy protection mechanisms to enable trusted traffic information exchange, while also requiring low communication and low computing overhead to meet the real-time requirements of IoV. Existing signcryption schemes suffer from quantum vulnerability, inadequate unlinkability/vehicle anonymity, absence of revocability, poor scalability, inadequate management of malicious entities, and high communication and computational overhead. Therefore, we propose an efficient lattice-based conditional privacy-preserving multi-receiver signcryption scheme (LCMS) that systematically addresses these gaps through three core innovations: 1) Privacy preservation is achieved via a pseudonym mechanism integrated with certificateless key generation, which ensures vehicle anonymity and weak unlinkability while preventing a malicious key generation center and key escrow; 2) Malicious entities are managed through dynamic revocability and distributed decryption among roadside units, preventing unilateral message access; and 3) Post-quantum efficiency is achieved by leveraging the Learning With Rounding problem to eliminate expensive Gaussian sampling, combined with ciphertext packing techniques. This reduces time overhead, the size of signcryptexts, and communication overhead, while lowering the overall storage overhead of the scheme through the MP12 trapdoor. Security proofs show LCMS achieves Existential Unforgeability under Adaptive Identity Chosen-Message Attack and Indistinguishability under Adaptive Identity Chosen-Ciphertext Attack in the Random Oracle Model, with rigorously validated resistance against multiple IoV-specific attacks. Experimental results via SageMath implementation demonstrate that our scheme exhibits a smaller signcryptext size and lower signcryption/unsigncryption time compared to existing random lattice-based signcryption schemes. Scalability tests with 300 vehicles and 300 roadside units (RSUs) were completed within 230 seconds. Communication overhead analysis confirms practical feasibility for the IEEE 802.11p vehicle communication protocol, and RSU serving capability evaluation under realistic vehicle density (100–200/km²) and speed (40–60 km/h) further validates system practicality. LCMS provides a quantum-resistant, privacy-preserving, and efficient solution for production IoV. | 10.1109/TNSM.2026.3688507 |
| Yuxiang Wang, Jiao Zhang, Leixin Cai, Tao Huang | Mercury: Multipath Spraying for Joint Congestion and Reordering Control in RDMA | 2026 | Early Access | | Due to the low entropy traffic characteristics of LLM (Large Language Model) training, existing load balancing mechanisms such as Equal-Cost Multi-Path (ECMP) fail to fully utilize the redundant bandwidth between computing nodes in RDMA over Converged Ethernet (RoCE). The packet spraying mechanism has become a typical solution to the load balancing problem in RoCE networks. However, it has a negative effect on congestion control mechanisms and suffers severe out-of-order problems. In this paper, we propose Mercury, a host-driven spraying scheme that synergizes congestion feedback and reordering control. Mercury selects paths by leveraging ECN, RTT, and reordering metrics, and adjusts rates via a multi-metric window. It also employs receiver-side buffers with priority-based dropping to mitigate out-of-order penalties. Evaluations in ns-3 under AllReduce and All-to-All traffic show that Mercury consistently outperforms the ECMP-based baselines, including DCQCN, TIMELY, HPCC, SWIFT, and BOLT, with the largest reduction in Max FCT reaching 63%. Under multi-path load balancing, Mercury delivers the lowest Max FCT for large messages in AllReduce and for most message sizes in All-to-All. It outperforms STRACK and MP-RDMA by up to 28% and 35% in AllReduce, and by up to 25% and 30% in All-to-All. | 10.1109/TNSM.2026.3692452 |
| Shaimaa Alkaabi, Mark A Gregory, Shuo Li | A Stateless Orchestrated Handover Protocol for Multi-Access Edge Computing | 2026 | Early Access | | In Multi-access Edge Computing (MEC) environments, session continuity during user mobility remains a pressing challenge due to decentralized infrastructure and high-throughput, latency-sensitive applications. Existing mobility protocols often rely on stateful mechanisms or centralized control, leading to increased signaling overhead, limited scalability, and vulnerability to performance degradation in dynamic networks. This paper introduces the Server Search and Select Algorithm Protocol (SSSAP), a lightweight, UDP-based handover protocol tailored for MEC deployments. The protocol is an extension of our previous work on a handover Server Search and Selection Algorithm (SSSA). SSSAP enables seamless session redirection through a three-phase signaling scheme (pre-handover, handover initiation, and handover termination), preserving service continuity without coupling session state to transport layers. The protocol’s design features extensible headers for multi-metric evaluation and future security adaptation while maintaining minimal dependency on intermediary control nodes. Through extensive simulation and testing, we have validated the efficiency of SSSAP across user equipment nodes and MEC servers. Results demonstrate high handover success rates, low session setup delays, and balanced server load distribution. SSSAP achieves superior performance in mobility robustness, packet loss mitigation, and integration simplicity. The research outcomes position SSSAP as a scalable and application-agnostic mobility protocol for MEC systems, especially in vehicular and high-mobility scenarios. | 10.1109/TNSM.2026.3692555 |
| Claudia Canali, Giuseppe Di Modica, Francesco Faenza, Luca Foschini, Riccardo Lancellotti, Domenico Scotece | OptiFog: A Framework to Optimize the Placement of Microservices in Fog Scenarios | 2026 | Vol. 23, Issue | Microservice architectures Genetic algorithms Quality of service Edge computing Optimization Internet of Things Energy consumption Software Prototypes Emulation Microservices placement fog computing genetic algorithms framework performance evaluation fog federation software platform | The Fog computing paradigm makes use of dispersed, diverse, and resource-limited devices located at the network edge to effectively implement Internet of Things (IoT) application services that demand low latency and substantial bandwidth. At the same time, the adoption of microservice-based architectures in the IoT domain is on the rise due to their ability to align with the swift evolution and deployment demands of highly dynamic IoT applications and to elastically scale to fulfill load demands. In complex environments like Fog federations, characterized by highly heterogeneous computing and networking resources, the effective allocation of microservices to available nodes, while ensuring compliance with required Quality of Service (QoS) constraints, represents a significant challenge. In this paper, we present the design and implementation of OptiFog, a comprehensive framework that enables users to model, simulate, and validate microservice placement solutions within a realistic testbed environment. Compared to state-of-the-art approaches, OptiFog offers developers a controlled environment for experimenting with placement solutions while providing the assurance that the resulting deployments will meet the targeted QoS requirements in real-world scenarios, specifically in terms of service execution time and energy consumption of Fog nodes. To demonstrate the feasibility of the proposed approach, we implemented and evaluated a representative use case, involving both sub-optimal and optimal microservice placement, and utilizing real-world microservices drawn from the IoT domain. | 10.1109/TNSM.2025.3648449 |
| Jack Wilkie, Hanan Hindy, Craig Michie, Christos Tachtatzis, James Irvine, Robert Atkinson | A Novel Contrastive Loss for Zero-Day Network Intrusion Detection | 2026 | Vol. 23, Issue | Contrastive learning Anomaly detection Training Autoencoders Training data Detectors Data models Vectors Telecommunication traffic Network intrusion detection Internet of Things network intrusion detection machine learning contrastive learning | Machine learning has achieved state-of-the-art results in network intrusion detection; however, its performance significantly degrades when confronted by a new attack class: a zero-day attack. In simple terms, classical machine learning-based approaches are adept at identifying attack classes on which they have been previously trained, but struggle with those not included in their training data. One approach to addressing this shortcoming is to utilise anomaly detectors which train exclusively on benign data with the goal of generalising to all attack classes, both known and zero-day. However, this comes at the expense of a prohibitively high false positive rate. This work proposes a novel contrastive loss function which is able to maintain the advantages of other contrastive learning-based approaches (robustness to imbalanced data) but can also generalise to zero-day attacks. Unlike anomaly detectors, this model learns the distributions of benign traffic using both benign and known malign samples, i.e., other well-known attack classes (not including the zero-day class), and consequently, achieves significant performance improvements. The proposed approach is experimentally verified on the Lycos2017 dataset where it achieves an AUROC improvement of 0.000065 and 0.060883 over previous models in known and zero-day attack detection, respectively. Finally, the proposed method is extended to open-set recognition, achieving OpenAUC improvements of 0.170883 over existing approaches. | 10.1109/TNSM.2026.3652529 |
| Pieter Moens, Bram Steenwinckel, Femke Ongenae, Bruno Volckaert, Sofie Van Hoecke | Toward Context-Aware Anomaly Detection for AIOps in Microservices Using Dynamic Knowledge Graphs | 2026 | Vol. 23, Issue | Microservice architectures Monitoring Anomaly detection Benchmark testing Knowledge graphs Costs Topology Real-time systems Scalability Observability Anomaly detection microservices knowledge graph dynamic graphs knowledge graph embedding AIOps context | Microservice applications are omnipresent due to their advantages, such as scalability, flexibility and, consequently, resource cost efficiency. The loosely-coupled microservices can be easily added, replicated, updated and/or removed to address the changing workload. However, the distributed and dynamic nature of microservice architectures introduces complexity with regard to monitoring and observability, which is paramount to ensure reliability, especially in critical domains. Anomaly detection has become an important tool to automate microservice monitoring and detect system failures. Nevertheless, state-of-the-art solutions assume the topology of the monitored application to remain static over time and fail to account for the dynamic changes that the application, and the infrastructure it is deployed on, undergo. This paper tackles these shortcomings by introducing a context-aware anomaly detection methodology using dynamic knowledge graphs to capture contextual features which describe the evolving state of the monitored system. Our methodology leverages resource and network monitoring to capture dependencies between microservices and the infrastructure they are running on. In addition to the methodology for anomaly detection, this paper presents an open-source benchmark framework for context-aware anomaly detection that includes monitoring, fault injection and data collection. The evaluation on this benchmark shows that our methodology consistently outperforms the non-contextual baselines. 
These results underscore the importance of contextual awareness for robust anomaly detection in complex, topology-driven systems. Beyond these achieved improvements, our benchmark establishes a reproducible and extensible foundation for future research, facilitating the experimentation with broader ranges of models and a continued advancement in context-aware anomaly detection. | 10.1109/TNSM.2026.3652304 |
| Marco Polverini, Andrés García-López, Juan Luis Herrera, Santiago García-Gil, Francesco G. Lavacca, Antonio Cianfrani, Jaime Galan-Jimenez | Avoiding SDN Application Conflicts With Digital Twins: Design, Models and Proof of Concept | 2026 | Vol. 23, Issue | Digital twins Analytical models Routing Delays Data models Reliability Switches Software defined networking Routing protocols Reviews Network digital twin SDN data plane SLA | Software-Defined Networking (SDN) enables flexible and programmable control over network behavior through the deployment of multiple control applications. However, when these applications operate simultaneously, each pursuing different and potentially conflicting objectives, unexpected interactions may arise, leading to policy violations, performance degradation, or inefficient resource usage. This paper presents a Digital Twin (DT)-based framework for the early detection of such application-level conflicts. The proposed framework is lightweight, modular, and designed to be seamlessly integrated into real SDN controllers. It includes multiple DT models capturing different network aspects, including end-to-end delay, link congestion, reliability, and carbon emissions. A case study in a smart factory scenario demonstrates the framework’s ability to identify conflicts arising from coexisting applications with heterogeneous goals. The solution is validated through both simulation and proof-of-concept implementation tested in an emulated environment using Mininet. The performance evaluation shows that three out of four DT models achieve a precision above 90%, while the minimum recall across all models exceeds 84%. Moreover, the proof of concept confirms that what-if analyses can be executed in a few milliseconds, enabling timely and proactive conflict detection. These results demonstrate that the framework can accurately detect conflicts and deliver feedback fast enough to support timely network adaptation. | 10.1109/TNSM.2026.3652800 |
| Jian Ye, Lisi Mo, Gaolei Fei, Yunpeng Zhou, Ming Xian, Xuemeng Zhai, Guangmin Hu, Ming Liang | TopoKG: Infer Internet AS-Level Topology From Global Perspective | 2026 | Vol. 23, Issue | Business Topology Routing Internet Knowledge graphs Accuracy Network topology Probabilistic logic Inference algorithms Border Gateway Protocol AS-level topology business relationship hierarchical structure knowledge graph global perspective | Internet Autonomous System (AS) level topology comprises the AS topology structure and AS business relationships; it describes the essence of Internet inter-domain routing and is the basis for Internet operation and management research. Although the latest topology inference methods have made significant progress, those relying solely on local information struggle to eliminate inference errors caused by observation bias and data noise due to their lack of a global perspective. In contrast, we not only leverage local AS link features but also re-examine the hierarchical structure of Internet AS-level topology, proposing a novel inference method called TopoKG. TopoKG introduces a knowledge graph to represent the relationships between different elements on a global scale and the business routing strategies of ASes at various tiers, which effectively reduces inference errors resulting from observation bias and data noise by incorporating a global perspective. First, we construct an Internet AS-level topology knowledge graph to represent relevant data, enabling us to better leverage the global perspective and uncover the complex relationships among multiple elements. Next, we employ knowledge graph meta paths to measure the similarity of AS business routing strategies and introduce this global perspective constraint to infer the AS business relationships and hierarchical structure iteratively. Additionally, we embed the entire knowledge graph upon completing the iteration and conduct knowledge inference to derive AS business relationships. 
This approach captures global features and more intricate relational patterns within the knowledge graph, further enhancing the accuracy of AS-level topology inference. Compared to the state-of-the-art methods, our approach achieves more accurate AS-level topology inference, reducing the average inference error across various AS link types by factors of 1.2 to 4.4. | 10.1109/TNSM.2026.3652956 |
| Shagufta Henna, Upaka Rathnayake | Hypergraph Representation Learning-Based xApp for Traffic Steering in 6G O-RAN Closed-Loop Control | 2026 | Vol. 23, Issue | Open RAN Resource management Ultra reliable low latency communication Throughput Heuristic algorithms Computer architecture Accuracy 6G mobile communication Seals Real-time systems Open radio access network (O-RAN) intelligent traffic steering link prediction for traffic management | This paper addresses the challenges in resource allocation within disaggregated Radio Access Networks (RAN), particularly when dealing with Ultra-Reliable Low-Latency Communications (uRLLC), enhanced Mobile Broadband (eMBB), and Massive Machine-Type Communications (mMTC). Traditional traffic steering methods often overlook individual user demands and dynamic network conditions, while multi-connectivity further complicates resource management. To improve traffic steering, we introduce Tri-GNN-Sketch, a novel graph-based deep learning approach employing Tri-subgraph sampling to enhance link prediction in Open RAN (O-RAN) environments. Link prediction refers to accurately forecasting optimal connections between users and network resources using current and historical measurements. Tri-GNN-Sketch is trained on real-world 4G/5G RAN monitoring data. The model demonstrates robust performance across multiple metrics, including precision, recall, F1 score, and ROC-AUC, effectively modeling interfering nodes for accurate traffic steering. We further propose Tri-HyperGNN-Sketch, which extends the approach to hypergraph modeling, capturing higher-order multi-node relationships. Using link-level simulations based on Channel Quality Indicator (CQI)-to-modulation mappings and LTE transport block size specifications, we evaluate throughput and packet delay for Tri-HyperGNN-Sketch. 
Tri-HyperGNN-Sketch achieves an exceptional link prediction accuracy of 99.99% and improved network-level performance, including higher effective throughput and lower packet delay compared to Tri-GNN-Sketch (95.1%) and other hypergraph-based models such as HyperSAGE (91.6%) and HyperGCN (92.31%) for traffic steering in complex O-RAN deployments. | 10.1109/TNSM.2026.3654534 |