Last updated: 2026-02-14 05:01 UTC
Number of pages: 156
| Author(s) | Title | Year | Publication | Keywords | Abstract | DOI |
|---|---|---|---|---|---|---|
| Qichen Luo, Zhiyun Zhou, Ruisheng Shi, Lina Lan, Qingling Feng, Qifeng Luo, Di Ao | Revisit Fast Event Matching-Routing for High Volume Subscriptions | 2026 | Early Access | Real-time systems Vectors Search problems Indexing Filters Data structures Classification algorithms Scalability Routing Partitioning algorithms Content-based Publish/subscribe Event Matching Existence Problem Matching Time Subscription Aggregation | Although many event matching algorithms have been proposed to achieve scalability for publish/subscribe services, content-based pub/sub systems still suffer from performance deterioration when they hold large numbers of subscriptions, and cannot support the requirements of real-time pub/sub data services. In this paper, we model the event matching problem as an existence problem that cares only about whether at least one matching subscription exists in the given subscription set, differing from existing works that try to speed up the time-consuming search operation to find all matching subscriptions. To solve this existence problem efficiently, we propose DLS (Discrete Label Set), a novel subscription and event representation model. Based on the DLS model, we propose an event matching algorithm with O(Nd) time complexity to support real-time event matching for a large volume of subscriptions and high event arrival speed, where Nd is the node degree in the overlay network. Experimental results show that the event matching performance can be improved by several orders of magnitude compared with traditional algorithms. | 10.1109/TNSM.2026.3664517 |
| Zhiwei Yu, Chengze Du, Heng Xu, Ying Zhou, Bo Liu, Jialong Li | REACH: Reinforcement Learning for Efficient Allocation in Community and Heterogeneous Networks | 2026 | Early Access | Graphics processing units Computational modeling Reliability Processor scheduling Costs Biological system modeling Artificial intelligence Reinforcement learning Transformers Robustness Community GPU platforms Reinforcement learning Task scheduling Distributed AI infrastructure | Community GPU (Graphics Processing Unit) platforms are emerging as a cost-effective and democratized alternative to centralized GPU clusters for AI (Artificial Intelligence) workloads, aggregating idle consumer GPUs from globally distributed and heterogeneous environments. However, their extreme hardware/software diversity, volatile availability, and variable network conditions render traditional schedulers ineffective, leading to suboptimal task completion. In this work, we present REACH (Reinforcement Learning for Efficient Allocation in Community and Heterogeneous Networks), a Transformer-based reinforcement learning framework that redefines task scheduling as a sequence scoring problem to balance performance, reliability, cost, and network efficiency. By modeling both global GPU states and task requirements, REACH learns to adaptively co-locate computation with data, prioritize critical jobs, and mitigate the impact of unreliable resources. Extensive simulation results show that REACH improves task completion rates by up to 17%, more than doubles the success rate for high-priority tasks, and reduces bandwidth penalties by over 80% compared to state-of-the-art baselines. Stress tests further demonstrate its robustness to GPU churn and network congestion, while scalability experiments confirm its effectiveness in large-scale, high-contention scenarios. | 10.1109/TNSM.2026.3663316 |
| Abdinasir Hirsi, Mohammed A. Alhartomi, Lukman Audah, Mustafa Maad Hamdi, Adeb Salah, Godwin Okon Ansa, Salman Ahmed, Abdullahi Farah | Hybrid CNN-LSTM Model for DDoS Detection and Mitigation in Software-Defined Networks | 2026 | Early Access | Prevention and mitigation Denial-of-service attack Feature extraction Electronic mail Computer crime Accuracy Security Deep learning Convolutional neural networks Real-time systems CNN-LSTM Deep Learning DDoS attack Machine Learning Network Security SDN security SDN Vulnerabilities | Software-Defined Networking (SDN) enhances programmability and control but remains highly vulnerable to distributed denial-of-service (DDoS) attacks. Existing solutions often adapt conventional methods without leveraging SDN’s native features or addressing real-time mitigation. This study introduces a novel hybrid deep learning framework for DDoS detection and mitigation in SDN, significantly advancing the state of the art. We develop a custom dataset in a Mininet–Ryu testbed that reflects realistic SDN traffic conditions, and employ a multistage feature selection pipeline to reduce redundancy and highlight the most discriminative flow attributes. A hybrid Convolutional Neural Network–Long Short-Term Memory (CNN-LSTM) model is then applied, capturing both spatial and temporal traffic patterns. The proposed system achieves 99.5% accuracy and a 97.7% F1-score, demonstrating a significant improvement over baseline ML and DL approaches. In addition, a lightweight and scalable mitigation module embedded in the SDN controller dynamically drops or reroutes malicious flows, enabling real-time, low-latency responsiveness. Experimental results across diverse topologies confirm the framework’s scalability and applicability in real-world SDN environments. | 10.1109/TNSM.2026.3662819 |
| Mohammad Amir Dastgheib, Hamzeh Beyranvand, Jawad A. Salehi | Shannon Entropy for Load-Balanced Cellular Network Planning: Data-Driven Voronoi Optimization of Base-Station Locations | 2026 | Early Access | Shape Entropy Costs Cost function Planning Measurement Load management Cellular networks Uncertainty Telecommunications Network planning Base-station placement Shannon entropy Machine learning Stochastic shape optimization Nearest neighbor methods Facility location | In this paper, we introduce a stochastic shape optimization technique for base-station placement in cellular wireless communication networks. We formulate the data-driven facility location problem in a gradient-based framework and propose an algorithm that computes stochastic gradients efficiently via nearest-neighbor evaluations on Voronoi diagrams. This enables the use of Shannon-entropy objectives that promote balanced coverage and yield more than two orders of magnitude reduction in per-iteration runtime compared to a conventional integral-based optimization that assumes full knowledge of the underlying density, making the proposed approach practical for real deployments. We highlight the requirements of facility location balancing problems with the introduction of the Adjusted Entropy Ratio and show a significant improvement in load balancing compared to the baseline algorithms, particularly in scenarios where baseline algorithms fall short in subdividing crowded areas for more equitable coverage. A downlink telecom evaluation with realistic propagation and interference models further shows that the proposed method configuration substantially improves user-rate fairness and load balance. Our results also show that Self-Organizing Maps (SOMs) provide an effective initialization by capturing the structure of the users’ location data. | 10.1109/TNSM.2026.3663045 |
| Rajasekhar Dasari, Sanjeet Kumar Nayak | PR-Fog: An Efficient Task Priority-based Reliable Provisioning of Resources in Fog-Enabled IoT Networks | 2026 | Early Access | Reliability Internet of Things Costs Energy consumption Cloud computing Edge computing Quality of service Energy efficiency Analytical models Resource management Internet of Things (IoT) Fog Computing Energy Latency Task Priority Reliability Analytical Modeling | As the demand for real-time data processing grows, fog computing, which brings computation and storage closer to IoT devices, emerges as an alternative to cloud computing. In fog-enabled IoT networks, provisioning of fog nodes for task processing must consider factors such as latency, energy consumption, cost, and reliability. This paper presents PR-Fog, a scheme for optimizing the provisioning of heterogeneous fog nodes in fog-enabled IoT networks, considering parameters such as task priority, energy efficiency, cost efficiency, and reliability. First, we create an analytical framework using an M/M/1/C priority queuing system to assess the reliability of these heterogeneous fog nodes. Building on this analysis, we propose an algorithm that determines the optimal number of reliable fog nodes while satisfying latency, energy, and cost constraints. Extensive simulations show significant enhancements in key performance metrics when comparing PR-Fog to existing schemes, including a 36% decrease in response time, an 8% improvement in satisfaction ratio, and a 23% reduction in fog node provisioning costs. Additionally, PR-Fog’s effectiveness is validated through real testbed experiments. | 10.1109/TNSM.2026.3661745 |
| Jing Zhang, Chao Luo, Rui Shao | MTG-GAN: A Masked Temporal Graph Generative Adversarial Network for Cross-Domain System Log Anomaly Detection | 2026 | Early Access | Anomaly detection Adaptation models Generative adversarial networks Feature extraction Data models Load modeling Accuracy Robustness Contrastive learning Chaos Log Anomaly Detection Generative Adversarial Networks (GANs) Temporal Data Analysis | Anomaly detection of system logs is crucial for the service management of large-scale information systems. Nowadays, log anomaly detection faces two main challenges: 1) capturing evolving temporal dependencies between log events to adaptively tackle emerging anomaly patterns, and 2) maintaining high detection capability across various data distributions. Existing methods rely heavily on domain-specific data features, making it challenging to handle the heterogeneity and temporal dynamics of log data. This limitation restricts the deployment of anomaly detection systems in practical environments. In this article, a novel framework, Masked Temporal Graph Generative Adversarial Network (MTG-GAN), is proposed for both conventional and cross-domain log anomaly detection. The model enhances the detection capability for emerging abnormal patterns in system log data by introducing an adaptive masking mechanism that combines generative adversarial networks with graph contrastive learning. Additionally, MTG-GAN reduces dependency on specific data distributions and improves model generalization by using diffused graph adjacency information derived from the temporal relevance of event sequences, which is conducive to improving cross-domain detection performance. Experimental results demonstrate that MTG-GAN outperforms existing methods on multiple real-world datasets in both conventional and cross-domain log anomaly detection. | 10.1109/TNSM.2026.3654642 |
| Muhammad Fahimullah, Michel Kieffer, Sylvaine Kerboeuf, Shohreh Ahvar, Maria Trocan | Decentralized Coalition Formation of Infrastructure Providers for Resource Provisioning in Coverage Constrained Virtualized Mobile Networks | 2026 | Early Access | Indium phosphide III-V semiconductor materials Resource management Games Costs Wireless communication Quality of service Collaboration Protocols Performance evaluation Resource provisioning wireless virtualized networks coverage integer linear programming coalition formation hedonic approach | The concept of wireless virtualized networks enables Mobile Virtual Network Operators (MVNOs) to utilize resources made available by multiple Infrastructure Providers (InPs) to set up a service. Nevertheless, existing centralized resource provisioning approaches fail to address such a scenario due to conflicting objectives among InPs and their reluctance to share private information. This paper addresses the problem of resource provisioning from several InPs for services with geographic coverage constraints. When complete information is available, an Integer Linear Program (ILP) formulation is provided, along with a greedy solution. An alternative coalition formation approach is then proposed to build coalitions of InPs that satisfy the constraints imposed by an MVNO, while requiring only limited information sharing. The proposed solution adopts a hedonic game-theoretic approach to coalition formation. For each InP, the decision to join or leave a coalition is made in a decentralized manner, relying on the satisfaction of service requirements and on individual profit. Simulation results demonstrate the applicability and performance of the proposed solution. | 10.1109/TNSM.2026.3663437 |
| Jordan F. Masakuna, Djeff K. Nkashama, Arian Soltani, Marc Frappier, Pierre M. Tardif, Froduald Kabanza | Enhancing Anomaly Alert Prioritization through Calibrated Standard Deviation Uncertainty Estimation with an Ensemble of Auto-Encoders | 2026 | Early Access | Uncertainty Standards Measurement Anomaly detection Calibration Bayes methods Predictive models Computer security Reliability Monitoring Auto-Encoders Security Anomaly Detection Alert Prioritization Uncertainty Estimation | Deep auto-encoders (AEs) are widely employed deep learning methods in the field of anomaly detection across diverse domains (e.g., cybersecurity analysts managing large volumes of alerts, or medical practitioners monitoring irregular patient signals). In such contexts, practitioners often face challenges of scale and limited processing resources. To cope, strategies such as false positive reduction, human-in-the-loop review, and alert prioritization are commonly adopted. This paper explores the integration of uncertainty quantification (UQ) methods into alert prioritization for anomaly detection using ensembles of AEs. UQ models highlight doubtful classification decisions, enabling analysts to address the most certain alerts first, since higher certainty typically correlates with greater accuracy. Our study reveals a nuanced issue where applying UQ to ensembles of AEs can produce skewed distributions of large reconstruction errors (errors exceeding a pre-defined threshold), which may falsely suggest high uncertainty when standard deviation is used as the metric. Conventionally, a high standard deviation indicates high uncertainty. However, contrary to intuition, large reconstruction errors often reflect that the AE is strongly confident that an input is anomalous, not uncertainty about it. Moreover, ensembles of AEs generate reconstruction errors with varying ranges, complicating interpretation. To address this, we propose an extension that calibrates the standard deviation distribution of uncertainties, mitigating erroneous prioritization. Evaluation on 10 benchmark datasets demonstrates that our calibration approach improves the effectiveness of UQ methods in prioritizing alerts, while maintaining favorable trade-offs across other key performance metrics. | 10.1109/TNSM.2026.3664298 |
| Domenico Scotece, Giuseppe Santaromita, Claudio Fiandrino, Luca Foschini, Domenico Giustiniano | On the Scalability of Access and Mobility Management Function: the Localization Management Function Use Case | 2026 | Early Access | 5G mobile communication Scalability Location awareness 3GPP Quality of service Position measurement Routing Radio access networks Protocols Global navigation satellite system 5G localization 5G core SBA AMF Localization Management Function (LMF) | The adoption of Service-Based Architecture (SBA) in 5G Core Networks (5GC) has significantly transformed the design and operation of the control plane, enabling greater flexibility and agility for cloud-native deployments. While the infrastructure has initially evolved by implementing key functions, there remains significant potential for additional services, such as localization, paving the way for the integration of the Location Management Function (LMF). However, the extensive functional decomposition within SBA leads to consequences such as an increase in control-plane operations. Specifically, we observe that the additional signaling traffic introduced by the presence of the LMF overwhelms the Access and Mobility Management Function (AMF), which is responsible for authentication and mobility. In fact, in mobile positioning, each connected mobile device requires a significant amount of control traffic to support location algorithms in the 5GC. To address this scalability challenge, we analyze the impact of three well-known optimization techniques on location procedures to reduce control message traffic in the specific context of the 5GC, namely a caching system, a request aggregation system, and a service scalability system. Our solutions are evaluated in an OpenAirInterface (OAI) emulated environment with real hardware. After the analysis in the emulated environment, we select the caching system, owing to its feasibility, for analysis in a real 5G testbed. Our results demonstrate a significant reduction in the additional overhead introduced by the LMF, improving scalability by reducing the impact on AMF processing time by up to 50%. | 10.1109/TNSM.2026.3664546 |
| Liang Kou, Xiaochen Pan, Guozhong Dong, Meiyu Wang, Chunyu Miao, Jilin Zhang, Pingxia Duan | Dynamic Adaptive Aggregation and Feature Pyramid Network Enhanced GraphSAGE for Advanced Persistent Threat Detection in Next-Generation Communication Networks | 2026 | Early Access | Feature extraction Adaptation models Computational modeling Artificial intelligence Semantics Topology Next generation networking Adaptive systems Dynamic scheduling Data models GraphSAGE Dynamic Graph Attention Mechanism Multi-Scale Feature Pyramid Advanced Persistent Threat Next-Generation Communication Networks | Advanced Persistent Threats (APTs) pose severe challenges to Next-Generation Communication Networks (NGCNs) due to their stealthiness and NGCNs’ dynamic topology, while conventional GNN-based intrusion detection systems suffer from static aggregation and poor adaptability to unseen nodes. To address these issues, this paper proposes DAA-FPN-SAGE, a lightweight graph-based detection framework integrating Dynamic Adaptive Aggregation (DAA) and Multi-Scale Feature Pyramid Network (MSFPM). Leveraging GraphSAGE’s inductive learning capability, the framework effectively models unseen nodes or subgraphs and adapts to NGCN’s dynamic changes (e.g., elastic network slicing, online AI model updates)—a key advantage for handling NGCN’s real-time topological variations. The DAA module employs multi-hop attention to dynamically assign weights to neighbors at different hop distances, enhancing capture of hierarchical dependencies in multi-stage APT attack chains. The MSFPM module fuses local-global structural information via a gated feature selection mechanism, resolving dimensional inconsistency and enriching attack behavior representation. Extensive experiments on StreamSpot, Unicorn, and DARPA TC#3 datasets demonstrate superior performance, meeting detection requirements of large-scale NGCNs. | 10.1109/TNSM.2026.3660650 |
| Deemah H. Tashman, Soumaya Cherkaoui | Trustworthy AI-Driven Dynamic Hybrid RIS: Joint Optimization and Reward Poisoning-Resilient Control in Cognitive MISO Networks | 2026 | Early Access | Reconfigurable intelligent surfaces Reliability Optimization Security MISO Array signal processing Vectors Satellites Reflection Interference Beamforming cascaded channels cognitive radio networks deep reinforcement learning dynamic hybrid reconfigurable intelligent surfaces energy harvesting poisoning attacks | Cognitive radio networks (CRNs) are a key mechanism for alleviating spectrum scarcity by enabling secondary users (SUs) to opportunistically access licensed frequency bands without harmful interference to primary users (PUs). To address unreliable direct SU links and energy constraints common in next-generation wireless networks, this work introduces an adaptive, energy-aware hybrid reconfigurable intelligent surface (RIS) for underlay multiple-input single-output (MISO) CRNs. Distinct from prior approaches relying on static RIS architectures, our proposed RIS dynamically alternates between passive and active operation modes in real time according to harvested energy availability. We also model our scenario under practical hardware impairments and cascaded fading channels. We formulate and solve a joint transmit beamforming and RIS phase optimization problem via the soft actor-critic (SAC) deep reinforcement learning (DRL) method, leveraging its robustness in continuous and highly dynamic environments. Notably, we conduct the first systematic study of reward poisoning attacks on DRL agents in RIS-enhanced CRNs, and propose a lightweight, real-time defense based on reward clipping and statistical anomaly filtering. Numerical results demonstrate that the SAC-based approach consistently outperforms established DRL baselines, and that the dynamic hybrid RIS strikes a superior trade-off between throughput and energy consumption compared to fully passive and fully active alternatives. We further show the effectiveness of our defense in maintaining SU performance even under adversarial conditions. Our results advance the practical and secure deployment of RIS-assisted CRNs, and highlight crucial design insights for energy-constrained wireless systems. | 10.1109/TNSM.2026.3660728 |
| Jing Huang, Yabo Wang, Honggui Han | SCFusionLocator: A Statement-Level Smart Contract Vulnerability Localization Framework Based on Code Slicing and Multi-Modal Feature Fusion | 2026 | Early Access | Smart contracts Feature extraction Location awareness Codes Blockchains Source coding Fuzzing Security Noise Formal verification Smart Contract Vulnerability Detection Statement-level Localization Code Slicing Feature Fusion | Smart contract vulnerabilities have led to over $20 billion in losses, but existing methods suffer from coarse-grained detection, two-stage “detection-then-localization” pipelines, and insufficient feature extraction. This paper proposes SCFusionLocator, a statement-level vulnerability localization framework for smart contracts. It adopts a novel code-slicing mechanism (via function call graphs and data-flow graphs) to decompose contracts into single-function subcontracts and filter low-saliency statements, paired with source code normalization to reduce noise. A dual-branch architecture captures complementary features: the code-sequence branch uses GraphCodeBERT (with data-flow-aware masking) for semantic extraction, while the graph branch fuses call/control-flow/data-flow graphs into a heterogeneous graph and applies HGAT for structural modeling. SCFusionLocator enables end-to-end statement-level localization by framing tasks as statement classification. We release BJUT_SC02, a large dataset of over 240,000 contracts with line-level labels for 58 vulnerability types. Experiments on BJUT_SC02, SCD, and MANDO datasets show SCFusionLocator outperforms 8 conventional tools and nearly 20 ML baselines, achieving over 90% average F1 at the statement level, with better generalization to similar unknown vulnerabilities, and remains competitive in contract-level detection. | 10.1109/TNSM.2026.3664599 |
| Shiyu Yang, Qunyong Wu, Zhanchao Huang, Zihao Zhuo | SGA-Seq: Station-aware Graph Attention Sequence Network for Cellular Traffic Prediction | 2026 | Early Access | Adaptation models Predictive models Spatiotemporal phenomena Cellular networks Traffic control Computational modeling Time series analysis Accuracy Feature extraction Technological innovation Traffic prediction Graph Convolutional Network Spatiotemporal dependencies | Cellular traffic prediction is crucial for optimizing network resources and enhancing service quality. Despite progress in existing traffic prediction methods, challenges remain in capturing periodic features, spatial heterogeneity, and abnormal signals. To address these challenges, we propose a Station-aware Graph Attention Sequence Network (SGA-Seq). The core idea is to achieve accurate cellular traffic prediction by adaptively modeling station-specific spatiotemporal patterns and effectively handling complex traffic dynamics. First, we introduce a learnable temporal embedding mechanism to capture temporal features across multiple scales. Second, we design a station-aware graph attention network to model complex spatial relationships across stations. Additionally, by progressively separating regular and abnormal signals layer by layer, we enhance the model’s robustness. Experimental results demonstrate that SGA-Seq outperforms existing methods on five diverse mobile network datasets spanning different scales, including cellular traffic, mobility flow, and communication datasets. Notably, on the V-GCT dataset, our method achieves an 8.04% improvement in Root Mean Squared Error compared to the Spatiotemporal-aware Trend-Seasonality Decomposition Network. The code of SGA-Seq is available at https://github.com/OvOYu/SGA-Seq. | 10.1109/TNSM.2026.3664401 |
| Fengqi Li, Yudong Li, Lingshuang Ma, Kaiyang Zhang, Yan Zhang, Chi Lin, Ning Tong | Integrated Cloud-Edge-SAGIN Framework for Multi-UAV Assisted Traffic Offloading Based On Hierarchical Federated Learning | 2026 | Early Access | Resource management Autonomous aerial vehicles Heuristic algorithms Federated learning Internet of Things Dynamic scheduling Vehicle dynamics Atmospheric modeling Accuracy Training SAGIN Hierarchical Federated Learning traffic offloading cloud-edge-end Unmanned Aerial Vehicle | The growing number of mobile devices used by terrestrial users has significantly amplified the traffic load on cellular networks. Especially in urban environments, the high traffic demand brought about by dense user populations has bottlenecked network resources. The Space-Air-Ground-Integrated Network (SAGIN) provides a new solution to cope with this demand, enhancing data transmission efficiency through a multi-layered network structure. However, the heterogeneous and dynamic nature of SAGIN also poses significant management and resource allocation challenges. In this paper, we propose a cloud-edge-SAGIN framework for multi-UAV assisted traffic offloading based on Hierarchical Federated Learning (HFL), aiming to improve the traffic offloading ratio while optimizing the offloading resource allocation. HFL is used instead of traditional Federated Learning (FL) to mitigate problems such as inefficient resource allocation caused by the heterogeneity of SAGIN. Specifically, the framework applies a hierarchical federated averaging algorithm and sets a reward function at the ground level, aiming to obtain better model parameters, improve model accuracy at aggregation, enhance the UAV traffic offloading ratio, and optimize UAV scheduling and resource allocation. In addition, an improved Reinforcement Learning (RL) algorithm, TD3-A4C, is designed in this paper to assist UAVs in intelligent decision-making, reducing communication latency and further improving resource utilization efficiency. Simulation results demonstrate that the proposed framework and algorithms display superior performance across all dimensions and offer robust support for the comprehensive investigation of intelligent traffic offloading networks. | 10.1109/TNSM.2026.3658833 |
| Yuhao Chen, Jinyao Yan, Yuan Zhang, Lingjun Pu | WiLD: Learning-based Wireless Loss Diagnosis for Congestion Control with Ultra-low Kernel Overhead | 2026 | Early Access | Packet loss Kernel Linux Wireless networks Quantization (signal) Artificial neural networks Throughput Accuracy Real-time systems Computational modeling wireless loss diagnosis kernel implementation congestion control quantization | Current congestion control algorithms (CCAs) are inefficient in wireless networks because they do not distinguish congestion losses from wireless packet losses. In this work, we propose a simple yet effective learning-based wireless loss diagnosis (WiLD) solution for enhancing wireless congestion control. WiLD uses a neural network (NN) to accurately distinguish between wireless packet loss and congestion packet loss. To seamlessly cooperate with rule-based CCAs and make real-time decisions, we further implement WiLD in the Linux kernel to avoid frequent user-kernel space communication. Specifically, we use a lightweight NN for inference and propose an integer quantization scheme for deploying WiLD across various Linux versions. Real-world experiments and simulations demonstrate that WiLD can accurately differentiate wireless and congestion packet losses with negligible CPU overhead (around 1% for WiLD vs. around 100% for learning-based algorithms such as Vivace and Aurora) and fast inference time (45% less compared to TensorFlow Lite). When combined with Cubic, WiLD-Cubic can achieve around 792%, 536%, 412%, 231%, 218%, 108%, 85% and 291% throughput improvement compared with BBRv2, Cubic, Westwood, Copa, Copa+, Vivace, Aurora and Indigo in the real network environment. | 10.1109/TNSM.2026.3664422 |
| Shagufta Henna, Upaka Rathnayake | Hypergraph Representation Learning-Based xApp for Traffic Steering in 6G O-RAN Closed-Loop Control | 2026 | Vol. 23, Issue | Open RAN Resource management Ultra reliable low latency communication Throughput Heuristic algorithms Computer architecture Accuracy 6G mobile communication Seals Real-time systems Open radio access network (O-RAN) intelligent traffic steering link prediction for traffic management | This paper addresses the challenges in resource allocation within disaggregated Radio Access Networks (RAN), particularly when dealing with Ultra-Reliable Low-Latency Communications (uRLLC), enhanced Mobile Broadband (eMBB), and Massive Machine-Type Communications (mMTC). Traditional traffic steering methods often overlook individual user demands and dynamic network conditions, while multi-connectivity further complicates resource management. To improve traffic steering, we introduce Tri-GNN-Sketch, a novel graph-based deep learning approach employing Tri-subgraph sampling to enhance link prediction in Open RAN (O-RAN) environments. Link prediction refers to accurately forecasting optimal connections between users and network resources using current and historical measurements. Tri-GNN-Sketch is trained on real-world 4G/5G RAN monitoring data. The model demonstrates robust performance across multiple metrics, including precision, recall, F1 score, and ROC-AUC, effectively modeling interfering nodes for accurate traffic steering. We further propose Tri-HyperGNN-Sketch, which extends the approach to hypergraph modeling, capturing higher-order multi-node relationships. Using link-level simulations based on Channel Quality Indicator (CQI)-to-modulation mappings and LTE transport block size specifications, we evaluate throughput and packet delay for Tri-HyperGNN-Sketch. Tri-HyperGNN-Sketch achieves an exceptional link prediction accuracy of 99.99% and improved network-level performance, including higher effective throughput and lower packet delay compared to Tri-GNN-Sketch (95.1%) and other hypergraph-based models such as HyperSAGE (91.6%) and HyperGCN (92.31%) for traffic steering in complex O-RAN deployments. | 10.1109/TNSM.2026.3654534 |
| Apurba Adhikary, Avi Deb Raha, Yu Qiao, Md. Shirajum Munir, Mrityunjoy Gain, Zhu Han, Choong Seon Hong | Age of Sensing Empowered Holographic ISAC Framework for nextG Wireless Networks: A VAE and DRL Approach | 2026 | Vol. 23, Issue | Array signal processing Resource management Integrated sensing and communication Wireless networks Phased arrays Hardware Arrays Real-time systems Metamaterials 6G mobile communication Integrated sensing and communication age of sensing holographic MIMO deep reinforcement learning artificial intelligence framework | This paper proposes an AI framework that leverages integrated sensing and communication (ISAC), aided by the age of sensing (AoS), to ensure timely location updates of the users in a holographic MIMO (HMIMO)-assisted base station (BS)-enabled wireless network. The AI-driven framework aims to achieve optimized power allocation for efficient beamforming by activating the minimal number of grids from the HMIMO BS for serving the users. An optimization problem is formulated to maximize the sensing utility function, aiming to maximize the communication signal-to-interference-plus-noise ratio (SINR$_{c}$) of the received signals and beam-pattern gains to improve the sensing SINR of reflected echo signals, which in turn maximizes the achievable rate of users. A novel AI-driven framework is presented to tackle the formulated NP-hard problem that divides it into two problems: a sensing problem and a power allocation problem. The sensing problem is solved by employing a variational autoencoder (VAE)-based mechanism that obtains the sensing information leveraging AoS, which is used for the location update. Subsequently, a deep deterministic policy gradient-based deep reinforcement learning scheme is devised to allocate the desired power by activating the required grids based on the sensing information achieved with the VAE-based mechanism. Simulation results demonstrate the superior performance of the proposed AI framework compared to advantage actor-critic and deep Q-network-based methods, achieving a cumulative average SINR$_{c}$ improvement of 8.5 dB and 10.27 dB, and a cumulative average achievable rate improvement of 21.59 bps/Hz and 4.22 bps/Hz, respectively. Therefore, our proposed AI-driven framework guarantees efficient power allocation for holographic beamforming through ISAC schemes leveraging AoS. | 10.1109/TNSM.2026.3654889 |
| Jian Ye, Lisi Mo, Gaolei Fei, Yunpeng Zhou, Ming Xian, Xuemeng Zhai, Guangmin Hu, Ming Liang | TopoKG: Infer Internet AS-Level Topology From Global Perspective | 2026 | Vol. 23, Issue | Business Topology Routing Internet Knowledge graphs Accuracy Network topology Probabilistic logic Inference algorithms Border Gateway Protocol AS-level topology business relationship hierarchical structure knowledge graph global perspective | Internet Autonomous System (AS) level topology includes the AS topology structure and AS business relationships, describes the essence of Internet inter-domain routing, and is the basis for Internet operation and management research. Although the latest topology inference methods have made significant progress, those relying solely on local information struggle to eliminate inference errors caused by observation bias and data noise due to their lack of a global perspective. In contrast, we not only leverage local AS link features but also re-examine the hierarchical structure of Internet AS-level topology, proposing a novel inference method called TopoKG. TopoKG introduces a knowledge graph to represent the relationships between different elements on a global scale and the business routing strategies of ASes at various tiers, which effectively reduces inference errors resulting from observation bias and data noise by incorporating a global perspective. First, we construct an Internet AS-level topology knowledge graph to represent relevant data, enabling us to better leverage the global perspective and uncover the complex relationships among multiple elements. Next, we employ knowledge graph meta paths to measure the similarity of AS business routing strategies and introduce this global perspective constraint to infer the AS business relationships and hierarchical structure iteratively. Additionally, we embed the entire knowledge graph upon completing the iteration and conduct knowledge inference to derive AS business relationships. This approach captures global features and more intricate relational patterns within the knowledge graph, further enhancing the accuracy of AS-level topology inference. Compared to the state-of-the-art methods, our approach achieves more accurate AS-level topology inference, reducing the average inference error across various AS link types by 1.2 to 4.4 times. | 10.1109/TNSM.2026.3652956 |
| Ze Wei, Rongxi He, Chengzhi Song, Xiaojing Chen | Differentiated Offloading and Resource Allocation With Energy Anxiety Level Consideration in Heterogeneous Maritime Internet of Things | 2026 | Vol. 23, Issue | Internet of Things Resource management Carbon footprint Servers Reviews Packet loss Heterogeneous networks Green energy Delays Anxiety disorders Mobile edge computing task offloading resource allocation carbon footprint minimization | The popularity of maritime activities not only exacerbates the carbon footprint (CF) but also places higher demands on Maritime Internet of Things (MIoTs) to support heterogeneous MIoT devices (MIoTDs) with different prioritized tasks. High-priority tasks can be processed cooperatively via local computation, offloading to nearby MIoTDs (helpers), or offloading to edge servers to ensure their timely and successful completion. Due to the differences in energy availability and rechargeability, MIoTDs exhibit distinct energy states, impacting their operational behaviors. We propose the Energy Anxiety Level (EAL) to quantify these states: a higher EAL tends to lead to increased packet dropping and earlier shutdown. Although low-EAL MIoTDs seem preferable as helpers, their scarce residual computational resources after local task completion may cause offloaded high-priority tasks to drop or time out. Therefore, helper selection should jointly consider candidate MIoTDs’ EALs and loads to evaluate their unsuitability. This paper addresses the problem of differentiated task offloading and resource allocation in MIoTs by formulating it as a mixed integer nonlinear programming model. The objective is to minimize system-wide CF, packet loss, helper unsuitability risk, and high-priority task latency. To solve this complex problem, we decompose it into two subproblems. We then design algorithms to determine optimal offloading patterns, task partitioning factors, MIoTD transmission powers, and computation resource allocation for MIoTDs and edge servers. Simulation results demonstrate that our proposal outperforms benchmarks in reducing CF and EAL, lowering high-priority task latency, and improving task completion ratio. | 10.1109/TNSM.2026.3655385 |
| Xiaofeng Liu, Naigong Zheng, Fuliang Li | Don’t Let SDN Obsolete: Interpreting Software-Defined Networks With Network Calculus | 2026 | Vol. 23, Issue | Delays Calculus Analytical models Optimization Kernel Queueing analysis Table lookup Quality of service Mathematical models Data centers Software-defined networking network calculus delay analysis performance optimization | Although Software-Defined Network (SDN) has gained popularity in real-world deployments for its flexible management paradigm, its centralized control principle leads to various known performance issues. In this paper, we propose SDN-Mirror, a novel generalized delay analytical model based on network calculus, to interpret how the performance is affected and to illustrate how to accelerate the performance as well. We first elaborate on the impact of parameters on packet forwarding delay in SDN, including device capacity, flow features, and cache size. Then, building upon the analysis, we establish SDN-Mirror, which acts like a mirror, capable of not only precisely representing the relation between packet forwarding delay and each parameter but also verifying the effectiveness of optimization policies. Finally, we evaluate SDN-Mirror by quantifying how each parameter affects the forwarding delay under different table matching states. We also verify a performance improvement policy with the optimized SDN-Mirror, and experimental results show that packet forwarding delays of kernel-space matching flows, userspace matching flows and unmatched flows can be reduced by 39.8%, 20.7% and 13.2%, respectively. | 10.1109/TNSM.2026.3655704 |