Last updated: 2026-04-28 05:01 UTC
| Author(s) | Title | Year | Publication | Keywords | Abstract | DOI |
|---|---|---|---|---|---|---|
| Jingyu Gan, Chen Guo, Chongxiang Yao | Construction and Post-Failure Reconstruction of Virtual Backbone Based on Regional Risk Difference in Wireless Sensor Networks | 2026 | Early Access | Broadcasting Broadcast technology Radio broadcasting Radio networks Communication systems Wireless sensor networks Computer networks Routing Wide area networks Network topology Wireless sensor network virtual backbone connected dominating set regional risk difference | In wireless sensor networks (WSNs), virtual backbones (VBs) are widely employed to address issues such as energy constraints and broadcast storms. WSNs are typically modeled as unit disk graphs (UDGs); a VB for data transmission is determined by constructing a connected dominating set (CDS) in the graph. Since sensor nodes may fail due to accidental damage or energy depletion, it is necessary to construct a CDS with fault tolerance. In fact, under the influence of complex terrain, large altitude differences, and environmental perturbations caused by multiple factors, application scenarios frequently exhibit significant differences in failure risk between nodes in different regions. Based on this observation, we optimize the network structure by constructing different CDS types in regions with varying risk factors, introducing the concept of a regional risk difference connected dominating set (RRD-CDS) tailored for heterogeneous hazard levels. In this paper, we enhance network robustness by constructing a (k, m)-CDS in high-risk regions, while reducing the number of CDS nodes by building a global (1, 1)-CDS for other regions, thereby designing the RRD-CDS algorithm. When failures cause the RRD-CDS to lose its properties as a CDS, we design a reconstruction algorithm to restore the fault tolerance of the RRD-CDS. Simulation results verify the effectiveness of both the RRD-CDS construction algorithm and the RRD-CDS reconstruction algorithm. (A toy CDS construction sketch appears after the table.) | 10.1109/TNSM.2026.3686606 |
| Jing Zhang, Chao Luo, Rui Shao | MTG-GAN: A Masked Temporal Graph Generative Adversarial Network for Cross-Domain System Log Anomaly Detection | 2026 | Early Access | Anomaly detection Adaptation models Generative adversarial networks Feature extraction Data models Load modeling Accuracy Robustness Contrastive learning Chaos Log Anomaly Detection Generative Adversarial Networks (GANs) Temporal Data Analysis | Anomaly detection of system logs is crucial for the service management of large-scale information systems. Nowadays, log anomaly detection faces two main challenges: 1) capturing evolving temporal dependencies between log events to adaptively tackle emerging anomaly patterns, and 2) maintaining high detection capabilities across various data distributions. Existing methods rely heavily on domain-specific data features, making it challenging to handle the heterogeneity and temporal dynamics of log data. This limitation restricts the deployment of anomaly detection systems in practical environments. In this article, a novel framework, the Masked Temporal Graph Generative Adversarial Network (MTG-GAN), is proposed for both conventional and cross-domain log anomaly detection. The model enhances the detection capability for emerging abnormal patterns in system log data by introducing an adaptive masking mechanism that combines generative adversarial networks with graph contrastive learning. Additionally, MTG-GAN reduces dependency on specific data distributions and improves model generalization by using diffused graph adjacency information derived from the temporal relevance of event sequences, which is conducive to improving cross-domain detection performance. Experimental results demonstrate that MTG-GAN outperforms existing methods on multiple real-world datasets in both conventional and cross-domain log anomaly detection. | 10.1109/TNSM.2026.3654642 |
| Arad Kotzer, Tom Azoulay, Yoad Abels, Aviv Yaish, Ori Rottenstreich | SoK: DeFi Lending and Yield Aggregation Protocol Taxonomy, Empirical Measurements, and Security Challenges | 2026 | Early Access | Filtering Application specific integrated circuits Filters Protocols Smart contracts Communication systems Proof of stake Proof of Work Internet Amplitude shift keying Blockchain Decentralized Finance (DeFi) Lending Yield Aggregation | Decentralized Finance (DeFi) lending protocols implement programmable credit markets without intermediaries. This paper systematizes the DeFi lending ecosystem, spanning collateralized lending (including over- and under-collateralized designs, and zero-liquidation loans), uncollateralized primitives (e.g., flashloans), and yield aggregation protocols which allocate capital across underlying lending platforms. Beyond presenting a taxonomy of mechanisms and comparing protocols, we provide empirical on-chain measurements of lending activity and user behavior, using Compound V2 and AAVE V2 as case studies, and connect empirical observations to protocol design choices (e.g., interest-rate models and liquidation incentives). We then characterize vulnerabilities that arise due to notable designs, focusing on interest-rate setting mechanisms and time-measurement approaches. Finally, we outline open questions at the intersection of mechanism design, empirical measurement, and security for future research. | 10.1109/TNSM.2026.3682174 |
| Deemah H. Tashman, Soumaya Cherkaoui | Trustworthy AI-Driven Dynamic Hybrid RIS: Joint Optimization and Reward Poisoning-Resilient Control in Cognitive MISO Networks | 2026 | Early Access | Reconfigurable intelligent surfaces Reliability Optimization Security MISO Array signal processing Vectors Satellites Reflection Interference Beamforming cascaded channels cognitive radio networks deep reinforcement learning dynamic hybrid reconfigurable intelligent surfaces energy harvesting poisoning attacks | Cognitive radio networks (CRNs) are a key mechanism for alleviating spectrum scarcity by enabling secondary users (SUs) to opportunistically access licensed frequency bands without harmful interference to primary users (PUs). To address unreliable direct SU links and energy constraints common in next-generation wireless networks, this work introduces an adaptive, energy-aware hybrid reconfigurable intelligent surface (RIS) for underlay multiple-input single-output (MISO) CRNs. Distinct from prior approaches relying on static RIS architectures, our proposed RIS dynamically alternates between passive and active operation modes in real time according to harvested energy availability. We also model our scenario under practical hardware impairments and cascaded fading channels. We formulate and solve a joint transmit beamforming and RIS phase optimization problem via the soft actor-critic (SAC) deep reinforcement learning (DRL) method, leveraging its robustness in continuous and highly dynamic environments. Notably, we conduct the first systematic study of reward poisoning attacks on DRL agents in RIS-enhanced CRNs, and propose a lightweight, real-time defense based on reward clipping and statistical anomaly filtering. Numerical results demonstrate that the SAC-based approach consistently outperforms established DRL baselines, and that the dynamic hybrid RIS strikes a superior trade-off between throughput and energy consumption compared to fully passive and fully active alternatives. We further show the effectiveness of our defense in maintaining SU performance even under adversarial conditions. Our results advance the practical and secure deployment of RIS-assisted CRNs, and highlight crucial design insights for energy-constrained wireless systems. | 10.1109/TNSM.2026.3660728 |
| Zhenzhen Yan, Lizhi Peng, Peiqiang Liu, Yingshuo Bao, Bo Yang | NT-Transformer: A Non-Pretrained Encrypted Network Traffic Classification Model | 2026 | Early Access | Payloads Military aircraft Space technology Feeds Antennas Motion pictures Communication systems Internet of Things Telecommunication traffic Computer networks encrypted network traffic classification Transformers byte representation uni-gram pre-training deep learning | Network traffic classification plays an indispensable role in network management, Quality of Service (QoS), and cybersecurity. With encryption techniques now widely applied to network traffic, it has become increasingly challenging to classify network traffic into different management groups accurately. In recent years, pre-trained Transformer-based models have been successfully applied to Natural Language Processing (NLP), and researchers have also introduced such models into encrypted network traffic analysis. However, despite the similarities between words in NLP and byte codes in network traffic, there exist essential differences between them, which may render the pre-trained model ineffective when applied to new traffic data. In this paper, we propose a non-pretrained encrypted network traffic classification model based on the Transformer, called NT-Transformer, which directly learns labeled network traffic features at two levels of granularity, namely byte level (uni-gram or bi-gram) and flow level (packet size and packet inter-arrival time), without the relatively expensive pre-training on unlabeled data. This method is validated on three public datasets and three sets of recently collected network traffic data. Experimental results indicate that in some scenarios, pre-trained models offer limited performance gains when applied to new encrypted network traffic data not encountered during pre-training, and NT-Transformer with uni-gram byte representation outperforms the state-of-the-art models, improving the F1 score by 0.25%–2.24%. | 10.1109/TNSM.2026.3683410 |
| Wangqing Luo, Jinbin Hu, Hua Sun, Pradip Kumar Sharma, Jin Wang | SALB: Security-Aware Load Balancing for Large Language Model Training in Datacenter Networks | 2026 | Early Access | Training Load management Packet loss Throughput Delays Topology Scheduling Telecommunication traffic Fluctuations Switches Datacenter Networks Load Balancing Data Security Deep Reinforcement Learning | To meet the massive compute and high-speed communication demands of Large Language Model (LLM) training, modern datacenters typically adopt multipath topologies such as Fat-Tree and Clos to host parallel jobs across hundreds to thousands of GPUs. However, LLM training exhibits periodic, high-bandwidth communication patterns. Existing load-balancing schemes become misaligned under dynamic congestion and anomalous surges: they struggle to promptly mitigate iteration-peak congestion and lack effective isolation of anomalous traffic. To address this, we propose Security-Aware Load Balancing (SALB) for LLM training. SALB leverages a Deep Reinforcement Learning (DRL) controller with queue and delay signals for packet-level multipath load balancing and employs path binding to confine suspicious flows. By integrating data security into load balancing, SALB simultaneously achieves high throughput and robust traffic isolation. NS-3 simulation results show that, compared with CONGA, Hermes, and ConWeave, SALB reduces the 99th-percentile flow completion time (FCT) of short flows by an average of 65% and increases the throughput of long flows by an average of 54%. It further outperforms the baselines in aggregate throughput, path utilization, and packet loss rate, thereby significantly enhancing system stability, robustness, and data security. | 10.1109/TNSM.2026.3678979 |
| Md Arif Hassan, Bui Duc Manh, Cong T. Nguyen, Chi-Hieu Nguyen, Dinh Thai Hoang, Diep N. Nguyen, Nguyen Van Huynh, Dusit Niyato | SBW 3.0: A Blockchain-Enabled Framework for Secure and Efficient Information Management in Web 3.0 | 2026 | Early Access | Jamming Protocols Semantic Web Smart contracts Consensus protocol Internet Communication systems Internet of Things Computer networks Web 2.0 Web 3.0 blockchain delegated proof-of-stake smart contract game theory non-cooperative game | In this paper, we propose an effective blockchain-enabled information management framework, named Smart Blockchain-based Web 3.0 (SBW 3.0). Our framework aims to handle information within Web 3.0 efficiently, enhance data security and privacy, create new revenue streams, and encourage users to contribute valuable information to websites. To this end, SBW 3.0 employs blockchain technology and smart contracts to manage the decentralized data collection in Web 3.0. Moreover, we introduce a robust consensus mechanism grounded in Delegated Proof-of-Stake (DPoS) to reward user contributions. Furthermore, we develop a non-cooperative game model to examine user behavior in this context and conduct a thorough analysis to prove the uniqueness of the Nash equilibrium in our proposed system. Through simulations, we evaluate the performance of SBW 3.0 and analyze the effects of various critical parameters on information contribution. Our results validate the theoretical analysis, showing that the proposed consensus mechanism successfully encourages nodes and users to provide more information, thus overcoming the current limitations of Web 3.0 regarding data decentralization and management. | 10.1109/TNSM.2026.3683881 |
| Xinshuo Wang, Baihua Chen, Lei Liu, Yifei Li | Pisces: Fast Loss Recovery for Multipath Transmission in RDMA | 2026 | Early Access | Payloads Military aircraft Space technology Feeds System-on-chip Field programmable gate arrays Circuits Application specific integrated circuits Integrated circuits Feedback RDMA Loss Recovery Multipath Transmission Programmable Switch Programmable NIC FPGA | Conventional Remote Direct Memory Access (RDMA) relies on Priority Flow Control (PFC) to operate on lossless networks. However, as data centers scale, PFC’s drawbacks, such as head-of-line blocking and congestion spreading, become increasingly problematic. This study proposes Pisces, a fast packet loss recovery scheme that leverages terminal–network collaboration. Instead of targeting lossless RDMA networks, Pisces enables high-throughput RDMA by efficiently handling loss recovery. To address the inefficient retransmission problems of PFC+Go-Back-N and the challenges of configuring appropriate timeouts for Selective Repeat (SR) in multipath transmission scenarios, Pisces implements Quick Drop Notification (QDN) of packet loss on switches, avoiding bandwidth waste and timeouts. In addition, Pisces RDMA NICs feature on-chip packet buffers to cache in-flight packets, supporting the scalability demands of RDMA in modern data centers. Upon receiving a QDN, lost packets are quickly retrieved from the buffer for retransmission, significantly improving retransmission efficiency and reducing PCIe bandwidth waste caused by cache replacements. This study overcame numerous challenges to implement the Pisces prototype, which demonstrated excellent performance. Testbed experiments show that Pisces improves the 99th-percentile FCT by 130× compared to Mellanox CX-6. Large-scale simulations demonstrate that Pisces achieves a maximum reduction of 82.8% in the 99.9th-percentile FCT compared to SR and other state-of-the-art technologies. | 10.1109/TNSM.2026.3688038 |
| Henghua Zhang, Jue Chen, Yuhang Wu, Yujie Xiong | TT-INT: A Time-Threshold-based Lightweight In-Band Network Telemetry Scheme for P4-Enabled Programmable Networks | 2026 | Early Access | Telemetry Aerospace and electronic systems Payloads Military aircraft Space technology Radio broadcasting Frequency modulation Filtering Filters Central Processing Unit In-Band Network Telemetry (INT) Programming Protocol-independent Packet Processors (P4) Software-Defined Networking (SDN) Programmable Data Plane (PDP) Per-Flow Telemetry Regulation | In-band Network Telemetry (INT) has emerged as a promising solution for fine-grained, real-time monitoring in programmable data planes. However, existing INT approaches often incur excessive overhead due to per-hop metadata accumulation or lack fine-grained control over telemetry frequency. This paper presents TT-INT, a lightweight INT framework designed for P4-enabled networks, which introduces a time-threshold-based mechanism to regulate telemetry insertion dynamically. Each switch enforces local constraints based on per-flow time intervals and metadata capacity, enabling reduced overhead while preserving path visibility without requiring global coordination or clock synchronization. Additionally, TT-INT supports a two-window byte-level anomaly detector and a controller-driven adjustment mechanism for further extensibility. Experiments on a real-world-derived backbone topology demonstrate that TT-INT reduces the average per-packet telemetry overhead to as low as 3.4 bytes under the 100 ms/5v configuration at 300 pps, achieving a 97.1% reduction compared to P4-INT under the same traffic rate. Compared to DLINT-5v and PLINT-5v (fixed at 20 and 26 bytes per packet, respectively), TT-INT-5v-100ms achieves up to 83.0% and 86.9% lower overhead. It also reaches a maximum path update detection rate of 97.9% (under the 50 ms configuration) and a minimum detection delay of 0.2 s, confirming TT-INT’s effectiveness in balancing overhead, responsiveness, and monitoring fidelity under high-throughput conditions. In addition, TT-INT improves TCP throughput by 22.9% relative to P4-INT in a BMv2-based environment, further highlighting its efficiency in resource-constrained data plane settings. (A toy time-threshold gating sketch appears after the table.) | 10.1109/TNSM.2026.3688086 |
| Abdeltif Azzizi, Mohamad Al Adraa, Chadi Assi, Michael Y. Frankel, Vladimir Pelekhaty | Experimental Topological Analysis in Next-Generation Data Center Networks: STRAT and Clos Topologies | 2026 | Early Access | Telemetry Aerospace and electronic systems Payloads Optical waveguides Optical fibers Broadcasting Broadcast technology Application specific integrated circuits Circuits Feedback Data Center Topologies Clos Topology STRAT Topology Scalability Challenges Network Architecture Performance Evaluation | This paper presents an experimental and simulation-based evaluation of two data center network (DCN) topologies: the widely adopted hierarchical Clos architecture and STRAT, a flat, expander-based topology designed around passive optical interconnects. While Clos offers proven scalability and performance, it incurs hardware complexity and suffers from congestion in oversubscribed scenarios. STRAT eliminates aggregation and spine layers entirely—using only Top-of-Rack (ToR) switches interconnected via static optical patch panels—to reduce cost, simplify deployment, and enhance path diversity. Our goal is to assess these topologies based on their inherent architectural properties—namely throughput, congestion resilience, scalability, and cost—without relying on congestion control protocols or centralized traffic engineering. To this end, we adopt simple forwarding schemes based purely on local information: ECMP for Clos, and ECMP with Dynamic Group Multipath (DGM) for STRAT. We evaluate both topologies on a physical testbed built from commercial Ethernet switches and further validate scalability through packet-level simulations of networks with up to 256 switches and 1,024 hosts using OMNeT++. We also introduce DEALER, a lightweight routing algorithm tailored to STRAT’s topology, and evaluate its effectiveness in dynamic conditions. Our results show that STRAT achieves up to 43% higher throughput and requires approximately 40% fewer switches than a comparable Clos topology. These gains are further supported by Load Area Under Curve (LAUC) analysis and congestion hotspot visualizations. Overall, our study highlights STRAT as a compelling and practical alternative to conventional DCN architectures, offering deployable scalability, improved performance under load, and reduced infrastructure cost. | 10.1109/TNSM.2026.3685175 |
| Guisong Yang, Yechao Huang, Panxing Huang, Xingyu He | A Distributed SDN Controller-Based Computing Framework for Effective in-orbit Computing | 2026 | Early Access | Low earth orbit satellites Artificial satellites Aerospace and electronic systems Telemetry Antennas Antennas and propagation Central Processing Unit Software defined networking Computer networks Communication systems Task Scheduling Software Defined Network Satellite Network Placement of SDN Controller | The rapid development of Low Earth Orbit (LEO) satellite networks has made in-orbit computing more feasible, offering a solution for processing real-time, diverse user tasks. Compared with traditional cloud computing in ground cloud computing centers, computing directly on LEO satellites can significantly reduce task-processing delay. However, challenges remain, including the limited sensing and computing capabilities of satellites, high delays in processing task requests, and frequent switching of control domains due to the relative movement between LEO satellites and nodes in other orbits. To address these challenges and improve task management, computing is treated as a Virtual Network Function (VNF), managed by Software-Defined Networking (SDN) controllers. This paper proposes a distributed SDN controller-based computing framework, where task information is forwarded to SDN controllers, which then use a task scheduling strategy to allocate tasks to suitable computing nodes for processing. To support the implementation of this framework, we first propose a heuristic SDN controller placement strategy that uses a tiling method to divide the LEO satellite network into SDN control domains and places the controller at the midpoint of each domain. Then, we propose a Double Deep Q-Network (DDQN) algorithm for in-orbit task scheduling, which adaptively optimizes the task scheduling strategy to minimize task-processing delay and ensure a high task completion rate. Finally, simulations are conducted in two parts to evaluate the framework. The first part validates the DDQN-based task scheduling strategy, achieving significant reductions in task-processing delay and improved task completion rates compared to conventional strategies. The second part assesses the impact of SDN control domain shape and size on task-processing delay, confirming domain size as the dominant factor influencing delay. | 10.1109/TNSM.2026.3685308 |
| Yuxuan Chen, Yuhao Xie, Zhen Zhang, Zhenyu He, Yuhui Deng, Shenlong Zheng, Dongjiong Zhu, Lin Cui | PMPHD: A High Performance Virtual Machine Consolidation Strategy Based on Dynamic Threshold Adjustment | 2026 | Early Access | Central Processing Unit Filtering Filters Electronic circuits Kalman filters Circuits and systems Integrated circuits Internet Communication systems Quality of service Cloud Data Centers Adaptive Dynamic Threshold VM Migrations Service Level Agreement Violations Energy Consumption | Virtual machine (VM) consolidation strategies are widely deployed in Cloud Data Centers (CDCs) to optimize resource utilization and improve the Quality of Service (QoS). However, the host overload detection algorithms in current VM consolidation strategies are static: once the overload threshold is calculated, it does not change until the next recalculation. Such algorithms are not suitable for environments with highly dynamic workloads, which results in additional energy consumption and potential Service Level Agreement Violations (SLAVs) that degrade the QoS of the CDC. In PMPHD, a novel dynamic host threshold adjustment algorithm is proposed. In the proposed algorithm, PMs are classified as mildly overloaded, normal, or severely overloaded based on their resource utilization. If a PM is predicted to be severely overloaded at the next moment, its threshold is proactively reduced; the PM is then determined to be overloaded, and some of its VMs are migrated in advance. Thus, the PM will be in the normal state at the next moment, and the VM performance degradation that results when SLAV and VM migration overlap is avoided. If a PM is predicted to be mildly overloaded, its threshold is appropriately increased to transition it to the normal state at the next moment, and its VMs are not migrated. Since PM workloads are dynamic, the PMPHD overload algorithm continuously predicts the resource utilization of each PM and adjusts its overload threshold. Compared with other algorithms, PMPHD maintains high efficiency while achieving a lower ESV (a combined metric balancing energy consumption and SLAV). (A toy threshold-adjustment sketch appears after the table.) | 10.1109/TNSM.2026.3687892 |
| Yu Gu, Le Zhang, Yunyi Zhang, Ye Du | SatFedGuard: Semi-Supervised Federated Contrastive Learning with RL-Assisted Bidirectional Distillation for Anomaly Traffic Detection in Satellite Networks | 2026 | Early Access | Low earth orbit satellites Artificial satellites Payloads Jamming Electronic warfare Feeds Broadcasting Broadcast technology Filtering Filters Federated Learning Satellite Network Intrusion Detection Semi-Supervised Learning Edge-Cloud Collaboration | Federated learning-based intrusion detection methods for satellite networks enable model training without sharing local data, thereby ensuring network security while significantly reducing communication overhead. However, due to the difficulty of obtaining large-scale high-quality labeled data in satellite environments, a key challenge lies in how to train intrusion detection models using abundant unlabeled traffic data. We propose SatFedGuard, a semi-supervised federated contrastive learning approach for anomaly traffic detection in satellite networks. SatFedGuard effectively integrates unlabeled in-orbit data with labeled data from ground stations for model training. First, it models the unlabeled satellite traffic data using a contrastive learning framework. To address the challenge of non-IID data distribution, an attention-based dual-path aggregation strategy is designed to generate personalized models for each satellite by leveraging model similarities. Then, a bidirectional multi-granularity distillation method between larger and smaller models is implemented, where reinforcement learning is employed to optimize the weights of different loss terms dynamically. Experiments on two satellite network traffic datasets under non-IID settings demonstrate that the proposed method significantly improves anomaly detection performance while reducing dependence on in-orbit labeled data, achieving F1-Scores of 93.38% (↑11.63%) and 99.80% (↑8.72%), respectively. | 10.1109/TNSM.2026.3685416 |
| Qian Guo, Chunyu Zhang, Xue Xiao, Min Zhang, Zhuo Liu, Danshi Wang | Knowledge-Distilled Time-Series LLM for General Performance Parameter Prediction in Optical Transport Networks | 2026 | Early Access | Optical fibers Optical waveguides Feeds Network-on-chip Communication systems Internet of Things Optical fiber communication Optical fiber networks Telecommunications Quality of transmission Optical transport networks (OTNs) general performance parameter prediction time-series large language models knowledge distillation | In optical transport networks (OTNs), proactive and accurate prediction of key performance parameters plays a crucial role in identifying potential failures of OTN equipment and guiding timely operational interventions, reducing downtime and improving overall system performance. However, the performance parameters in OTNs are complex and diverse. The reliance of existing model structure designs on specific configurations limits generalizability across diverse equipment types. Moreover, the high computational resource consumption and memory footprints of these models may lead to inefficiency while hindering practical application and large-scale deployment. To address these challenges, this paper presents a general model, KD-TimeLLM, a cross-application of TimeLLM into OTN failure management, for performance parameter prediction of multiple equipment types in OTNs. By learning from its teacher model TimeLLM via a knowledge distillation strategy, KD-TimeLLM can achieve generalizability in performance parameter prediction while enhancing efficiency. We conducted evaluations across multiple metrics using data sets from different operators and various board types. Results show that KD-TimeLLM outperforms other models in predictive performance, achieving the lowest MSE and MAE across all types of board data along with a scaled_RMSE value below 0.5, handling varying numbers of performance parameters, and offering zero-shot prediction capability, highlighting its generalizability. Moreover, compared to its teacher model, KD-TimeLLM achieves comparable predictive performance with a significant 99.99% reduction in model parameters and an average reduction of 99.23% in inference time across eight different types of board data. Furthermore, compared to a multiple-model system, the total inference time and memory footprint of KD-TimeLLM decreased by 94.79% and 89.65%, respectively, highlighting its effectiveness and efficiency. | 10.1109/TNSM.2026.3686811 |
| Alba Jano, Serkut Ayvaşik, Yash Deshpande, Wolfgang Kellerer | QUEST: User-Based Quality of Service Aware Uplink Resource Scheduling | 2026 | Early Access | Payloads Military aircraft Space technology Omnidirectional antennas Broadcasting Feedback Circuits Semiconductor lasers Central Processing Unit Semiconductor optical amplifiers Radio resource management quality of service user context user satisfaction energy efficiency IoTs | Efficient radio resource management (RRM) in 5G networks is increasingly challenged by the diverse quality of service (QoS) requirements of emerging applications and the growing uplink (UL) traffic from resource-constrained devices. Existing scheduling approaches often lack user and service-specific context, limiting their ability to guarantee timely and energy-efficient data transmission, particularly critical for the internet of things (IoT) and mission-critical services. In this work, we introduce QUEST, a QoS-aware UL scheduling framework that exploits the 5G QoS model alongside network and device context to efficiently allocate radio resources. Designed and evaluated in an indoor factory environment, QUEST supports users with heterogeneous 5QI services under dynamic multi-user conditions. Evaluation results, validated through both real-world measurements and 3GPP-compliant simulations, show that QUEST consistently outperforms traditional channel- and QoS-aware schedulers. It improves QoS compliance, reduces packet drops and serving time, and enhances energy efficiency. For users with stringent QoS demands, measurements show a 13% increase in successfully transmitted packets and a 6.2% reduction in delay for 50% of transmissions, compared to the best-performing baseline. Benchmarking against an optimal scheduler shows that QUEST achieves the closest performance among baselines, while maintaining low complexity, making it a practical and scalable solution for 5G and beyond UL RRM. | 10.1109/TNSM.2026.3685537 |
| Li-Chin Siang, Wen-Hsing Kuo, Pei-Chieh Lin, Chih-Wei Huang, De-Nian Yang | FoV Prediction-Based Adaptive Streaming Mechanism for 6DoF Volumetric MR Applications in Multi-Base-Station Networks | 2026 | Early Access | Payloads Antennas Feeds Antennas and propagation Broadcasting Broadcast technology Kalman filters Filters Central Processing Unit Circuits and systems femto-cells resource allocation layer encoding 360-degree video streaming | The emergence of mixed reality (MR) as a major application in mobile networks has garnered significant attention. Wireless headsets enable unrestricted user movement within femtocell networks comprising numerous small base stations, offering a promising solution for MR applications. However, the complexity of these systems poses challenges in optimizing resource allocation across base stations. This paper proposes a novel resource allocation method for volumetric MR streaming in multi-base-station environments. The method consists of two phases. First, it uses neural networks to model and forecast users’ viewing directions. Leveraging these predictions, their confidence levels, and layer characteristics, the algorithm adjusts video quality for each user and allocates transmission resources across base stations to optimize overall performance. Through comprehensive analysis, we prove that this novel problem is NP-hard and show that our approach achieves a performance within a bounded gap from the optimal solution. Simulation results reveal that our proposed algorithm outperforms existing techniques, enhancing aggregate performance across diverse scenarios. | 10.1109/TNSM.2026.3685670 |
| Faissal Ahmadou, Boubakr Nour, Makan Pourzandi, Mourad Debbabi, Chadi Assi | Automating Threat-Aligned Testflows Generation using Ontology-Grounded RAG from CTI Reports | 2026 | Early Access | Radio broadcasting Frequency modulation System-on-chip Filtering Circuits Feedback Filters Integrated circuits MIMICs Millimeter wave integrated circuits Cybersecurity Security Automation Testflow Generation Retrieval-Augmented Generation | The increasing sophistication and complexity of Advanced Persistent Threats (APTs) pose significant challenges to security practitioners. To proactively protect against these threats, security practitioners rely on the generation of testflows, structured sequences of actions designed to verify whether the tactics and behaviors of an APT are present within their organization. However, manually creating such testflows is time-consuming, error-prone, and highly dependent on expert knowledge. Moreover, existing automated approaches suffer from several limitations, including limited validity, inefficiency, and insufficient domain adaptation. To address these challenges, this paper introduces CTI-RAGFlow to automate the generation of relevant, valid, and effective testflows from unstructured threat reports, tailored to specific organizational environments. CTI-RAGFlow introduces three key contributions: (i) a dual-ontology approach, that integrates both a system ontology representing the operational environment and a cybersecurity ontology capturing adversary tactics, techniques, and procedures, improving the precision and accuracy of generated testflows; (ii) a fact-based context retrieval mechanism that combines a hypergraph structured knowledge base with a Retrieval-Augmented Generation pipeline using Large Language Models; and (iii) a fully automated testflow generation process that minimizes manual effort, reduces human error, and facilitates the generation of valid testflows. We evaluate CTI-RAGFlow against three widely used LLMs (base and fine-tuned models) using publicly available CTI reports for three well-known APTs (APT41, APT29, and APT28). The results show that CTI-RAGFlow outperforms the baselines in terms of semantic relevance, coverage, validity, and effectiveness in verifying multi-stage cyberattack scenarios. | 10.1109/TNSM.2026.3684808 |
| Mohammad Amir Dastgheib, Hamzeh Beyranvand, Jawad A. Salehi | Shannon Entropy for Load-Balanced Cellular Network Planning: Data-Driven Voronoi Optimization of Base-Station Locations | 2026 | Vol. 23, Issue | Shape Entropy Costs Cost function Planning Measurement Load management Cellular networks Uncertainty Telecommunications Network planning base-station placement Shannon entropy machine learning stochastic shape optimization nearest neighbor methods facility location | In this paper, we introduce a stochastic shape optimization technique for base-station placement in cellular wireless communication networks. We formulate the data-driven facility location problem in a gradient-based framework and propose an algorithm that computes stochastic gradients efficiently via nearest-neighbor evaluations on Voronoi diagrams. This enables the use of Shannon-entropy objectives that promote balanced coverage and yield more than two orders of magnitude reduction in per-iteration runtime compared to a conventional integral-based optimization that assumes full knowledge of the underlying density, making the proposed approach practical for real deployments. We highlight the requirements of facility location balancing problems with the introduction of the Adjusted Entropy Ratio and show a significant improvement in load balancing compared to the baseline algorithms, particularly in scenarios where baseline algorithms fall short in subdividing crowded areas for more equitable coverage. A downlink telecom evaluation with realistic propagation and interference models further shows that the proposed method substantially improves user-rate fairness and load balance. Our results also show that Self-Organizing Maps (SOMs) provide an effective initialization by capturing the structure of the users’ location data. (A toy entropy-based balance metric sketch appears after the table.) | 10.1109/TNSM.2026.3663045 |
| Rajasekhar Dasari, Sanjeet Kumar Nayak | PR-Fog: An Efficient Task Priority-Based Reliable Provisioning of Resources in Fog-Enabled IoT Networks | 2026 | Vol. 23, Issue | Reliability Internet of Things Costs Energy consumption Cloud computing Edge computing Quality of service Energy efficiency Analytical models Resource management Internet of Things (IoT) fog computing energy latency task priority reliability analytical modeling | As the demand for real-time data processing grows, fog computing emerges as an alternative to cloud computing, which brings computation and storage closer to IoT devices. In Fog-enabled IoT networks, provisioning of fog nodes for task processing must consider factors such as latency, energy consumption, cost, and reliability. This paper presents PR-Fog, a scheme for optimizing the provisioning of heterogeneous fog nodes in fog-enabled IoT networks, considering parameters such as task priority, energy efficiency, cost efficiency, and reliability. First, we create an analytical framework using an M/M/1/C priority queuing system to assess the reliability of these heterogeneous fog nodes. Building on this analysis, we propose an algorithm that determines the optimal number of reliable fog nodes while satisfying latency, energy, and cost constraints. Extensive simulations show significant enhancements in key performance metrics when comparing PR-Fog to existing schemes, including a 36% decrease in response time and an 8% improvement in satisfaction ratio, as well as a 23% reduction in fog node provisioning costs. Additionally, PR-Fog’s effectiveness is validated through real testbed experiments. | 10.1109/TNSM.2026.3661745 |
| Xiaoshan Yu, Huaxi Gu, Qian Zhang | RCC: Rate-Based Congestion Control for the Lossless Network | 2026 | Vol. 23, Issue | Flow production systems Switches Propagation losses Interference Bandwidth Traffic control Protocols Accuracy Topology Packet loss Lossless networks congestion control transport protocol | It has been widely accepted that hop-by-hop flow control is applied to High Performance Computing (HPC) interconnect networks to ensure lossless transmission. However, hop-by-hop flow control directly interferes with existing congestion control because of inaccurate congestion detection. The aim of this study is to eliminate the interference of hop-by-hop flow control on congestion detection and the interference of flow rate changes on rate adjustment in lossless networks. We designed Rate-based Congestion Control (RCC), which includes a new congestion detection mechanism based on the source sending rate. Combined with congestion detection, we designed an individual rate control mechanism that slows down congested flows and accelerates victim flows. (A toy rate-update sketch appears after the table.) The extensive simulation results based on general traffic patterns and benchmarks for HPC systems show that compared with the existing congestion control strategies of lossless networks, RCC improves 99th-percentile FCT performance by 12.55%–29.63%, and the maximum reduction in congestion impact reaches 40.34%. | 10.1109/TNSM.2026.3661289 |
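
The RRD-CDS entry above builds a virtual backbone from connected dominating sets in unit disk graphs. As a minimal illustration of the underlying object (not the authors' RRD-CDS algorithm), the sketch below builds a random unit disk graph and extracts a CDS using a classic fact: the internal nodes of any spanning tree of a connected graph dominate the graph and induce a connected subgraph. The node count and radius are arbitrary assumptions.

```python
# Minimal CDS sketch for the RRD-CDS row above; NOT the paper's algorithm.
import random
import networkx as nx

def unit_disk_graph(n=60, radius=0.25, seed=7):
    """Random unit disk graph: nodes closer than `radius` are adjacent."""
    rng = random.Random(seed)
    pos = {i: (rng.random(), rng.random()) for i in range(n)}
    g = nx.Graph()
    g.add_nodes_from(pos)
    for u in pos:
        for v in pos:
            if u < v and (pos[u][0] - pos[v][0]) ** 2 + (pos[u][1] - pos[v][1]) ** 2 <= radius ** 2:
                g.add_edge(u, v)
    return g

def spanning_tree_cds(g):
    """Internal (non-leaf) nodes of a BFS spanning tree form a connected dominating set."""
    tree = nx.bfs_tree(g, next(iter(g)))                 # directed BFS tree
    return {v for v in tree if tree.out_degree(v) > 0}   # nodes with children

g = unit_disk_graph()
if nx.is_connected(g):
    cds = spanning_tree_cds(g)
    assert nx.is_dominating_set(g, cds) and nx.is_connected(g.subgraph(cds))
    print(f"backbone: {len(cds)} of {g.number_of_nodes()} nodes")
```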
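
The TT-INT entry describes a per-flow time threshold that gates telemetry insertion. A hypothetical sketch of that gating decision follows; the class, field names, and defaults (100 ms interval, 5-hop metadata capacity, mirroring the "100 ms/5v" setting in the abstract) are illustrative assumptions, not the paper's P4 code.

```python
import time

class TelemetryGate:
    """Insert INT metadata only when the per-flow interval has elapsed
    and the packet's metadata stack still has capacity (cf. TT-INT)."""
    def __init__(self, interval_s=0.1, max_hops=5):
        self.interval_s = interval_s
        self.max_hops = max_hops
        self._last = {}                            # flow_id -> last insertion time

    def should_insert(self, flow_id, hops_in_packet, now=None):
        now = time.monotonic() if now is None else now
        if hops_in_packet >= self.max_hops:        # metadata stack full: skip
            return False
        if now - self._last.get(flow_id, float("-inf")) < self.interval_s:
            return False                           # too soon for this flow
        self._last[flow_id] = now
        return True

gate = TelemetryGate()
print(gate.should_insert("flowA", hops_in_packet=2))   # True: first packet of the flow
print(gate.should_insert("flowA", hops_in_packet=2))   # False: within the 100 ms window
```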
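
The PMPHD entry hinges on nudging a PM's overload threshold according to its predicted utilization. A toy version of that adjustment rule is sketched below; the severity cutoffs, step size, and bounds are invented for illustration and are not the paper's values.

```python
def adjust_threshold(threshold, predicted_util,
                     severe=0.9, mild=0.7, step=0.05, lo=0.5, hi=0.95):
    """Toy PMPHD-style rule: lower the threshold for PMs predicted to be
    severely overloaded (triggering early VM migration), raise it slightly
    for mildly overloaded PMs (avoiding needless migration)."""
    if predicted_util >= severe:
        return max(lo, threshold - step)   # flag as overloaded, migrate early
    if predicted_util >= mild:
        return min(hi, threshold + step)   # tolerate the transient peak
    return threshold                       # normal: leave the threshold alone

print(adjust_threshold(0.8, predicted_util=0.95))  # lowered: migrate earlier
print(adjust_threshold(0.8, predicted_util=0.75))  # raised: tolerate briefly
```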
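
The Shannon-entropy planning entry uses the entropy of the per-cell load as a balance objective. The sketch below computes a normalized version of that quantity from a nearest-base-station (Voronoi) assignment; it is a plausible reading of the objective, not the authors' implementation, and the user and station positions are synthetic.

```python
import numpy as np
from scipy.spatial import cKDTree

def normalized_load_entropy(users, stations):
    """Shannon entropy of the per-station load, normalized so 1.0 means
    perfectly balanced cells and values near 0 mean one cell hoards the load."""
    _, idx = cKDTree(stations).query(users)    # nearest-BS (Voronoi) assignment
    counts = np.bincount(idx, minlength=len(stations)).astype(float)
    p = counts / counts.sum()
    p = p[p > 0]                               # entropy convention: 0*log(0) = 0
    return float(-(p * np.log(p)).sum() / np.log(len(stations)))

rng = np.random.default_rng(0)
users = rng.random((2000, 2))                  # synthetic user locations
stations = rng.random((16, 2))                 # synthetic base-station sites
print(f"balance score: {normalized_load_entropy(users, stations):.3f}")
```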
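
Finally, the RCC entry distinguishes congested flows (to be throttled) from victim flows (to be accelerated). A toy rate-update rule in that spirit is shown below; the detection signals and constants are assumptions, not the paper's mechanism.

```python
def update_rate(rate, line_rate, congested=False, victim=False,
                md=0.5, ai_frac=0.05):
    """Toy RCC-style update: multiplicative decrease for congested flows,
    faster additive increase for victim flows, gentle probing otherwise."""
    if congested:
        return max(0.01 * line_rate, rate * md)         # back off hard
    if victim:
        return min(line_rate, rate + ai_frac * line_rate)       # recover fast
    return min(line_rate, rate + 0.5 * ai_frac * line_rate)     # probe slowly

r = 40e9                                        # 40 Gbps flow on a 100 Gbps link
print(update_rate(r, 100e9, congested=True))    # halved
print(update_rate(r, 100e9, victim=True))       # boosted toward line rate
```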