Last updated: 2026-04-30 05:01 UTC
All documents
Number of pages: 162
| Author(s) | Title | Year | Publication | Keywords | Abstract | DOI |
|---|---|---|---|---|---|---|
| Deemah H. Tashman, Soumaya Cherkaoui | Trustworthy AI-Driven Dynamic Hybrid RIS: Joint Optimization and Reward Poisoning-Resilient Control in Cognitive MISO Networks | 2026 | Early Access | Reconfigurable intelligent surfaces Reliability Optimization Security MISO Array signal processing Vectors Satellites Reflection Interference Beamforming cascaded channels cognitive radio networks deep reinforcement learning dynamic hybrid reconfigurable intelligent surfaces energy harvesting poisoning attacks | Cognitive radio networks (CRNs) are a key mechanism for alleviating spectrum scarcity by enabling secondary users (SUs) to opportunistically access licensed frequency bands without harmful interference to primary users (PUs). To address unreliable direct SU links and energy constraints common in next-generation wireless networks, this work introduces an adaptive, energy-aware hybrid reconfigurable intelligent surface (RIS) for underlay multiple-input single-output (MISO) CRNs. Distinct from prior approaches relying on static RIS architectures, our proposed RIS dynamically alternates between passive and active operation modes in real time according to harvested energy availability. We also model our scenario under practical hardware impairments and cascaded fading channels. We formulate and solve a joint transmit beamforming and RIS phase optimization problem via the soft actor-critic (SAC) deep reinforcement learning (DRL) method, leveraging its robustness in continuous and highly dynamic environments. Notably, we conduct the first systematic study of reward poisoning attacks on DRL agents in RIS-enhanced CRNs, and propose a lightweight, real-time defense based on reward clipping and statistical anomaly filtering. Numerical results demonstrate that the SAC-based approach consistently outperforms established DRL baselines, and that the dynamic hybrid RIS strikes a superior trade-off between throughput and energy consumption compared to fully passive and fully active alternatives. We further show the effectiveness of our defense in maintaining SU performance even under adversarial conditions. Our results advance the practical and secure deployment of RIS-assisted CRNs, and highlight crucial design insights for energy-constrained wireless systems. | 10.1109/TNSM.2026.3660728 |
| Shahid Mahmood, Moneeb Gohar, Seok Joo Koh | Globally Integrated Trust Authority (GITA) for Resource-Constrained Edge Devices in IoT and 6G | 2026 | Early Access | Payloads Filtering Central Processing Unit Filters Feedback Circuits Electronic circuits Microcontrollers Circuits and systems Microprocessors GITA Globally Integrated Trust Authority Network PKDL TSL LMS Security Trust Management Resource Constrained Edge Device Internet of Things and Cyber-Attack | The rapid growth of the Internet and the increasing number of edge devices have expanded the cyber-attack surface at the edge layer. Hackers exploit vulnerabilities at various levels of a network by either directly connecting to it or accessing it over the Internet. In both scenarios, edge devices remain a primary target due to their widespread use, limited resources, and critical impact. Therefore, securing edge devices is essential to counter both local and global cyber threats. Trust is a key factor in determining the level of protection required for edge devices. It can be used to assess the reliability of other devices before offering or requesting services. Since edge devices are often globally interconnected, trust levels should be verifiable across the Internet and intranet. In this paper, we propose the Globally Integrated Trust Authority (GITA), a framework that distributes verifiable trust values across networks and the Internet while minimizing communication overhead. Experimental results demonstrate that GITA improves the efficiency of trust value distribution and verification among nodes compared to digital certificates, while maintaining the same level of protection. This approach enables effective identification of malicious and benign nodes, enhancing the precision of malicious node detection locally and globally. | 10.1109/TNSM.2026.3687967 |
| Qin Zeng, Dan Qu, Hao Zhang, Yaqi Chen | Neural Collapse-Based Class-Incremental Learning for Encrypted Traffic Classification | 2026 | Early Access | | The rapid evolution of internet technologies has intensified network traffic dynamics due to the emergence of novel encryption protocols, posing significant challenges to traffic classification. Incremental learning, which enables continuous adaptation to emerging tasks, has emerged as a promising approach to enhance the sustainability of encrypted traffic classification. However, existing methods fail to address the substantial feature representation disparities across incremental tasks, resulting in suboptimal model adaptability. Inspired by the Neural Collapse (NC) phenomenon, which reveals that deep neural networks’ final-layer features collapse to class-mean vectors forming a Simplex Equiangular Tight Frame (ETF) with classifier weights, thereby constituting an optimal geometric structure for classification tasks, we propose NCIL-ETC, a Neural Collapse-based Incremental Learning framework for Encrypted Traffic Classification. Our approach employs a pretrained Mamba as the feature extraction backbone, leveraging its linear-complexity computational properties to significantly reduce resource overhead. Simultaneously, we introduce a preallocated ETF classifier that establishes an optimal classification structure covering observed classes. Through feature-classifier alignment constraints during incremental learning, our method promotes both new and historical class features to converge toward ETF vertices, thereby preserving globally optimal category relationships. Extensive experimental evaluations on four public benchmarks demonstrate that NCIL-ETC achieves state-of-the-art performance, surpassing baseline methods in both classification accuracy and incremental learning capability. | 10.1109/TNSM.2026.3688767 |
| Hanlin Chen, Fukang Deng, Tengcong Jiang, Weitao Xu, Yuezhong Wu, Xing Chen, Jie Li | Enhancing Throughput in Sharded Blockchain via Joint Convex Optimization of System Parameters and Resource Allocation | 2026 | Early Access | | Sharding is a promising technique for improving the throughput and scalability of blockchain systems. However, the imbalanced distribution of transactions across shards poses a major challenge: overloaded shards lead to congestion, while underutilized shards result in wasted resources, both of which hinder system performance. To obtain high throughput, it is crucial to adjust system parameters and allocate resources efficiently. In this paper, we formulate the problem, and propose a joint convex optimization framework that optimizes system parameters and resource allocation to enhance throughput in sharded blockchain systems. Unlike previous studies that focus on only one of these aspects, our approach integrates both. By leveraging the block coordinate descent method from convex optimization theory, our proposed approach iteratively solves two interdependent subproblems, system parameter tuning and resource allocation, leading to a near-optimal solution with guaranteed convergence. Extensive experiments demonstrate that our joint optimization algorithm achieves a near-optimal solution within 5 seconds and improves throughput by 3.32% to 14.56% compared to benchmark methods. These results validate the effectiveness of our approach in enhancing both the throughput and scalability of sharded blockchain systems. | 10.1109/TNSM.2026.3688662 |
| Jayasree Sengupta, Mike Kosek, Justus Fries, Veronika Kitsul, Vaibhav Bajpai | A Long-term View of DNS over QUIC Adoption and its Performance Impact on YouTube Streaming | 2026 | Early Access | | YouTube contributes the largest share of global video traffic on the Internet, making it an important use case for understanding the impact of evolving DNS protocol choices on video streaming performance. Although traditional DNS over UDP (DoUDP) offers low latency, it lacks modern transport features. Encrypted DNS protocols such as DNS over TLS (DoT) and DNS over HTTPS (DoH) improve protocol robustness but suffer from higher latency due to their underlying transport and encryption protocols with multi-RTT handshakes. However, the recently standardized DNS over QUIC (DoQ) aims to combine the best of both worlds by leveraging the transport efficiency of QUIC while ensuring DNS privacy. In this paper, we present the first comprehensive long-term measurement study of DoQ adoption and evaluate its performance implications for YouTube video streaming. We collect data through weekly scans of the IPv4 address space over a two-year period to assess the adoption of the protocol. Our results show that DoQ adoption by public DNS resolvers has steadily increased and plateaued over 25 months. Using seven globally distributed vantage points, our video performance measurements show that DoQ’s DNS lookup time increases by only 1.5% in the median while video startup delay increases by less than 1% compared to DoUDP. In particular, in about 40% of the cases, DoQ yields faster video startup times than DoUDP. These findings position DoQ as a technically efficient DNS protocol, well suited for modern, high-demand performance-sensitive applications such as video streaming. | 10.1109/TNSM.2026.3688441 |
| Henghua Zhang, Jue Chen, Yuhang Wu, Yujie Xiong | TT-INT: A Time-Threshold-based Lightweight In-Band Network Telemetry Scheme for P4-Enabled Programmable Networks | 2026 | Early Access | Telemetry Aerospace and electronic systems Payloads Military aircraft Space technology Radio broadcasting Frequency modulation Filtering Filters Central Processing Unit In-Band Network Telemetry (INT) Programming Protocol-independent Packet Processors (P4) Software-Defined Networking (SDN) Programmable Data Plane (PDP) Per-Flow Telemetry Regulation | In-band Network Telemetry (INT) has emerged as a promising solution for fine-grained, real-time monitoring in programmable data planes. However, existing INT approaches often incur excessive overhead due to per-hop metadata accumulation or lack fine-grained control over telemetry frequency. This paper presents TT-INT, a lightweight INT framework designed for P4-enabled networks, which introduces a time-threshold-based mechanism to regulate telemetry insertion dynamically. Each switch enforces local constraints based on per-flow time intervals and metadata capacity, enabling reduced overhead while preserving path visibility without requiring global coordination or clock synchronization. Additionally, TT-INT supports a two-window byte-level anomaly detector and a controller-driven adjustment mechanism for further extensibility. Experiments on a real-world-derived backbone topology demonstrate that TT-INT reduces the average per-packet telemetry overhead to as low as 3.4 bytes under the 100 ms/5v configuration at 300 pps, achieving a 97.1% reduction compared to P4-INT under the same traffic rate. Compared to DLINT-5v and PLINT-5v (fixed at 20 and 26 bytes per packet, respectively), TT-INT-5v-100ms achieves up to 83.0% and 86.9% lower overhead. It also reaches a maximum path update detection rate of 97.9% (under the 50 ms configuration) and a minimum detection delay of 0.2 s, confirming TT-INT’s effectiveness in balancing overhead, responsiveness, and monitoring fidelity under high-throughput conditions. In addition, TT-INT improves TCP throughput by 22.9% relative to P4-INT in a BMv2-based environment, further highlighting its efficiency in resource-constrained data plane settings. | 10.1109/TNSM.2026.3688086 |
| Yuxuan Chen, Yuhao Xie, Zhen Zhang, Zhenyu He, Yuhui Deng, Shenlong Zheng, Dongjiong Zhu, Lin Cui | PMPHD: A High Performance Virtual Machine Consolidation Strategy Based on Dynamic Threshold Adjustment | 2026 | Early Access | Central Processing Unit Filtering Filters Electronic circuits Kalman filters Circuits and systems Integrated circuits Internet Communication systems Quality of service Cloud Data Centers Adaptive Dynamic Threshold VM Migrations Service Level Agreement Violations Energy Consumption | Virtual machine (VM) consolidation strategies are widely deployed in Cloud Data Centers (CDCs) to optimize resource utilization and improve the Quality of Service (QoS). However, the host overload detection algorithms in current VM consolidation strategies are static: once the overload threshold is calculated, it does not change until the next recalculation. Such algorithms are poorly suited to highly dynamic workloads, resulting in additional energy consumption and potential Service Level Agreement Violations (SLAVs) that degrade the QoS of the CDC. In PMPHD, a novel host dynamic threshold adjustment algorithm is proposed. In the proposed algorithm, the PMs are classified into mildly overloaded, normal, and severely overloaded based on their resource utilization. If a PM is predicted to be severely overloaded in the next moment, its threshold is proactively reduced, the PM is determined to be overloaded, and some of its VMs are migrated in advance. Thus, the PM will be in the normal state in the next moment, avoiding the VM performance degradation that results when SLAV and VM migration overlap. If the PM is predicted to be mildly overloaded, the threshold is appropriately increased to transition it to the normal state in the next moment, and its VMs are not migrated. Since the PMs’ workloads are dynamic, the PMPHD overload algorithm continuously predicts the resource utilization of each PM and adjusts its overload threshold accordingly. Compared with other algorithms, PMPHD maintains high efficiency while achieving a lower ESV (a combined metric balancing energy consumption and SLAV). | 10.1109/TNSM.2026.3687892 |
| Xiaoyong Zhang, Wei Yue, Lei Zhu | Countermeasure Design for Large-Scale UAV Swarm Based on Splitting Attack | 2026 | Early Access | Jamming Weapons Electronic warfare Aerospace control Guns Aerospace and electronic systems Military equipment Antennas MIMICs System-on-chip UAV swarm split attack mixed-integer quadratic programming critical node search algorithm | To counteract the invasion of a large-scale Unmanned Aerial Vehicle (UAV) swarm, this paper proposes an attack strategy for effectively splitting the UAV swarm. This strategy aims to split the UAV swarm into multiple independent fragment networks and then destroy only a limited number of critical UAVs from the independent networks, which is expected to disrupt the entire UAV swarm's global communication and cooperation capabilities. Firstly, the countermeasure principle of UAV swarm splitting is given, followed by the proposal of a novel metric method to assess the connectivity and communication rate of either the entire network or its fragmented networks. Secondly, a Non-convex Mixed-integer Quadratic Programming (NMIQP) model is developed that aims to simultaneously minimize pair-to-pair connectivity between network nodes and decentralize the entire network. Then, to achieve efficient splitting, a Critical Node Search Algorithm (CNSA) with fast and high-level optimization capability is proposed, which is a mixture of the Ant Colony Accumulation Algorithm (ACAA) and the Improved Genetic Algorithm (IGA). ACAA identifies nodes with high connectivity in the network by planning global routes, while IGA is utilized to solve the corresponding optimization problem. Finally, simulations confirm the effectiveness and superiority of the proposed strategy. | 10.1109/TNSM.2026.3687655 |
| Jing Zhang, Chao Luo, Rui Shao | MTG-GAN: A Masked Temporal Graph Generative Adversarial Network for Cross-Domain System Log Anomaly Detection | 2026 | Early Access | Anomaly detection Adaptation models Generative adversarial networks Feature extraction Data models Load modeling Accuracy Robustness Contrastive learning Chaos Log Anomaly Detection Generative Adversarial Networks (GANs) Temporal Data Analysis | Anomaly detection of system logs is crucial for the service management of large-scale information systems. Nowadays, log anomaly detection faces two main challenges: 1) capturing evolving temporal dependencies between log events to adaptively tackle emerging anomaly patterns, and 2) maintaining high detection capabilities across various data distributions. Existing methods rely heavily on domain-specific data features, making it challenging to handle the heterogeneity and temporal dynamics of log data. This limitation restricts the deployment of anomaly detection systems in practical environments. In this article, a novel framework, Masked Temporal Graph Generative Adversarial Network (MTG-GAN), is proposed for both conventional and cross-domain log anomaly detection. The model enhances the detection capability for emerging abnormal patterns in system log data by introducing an adaptive masking mechanism that combines generative adversarial networks with graph contrastive learning. Additionally, MTG-GAN reduces dependency on specific data distributions and improves model generalization by using diffused graph adjacency information derived from the temporal relevance of event sequences, which helps improve cross-domain detection performance. Experimental results demonstrate that MTG-GAN outperforms existing methods on multiple real-world datasets in both conventional and cross-domain log anomaly detection. | 10.1109/TNSM.2026.3654642 |
| Muhammad Ahsan, Thang X. Vu, Ilora Maity, Symeon Chatzinotas | VNF Mapping and Selective Handover for eMBB and mMTC Services in a LEO Satellite Network | 2026 | Early Access | Low earth orbit satellites Artificial satellites Aerospace and electronic systems Jamming Radio astronomy Antennas and propagation Central Processing Unit Electronic circuits Enhanced mobile broadband Handover 6G Network Slicing VNF mapping LEO Satellites VNF Handover eMBB mMTC | The integrated satellite-terrestrial networks (STNs) aim to provide global connectivity and support heterogeneous services, including enhanced mobile broadband (eMBB) and massive machine-type communication (mMTC). Each service request requires a series of virtual network functions (VNFs) to be deployed consecutively. The VNFs are mapped onto nodes that constitute the path over which a request is served. Provisioning multiple slices through satellite networks is challenging due to limited storage and computation resources. In addition, there are dynamic changes in the satellite positions that cause frequent variations in the topology. For requests lasting more than one time frame, a handover can be performed at the beginning of the next time frame. Handover implies overall reconfiguration, which induces significant computation costs in satellite networks. Therefore, in this article, we propose a path and VNF mapping strategy with selective handover while considering dynamic changes in the satellite topology and the limitation of available resources. We formulate a mathematical model based on Binary Integer Linear Programming (BILP), aiming to maximize the served requests. To reduce the time complexity of the model, we solve it using an iterative algorithm based on successive convex approximation (SCA). We refine the solution of the SCA after binary recovery using a VNF and path mapping algorithm. For the mapped multi-frame requests in the current time frame, the resources are reserved in the nodes and links for the next time frame if the current routing path is available for the remaining duration of the request. The simulation results certify the performance of the proposed technique with a significant improvement in the served request percentage compared to previous works in the literature, while also reducing the number of handovers. | 10.1109/TNSM.2026.3687390 |
| Qian Guo, Chunyu Zhang, Xue Xiao, Min Zhang, Zhuo Liu, Danshi Wang | Knowledge-Distilled Time-Series LLM for General Performance Parameter Prediction in Optical Transport Networks | 2026 | Early Access | Optical fibers Optical waveguides Feeds Network-on-chip Communication systems Internet of Things Optical fiber communication Optical fiber networks Telecommunications Quality of transmission Optical transport networks (OTNs) general performance parameter prediction time-series large language models knowledge distillation | In optical transport networks (OTNs), proactive and accurate prediction of key performance parameters plays a crucial role in identifying potential failure of OTN equipment and guiding timely operational interventions, reducing downtime and improving overall system performance. However, the performance parameters in OTNs are complex and diverse. The reliance of existing models’ structural design on specific configurations limits generalizability across diverse equipment types. Moreover, the high computational resource consumption and memory footprints of these models may lead to inefficiency while hindering practical application and large-scale deployment. To address these challenges, this paper presents a general model, KD-TimeLLM, a cross-application of TimeLLM into OTN failure management, for performance parameter prediction of multiple equipment types in OTNs. By learning from its teacher model TimeLLM via a knowledge distillation strategy, KD-TimeLLM can achieve generalizability in performance parameter prediction while enhancing efficiency. We conducted evaluations across multiple metrics using data sets from different operators and various board types. Results show that KD-TimeLLM outperforms other models in predictive performance, achieving the lowest MSE and MAE across all types of board data along with a scaled_RMSE below 0.5, handling a varying number of performance parameters, and exhibiting zero-shot prediction capability, highlighting its generalizability. Moreover, compared to its teacher model, KD-TimeLLM achieves comparable predictive performance with a 99.99% reduction in model parameters and an average reduction of 99.23% in inference time across eight different types of board data. Furthermore, compared to a multiple-model system, the total inference time and memory footprint of KD-TimeLLM decreased by 94.79% and 89.65%, respectively, highlighting its effectiveness and efficiency. | 10.1109/TNSM.2026.3686811 |
| Xinshuo Wang, Baihua Chen, Lei Liu, Yifei Li | Pisces: Fast Loss Recovery for Multipath Transmission in RDMA | 2026 | Early Access | Payloads Military aircraft Space technology Feeds System-on-chip Field programmable gate arrays Circuits Application specific integrated circuits Integrated circuits Feedback RDMA Loss Recovery Multipath Transmission Programmable Switch Programmable NIC FPGA | Conventional Remote Direct Memory Access (RDMA) relies on Priority Flow Control (PFC) to operate on lossless networks. However, as data centers scale, PFC’s drawbacks, such as head-of-line blocking and congestion spreading, become increasingly problematic. This study proposes Pisces, a fast packet loss recovery scheme that leverages terminal–network collaboration. Instead of targeting lossless RDMA networks, Pisces enables high-throughput RDMA by efficiently handling loss recovery. To address the inefficient retransmission problems of PFC+Go-Back-N and the challenges of configuring appropriate timeouts for Selective Repeat (SR) in multipath transmission scenarios, Pisces implements Quick Drop Notification (QDN) of packet loss on switches, avoiding bandwidth waste and timeouts. In addition, Pisces RDMA NICs feature on-chip packet buffers to cache in-flight packets, supporting the scalability demands of RDMA in modern data centers. Upon receiving a QDN, lost packets are quickly retrieved from the buffer for retransmission, significantly improving retransmission efficiency and reducing PCIe bandwidth waste caused by cache replacements. This study overcame numerous challenges to implement the Pisces prototype, which demonstrated excellent performance. Testbed experiments show that Pisces improves the 99th-percentile FCT by 130× compared to Mellanox CX-6. Large-scale simulations demonstrate that Pisces achieves a maximum reduction of 82.8% in the 99.9th-percentile FCT compared to SR and other state-of-the-art technologies. | 10.1109/TNSM.2026.3688038 |
| Songshou Dong, Yanqing Yao, Huaxiong Wang, Yining Li | LCMS: Efficient Lattice-based Conditional Privacy-preserving Multi-receiver Signcryption Scheme for Internet of Vehicles | 2026 | Early Access | Optical waveguides Optical fibers Broadcasting Broadcast technology Oscillators Circuits Feedback Circuits and systems Internet of Vehicles Communication systems Internet of Vehicles signcryption weak unlinkable certificateless revocable multi-receiver distributed decryption | Internet of Vehicles (IoV) requires robust security and privacy protection mechanisms to enable trusted traffic information exchange, while also requiring low communication and low computing overhead to meet the real-time requirements of IoV. Existing signcryption schemes suffer from quantum vulnerability, inadequate unlinkability/vehicle anonymity, absence of revocability, poor scalability, inadequate management of malicious entities, and high communication and computational overhead. Therefore, we propose an efficient lattice-based conditional privacy-preserving multi-receiver signcryption scheme (LCMS) that systematically addresses these gaps through three core innovations: 1) Privacy preservation is achieved via a pseudonym mechanism integrated with certificateless key generation, which ensures vehicle anonymity and weak unlinkability while preventing a malicious key generation center and key escrow; 2) Malicious entity management through dynamic revocability and distributed decryption among roadside units, preventing unilateral message access; and 3) Post-quantum efficiency is achieved by leveraging the Learning With Rounding problem to eliminate expensive Gaussian sampling, combined with ciphertext packing techniques. This reduces time overhead, the size of signcryptexts, and communication overhead, while lowering the overall storage overhead of the scheme through the MP12 trapdoor. Security proofs show LCMS achieves Existential Unforgeability under Adaptive Identity Chosen-Message Attack and Indistinguishability under Adaptive Identity Chosen-Ciphertext Attack in the Random Oracle Model, with rigorously validated resistance against multiple IoV-specific attacks. Experimental results via SageMath implementation demonstrate that our scheme exhibits a smaller signcryptext size and lower signcryption/unsigncryption time compared to existing random lattice-based signcryption schemes. Scalability tests with 300 vehicles and 300 roadside units (RSUs) were completed within 230 seconds. Communication overhead analysis confirms practical feasibility for the IEEE 802.11p vehicle communication protocol, and RSU serving capability evaluation under realistic vehicle density (100–200/km²) and speed (40–60 km/h) further validates system practicality. LCMS provides a quantum-resistant, privacy-preserving, and efficient solution for production IoV. | 10.1109/TNSM.2026.3688507 |
| Abdeltif Azzizi, Mohamad Al Adraa, Chadi Assi, Michael Y. Frankel, Vladimir Pelekhaty | Experimental Topological Analysis in Next-Generation Data Center Networks: STRAT and Clos Topologies | 2026 | Early Access | Telemetry Aerospace and electronic systems Payloads Optical waveguides Optical fibers Broadcasting Broadcast technology Application specific integrated circuits Circuits Feedback Data Center Topologies Clos Topology STRAT Topology Scalability Challenges Network Architecture Performance Evaluation | This paper presents an experimental and simulation-based evaluation of two data center network (DCN) topologies: the widely adopted hierarchical Clos architecture and STRAT, a flat, expander-based topology designed around passive optical interconnects. While Clos offers proven scalability and performance, it incurs hardware complexity and suffers from congestion in oversubscribed scenarios. STRAT eliminates aggregation and spine layers entirely, using only Top-of-Rack (ToR) switches interconnected via static optical patch panels, to reduce cost, simplify deployment, and enhance path diversity. Our goal is to assess these topologies based on their inherent architectural properties (throughput, congestion resilience, scalability, and cost) without relying on congestion control protocols or centralized traffic engineering. To this end, we adopt simple forwarding schemes based purely on local information: ECMP for Clos, and ECMP with Dynamic Group Multipath (DGM) for STRAT. We evaluate both topologies on a physical testbed built from commercial Ethernet switches and further validate scalability through packet-level simulations of networks with up to 256 switches and 1,024 hosts using OMNeT++. We also introduce DEALER, a lightweight routing algorithm tailored to STRAT’s topology, and evaluate its effectiveness in dynamic conditions. Our results show that STRAT achieves up to 43% higher throughput and requires approximately 40% fewer switches than a comparable Clos topology. These gains are further supported by Load Area Under Curve (LAUC) analysis and congestion hotspot visualizations. Overall, our study highlights STRAT as a compelling and practical alternative to conventional DCN architectures, offering deployable scalability, improved performance under load, and reduced infrastructure cost. | 10.1109/TNSM.2026.3685175 |
| Arad Kotzer, Tom Azoulay, Yoad Abels, Aviv Yaish, Ori Rottenstreich | SoK: DeFi Lending and Yield Aggregation Protocol Taxonomy, Empirical Measurements, and Security Challenges | 2026 | Early Access | Filtering Application specific integrated circuits Filters Protocols Smart contracts Communication systems Proof of stake Proof of Work Internet Amplitude shift keying Blockchain Decentralized Finance (DeFi) Lending Yield Aggregation | Decentralized Finance (DeFi) lending protocols implement programmable credit markets without intermediaries. This paper systematizes the DeFi lending ecosystem, spanning collateralized lending (including over- and under-collateralized designs, and zero-liquidation loans), uncollateralized primitives (e.g., flashloans), and yield aggregation protocols which allocate capital across underlying lending platforms. Beyond a taxonomy of mechanisms and comparing protocols, we provide empirical on-chain measurements of lending activity and user behavior, using Compound V2 and AAVE V2 as case studies, and connect empirical observations to protocol design choices (e.g., interest-rate models and liquidation incentives). We then characterize vulnerabilities that arise due to notable designs, focusing on interest-rate setting mechanisms and time-measurement approaches. Finally, we outline open questions at the intersection of mechanism design, empirical measurement and security for future research. | 10.1109/TNSM.2026.3682174 |
| Guisong Yang, Yechao Huang, Panxing Huang, Xingyu He | A Distributed SDN Controller-Based Computing Framework for Effective in-orbit Computing | 2026 | Early Access | Low earth orbit satellites Artificial satellites Aerospace and electronic systems Telemetry Antennas Antennas and propagation Central Processing Unit Software defined networking Computer networks Communication systems Task Scheduling Software Defined Network Satellite Network Placement of SDN Controller | The rapid development of Low Earth Orbit (LEO) satellite networks has made in-orbit computing more feasible, offering a solution for processing real-time, diverse user tasks. Compared with traditional cloud computing in ground cloud computing centers, directly computing on the LEO satellite can significantly reduce task-processing delay. However, challenges remain, including the limited sensing and computing capabilities of satellites, high delays in processing task requests, and frequent switching of control domains due to the relative movement between LEO satellites and nodes in other orbits. To address these challenges and improve task management, computing is treated as a Virtual Network Function (VNF), managed by Software-Defined Networking (SDN) controllers. This paper proposes a distributed SDN controller-based computing framework, where task information is forwarded to SDN controllers, which then use a task scheduling strategy to allocate tasks to suitable computing nodes for processing. To support the implementation of this framework, we first propose a heuristic SDN controller placement strategy that uses a tiling method to divide the LEO satellite network into SDN control domains and places the controller at the midpoint of each domain. Then, we propose a Double Deep Q-Network (DDQN) algorithm for in-orbit task scheduling, which adaptively optimizes the task scheduling strategy to minimize task-processing delay and ensure a high task completion rate. Finally, simulations are conducted in two parts to evaluate the framework. The first part validates the DDQN-based task scheduling strategy, achieving significant reductions in task-processing delay and improved task completion rates compared to conventional strategies. The second part assesses the impact of SDN control domain shape and size on task-processing delay, confirming domain size as the dominant factor influencing delay. | 10.1109/TNSM.2026.3685308 |
| Yu Gu, Le Zhang, Yunyi Zhang, Ye Du | SatFedGuard: Semi-Supervised Federated Contrastive Learning with RL-Assisted Bidirectional Distillation for Anomaly Traffic Detection in Satellite Networks | 2026 | Early Access | Low earth orbit satellites Artificial satellites Payloads Jamming Electronic warfare Feeds Broadcasting Broadcast technology Filtering Filters Federated Learning Satellite Network Intrusion Detection Semi-Supervised Learning Edge-Cloud Collaboration | Federated learning-based intrusion detection methods for satellite networks enable model training without sharing local data, thereby ensuring network security while significantly reducing communication overhead. However, due to the difficulty of obtaining large-scale high-quality labeled data in satellite environments, a key challenge lies in how to train intrusion detection models using abundant unlabeled traffic data. We propose SatFedGuard, a semi-supervised federated contrastive learning approach for anomaly traffic detection in satellite networks. SatFedGuard effectively integrates unlabeled in-orbit data with labeled data from ground stations for model training. First, it models the unlabeled satellite traffic data using a contrastive learning framework. To address the challenge of non-IID data distribution, an attention-based dual-path aggregation strategy is designed to generate personalized models for each satellite by leveraging model similarities. Then, a bidirectional multi-granularity distillation method between larger and smaller models is implemented, where reinforcement learning is employed to optimize the weights of different loss terms dynamically. Experiments on two satellite network traffic datasets under non-IID settings demonstrate that the proposed method significantly improves anomaly detection performance while reducing dependence on in-orbit labeled data, achieving F1-Scores of 93.38% (↑11.63%) and 99.80% (↑8.72%), respectively. | 10.1109/TNSM.2026.3685416 |
| Xi Liu, Jun Liu, Weidong Li | Strategy-Proof Cost-Sharing Mechanism for Dynamic Adaptability Service in Vehicle Computing | 2026 | Vol. 23, Issue | Costs Sensors Vehicle dynamics Computational modeling Adaptation models Resource management Intelligent vehicles Edge computing Mobile computing Connected vehicles Vehicle computing dynamic adaptability service cost sharing strategy-proof | Vehicle computing has emerged as a promising paradigm for delivering time-sensitive computing services to Internet of Things applications. Intelligent vehicles (IVs) offer onboard computing and sensing capabilities for delivering a wide range of services. In this paper, we propose a dynamic adaptability service model that leverages the swift mobility of vehicles to adjust the distribution of IVs to users’ dynamically changing locations. There are two types of areas in our model: the user area and the parking area. The former is where services are provided, while the latter serves as the preparation zone for backup IVs. IVs in the parking area are dispatched to user areas where existing vehicle resources cannot meet demand, and they return to the parking area after delivering the service. Multiple users share sensing resources, and our model allocates the costs among them. To ensure strategy-proofness, we introduce the concepts of no additional cost and allocation stability. We propose a strategy-proof cost-sharing mechanism for dynamic adaptability service. The proposed mechanism achieves no positive transfers, voluntary participation, individual rationality, consumer sovereignty, budget balance, no additional cost, and allocation stability. Moreover, the proposed mechanism’s approximation performance is analyzed. We further use comprehensive simulations to verify the effectiveness and efficiency of the proposed mechanism. | 10.1109/TNSM.2025.3646778 |
| Xiuqin Xu, Mingwei Lin, Zeshui Xu, Xin Luo | A Sampling-Neighborhood-Regularized Latent Factorization of Tensor for Dynamic QoS Estimation | 2026 | Vol. 23, Issue | Quality of service Tensors Estimation Accuracy Vectors Data models Linear programming Analytical models Adaptation models Web services Dynamic latent factor analysis of tensor high-dimensional and incomplete (HDI) data sampling-neighborhood regularization learning temporal pattern industrial application | Since similar users frequently exhibit similar Quality of Service (QoS) when accessing similar services, effectively capturing neighborhood information hidden in QoS data becomes critical for latent factorization of tensor (LFT)-based QoS estimators. Current LFT models either compute the complete set of neighborhoods or ignore neighborhoods entirely, leading to rapidly rising model complexity or poor estimation accuracy, respectively. Moreover, not every neighbor in the neighborhood set is beneficial to the user/service entity. To address these limitations, this study proposes a sampling-neighborhood-regularized latent factorization of tensor (SNLFT) model with three key ideas: 1) extracting primal latent factors (LFs), which express related entities on the basis of high-dimensional and incomplete QoS data; 2) constructing the sampling-neighborhood set, acquired via Gibbs sampling to reflect the similarities between the primal LF vectors of entities over time; 3) developing a sampling-neighborhood-regularized LFT model, where all the sampling neighborhoods of entities and the $L_{2}$-norm of desirable LFs are employed to regularize the objective function. Extensive experiments on eight dynamic QoS datasets demonstrate that SNLFT significantly outperforms state-of-the-art models in both estimation accuracy and computational efficiency. | 10.1109/TNSM.2025.3644937 |
| Jiahe Xu, Jing Fu, Bige Yang, Zengfu Wang, Jingjin Wu, Xinyu Wang, Moshe Zukerman | Network Slicing in MEC-Based RANs With Nonlinear Cost Rate Functions | 2026 | Vol. 23, Issue | Resource management Costs Network slicing Optimization Radio access networks Servers Quality of service Multi-access edge computing 5G mobile communication Terminology Edge slicing stochastic modeling EBIT MEC | This paper addresses network slicing in a large-scale Multi-Access Edge Computing (MEC)-enabled Radio Access Network (RAN) comprising heterogeneous edge nodes with varying computing and storage resource capacities. These resources are dynamically allocated to slice requests and released when the service of a slice request is completed. Our objective is to optimize the resource allocation for each admitted arriving slice request, considering its demands for computing and storage resources, to maximize the long-run average Earnings Before Interest and Taxes (EBIT) of the MEC slicing system. We formulate the optimization problem as a Restless Multi-Armed Bandit (RMAB)-based resource allocation problem with a nonlinear cost rate function. To solve this, we introduce a new policy called Prioritizing-the-Future-Approximated earning per request (PFA), which, for each admitted slice request, prioritizes the allocation of the resource combination that yields the highest achievable earning, accounting for the future effects of this allocation. PFA is designed to be scalable and applicable to large-scale networks. We numerically demonstrate the superior performance of PFA in maximizing long-run average EBIT through simulations, comparing it with two baseline policies across various parameter settings. Moreover, our findings offer insights for network operators in resource allocation policy selection. | 10.1109/TNSM.2025.3646478 |