Last updated: 2026-05-09 05:01 UTC
Number of pages: 163
| Author(s) | Title | Year | Publication | Keywords | Abstract | DOI |
|---|---|---|---|---|---|---|
| Abdeltif Azzizi, Mohamad Al Adraa, Chadi Assi, Michael Y. Frankel, Vladimir Pelekhaty | Experimental Topological Analysis in Next-Generation Data Center Networks: STRAT and Clos Topologies | 2026 | Early Access | Telemetry Aerospace and electronic systems Payloads Optical waveguides Optical fibers Broadcasting Broadcast technology Application specific integrated circuits Circuits Feedback Data Center Topologies Clos Topology STRAT Topology Scalability Challenges Network Architecture Performance Evaluation | This paper presents an experimental and simulation-based evaluation of two data center network (DCN) topologies: the widely adopted hierarchical Clos architecture and STRAT, a flat, expander-based topology designed around passive optical interconnects. While Clos offers proven scalability and performance, it incurs hardware complexity and suffers from congestion in oversubscribed scenarios. STRAT eliminates aggregation and spine layers entirely—using only Top-of-Rack (ToR) switches interconnected via static optical patch panels—to reduce cost, simplify deployment, and enhance path diversity. Our goal is to assess these topologies based on their inherent architectural properties—namely throughput, congestion resilience, scalability, and cost—without relying on congestion control protocols or centralized traffic engineering. To this end, we adopt simple forwarding schemes based purely on local information: ECMP for Clos, and ECMP with Dynamic Group Multipath (DGM) for STRAT. We evaluate both topologies on a physical testbed built from commercial Ethernet switches and further validate scalability through packet-level simulations of networks with up to 256 switches and 1,024 hosts using OMNeT++. We also introduce DEALER, a lightweight routing algorithm tailored to STRAT’s topology, and evaluate its effectiveness in dynamic conditions. Our results show that STRAT achieves up to 43% higher throughput and requires approximately 40% fewer switches than a comparable Clos topology. These gains are further supported by Load Area Under Curve (LAUC) analysis and congestion hotspot visualizations. Overall, our study highlights STRAT as a compelling and practical alternative to conventional DCN architectures, offering deployable scalability, improved performance under load, and reduced infrastructure cost. | 10.1109/TNSM.2026.3685175 |
| Arad Kotzer, Tom Azoulay, Yoad Abels, Aviv Yaish, Ori Rottenstreich | SoK: DeFi Lending and Yield Aggregation Protocol Taxonomy, Empirical Measurements, and Security Challenges | 2026 | Early Access | Filtering Application specific integrated circuits Filters Protocols Smart contracts Communication systems Proof of stake Proof of Work Internet Amplitude shift keying Blockchain Decentralized Finance (DeFi) Lending Yield Aggregation | Decentralized Finance (DeFi) lending protocols implement programmable credit markets without intermediaries. This paper systematizes the DeFi lending ecosystem, spanning collateralized lending (including over- and under-collateralized designs, and zero-liquidation loans), uncollateralized primitives (e.g., flashloans), and yield aggregation protocols, which allocate capital across underlying lending platforms. Beyond a taxonomy of mechanisms and a comparison of protocols, we provide empirical on-chain measurements of lending activity and user behavior, using Compound V2 and AAVE V2 as case studies, and connect empirical observations to protocol design choices (e.g., interest-rate models and liquidation incentives). We then characterize vulnerabilities that arise due to notable designs, focusing on interest-rate setting mechanisms and time-measurement approaches. Finally, we outline open questions at the intersection of mechanism design, empirical measurement, and security for future research. | 10.1109/TNSM.2026.3682174 |
| Jiale Zhu, Xiaoyao Zheng, Shukai Ye, Ming Zheng, Liping Sun, Liangmin Guo, Qingying Yu, Yonglong Luo | Federated Recommendation Model Based on Personalized Attention and Privacy-Preserving Dynamic Graph | 2026 | Early Access | | Graph Neural Networks (GNNs) have been widely adopted in recommendation systems. When integrated into a federated learning framework, GNNs can enhance the model’s expressive capability. However, challenges arise in personalized representation and graph expansion due to the heterogeneity and locality of user data in federated recommendation systems. To address these challenges, we propose a federated recommendation model based on personalized attention and privacy-preserving dynamic graphs (PADG). The method first matches neighbor users for each selected client. Subsequently, it counts the interaction frequencies of items for both local and neighbor users to construct personalized weights, which capture the unique characteristics of different users. Additionally, we design a method for constructing privacy-preserving dynamic graphs. In each round of federated training, the selected client adds pseudo-interaction items to its own interaction subgraph, perturbing the real interactions. After completing local training, the noisy interaction subgraph is incorporated into the global graph to capture higher-order connectivity information among users while safeguarding their interaction privacy. We conduct extensive experiments on three benchmark datasets, and the results demonstrate that the proposed PADG method achieves superior performance while effectively protecting privacy. | 10.1109/TNSM.2026.3691659 |
| Jing Zhang, Chao Luo, Rui Shao | MTG-GAN: A Masked Temporal Graph Generative Adversarial Network for Cross-Domain System Log Anomaly Detection | 2026 | Early Access | Anomaly detection Adaptation models Generative adversarial networks Feature extraction Data models Load modeling Accuracy Robustness Contrastive learning Chaos Log Anomaly Detection Generative Adversarial Networks (GANs) Temporal Data Analysis | Anomaly detection of system logs is crucial for the service management of large-scale information systems. Nowadays, log anomaly detection faces two main challenges: 1) capturing evolving temporal dependencies between log events to adaptively tackle emerging anomaly patterns, and 2) maintaining high detection capabilities across varying data distributions. Existing methods rely heavily on domain-specific data features, making it challenging to handle the heterogeneity and temporal dynamics of log data. This limitation restricts the deployment of anomaly detection systems in practical environments. In this article, a novel framework, Masked Temporal Graph Generative Adversarial Network (MTG-GAN), is proposed for both conventional and cross-domain log anomaly detection. The model enhances the detection capability for emerging abnormal patterns in system log data by introducing an adaptive masking mechanism that combines generative adversarial networks with graph contrastive learning. Additionally, MTG-GAN reduces dependency on specific data distributions and improves model generalization by using diffused graph adjacency information derived from the temporal relevance of event sequences, which helps improve cross-domain detection performance. Experimental results demonstrate that MTG-GAN outperforms existing methods on multiple real-world datasets in both conventional and cross-domain log anomaly detection. | 10.1109/TNSM.2026.3654642 |
| Qian Guo, Chunyu Zhang, Xue Xiao, Min Zhang, Zhuo Liu, Danshi Wang | Knowledge-Distilled Time-Series LLM for General Performance Parameter Prediction in Optical Transport Networks | 2026 | Early Access | Optical fibers Optical waveguides Feeds Network-on-chip Communication systems Internet of Things Optical fiber communication Optical fiber networks Telecommunications Quality of transmission Optical transport networks (OTNs) general performance parameter prediction time-series large language models knowledge distillation | In optical transport networks (OTNs), proactive and accurate prediction of key performance parameters plays a crucial role in identifying potential failures of OTN equipment and guiding timely operational interventions, reducing downtime and improving overall system performance. However, the performance parameters in OTNs are complex and diverse. The reliance of existing models’ structural design on specific configurations limits their generalizability across diverse equipment types. Moreover, the high computational resource consumption and memory footprints of these models may lead to inefficiency while hindering practical application and large-scale deployment. To address these challenges, this paper presents a general model, KD-TimeLLM, a cross-application of TimeLLM to OTN failure management, for performance parameter prediction of multiple equipment types in OTNs. By learning from its teacher model TimeLLM via a knowledge distillation strategy, KD-TimeLLM can achieve generalizability in performance parameter prediction while enhancing efficiency. We conducted evaluations across multiple metrics using data sets from different operators and various board types. Results show that KD-TimeLLM outperforms other models in predictive performance, achieving the lowest MSE and MAE across all types of board data along with a scaled_RMSE value below 0.5, handling varying numbers of performance parameters, and providing zero-shot prediction capability, highlighting its generalizability. Moreover, compared to its teacher model, KD-TimeLLM achieves comparable predictive performance with a significant reduction of 99.99% in model parameters and an average reduction of 99.23% in inference time across eight different types of board data. Furthermore, compared to a multiple-model system, the total inference time and memory footprint of KD-TimeLLM decreased by 94.79% and 89.65%, respectively, highlighting its effectiveness and efficiency. | 10.1109/TNSM.2026.3686811 |
| Shahid Mahmood, Moneeb Gohar, Seok Joo Koh | Globally Integrated Trust Authority (GITA) for Resource-Constrained Edge Devices in IoT and 6G | 2026 | Early Access | Payloads Filtering Central Processing Unit Filters Feedback Circuits Electronic circuits Microcontrollers Circuits and systems Microprocessors GITA Globally Integrated Trust Authority Network PKDL TSL LMS Security Trust Management Resource Constrained Edge Device Internet of Things and Cyber-Attack | The rapid growth of the Internet and the increasing number of edge devices have expanded the cyber-attack surface at the edge layer. Hackers exploit vulnerabilities at various levels of a network by either directly connecting to it or accessing it over the Internet. In both scenarios, edge devices remain a primary target due to their widespread use, limited resources, and critical impact. Therefore, securing edge devices is essential to counter both local and global cyber threats. Trust is a key factor in determining the level of protection required for edge devices. It can be used to assess the reliability of other devices before offering or requesting services. Since edge devices are often globally interconnected, trust levels should be verifiable across the Internet and intranet. In this paper, we propose the Globally Integrated Trust Authority (GITA), a framework that distributes verifiable trust values across networks and the Internet while minimizing communication overhead. Experimental results demonstrate that GITA improves the efficiency of trust value distribution and verification among nodes compared to digital certificates, while maintaining the same level of protection. This approach enables effective identification of malicious and benign nodes, enhancing the precision of malicious node detection locally and globally. | 10.1109/TNSM.2026.3687967 |
| Xinshuo Wang, Baihua Chen, Lei Liu, Yifei Li | Pisces: Fast Loss Recovery for Multipath Transmission in RDMA | 2026 | Early Access | Payloads Military aircraft Space technology Feeds System-on-chip Field programmable gate arrays Circuits Application specific integrated circuits Integrated circuits Feedback RDMA Loss Recovery Multipath Transmission Programmable Switch Programmable NIC FPGA | Conventional Remote Direct Memory Access (RDMA) relies on Priority Flow Control (PFC) to operate on lossless networks. However, as data centers scale, PFC’s drawbacks, such as head-of-line blocking and congestion spreading, become increasingly problematic. This study proposes Pisces, a fast packet loss recovery scheme that leverages terminal–network collaboration. Instead of targeting lossless RDMA networks, Pisces enables high-throughput RDMA by efficiently handling loss recovery. To address the inefficient retransmission problems of PFC+Go-Back-N and the challenges of configuring appropriate timeouts for Selective Repeat (SR) in multipath transmission scenarios, Pisces implements Quick Drop Notification (QDN) of packet loss on switches, avoiding bandwidth waste and timeouts. In addition, Pisces RDMA NICs feature on-chip packet buffers to cache in-flight packets, supporting the scalability demands of RDMA in modern data centers. Upon receiving a QDN, lost packets are quickly retrieved from the buffer for retransmission, significantly improving retransmission efficiency and reducing PCIe bandwidth waste caused by cache replacements. This study overcame numerous challenges to implement the Pisces prototype, which demonstrated excellent performance. Testbed experiments show that Pisces improves the 99th-percentile FCT by 130× compared to Mellanox CX-6. Large-scale simulations demonstrate that Pisces achieves a maximum reduction of 82.8% in the 99.9th-percentile FCT compared to SR and other state-of-the-art technologies. | 10.1109/TNSM.2026.3688038 |
| Songshou Dong, Yanqing Yao, Huaxiong Wang, Yining Liu | LCMS: Efficient Lattice-based Conditional Privacy-preserving Multi-receiver Signcryption Scheme for Internet of Vehicles | 2026 | Early Access | Optical waveguides Optical fibers Broadcasting Broadcast technology Oscillators Circuits Feedback Circuits and systems Internet of Vehicles Communication systems Internet of Vehicles signcryption weak unlinkable certificateless revocable multi-receiver distributed decryption | Internet of Vehicles (IoV) requires robust security and privacy protection mechanisms to enable trusted traffic information exchange, while also requiring low communication and low computing overhead to meet the real-time requirements of IoV. Existing signcryption schemes suffer from quantum vulnerability, inadequate unlinkability/vehicle anonymity, absence of revocability, poor scalability, inadequate management of malicious entities, and high communication and computational overhead. We therefore propose an efficient lattice-based conditional privacy-preserving multi-receiver signcryption scheme (LCMS) that systematically addresses these gaps through three core innovations: 1) Privacy preservation is achieved via a pseudonym mechanism integrated with certificateless key generation, which ensures vehicle anonymity and weak unlinkability while preventing a malicious key generation center and key escrow; 2) Malicious entities are managed through dynamic revocability and distributed decryption among roadside units, preventing unilateral message access; and 3) Post-quantum efficiency is achieved by leveraging the Learning With Rounding problem to eliminate expensive Gaussian sampling, combined with ciphertext packing techniques. This reduces time overhead, the size of signcryptexts, and communication overhead, while lowering the overall storage overhead of the scheme through the MP12 trapdoor. Security proofs show LCMS achieves Existential Unforgeability under Adaptive Identity Chosen-Message Attack and Indistinguishability under Adaptive Identity Chosen-Ciphertext Attack in the Random Oracle Model, with rigorously validated resistance against multiple IoV-specific attacks. Experimental results via SageMath implementation demonstrate that our scheme exhibits a smaller signcryptext size and lower signcryption/unsigncryption time compared to existing random lattice-based signcryption schemes. Scalability tests with 300 vehicles and 300 roadside units (RSUs) were completed within 230 seconds. Communication overhead analysis confirms practical feasibility for the IEEE 802.11p vehicle communication protocol, and RSU serving capability evaluation under realistic vehicle density (100–200/km²) and speed (40–60 km/h) further validates system practicality. LCMS provides a quantum-resistant, privacy-preserving, and efficient solution for production IoV. | 10.1109/TNSM.2026.3688507 |
| Qin Zeng, Dan Qu, Hao Zhang, Yaqi Chen | Neural Collapse-Based Class-Incremental Learning for Encrypted Traffic Classification | 2026 | Early Access | Payloads Military aircraft Space technology Feeds Frequency modulation Radio broadcasting Filtering Filters Memory modules Virtual private networks Encrypted traffic classification Class incremental learning Neural collapse | The rapid evolution of internet technologies has intensified network traffic dynamics due to the emergence of novel encryption protocols, posing significant challenges to traffic classification. Incremental learning, which enables continuous adaptation to emerging tasks, has emerged as a promising approach to enhance the sustainability of encrypted traffic classification. However, existing methods fail to address the substantial feature representation disparities across incremental tasks, resulting in suboptimal model adaptability. Inspired by the Neural Collapse (NC) phenomenon, which reveals that deep neural networks’ final-layer features collapse to class-mean vectors forming a Simplex Equiangular Tight Frame (ETF) with classifier weights, thereby constituting an optimal geometric structure for classification tasks, we propose NCIL-ETC, a Neural Collapse-based Incremental Learning framework for Encrypted Traffic Classification. Our approach employs a pretrained Mamba as the feature extraction backbone, leveraging its linear-complexity computational properties to significantly reduce resource overhead. Simultaneously, we introduce a preallocated ETF classifier that establishes an optimal classification structure covering observed classes. Through feature-classifier alignment constraints during incremental learning, our method encourages both new and historical class features to converge toward ETF vertices, thereby preserving globally optimal category relationships. Extensive experimental evaluations on four public benchmarks demonstrate that NCIL-ETC achieves state-of-the-art performance, surpassing baseline methods in both classification accuracy and incremental learning capability. | 10.1109/TNSM.2026.3688767 |
| Deemah H. Tashman, Soumaya Cherkaoui | Trustworthy AI-Driven Dynamic Hybrid RIS: Joint Optimization and Reward Poisoning-Resilient Control in Cognitive MISO Networks | 2026 | Early Access | Reconfigurable intelligent surfaces Reliability Optimization Security MISO Array signal processing Vectors Satellites Reflection Interference Beamforming cascaded channels cognitive radio networks deep reinforcement learning dynamic hybrid reconfigurable intelligent surfaces energy harvesting poisoning attacks | Cognitive radio networks (CRNs) are a key mechanism for alleviating spectrum scarcity by enabling secondary users (SUs) to opportunistically access licensed frequency bands without harmful interference to primary users (PUs). To address unreliable direct SU links and energy constraints common in next-generation wireless networks, this work introduces an adaptive, energy-aware hybrid reconfigurable intelligent surface (RIS) for underlay multiple-input single-output (MISO) CRNs. Distinct from prior approaches relying on static RIS architectures, our proposed RIS dynamically alternates between passive and active operation modes in real time according to harvested energy availability. We also model our scenario under practical hardware impairments and cascaded fading channels. We formulate and solve a joint transmit beamforming and RIS phase optimization problem via the soft actor-critic (SAC) deep reinforcement learning (DRL) method, leveraging its robustness in continuous and highly dynamic environments. Notably, we conduct the first systematic study of reward poisoning attacks on DRL agents in RIS-enhanced CRNs, and propose a lightweight, real-time defense based on reward clipping and statistical anomaly filtering. Numerical results demonstrate that the SAC-based approach consistently outperforms established DRL baselines, and that the dynamic hybrid RIS strikes a superior trade-off between throughput and energy consumption compared to fully passive and fully active alternatives. We further show the effectiveness of our defense in maintaining SU performance even under adversarial conditions. Our results advance the practical and secure deployment of RIS-assisted CRNs, and highlight crucial design insights for energy-constrained wireless systems. | 10.1109/TNSM.2026.3660728 |
| Awaneesh Kumar Yadav, Madhusanka Liyanage, An Braeken | An Improved and Provably Secure EDHOC Protocol Supporting the Extended Canetti–Krawczyk (eCK) Security Model | 2026 | Early Access | Aerospace and electronic systems Telemetry Central Processing Unit Microcontrollers Microprocessors MIMICs Millimeter wave integrated circuits Monolithic integrated circuits Communication systems Internet of Things EDHOC OSCORE Key agreement Authentication extended Canetti–Krawczyk (eCK) attack model | Transport Layer Security (TLS) is considered to be the most widely used standard security protocol for the Internet of Things (IoT). However, as TLS was originally designed for computer networks, it is not optimal with respect to efficiency. Therefore, a new protocol called Object Security for Constrained RESTful Environments (OSCORE) has been standardized for securing constrained devices. Currently, the Ephemeral Diffie Hellman Over COSE (EDHOC) protocol, which is a key exchange protocol to define a session key used in OSCORE, is also in the process of being standardized. This paper shows that the four authentication modes of the EDHOC protocol are vulnerable in the extended Canetti–Krawczyk (eCK) security model, which is a common security model used in IoT. In addition, resistance to Distributed Denial of Service (DDoS) attacks is also weak. Taking this into account, we propose two new variants of EDHOC. The first variant, EDHOC2, is able to overcome both issues but has a slightly higher cost for communication, computation, storage, and energy consumption. The second variant, EDHOC3, offers additional protection only in the eCK security model and has, on average, similar or even better performance in one authentication mode compared to EDHOC. Additionally, the Real-Or-Random (ROR) logic and the Scyther validation tool are employed to ensure the security of the designed variants. Furthermore, a prototype implementation is conducted to demonstrate the real-time deployment of the designed versions. | 10.1109/TNSM.2026.3690530 |
| Lal Verda Cakir, Mehmet Ali Erturk, Mehmet Ozdem, Berk Canberk | Digital Twin-assisted Handover Scheme for Mobile Networks using Generative AI | 2026 | Early Access | Electromagnetic propagation Propagation constant Radio broadcasting Radio networks Handover Communication systems Avatars Communication switching Data transfer Cellular networks digital twin 5G/6G handover management generative artificial intelligence | Handover management in mobile networks is challenged by high latency and reduced reliability in dense deployments and under user mobility. Existing schemes improve handover initiation by optimising the candidate handover at decision time. However, the decision is applied only after a non-negligible control-plane signalling delay, by which time it may have become invalid or may degrade performance. To address this, we propose a Digital Twin (DT)-assisted handover scheme that performs predictive execution-time validation prior to the preparation of the Next Generation (NG)-based handover. To this end, the DT-What-If Generator (DT-WIG) is used to emulate short-horizon future network states under uncertainty. The DT-WIG is a spatiotemporal graph generative model that uses variational latent sampling to generate counterfactual post-handover trajectories for the candidate handover decision. The Access and Mobility Management Function (AMF) then estimates the failure and QoS risks associated with the candidate handover and approves or rejects it via standard-compliant signalling. With this, we form a policy-agnostic mechanism that runs on top of the underlying handover policy. We evaluate performance using ns-3/5G-LENA trace generation and replay-based policy analysis, with OpenAirInterface-based signalling evaluation. The results show that the proposed method reduces the handover failure rate and handover interruption time while improving latency, jitter, throughput, and packet loss. | 10.1109/TNSM.2026.3690572 |
| Willie Kouam, Yezekael Hayel, Gabriel Deugoué, Charles Kamhoua | Decoy Allocation against Lateral Movement - A Network Centrality Game Approach | 2026 | Early Access | Circuits Feedback Network topology Reconnaissance Communication systems Radio access networks Regional area networks Routing Military communication Computer networks Lateral movement Cyber deception Centrality measure One-sided POSGs | Targeted incidents increasingly threaten internal security, with a rise in data breaches and service disruptions. Attackers now employ sophisticated approaches, infiltrating systems for ongoing access to critical information through lateral movement. Detecting and defending against such intrusions is challenging because of the specific vulnerabilities that are commonly exploited. Consequently, various deception techniques have emerged over time, aiming to divert attackers’ attention. In our scenario, attackers use lateral movement within the network to reach a specific target, while defenders strategically deploy decoys to counteract them. Such a dynamic and adversarial interaction is modeled as a one-sided partially observable stochastic game (OS-POSG). Several solutions have been proposed to address this challenge, particularly when the attacker possesses complete knowledge of the network’s topology through the reconnaissance stage. Meanwhile, recent years have seen the development of approaches to obscure the attackers’ reconnaissance phase, compelling them to operate without a full comprehension of the network’s structure. We therefore introduce an innovative methodology involving intelligent players who take into account the importance of network devices, assessed by centrality measures, during the decision-making process. This approach aims to improve the effectiveness of the defender’s strategy to counter the attacker’s lateral movement in the network, in the context of step-by-step optimization. | 10.1109/TNSM.2026.3689344 |
| Xiuqin Xu, Mingwei Lin, Zeshui Xu, Xin Luo | A Sampling-Neighborhood-Regularized Latent Factorization of Tensor for Dynamic QoS Estimation | 2026 | Vol. 23, Issue | Quality of service Tensors Estimation Accuracy Vectors Data models Linear programming Analytical models Adaptation models Web services Dynamic latent factor analysis of tensor high-dimensional and incomplete (HDI) data sampling-neighborhood regularization learning temporal pattern industrial application | Since similar users frequently exhibit similar Quality of Service (QoS) when accessing similar services, effectively capturing neighborhood information hidden in QoS data becomes critical for latent factorization of tensor (LFT)-based QoS estimators. Current LFT models either calculate the complete set of neighborhoods or do not consider neighborhoods at all, resulting in a rapid rise in model complexity and poor estimation accuracy. Moreover, not every neighbor in the neighborhood set is beneficial to the user/service entity. To address these limitations, this study proposes a sampling-neighborhood-regularized latent factorization of tensor (SNLFT) model with three key ideas: 1) extracting primal latent factors (LFs) that express the related entities based on high-dimensional and incomplete QoS data; 2) constructing the sampling-neighborhood set, acquired using Gibbs sampling to reflect the similarities between the primal LF vectors of entities over time; and 3) developing a sampling-neighborhood-regularized LFT model, where all the sampling neighborhoods of entities and the $L_{2}$-norm of desirable LFs are employed to regularize the objective function. Extensive experiments on eight dynamic QoS datasets demonstrate that SNLFT significantly outperforms state-of-the-art models in both estimation accuracy and computational efficiency. | 10.1109/TNSM.2025.3644937 |
| Jan Luxemburk, Karel Hynek, Richard Plný, Tomáš Čejka | Universal Embedding Function for Traffic Classification via QUIC Domain Recognition Pretraining: A Transfer Learning Success | 2026 | Vol. 23, Issue | Transfer learning Training Cryptography Adaptation models Feature extraction Standards Payloads Protocols Pipelines Data augmentation Traffic classification transfer learning deep learning encrypted traffic QUIC | Encrypted traffic classification (TC) methods must adapt to new protocols and extensions as well as to advancements in other machine learning fields. In this paper, we adopt a transfer learning setup best known from computer vision. We first pretrain an embedding model on a complex task with a large number of classes and then transfer it to seven established TC datasets. The pretraining task is recognition of SNI domains in encrypted QUIC traffic, which in itself is a challenge for network monitoring due to the growing adoption of TLS Encrypted Client Hello. Our training pipeline—featuring a disjoint class setup, ArcFace loss function, and a modern deep learning architecture—aims to produce universal embeddings applicable across tasks. A transfer method based on model fine-tuning surpassed SOTA performance on nine of ten downstream TC tasks, with an average improvement of 6.4%. Furthermore, a comparison with a baseline method using raw packet sequences revealed unexpected findings with potential implications for the broader TC field. We released the model architecture, trained weights, and codebase for transfer learning experiments. | 10.1109/TNSM.2025.3642984 |
| Xingqi Wu, Junaid Farooq, Juntao Chen | Multi-Agent Resource Orchestration Based on D3QN for Network Slicing in 5G Edge-Cloud Networks | 2026 | Vol. 23, Issue | Resource management Dynamic scheduling Network slicing Ultra reliable low latency communication Costs Training Topology Computational modeling Servers Real-time systems 5G network slicing resource orchestration MARL micro-services | Optimizing resource orchestration in network slicing is essential for the performance of diverse applications in 5G edge-cloud networks. This paper introduces a novel approach utilizing multi-agent reinforcement learning (MARL) with a dueling double deep Q-network (D3QN) to efficiently manage dynamic resource provisioning to the different traffic flows. We model a network slicing environment with applications generating stochastic resource demands, simulating real-world virtual network patterns over physical infrastructure. Our MARL-based scheme adapts to the varying needs of traffic flows, balancing compute and memory resource allocation under limited information. Comparative analysis demonstrates the superiority of our approach over traditional static methods, particularly for ultra-reliable low-latency communication (URLLC) traffic flows, by minimizing latency and enhancing resource efficiency. The effectiveness of the proposed framework is validated through extensive simulations, which demonstrate up to 45% higher average utility for URLLC traffic flows and 18% improvement in overall resource efficiency compared with baseline strategies. These results confirm that the framework can simultaneously ensure stringent service requirements and enhance system-wide performance in Next-Generation networks. | 10.1109/TNSM.2025.3643340 |
| Tran Viet Khoa, Mohammad Abu Alsheikh, Yibeltal F. Alem, Dinh Thai Hoang | Balancing Security and Accuracy: A Novel Federated Learning Approach for Cyberattack Detection in Blockchain Networks | 2026 | Vol. 23, Issue | Blockchains Noise Accuracy Charge coupled devices Federated learning Cyberattack Privacy Differential privacy Deep learning Servers Privacy-preserving federated learning Gaussian noise Laplace noise MA noise | This paper presents a novel Collaborative Cyberattack Detection (CCD) system aimed at enhancing the security of blockchain-based data-sharing networks by addressing the complex challenges associated with noise addition in federated learning models. Leveraging the theoretical principles of differential privacy, our approach strategically integrates noise into trained sub-models before reconstructing the global model through transmission. We systematically explore the effects of various noise types, i.e., Gaussian, Laplace, and Moment Accountant, on key performance metrics, including attack detection accuracy, deep learning model convergence time, and the overall runtime of global model generation. Our findings reveal the intricate trade-offs between ensuring data privacy and maintaining system performance, offering valuable insights into optimizing these parameters for diverse CCD environments. Through extensive simulations, we provide actionable recommendations for achieving an optimal balance between data protection and system efficiency, contributing to the advancement of secure and reliable blockchain networks. | 10.1109/TNSM.2025.3644415 |
| Koki Koshikawa, Yue Su, Jong-Deok Kim, Won-Joo Hwang, Zhetao Li, Kien Nguyen, Hiroo Sekiya | Impacts of Overlay Topologies and Peer Selection on Latencies in IoT Blockchain | 2026 | Vol. 23, Issue | Peer-to-peer computing Blockchains Topology Internet of Things Network topology Delays Security Reliability Overlay networks Propagation delay Ethereum overlay P2P proof-of-authority peer selection latency | The integration of blockchain with the Internet of Things (IoT) offers strong guarantees of data integrity and decentralized trust; however, latency remains a critical barrier to scalability. Under Ethereum’s default random peering, IoT deployments exhibit propagation delays ranging from 500 ms to 1000 ms, causing stale blocks and inconsistent state updates. This paper investigates the impact of peer-to-peer (P2P) overlay topologies on latency performance and introduces a lightweight peer-selection algorithm, Dual Perigee, designed to jointly optimize transaction-oriented latency (TOL) and block-oriented latency (BOL). We first develop a method to construct canonical overlay configurations (i.e., Erdős-Rényi, Barabási-Albert, and Random-Regular) and evaluate their influence on latency in a controlled IoT-blockchain environment. Experimental results reveal that static topologies fail to consistently minimize delay due to redundant message amplification and queuing effects. To address this, Dual Perigee extends the state-of-the-art Perigee algorithm by incorporating block propagation metrics into its scoring function while maintaining low computational overhead. In a 50-node Proof-of-Authority network emulated on Mininet-Wifi, Dual Perigee reduces TOL by up to 54.7% and BOL by 48.5% compared to Ethereum’s default peering, and outperforms Perigee by up to 23.4% in BOL. These findings demonstrate that latency-aware peer selection is essential for achieving responsive and scalable IoT-blockchain systems under dynamic network conditions. | 10.1109/TNSM.2025.3645139 |
| Yali Yuan, Ruolin Ma, Jian Ge, Guang Cheng | Robust and Invisible Flow Watermarking With Invertible Neural Network for Traffic Tracking | 2026 | Vol. 23, Issue | Watermarking Decoding Feature extraction Correlation Robustness Encoding Delays Encryption Data mining Accuracy Flow watermarking inter-packet delay INN invisibility robustness | This paper introduces IFW, an innovative blind flow watermarking framework based on an Invertible Neural Network (INN), which aims to solve the problem of suboptimal encoder-decoder coupling in existing end-to-end watermarking architectures. The framework tightly couples the encoder and decoder to achieve highly consistent feature mapping using the same parameters, thus effectively avoiding redundant feature embedding. In addition, this paper adopts the INN to implement watermarking, which supports forward encoding and backward decoding, so that watermark extraction depends entirely on the embedding algorithm without the need for the original network flow. This feature enables both the embedding and the blind extraction of watermarks simultaneously. Extensive experiments demonstrate that the proposed IFW method achieves a watermark extraction accuracy exceeding 96.6% and maintains a stable K-S test p-value above 0.85 in both simulated and real-world Tor traffic environments. These results indicate a clear advantage over mainstream baselines, highlighting the method’s ability to jointly ensure robustness and invisibility, as well as its strong potential for real-world deployment. | 10.1109/TNSM.2025.3645079 |
| Zilong Jin, Xin Zhang, Jian Su, Lejun Zhang, Jian Shen | Subgraph-Driven Lightweight Federated Learning for Spatiotemporal Cellular Traffic Prediction | 2026 | Vol. 23, Issue | Predictive models Spatiotemporal phenomena Data models Computational modeling Adaptation models Training Federated learning Correlation Accuracy Costs Cellular traffic prediction federated learning graph neural network | The rapid expansion of mobile communication networks has led to a surge in cellular traffic, highlighting the need for advanced prediction models to improve network performance. Federated learning (FL) offers a promising solution by enabling distributed model training across multiple nodes, aligning well with the decentralized nature of modern networks. However, applying FL to spatiotemporal cellular traffic prediction is challenging due to the substantial communication overhead in distributed learning. To address this, we propose LFedSG, a lightweight FL framework incorporating subgraph partitioning for spatiotemporal traffic prediction. LFedSG supports collaborative training while preserving inter-client dependencies critical for accurate prediction. Communication efficiency is achieved by focusing on essential model parameters, while subgraph partitioning and spatiotemporal graph convolutional networks (STGCN) enhance spatial and temporal correlation modeling. An adaptive transmission weight pruning strategy further reduces communication and computation costs. Extensive experiments on the Telecom Italia and Pems07 datasets demonstrate that LFedSG achieves higher predictive accuracy than traditional methods, with significant reductions in communication overhead and training time, validating its effectiveness and scalability for large-scale mobile network environments. | 10.1109/TNSM.2025.3645253 |