Last updated: 2026-05-14 05:01 UTC
All documents
Number of pages: 163
| Author(s) | Title | Year | Publication | Keywords | Abstract | DOI |
|---|---|---|---|---|---|---|
| Shaimaa Alkaabi, Mark A Gregory, Shuo Li | A Stateless Orchestrated Handover Protocol for Multi-Access Edge Computing | 2026 | Early Access | | In Multi-access Edge Computing (MEC) environments, session continuity during user mobility remains a pressing challenge due to decentralized infrastructure and high-throughput, latency-sensitive applications. Existing mobility protocols often rely on stateful mechanisms or centralized control, leading to increased signaling overhead, limited scalability, and vulnerability to performance degradation in dynamic networks. This paper introduces the Server Search and Select Algorithm Protocol (SSSAP), a lightweight, UDP-based handover protocol tailored for MEC deployments. The protocol is an extension of our previous work on a handover Server Search and Selection Algorithm (SSSA). SSSAP enables seamless session redirection through a three-phase signaling scheme (pre-handover, handover initiation, and handover termination), preserving service continuity without coupling session state to transport layers. The protocol’s design features extensible headers for multi-metric evaluation and future security adaptation while maintaining minimal dependency on intermediary control nodes. Through extensive simulation and testing, we have validated the efficiency of SSSAP across user equipment nodes and MEC servers. Results demonstrate high handover success rates, low session setup delays, and balanced server load distribution. SSSAP achieves superior performance in mobility robustness, packet loss mitigation, and integration simplicity. The research outcomes position SSSAP as a scalable and application-agnostic mobility protocol for MEC systems, especially in vehicular and high-mobility scenarios. | 10.1109/TNSM.2026.3692555 |
| Qin Zeng, Dan Qu, Hao Zhang, Yaqi Chen | Neural Collapse-Based Class-Incremental Learning for Encrypted Traffic Classification | 2026 | Early Access | Encrypted traffic classification Class incremental learning Neural collapse | The rapid evolution of internet technologies has intensified network traffic dynamics due to the emergence of novel encryption protocols, posing significant challenges to traffic classification. Incremental learning, which enables continuous adaptation to emerging tasks, has emerged as a promising approach to enhance the sustainability of encrypted traffic classification. However, existing methods fail to address the substantial feature representation disparities across incremental tasks, resulting in suboptimal model adaptability. Inspired by the Neural Collapse (NC) phenomenon, which reveals that deep neural networks’ final-layer features collapse to class-mean vectors forming a Simplex Equiangular Tight Frame (ETF) with classifier weights, thereby constituting an optimal geometric structure for classification tasks, we propose NCIL-ETC, a Neural Collapse-based Incremental Learning framework for Encrypted Traffic Classification. Our approach employs a pretrained Mamba as the feature extraction backbone, leveraging its linear-complexity computational properties to significantly reduce resource overhead. Simultaneously, we introduce a preallocated ETF classifier that establishes an optimal classification structure covering observed classes. Through feature-classifier alignment constraints during incremental learning, our method promotes both new and historical class features to converge toward ETF vertices, thereby preserving globally optimal category relationships. Extensive experimental evaluations on four public benchmarks demonstrate that NCIL-ETC achieves state-of-the-art performance, surpassing baseline methods in both classification accuracy and incremental learning capability. | 10.1109/TNSM.2026.3688767 |
| Songshou Dong, Yanqing Yao, Huaxiong Wang, Yining Liu | LCMS: Efficient Lattice-based Conditional Privacy-preserving Multi-receiver Signcryption Scheme for Internet of Vehicles | 2026 | Early Access | Internet of Vehicles signcryption weak unlinkability certificateless revocable multi-receiver distributed decryption | Internet of Vehicles (IoV) requires robust security and privacy protection mechanisms to enable trusted traffic information exchange, while also requiring low communication and computing overhead to meet the real-time requirements of IoV. Existing signcryption schemes suffer from quantum vulnerability, inadequate unlinkability/vehicle anonymity, absence of revocability, poor scalability, inadequate management of malicious entities, and high communication and computational overhead. Therefore, we propose an efficient lattice-based conditional privacy-preserving multi-receiver signcryption scheme (LCMS) that systematically addresses these gaps through three core innovations: 1) Privacy preservation is achieved via a pseudonym mechanism integrated with certificateless key generation, which ensures vehicle anonymity and weak unlinkability while preventing a malicious key generation center and key escrow; 2) Malicious entities are managed through dynamic revocability and distributed decryption among roadside units, preventing unilateral message access; and 3) Post-quantum efficiency is achieved by leveraging the Learning With Rounding problem to eliminate expensive Gaussian sampling, combined with ciphertext packing techniques. This reduces time overhead, the size of signcryptexts, and communication overhead, while lowering the overall storage overhead of the scheme through the MP12 trapdoor. Security proofs show LCMS achieves Existential Unforgeability under Adaptive Identity Chosen-Message Attack and Indistinguishability under Adaptive Identity Chosen-Ciphertext Attack in the Random Oracle Model, with rigorously validated resistance against multiple IoV-specific attacks. Experimental results via a SageMath implementation demonstrate that our scheme exhibits a smaller signcryptext size and lower signcryption/unsigncryption time compared to existing random lattice-based signcryption schemes. Scalability tests with 300 vehicles and 300 roadside units (RSUs) were completed within 230 seconds. Communication overhead analysis confirms practical feasibility for the IEEE 802.11p vehicle communication protocol, and RSU serving capability evaluation under realistic vehicle density (100–200/km²) and speed (40–60 km/h) further validates system practicality. LCMS provides a quantum-resistant, privacy-preserving, and efficient solution for production IoV. | 10.1109/TNSM.2026.3688507 |
| Lal Verda Cakir, Mehmet Ali Erturk, Mehmet Ozdem, Berk Canberk | Digital Twin-assisted Handover Scheme for Mobile Networks using Generative AI | 2026 | Early Access | Handover Cellular networks digital twin 5G/6G handover management generative artificial intelligence | Handover management in mobile networks is challenged by high latency and reduced reliability in dense deployments and under user mobility. Existing schemes improve handover initiation by optimising the candidate handover at decision time. However, the decision is applied only after a non-negligible control-plane signalling delay, by which point it may have become invalid or may degrade performance. To address this, we propose a Digital Twin (DT)-assisted handover scheme that performs predictive execution-time validation prior to the preparation of the Next Generation (NG)-based handover. To this end, the DT-What-If Generator (DT-WIG) is used to emulate short-horizon future network states under uncertainty. The DT-WIG is a spatiotemporal graph generative model that uses variational latent sampling to generate counterfactual post-handover trajectories for the candidate handover decision. The Access and Mobility Management Function (AMF) then estimates the failure and QoS risks associated with the candidate handover and approves/rejects it via standard-compliant signalling. With this, we form a policy-agnostic mechanism that runs on top of the underlying handover policy. We evaluate performance using ns-3/5G-LENA trace generation and replay-based policy analysis, with OpenAirInterface-based signalling evaluation. The results show that the proposed method reduces the handover failure rate and handover interruption time while improving latency, jitter, throughput, and packet loss. | 10.1109/TNSM.2026.3690572 |
| Awaneesh Kumar Yadav, Madhusanka Liyanage, An Braeken | An Improved and Provably Secure EDHOC Protocol Supporting the Extended Canetti–Krawczyk (eCK) Security Model | 2026 | Early Access | Internet of Things EDHOC OSCORE Key agreement Authentication extended Canetti–Krawczyk (eCK) attack model | Transport Layer Security (TLS) is considered to be the most used standard security protocol for the Internet of Things (IoT). However, as TLS was originally designed for computer networks, it is not optimal with respect to efficiency. Therefore, a new protocol called Object Security for Constrained RESTful Environments (OSCORE) has been standardized for securing constrained devices. Currently, the Ephemeral Diffie-Hellman Over COSE (EDHOC) protocol, a key exchange protocol that defines the session key used in OSCORE, is also in the process of being standardized. This paper shows that the four authentication modes of the EDHOC protocol are vulnerable in the extended Canetti–Krawczyk (eCK) security model, which is a common security model used in IoT. In addition, resistance to Distributed Denial of Service (DDoS) attacks is weak. Taking this into account, we propose two new variants of EDHOC. The first variant, EDHOC2, is able to overcome both issues but has a slightly higher cost for communication, computation, storage, and energy consumption. The second variant, EDHOC3, offers additional protection only in the eCK security model and has, on average, performance similar to EDHOC, and even better performance in one authentication mode. Additionally, the Real-Or-Random (ROR) logic and the Scyther validation tool are employed to ensure the security of the designed variants. Furthermore, a prototype implementation is conducted to demonstrate the real-time deployment of the designed versions. | 10.1109/TNSM.2026.3690530 |
| Jiale Zhu, Xiaoyao Zheng, Shukai Ye, Ming Zheng, Liping Sun, Liangmin Guo, Qingying Yu, Yonglong Luo | Federated Recommendation Model Based on Personalized Attention and Privacy-Preserving Dynamic Graph | 2026 | Early Access | Modeling Federated learning Privacy Recommender systems Training Servers Algorithms Graph Neural Networks Personalized Recommendation Privacy Protection | Graph Neural Networks (GNNs) have been widely adopted in recommendation systems. When integrated into a federated learning framework, GNNs can enhance the model’s expressive capability. However, challenges arise in personalized representation and graph expansion due to the heterogeneity and locality of user data in federated recommendation systems. To address these challenges, we propose a federated recommendation model based on personalized attention and privacy-preserving dynamic graphs (PADG). The method first matches neighbor users for each selected client. Subsequently, it counts the interaction frequencies of items for both local and neighbor users to construct personalized weights, which capture the unique characteristics of different users. Additionally, we design a method for constructing privacy-preserving dynamic graphs. In each round of federated training, the selected client adds pseudo-interaction items to its own interaction subgraph, perturbing the real interactions. After completing local training, the noisy interaction subgraph is incorporated into the global graph to capture higher-order connectivity information among users while safeguarding their interaction privacy. We conduct extensive experiments on three benchmark datasets, and the results demonstrate that the proposed PADG method achieves superior performance while effectively protecting privacy. | 10.1109/TNSM.2026.3691659 |
| Atri Mukhopadhyay, Dinesh Korukonda, Goutam Das | Design of Passive Optical Network Based O-RAN X-haul: A Systematic Approach | 2026 | Early Access | Timing Passive optical networks Optimization Delays Optical network units Ethernet Jitter Loading Copper Synchronization C-RAN Delay Jitter QCQP O-RAN PON | The development of high data rate communication technologies has resulted in cell densification, which in turn has led to the development of centralized radio access networks (C-RANs) followed by open radio access networks (O-RANs). The O-RAN segregates the base station into three logical entities: the central unit (CU), the distributed unit (DU), and the radio unit (RU). The CU, DU, and RU require low-latency, low-jitter, high-data-rate connections for seamless operation, collectively known as the X-haul. A passive optical network (PON) is a potential solution for X-haul design. However, conventional PON uplink protocols are not inherently suitable for X-haul requirements. The packetization procedure of PON introduces jitter to the X-haul bit stream. Further, the delay requirements of the X-haul limit the number of sources that can be connected to it. Advanced features like coordinated multipoint require synchronization among the different X-haul bit streams as well. Therefore, in this paper, we develop an optimal uplink system that allows PON to be used as an X-haul connection technology. The proposal maximizes the throughput of the PON while conforming to the delay and synchronization requirements. Moreover, the proposal nullifies the jitter introduced by the PON scheduler. We have performed extensive simulations to verify our results. | 10.1109/TNSM.2026.3692242 |
| Jing Zhang, Chao Luo, Rui Shao | MTG-GAN: A Masked Temporal Graph Generative Adversarial Network for Cross-Domain System Log Anomaly Detection | 2026 | Early Access | Anomaly detection Adaptation models Generative adversarial networks Feature extraction Data models Load modeling Accuracy Robustness Contrastive learning Chaos Log Anomaly Detection Generative Adversarial Networks (GANs) Temporal Data Analysis | Anomaly detection of system logs is crucial for the service management of large-scale information systems. Nowadays, log anomaly detection faces two main challenges: 1) capturing evolving temporal dependencies between log events to adaptively tackle emerging anomaly patterns, and 2) maintaining high detection capability across varying data distributions. Existing methods rely heavily on domain-specific data features, making it challenging to handle the heterogeneity and temporal dynamics of log data. This limitation restricts the deployment of anomaly detection systems in practical environments. In this article, a novel framework, the Masked Temporal Graph Generative Adversarial Network (MTG-GAN), is proposed for both conventional and cross-domain log anomaly detection. The model enhances the detection capability for emerging abnormal patterns in system log data by introducing an adaptive masking mechanism that combines generative adversarial networks with graph contrastive learning. Additionally, MTG-GAN reduces dependency on specific data distributions and improves model generalization by using diffused graph adjacency information derived from the temporal relevance of event sequences, which helps improve cross-domain detection performance. Experimental results demonstrate that MTG-GAN outperforms existing methods on multiple real-world datasets in both conventional and cross-domain log anomaly detection. | 10.1109/TNSM.2026.3654642 |
| Yuxiang Wang, Jiao Zhang, Leixin Cai, Tao Huang | Mercury: Multipath Spraying for Joint Congestion and Reordering Control in RDMA | 2026 | Early Access | | Due to the low entropy traffic characteristics of LLM (Large Language Model) training, existing load balancing mechanisms such as Equal-Cost Multi-Path (ECMP) fail to fully utilize the redundant bandwidth between computing nodes in RDMA over Converged Ethernet (RoCE). The packet spraying mechanism has become a typical solution to the load balancing problem in RoCE networks. However, it has a negative effect on congestion control mechanisms and suffers from severe out-of-order problems. In this paper, we propose Mercury, a host-driven spraying scheme that synergizes congestion feedback and reordering control. Mercury selects paths by leveraging ECN, RTT, and reordering metrics, and adjusts rates via a multi-metric window. It also employs receiver-side buffers with priority-based dropping to mitigate out-of-order penalties. Evaluations in ns-3 under AllReduce and All-to-All traffic show that Mercury consistently outperforms the ECMP-based baselines, including DCQCN, TIMELY, HPCC, SWIFT, and BOLT, with the largest reduction in Max FCT reaching 63%. Under multi-path load balancing, Mercury delivers the lowest Max FCT for large messages in AllReduce and for most message sizes in All-to-All. It outperforms STRACK and MP-RDMA by up to 28% and 35% in AllReduce, and by up to 25% and 30% in All-to-All. | 10.1109/TNSM.2026.3692452 |
| Md Facklasur Rahaman, Makhduma F. Saiyed, Irfan Al-Anbagi, Ramakrishna Gokaraju | A Domain-informed Hierarchical Federated Learning Framework for DDoS Detection in WSN for Critical Infrastructure | 2026 | Early Access | | The deployment of Wireless Sensor Networks (WSN) in critical infrastructure, such as Small Modular Reactors (SMRs), faces cybersecurity threats like Distributed Denial of Service (DDoS) attacks that can overload these networks and disrupt monitoring and control functions. Current DDoS detection systems often suffer from high false positive rates, neglect domain-specific operational constraints, and rely on centralized architectures that pose privacy risks, making them less suitable for distributed Internet of Things (IoT) environments. To address these issues, we propose a novel Domain-informed Hierarchical Federated Learning (DHFL) framework for WSN used in SMR monitoring and control applications. Our framework features a dual-branch bidirectional Long Short-Term Memory (LSTM) architecture comprising two parallel processing branches with network-specific constraints, facilitating precise detection of DDoS attacks. It includes differentiable penalty functions to enforce domain-aligned behaviour and employs adaptive trust scoring to evaluate the reliability of individual nodes. These elements operate within a hierarchical Federated Learning (FL) structure organized into three tiers: sensor nodes, local aggregators, and a global coordinator, allowing collaborative training that preserves privacy. Unlike earlier approaches, our method not only maintains privacy by ensuring that raw sensor data never leaves the local nodes and only model updates are shared, but also considers the operational importance and trustworthiness of each node through tier-weighted aggregation. Tested on the CICIoT2023 dataset, our system achieved 93.4% accuracy, 94.5% precision, 97.5% recall, 95.5% F1-score, and 98.9% AUC, surpassing state-of-the-art FL methods in both performance and efficiency. Furthermore, it converged in fewer communication rounds (30–50) with reduced communication costs (from 45 MB to 30 MB per round). Our framework can differentiate between normal reactor transients and actual attacks, making it suitable for mission-critical SMR cybersecurity. | 10.1109/TNSM.2026.3693112 |
| Deemah H. Tashman, Soumaya Cherkaoui | Trustworthy AI-Driven Dynamic Hybrid RIS: Joint Optimization and Reward Poisoning-Resilient Control in Cognitive MISO Networks | 2026 | Early Access | Reconfigurable intelligent surfaces Reliability Optimization Security MISO Array signal processing Beamforming cascaded channels cognitive radio networks deep reinforcement learning dynamic hybrid reconfigurable intelligent surfaces energy harvesting poisoning attacks | Cognitive radio networks (CRNs) are a key mechanism for alleviating spectrum scarcity by enabling secondary users (SUs) to opportunistically access licensed frequency bands without harmful interference to primary users (PUs). To address unreliable direct SU links and energy constraints common in next-generation wireless networks, this work introduces an adaptive, energy-aware hybrid reconfigurable intelligent surface (RIS) for underlay multiple-input single-output (MISO) CRNs. Distinct from prior approaches relying on static RIS architectures, our proposed RIS dynamically alternates between passive and active operation modes in real time according to harvested energy availability. We also model our scenario under practical hardware impairments and cascaded fading channels. We formulate and solve a joint transmit beamforming and RIS phase optimization problem via the soft actor-critic (SAC) deep reinforcement learning (DRL) method, leveraging its robustness in continuous and highly dynamic environments. Notably, we conduct the first systematic study of reward poisoning attacks on DRL agents in RIS-enhanced CRNs, and propose a lightweight, real-time defense based on reward clipping and statistical anomaly filtering. Numerical results demonstrate that the SAC-based approach consistently outperforms established DRL baselines, and that the dynamic hybrid RIS strikes a superior trade-off between throughput and energy consumption compared to fully passive and fully active alternatives. We further show the effectiveness of our defense in maintaining SU performance even under adversarial conditions. Our results advance the practical and secure deployment of RIS-assisted CRNs, and highlight crucial design insights for energy-constrained wireless systems. | 10.1109/TNSM.2026.3660728 |
| Minh-Thuyen Thi, Mohan Gurusamy | Multi-dimensional Cross-granularity Open-set Network Intrusion Detection | 2026 | Early Access | | Network intrusion detection systems (NIDSs) face critical challenges from continuously evolving cyber-attacks. Traditional machine learning methods, while requiring extensive labeled training data, still often fail against unknown and out-of-distribution (OOD) attacks. Furthermore, new sophisticated adversaries are exploiting the detection blind spots inherent in traditional feature representation approaches that do not provide adequate comprehensive traffic analysis. In this paper, we propose MDCG-IDS, an NIDS framework that introduces multi-dimensional cross-granularity (MDCG) feature representation for open-set detection, in which network traffic is analyzed thoroughly across three complementary dimensions (traffic statistics, temporal, spatial), each at multiple granularity levels. These dimensions and granularities jointly capture the structures of sophisticated attacks that may be invisible from single analytical perspectives. We design a tensor structure that provides a unified encoding for the MDCG features while supporting the use of optimal transport theory to measure the distance between benign traffic and known or unknown attacks. MDCG-IDS uses a semi-supervised learning model that is trained exclusively on benign traffic and validated on a small set of labeled data, significantly reducing the effort of data labeling. In experiments on various datasets, MDCG-IDS achieves AUC-ROC scores of more than 0.948, exceeding the best competing state-of-the-art methods by up to 7%. Regarding the amount of labeled validation data, MDCG-IDS obtains an AUC-ROC score of over 0.94 with only 3% of the validation samples, outperforming the baseline models. | 10.1109/TNSM.2026.3693141 |
| Dinghao Zeng, Fagui Liu, Runbin Chen, Jingwei Tan, Dishi Xu, Qingbo Wu, C.L. Philip Chen | CoreScaler: A Resource-Efficient Hybrid Scaling Framework for Dynamic Workloads in Cloud | 2026 | Early Access | | Containerized microservices face significant challenges in balancing service quality and resource efficiency under dynamic workloads. Existing approaches suffer from horizontal scaling’s cold start latency, vertical scaling’s resource ceilings, and hybrid methods’ limited adaptability. We present CoreScaler, a resource-efficient hybrid scaling framework based on analysis of CPU usage patterns revealing substantial consumption differences between working-mode and waiting-mode instances. This insight drives our dual-mode instance management model that distinguishes between working instances actively handling requests and waiting instances maintaining hot standby with minimal resource allocation. CoreScaler employs a master-subordinate distributed architecture where the master node performs capacity planning using multi-confidence interval predictions and contextual multi-armed bandit optimization, while subordinate nodes execute mode-aware CPU quota adjustments. Comprehensive evaluation on a Kubernetes cluster with a typical microservice system under four representative production workloads demonstrates that CoreScaler maintains SLO compliance while reducing CPU and memory allocation by 22.53% and 30.83% respectively compared to state-of-the-art solutions. The framework achieves substantially higher resource utilization than single-dimension scaling approaches, validating the effectiveness of coordinated hybrid scaling for dynamic cloud environments. | 10.1109/TNSM.2026.3692955 |
| Arad Kotzer, Tom Azoulay, Yoad Abels, Aviv Yaish, Ori Rottenstreich | SoK: DeFi Lending and Yield Aggregation Protocol Taxonomy, Empirical Measurements, and Security Challenges | 2026 | Early Access | Protocols Smart contracts Proof of stake Proof of Work Blockchain Decentralized Finance (DeFi) Lending Yield Aggregation | Decentralized Finance (DeFi) lending protocols implement programmable credit markets without intermediaries. This paper systematizes the DeFi lending ecosystem, spanning collateralized lending (including over- and under-collateralized designs, and zero-liquidation loans), uncollateralized primitives (e.g., flashloans), and yield aggregation protocols which allocate capital across underlying lending platforms. Beyond a taxonomy of mechanisms and a comparison of protocols, we provide empirical on-chain measurements of lending activity and user behavior, using Compound V2 and AAVE V2 as case studies, and connect empirical observations to protocol design choices (e.g., interest-rate models and liquidation incentives). We then characterize vulnerabilities that arise due to notable designs, focusing on interest-rate setting mechanisms and time-measurement approaches. Finally, we outline open questions at the intersection of mechanism design, empirical measurement, and security for future research. | 10.1109/TNSM.2026.3682174 |
| Abdeltif Azzizi, Mohamad Al Adraa, Chadi Assi, Michael Y. Frankel, Vladimir Pelekhaty | Experimental Topological Analysis in Next-Generation Data Center Networks: STRAT and Clos Topologies | 2026 | Early Access | Data Center Topologies Clos Topology STRAT Topology Scalability Challenges Network Architecture Performance Evaluation | This paper presents an experimental and simulation-based evaluation of two data center network (DCN) topologies: the widely adopted hierarchical Clos architecture and STRAT, a flat, expander-based topology designed around passive optical interconnects. While Clos offers proven scalability and performance, it incurs hardware complexity and suffers from congestion in oversubscribed scenarios. STRAT eliminates aggregation and spine layers entirely, using only Top-of-Rack (ToR) switches interconnected via static optical patch panels, to reduce cost, simplify deployment, and enhance path diversity. Our goal is to assess these topologies based on their inherent architectural properties (throughput, congestion resilience, scalability, and cost) without relying on congestion control protocols or centralized traffic engineering. To this end, we adopt simple forwarding schemes based purely on local information: ECMP for Clos, and ECMP with Dynamic Group Multipath (DGM) for STRAT. We evaluate both topologies on a physical testbed built from commercial Ethernet switches and further validate scalability through packet-level simulations of networks with up to 256 switches and 1,024 hosts using OMNeT++. We also introduce DEALER, a lightweight routing algorithm tailored to STRAT’s topology, and evaluate its effectiveness in dynamic conditions. Our results show that STRAT achieves up to 43% higher throughput and requires approximately 40% fewer switches than a comparable Clos topology. These gains are further supported by Load Area Under Curve (LAUC) analysis and congestion hotspot visualizations. Overall, our study highlights STRAT as a compelling and practical alternative to conventional DCN architectures, offering deployable scalability, improved performance under load, and reduced infrastructure cost. | 10.1109/TNSM.2026.3685175 |
| Ke Gu, Jiaqi Lei, Jingjing Tan, Xiong Li | A Verifiable Federated Learning Scheme With Privacy-Preserving in MCS | 2026 | Vol. 23, Issue | Federated learning Sensors Servers Security Training Protocols Privacy Homomorphic encryption Computational modeling Mobile computing Mobile crowd sensing verifiable federated learning privacy-preserving sampling verification | The popularity of edge smart devices and the explosive growth of generated data have driven the development of mobile crowd sensing (MCS). Also, federated learning (FL), as a new paradigm of privacy-preserving distributed machine learning, integrates with MCS to offer a novel approach for processing large-scale edge device data. However, it also brings about many security risks. In this paper, we propose a verifiable federated learning scheme with privacy preservation for mobile crowd sensing. In our federated learning scheme, a double-layer random mask partition method combined with homomorphic encryption is constructed to protect the local gradients and enhance system security (strong anti-collusion ability) based on the multi-cluster structure of federated learning. Also, a sampling verification mechanism is proposed to allow the mobile sensing clients to quickly and efficiently verify the correctness of their received gradient aggregation results. Further, a dropout handling mechanism is constructed to improve the robustness of mobile crowd sensing-based federated learning. Related experimental results demonstrate that our verifiable federated learning scheme is effective and efficient in mobile crowd sensing environments. | 10.1109/TNSM.2025.3627581 |
| Zhengge Yi, Tengyao Li, Meng Zhang, Xiaoyun Yuan, Shaoyong Du, Xiangyang Luo | An Efficient Website Fingerprinting for New Websites Emerging Based on Incremental Learning | 2026 | Vol. 23, Issue | Incremental learning Fingerprint recognition Data models Monitoring Accuracy Deep learning Adaptation models Training Telecommunication traffic Feature extraction Website fingerprinting Tor anonymous network traffic analysis incremental learning | Website fingerprinting (WF) attacks leverage encrypted traffic features to identify specific services accessed by users within anonymity networks such as Tor. Although existing WF methods achieve high accuracy on static datasets using deep learning techniques, they struggle in dynamic environments where anonymous websites continually evolve. These methods typically require full retraining on composite datasets, resulting in substantial computational and storage burdens, and are particularly vulnerable to classification bias caused by data imbalance and concept drift. To address these challenges, we propose EIL-WF, a dynamic WF framework based on incremental learning that enables efficient adaptation to newly emerging websites without the need for full retraining. EIL-WF incrementally trains lightweight, independent classifiers for new website classes and integrates them through classifier normalization and energy alignment strategies grounded in energy-based model theory, thereby constructing a unified and robust classification model. Comprehensive experiments on two public Tor traffic datasets demonstrate that EIL-WF outperforms existing incremental learning methods by 6.2%–20.2% in identifying new websites and reduces catastrophic forgetting by 5.4%–20%. Notably, EIL-WF exhibits strong resilience against data imbalance and concept drift, maintaining stable classification performance across evolving distributions. Furthermore, EIL-WF decreases training time during model updates by 2–3 orders of magnitude, demonstrating substantial advantages over conventional full retraining paradigms. | 10.1109/TNSM.2025.3627441 |
| Aruna Malik, Sandeep Verma, Samayveer Singh, Rajeev Kumar, Neeraj Kumar | Greylag Goose-Based Optimized Cluster Routing for IoT-Based Heterogeneous Wireless Sensor Networks | 2026 | Vol. 23, Issue | Wireless sensor networks Energy consumption Clustering algorithms Energy efficiency Routing Internet of Things Heuristic algorithms Sensors Genetic algorithms Throughput Internet of Things energy efficiency greylag goose optimization cluster head network-lifetime | Optimization algorithms are crucial for energy-efficient routing in Internet of Things (IoT)-based Wireless Sensor Networks (WSNs) because they help minimize energy consumption, reduce communication overhead, and improve overall network performance. By optimizing the routing paths and scheduling data transmission, these algorithms can prolong network lifetime by efficiently managing the limited energy resources of sensor nodes, ensuring reliable data delivery while conserving energy. In this work, we present Greylag Goose-based Optimized Clustering (GGOC), which aids in selecting the Cluster Head (CH) using the proposed critical fitness parameters. These parameters include residual energy, sensor sensing range, distance of a candidate node from the sink, number of neighboring nodes, and energy consumption rate. Simulation analysis shows that the proposed approach improves various performance metrics, namely network lifetime, stability period, throughput, the network’s remaining energy, and the number of clusters formed. | 10.1109/TNSM.2025.3627535 |
| Shaocong Feng, Baojiang Cui, Junsong Fu, Meiyi Jiang, Shengjia Chang | Adaptive Target Device Model Identification Attack in 5G Mobile Network | 2026 | Vol. 23, Issue | Object recognition Adaptation models 5G mobile communication Atmospheric modeling Security Communication channels Mobile handsets Radio access networks Feature extraction Baseband 5G device model GUTI EPSFB UE capability | Enhanced system capacity is one of the goals of 5G, which will lead to massive numbers of heterogeneous devices in mobile networks. Mobile devices that lack basic security capabilities may carry chipset, operating system, or software vulnerabilities, and attackers can mount Advanced Persistent Threat (APT) attacks against specific device models. In this paper, we propose an Adaptive Target Device Model Identification Attack (ATDMIA) that provides the prior knowledge needed to exploit baseband vulnerabilities in targeted attacks. We discovered a Globally Unique Temporary Identity (GUTI) reuse vulnerability in Evolved Packet Switching Fallback (EPSFB) and a User Equipment (UE) capability leakage vulnerability. Using silent calls, an attacker can capture and correlate the signaling traces of the target subscriber from the air interface within a specific geographic area. In addition, we design an adaptive identification algorithm that utilizes both invisible and explicit features of UE capability information to efficiently identify device models. We conducted an empirical study using 105 commercial devices, covering network configuration, attack efficiency, time overhead, and open-world evaluation experiments. The experimental results showed that ATDMIA can accurately correlate the EPSFB signaling traces of the target victim and effectively identify the device model or manufacturer. | 10.1109/TNSM.2025.3626804 |
| Zheng Gao, Danfeng Sun, Jianyong Zhao, Huifeng Wu, Jia Wu | Cost-Minimized Data Edge Access Model for Digital Twin Using Cloud-Edge Collaboration | 2026 | Vol. 23, Issue | Data acquisition Cloud computing Digital twins Costs Edge computing Accuracy Optimization Data models Computational modeling Protocols Data edge access digital twin cloud-edge collaboration edge cost minimization | Industrial applications involving digital twins (e.g., behavior simulation) demand highly accurate, low-latency data, making real-time data acquisition critical. To meet performance demands, devices that do not support asynchronous communication need to acquire data at high frequency. In cloud-edge collaboration schemes, edge computing nodes typically acquire the data. However, high-frequency data acquisition and processing impose considerable costs, posing significant challenges for these resource-constrained nodes. To address this problem, we propose a model called Cost-minimized Data Edge Access (CDEA) that can dynamically minimize the edge costs while satisfying long-term performance requirements. CDEA quantifies data performance by decomposing the workflow of industrial systems into basic action units. These units are used to model data acquisition, data processing, data transmission, and cloud computing, and a cost minimization problem is formulated based on these components. To address irregular data changes and the general lack of available statistics on the system's network status, the framework incorporates Lyapunov optimization to transform the long-term guarantee on data performance into a series of instantaneous decision problems. Finally, a heuristic algorithm identifies the optimal data acquisition strategy. To validate CDEA's effectiveness, we implemented it in two representative digital twin scenarios: cathode plate stripping and AGV transportation. Experimental results demonstrate that CDEA reduces both edge costs and cloud resource consumption while still ensuring high data performance. | 10.1109/TNSM.2025.3621548 |