Last updated: 2026-01-10 05:01 UTC
| Author(s) | Title | Year | Publication | Keywords | Abstract | DOI |
|---|---|---|---|---|---|---|
| Wencheng Chen, Jun Wang, Jeng-Shyang Pan, R. Simon Sherratt, Jin Wang | Enhancing the Delegated Proof of Stake Consensus Mechanism for Secure and Efficient Data Storage in the Industrial Internet of Things | 2026 | Early Access | Industrial Internet of Things Security Games Consensus protocol Optimization Analytical models Proof of stake Memory Fifth Industrial Revolution Reliability Industrial Internet of Things (IIoT) Blockchain Data storage Delegated Proof of Stake (DPoS) Consensus mechanism | The rapid advancement of Industry 5.0 has accelerated the adoption of the Industrial Internet of Things (IIoT). However, challenges such as data privacy breaches, malicious attacks, and the absence of trustworthy mechanisms continue to hinder its secure and efficient operation. To overcome these issues, this paper proposes an enhanced blockchain-based data storage framework and systematically improves the Delegated Proof of Stake (DPoS) consensus mechanism. A four-party evolutionary game model is developed, involving agent nodes, voting nodes, malicious nodes, and supervisory nodes, to comprehensively analyze the dynamic effects of key factors—including bribery intensity, malicious costs, supervision, and reputation mechanisms—on system stability. Furthermore, novel incentive and punishment strategies are introduced to foster node collaboration and suppress malicious behaviors. The simulation results show that the improved DPoS mechanism achieves significant enhancements across multiple performance dimensions. Under high-load conditions, the system increases transaction throughput by approximately 5%, reduces consensus latency, and maintains stable operation even as the network scale expands. In adversarial scenarios, the double-spending attack success rate decreases to about 2.6%, indicating strengthened security resilience. In addition, the convergence of strategy evolution is notably accelerated, enabling the system to reach cooperative and stable states more efficiently. These results demonstrate that the proposed mechanism effectively improves the efficiency, security, and dynamic stability of IIoT data storage systems, providing strong support for reliable operation in complex industrial environments. (See the hedged replicator-dynamics sketch after this table.) | 10.1109/TNSM.2025.3650612 |
| Yilu Chen, Ye Wang, Ruonan Li, Yujia Xiao, Lichen Liu, Jinlong Li, Yan Jia, Zhaoquan Gu | TrafficAudio: Audio Representation for Lightweight Encrypted Traffic Classification in IoT | 2026 | Early Access | Feature extraction Cryptography Telecommunication traffic Accuracy Malware Vectors Spatiotemporal phenomena Security Intrusion detection Computational efficiency Encrypted traffic classification Malicious traffic detection Mel-frequency cepstral coefficients Traffic representation | Encrypted traffic classification has become a crucial task for network management and security with the widespread adoption of encrypted protocols across the Internet and the Internet of Things. However, existing methods often rely on discrete representations and complex models, which leads to incomplete feature extraction, limited fine-grained classification accuracy, and high computational costs. To this end, we propose TrafficAudio, a novel encrypted traffic classification method based on audio representation. TrafficAudio comprises three modules: audio representation generation (ARG), audio feature extraction (AFE), and spatiotemporal traffic classification (STC). Specifically, the ARG module first represents raw network traffic as audio to preserve temporal continuity of traffic. Then, the audio is processed by the AFE module to compute low-dimensional Mel-frequency cepstral coefficients (MFCC), encoding both temporal and spectral characteristics. Finally, spatiotemporal features are extracted from MFCC through a parallel architecture of one-dimensional convolutional neural network and bidirectional gated recurrent unit layers, enabling fine-grained traffic classification. Experiments on five public datasets across six classification tasks demonstrate that TrafficAudio consistently outperforms ten state-of-the-art baselines, achieving accuracies of 99.74%, 98.40%, 99.76%, 99.25%, 99.77%, and 99.74%. Furthermore, TrafficAudio significantly reduces computational complexity, achieving reductions of 86.88% in floating-point operations and 43.15% in model parameters over the best-performing baseline. (See the hedged MFCC-extraction sketch after this table.) | 10.1109/TNSM.2026.3651599 |
| Dongbin He, Aiqun Hu, Xiaochuan He | A Novel Reciprocal Signal Generating Method Based on Network Delay of Random Routing Protocols | 2026 | Early Access | Delays Routing protocols Internet of Things Routing Internet Costs Eavesdropping Logic gates Error correction codes Wireless networks Network delay Tor network key generation | The growing popularity of Internet of Things (IoT) devices raises significant challenges for secure key distribution in wide-area networks. Traditional solutions often face high deployment costs or distance limitations. This paper proposes a novel method that leverages the inherent reciprocity of Internet transmission delay to achieve lightweight symmetric key distribution. The core of the method lies in the generation of reciprocal delay signals, and then in the enhancement of their randomness through randomized routing protocols and additional artificial delays. Moreover, an eavesdropping model is proposed to analyze single attackers, and a probabilistic framework is established to evaluate security limits against collusion attacks. Furthermore, error correction codes are implemented to allow the raw key to be directly used for encryption, eliminating additional communication overhead. Experimental results demonstrate a correlation coefficient of 0.97 for delay signals in a local area network (LAN), confirming strong reciprocity. On the public internet, the proposed randomness enhancement improves the entropy by 2 bits. Similarly, the correlation coefficient between signals obtained by an eavesdropper and the legitimate party in this wide-area environment ranges from 0.02 to 0.26, indicating that the method is resilient to eavesdropping. This work demonstrates the feasibility of utilizing public network characteristics for secure key distribution among wide-area terminals. | 10.1109/TNSM.2026.3651520 |
| Yeryeong Cho, Sungwon Yi, Soohyun Park | Joint Multi-Agent Reinforcement Learning and Message-Passing for Resilient Multi-UAV Networks | 2026 | Early Access | Servers Heuristic algorithms Autonomous aerial vehicles Training Surveillance Reliability Training data Reinforcement learning Resource management Resilience Multi-Agent System (MAS) Reinforcement Learning (RL) Communication Graph Message Passing Resilient Communication Network Unmanned Aerial Vehicle (UAV) UAVs Networks | This paper introduces a novel resilient algorithm designed for distributed unmanned aerial vehicles (UAVs) in dynamic and unreliable network environments. Initially, the UAVs are trained via multi-agent reinforcement learning (MARL) for autonomous mission-critical operations, fundamentally grounded in centralized training and decentralized execution (CTDE) using a centralized MARL server. In this situation, it is crucial to consider the case where several UAVs cannot receive CTDE-based MARL learning parameters for resilient operations in unreliable network conditions. To tackle this issue, a communication graph is used, whose edges are established when two UAVs/nodes are communicable. Then, the edge-connected UAVs can share their training data if one of the UAVs cannot be connected to the CTDE-based MARL server under unreliable network conditions. Additionally, edge costs account for power efficiency. Based on this given communication graph, message-passing is used for electing the UAVs that can provide their MARL learning parameters to their edge-connected peers. Lastly, performance evaluations demonstrate the superiority of our proposed algorithm in terms of power efficiency and resilient UAV task management, outperforming existing benchmark algorithms. | 10.1109/TNSM.2025.3650697 |
| Andrea Detti, Alessandro Favale | Cost-Effective Cloud-Edge Elasticity for Microservice Applications | 2026 | Vol. 23, Issue | Microservice architectures Cloud computing Data centers Load management Costs Frequency modulation Delays Analytical models Edge computing Telemetry Edge computing microservices applications service meshes | Microservice applications, composed of independent containerized components, are well-suited for hybrid cloud–edge deployments. In such environments, placing microservices at the edge can reduce latency but incurs significantly higher resource costs compared to the cloud. This paper addresses the problem of selectively replicating microservices at the edge to ensure that the average user-perceived delay remains below a configurable threshold, while minimizing total deployment cost under a pay-per-use model for CPU, memory, and network traffic. We propose a greedy placement strategy based on a novel analytical model of delay and cost, tailored to synchronous request/response applications in cloud–edge topologies with elastic resource availability. The algorithm leverages telemetry and load balancing capabilities provided by service mesh frameworks to guide edge replication decisions. The proposed approach is implemented in an open-source Kubernetes controller, the Geographical Microservice Autoplacer (GMA), which integrates seamlessly with Istio and Horizontal Pod Autoscalers. GMA automates telemetry collection, cost-aware decision making, and geographically distributed placement. Its effectiveness is demonstrated through simulation and real testbed deployment. | 10.1109/TNSM.2025.3627155 |
| Giovanni Pettorru, Marco Martalò | A Persistent and Secure Publish-Subscriber Architecture for Low-Latency IoT Communications | 2026 | Vol. 23, Issue | Internet of Things Protocols Low latency communication Security HTTP Servers Telemetry TCP Standards Logic gates Internet of Things (IoT) security low latency computational complexity QUIC WebSocket (WS) Message Queuing Telemetry Transport (MQTT) | Secure and low-latency data exchange is gaining more and more attention in Internet of Things (IoT) applications. To achieve such stringent requirements, we propose to combine persistent connections and TLS session ticket resumption, as in WebSocket (WS) and QUIC, respectively. Considering the nodes of an IoT cluster as a single virtual entity, we propose to integrate an innovative network management strategy, which employs a publish-subscribe (Pub/Sub) architecture based on the Message Queuing Telemetry Transport (MQTT) protocol, for TLS session tickets sharing between cluster nodes to mitigate the session initialization latency. The proposed system is referred to as WS over QUIC and MQTT (WSQM) and its performance is experimentally assessed with IoT-compliant devices. Our results show that WSQM reduces the latency if compared with similar alternatives that rely on Transmission Control Protocol (TCP) and Transport Layer Security (TLS), as well as other QUIC-based protocols such as the HyperText Transfer Protocol version 3 (HTTP/3). Moreover, WSQM achieves minimal resource utilization in terms of percentage of RAM and CPU usage, thus highlighting its ability to meet the critical requirements of IoT applications. | 10.1109/TNSM.2025.3635212 |
| Aruna Malik, Sandeep Verma, Samayveer Singh, Rajeev Kumar, Neeraj Kumar | Greylag Goose-Based Optimized Cluster Routing for IoT-Based Heterogeneous Wireless Sensor Networks | 2026 | Vol. 23, Issue | Wireless sensor networks Energy consumption Clustering algorithms Energy efficiency Routing Internet of Things Heuristic algorithms Sensors Genetic algorithms Throughput Internet of Things energy efficiency greylag goose optimization cluster head network-lifetime | Optimization algorithms are crucial for energy-efficient routing in Internet of Things (IoT)-based Wireless Sensor Networks (WSNs) because they help minimize energy consumption, reduce communication overhead, and improve overall network performance. By optimizing the routing paths and scheduling data transmission, these algorithms can prolong network lifetime by efficiently managing the limited energy resources of sensor nodes, ensuring reliable data delivery while conserving energy. In this work, we present Greylag Goose-based Optimized Clustering (GGOC), which aids in selecting the Cluster Head (CH) using the proposed critical fitness parameters. These parameters include residual energy, sensor sensing range, distance of a candidate node from the sink, number of neighboring nodes, and energy consumption rate. Simulation analysis shows that the proposed approach improves various performance metrics, namely network lifetime, stability period, throughput, the network’s remaining energy, and the number of clusters formed. (See the hedged cluster-head fitness sketch after this table.) | 10.1109/TNSM.2025.3627535 |
| Zhengge Yi, Tengyao Li, Meng Zhang, Xiaoyun Yuan, Shaoyong Du, Xiangyang Luo | An Efficient Website Fingerprinting for New Websites Emerging Based on Incremental Learning | 2026 | Vol. 23, Issue | Incremental learning Fingerprint recognition Data models Monitoring Accuracy Deep learning Adaptation models Training Telecommunication traffic Feature extraction Website fingerprinting Tor anonymous network traffic analysis incremental learning | Website fingerprinting (WF) attacks leverage encrypted traffic features to identify specific services accessed by users within anonymity networks such as Tor. Although existing WF methods achieve high accuracy on static datasets using deep learning techniques, they struggle in dynamic environments where anonymous websites continually evolve. These methods typically require full retraining on composite datasets, resulting in substantial computational and storage burdens, and are particularly vulnerable to classification bias caused by data imbalance and concept drift. To address these challenges, we propose EIL-WF, a dynamic WF framework based on incremental learning that enables efficient adaptation to newly emerging websites without the need for full retraining. EIL-WF incrementally trains lightweight, independent classifiers for new website classes and integrates them through classifier normalization and energy alignment strategies grounded in energy-based model theory, thereby constructing a unified and robust classification model. Comprehensive experiments on two public Tor traffic datasets demonstrate that EIL-WF outperforms existing incremental learning methods by 6.2%–20.2% in identifying new websites and reduces catastrophic forgetting by 5.4%–20%. Notably, EIL-WF exhibits strong resilience against data imbalance and concept drift, maintaining stable classification performance across evolving distributions. Furthermore, EIL-WF decreases training time during model updates by 2–3 orders of magnitude, demonstrating substantial advantages over conventional full retraining paradigms. | 10.1109/TNSM.2025.3627441 |
| Shaocong Feng, Baojiang Cui, Junsong Fu, Meiyi Jiang, Shengjia Chang | Adaptive Target Device Model Identification Attack in 5G Mobile Network | 2026 | Vol. 23, Issue | Object recognition Adaptation models 5G mobile communication Atmospheric modeling Security Communication channels Mobile handsets Radio access networks Feature extraction Baseband 5G device model GUTI EPSFB UE capability | Enhanced system capacity is one of the goals of 5G. This will lead to a massive number of heterogeneous devices in mobile networks. Mobile devices that lack basic security capabilities have chipset, operating system, or software vulnerabilities. Attackers can perform Advanced Persistent Threat (APT) attacks against specific device models. In this paper, we propose an Adaptive Target Device Model Identification Attack (ATDMIA) that provides the prior knowledge for exploiting baseband vulnerabilities to perform targeted attacks. We discovered two vulnerabilities: Globally Unique Temporary Identity (GUTI) reuse in Evolved Packet Switching Fallback (EPSFB) and leakage of User Equipment (UE) capability information. Utilizing silent calls, an attacker can capture and correlate the signaling traces of the target subscriber from the air interface within a specific geographic area. In addition, we design an adaptive identification algorithm which utilizes both invisible and explicit features of UE capability information to efficiently identify device models. We conducted an empirical study using 105 commercial devices, covering network configuration, attack efficiency, time overhead, and open-world evaluation experiments. The experimental results showed that ATDMIA can accurately correlate the EPSFB signaling traces of the target victim and effectively identify the device model or manufacturer. | 10.1109/TNSM.2025.3626804 |
| Soyinka Nath, Sujata Sengar, Shree Prakash Singh | Dynamic Re-Sizing of Co-Existing Optical-RF Networks for Enhancing Resilience | 2026 | Vol. 23, Issue | Radio frequency Resilience Optical fiber communication Optical fiber networks Heuristic algorithms 6G mobile communication Meteorology Robustness Quality of service Computer network reliability Co-existing optical wireless free space optical resilience cognitive radio | In this article, a novel dynamic network re-sizing strategy for enhancing the resilience of a static co-existing optical-RF (Co-OPRF) network is presented. The resilience of the Co-OPRF network is evaluated in terms of outage probability, which may serve as a metric of Quality of Service (QoS). The deployment strategy permits the size of the Co-OPRF network to be modified on a dynamic basis while supporting failing optical links. The proposed resilience-governed dynamic network re-sizing (RGDRS) algorithm aims to improve the resilience of the Co-OPRF network against fluctuating detrimental local optical channel conditions by taking into account the link outage, optical power investment, and interference by the RF links. The algorithm for achieving dynamic network re-sizing relies on two tables which give the necessary trade-offs between link outage and optical power. The presented work shows the calculation for obtaining these tables and the corresponding impact on network performance. | 10.1109/TNSM.2025.3597437 |
| Houshan Zhang, Jianhua Yuan, Kaixiang Hu, Caixia Kou | Mobile-Edge Computation Servers Repairing for IoT Network Failures Under Uncertainty | 2026 | Vol. 23, Issue | Maintenance engineering Disasters Cloud computing Servers Internet of Things Uncertainty Stochastic processes Costs Training Optimization Mobile-edge computing Internet of Things massive disruption edge server failure network recovery stochastic optimization Benders decomposition supermodular cut | Mobile-edge computation edge servers (ESs) act as crucial intermediaries between mobile devices (MDs) and cloud computing centers, ensuring low-latency computation and reliable IoT network performance. However, regional failures induced by natural or man-made disasters can severely disrupt ES operations. This paper focuses on devising an effective repair scheme for ESs after large-scale disasters, particularly under the uncertainty of secondary disasters and the stochastic demands of MDs. We formulate the edge server repair problem (ESRP) as a mixed-integer linear programming (MILP) model and prove its NP-hardness. To effectively solve this problem, we propose a solution algorithm based on Benders decomposition (BD). The MILP problem is decomposed into a more solvable master problem and a series of subproblems, which are solved alternately and iteratively. Furthermore, we enhance our BD algorithm by incorporating two types of initial cuts to reduce the number of iterations and accelerate convergence. The first type is known as supermodular cut, which exploits the inherent properties of the problem. The second type is called aggregation capacity constraint, which exploits the capacity characteristics of ESs. Extensive simulation results demonstrate that the proposed method excels in ES repairing under large-scale IoT network failure scenarios. Compared to other available algorithms, our BD-based approach more efficiently achieves the exact solution to the ESRP, minimizing network demand loss within limited resources. | 10.1109/TNSM.2025.3598372 |
| Fabian Graf, David Pauli, Michael Villnow, Thomas Watteyne | Management of 6TiSCH Networks Using CORECONF: A Clustering Use Case | 2026 | Vol. 23, Issue | Protocols IEEE 802.15 Standard Reliability Wireless sensor networks Runtime Wireless communication Interference Wireless fidelity Monitoring Job shop scheduling 6TiSCH CORECONF IEEE 802.15.4 clustering | Industrial low-power wireless sensor networks demand high reliability and adaptability to cope with dynamic environments and evolving network requirements. While the 6TiSCH protocol stack provides reliable low-power communication, the CoAP Management Interface (CORECONF) for runtime management remains underutilized. In this work, we implement CORECONF and introduce clustering as a practical use case. We implement a cluster formation mechanism aligned with the Routing Protocol for Low-Power and Lossy Networks (RPL) and adjust the TSCH channel-hopping sequence within the established clusters. Two use cases are presented. First, CORECONF is used to mitigate external Wi-Fi interference by forming a cluster with a modified channel set that excludes the affected frequencies. Second, CORECONF is employed to create a priority cluster of sensor nodes that require higher reliability and reduced latency, such as those monitoring critical infrastructure in industrial settings. Simulation results show significant improvements in latency, while practical experiments demonstrate a reduction in overall network charge consumption from approximately 50 mC per hour to 23 mC per hour, by adapting the channel set within the interference-affected cluster. | 10.1109/TNSM.2025.3627112 |
| Samayveer Singh, Aruna Malik, Vikas Tyagi, Rajeev Kumar, Neeraj Kumar, Shakir Khan, Mohd Fazil | Dynamic Energy Management in Heterogeneous Sensor Networks Using Hippopotamus-Inspired Clustering | 2026 | Vol. 23, Issue | Wireless sensor networks Clustering algorithms Optimization Heuristic algorithms Routing Energy efficiency Protocols Scalability Genetic algorithms Batteries Internet of Things energy efficiency cluster head network-lifetime | The rapid expansion of smart technologies and IoT has made Wireless Sensor Networks (WSNs) essential for real-time applications such as industrial automation, environmental monitoring, and healthcare. Despite advances in sensor node technology, energy efficiency remains a key challenge due to the limited battery life of nodes, which often operate in remote environments. Effective clustering, where Cluster Heads (CHs) manage data aggregation and transmission, is crucial for optimizing energy use. Motivated by the above, in this paper, we introduce a novel metaheuristic approach called Hippopotamus Optimization-Based Cluster Head Selection (HO-CHS), designed to enhance CH selection by dynamically considering factors such as residual energy, node location, and network topology. Inspired by natural behaviors, HO-CHS effectively balances energy loads, reduces communication distances, and boosts network scalability and reliability. The proposed scheme achieves a 35% increase in network lifetime and a 40% improvement in stability period in comparison to other existing schemes in the literature. Simulation results demonstrate that HO-CHS significantly reduces energy consumption and enhances data transmission efficiency, making it ideal for IoT-enabled consumer electronics networks requiring consistent performance and energy conservation. | 10.1109/TNSM.2025.3618766 |
| Ning Zhao, Dongke Zhao, Huiyan Zhang, Yongchao Liu, Liang Zhang | Resilient Dynamic Event-Triggered Fuzzy Tracking Control for Nonlinear Systems Under Hybrid Attacks | 2026 | Vol. 23, Issue | Event detection Fuzzy systems Denial-of-service attack Stability analysis Nonlinear systems Communication channels Wireless networks Resists Multi-agent systems Fuzzy sets Takagi–Sugeno fuzzy systems deception attacks denial-of-service attacks tracking control resilient event-triggered strategy | This article investigates the issue of event-triggered tracking control for Takagi–Sugeno fuzzy systems subject to hybrid attacks. First, the deception attacks occurring on the feedback channel are considered using a Bernoulli process, in which an attacker injects state-dependent malicious signals. Next, the minimal ‘silent’ and maximal ‘active’ periods are defined to describe the duration of aperiodic denial-of-service (DoS) attacks. To use communication bandwidth efficiently and resist DoS attacks, a sampled data-based resilient dynamic event-triggered strategy is designed. Then, an event-based fuzzy tracking controller is designed to guarantee the stability of the error system under hybrid attacks. Subsequently, sufficient conditions for the stability analysis are proposed by utilizing a fuzzy-basis-dependent Lyapunov-Krasovskii functional. Meanwhile, the control gains and event-triggering parameters are co-designed by applying linear matrix inequalities. Furthermore, the proposed method is extended to address the tracking control problem of multi-agent systems. Finally, the feasibility of the presented approach is validated by two examples. (See the hedged Bernoulli deception-attack sketch after this table.) | 10.1109/TNSM.2025.3625395 |
| Anurag Dutta, Sangita Roy, Rajat Subhra Chakraborty | RISK-4-Auto: Residually Interconnected and Superimposed Kolmogorov-Arnold Networks for Automotive Network Traffic Classification | 2026 | Vol. 23, Issue | Telecommunication traffic Accuracy Visualization Controller area networks Intrusion detection Histograms Generative adversarial networks Convolutional neural networks Automobiles Training Controller area network (CAN) in-vehicle security Kolmogorov-Arnold Network (KAN) network forensics network traffic classification | In modern automobiles, a Controller Area Network (CAN) bus facilitates communication among all electronic control units for critical safety functions, including steering, braking, and fuel injection. However, due to the lack of security features, it may be vulnerable to malicious bus traffic-based attacks that cause the automobile to malfunction. Such malicious bus traffic can be the result of either external fabricated messages or direct injection through the on-board diagnostic port, highlighting the need for an effective intrusion detection system to efficiently identify suspicious network flows and potential intrusions. This work introduces Residually Interconnected and Superimposed Kolmogorov-Arnold Networks (RISK-4-Auto), a set of four deep neural network architectures for intrusion detection targeting in-vehicle network traffic classification. RISK-4-Auto models, when applied on three hexadecimally identifiable sequence-based open-source datasets (collected through direct injection in the on-board diagnostic port), outperform six state-of-the-art vehicular network intrusion detection systems (as per their accuracies) by $\approx 1.0163$ % for all-class classification and $\approx 2.5535$ % on focused (single-class) malicious flow detection. Additionally, RISK-4-Auto enjoys a significantly lower overhead than existing state-of-the-art models, and is suitable for real-time deployment in resource-constrained automotive environments. | 10.1109/TNSM.2025.3625404 |
| Manjuluri Anil Kumar, Balaprakasa Rao Killi, Eiji Oki | Generative Adversarial Networks Based Low-Rate Denial of Service Attack Detection and Mitigation in Software-Defined Networks | 2026 | Vol. 23, Issue | Protocols Prevention and mitigation Real-time systems Software defined networking Generative adversarial networks Anomaly detection Denial-of-service attack TCP Routing Training LDoS SDN GAN attack detection and mitigation OpenFlow | Low-rate Denial of Service (LDoS) attacks use short, regular bursts of traffic to exploit vulnerabilities in network protocols. They are a major threat to network security, especially in Software-Defined Networking (SDN) frameworks. These attacks are challenging to detect and mitigate because of their low traffic volume, which makes them difficult to distinguish from normal traffic. We propose a real-time LDoS attack detection and mitigation framework that can protect SDN. The framework incorporates a detection module that uses a deep learning model, such as a Generative Adversarial Network (GAN), to identify the attack. An efficient mitigation module follows detection, employing mechanisms to identify and filter harmful flows in real time. Deploying the framework into SDN controllers guarantees compliance with OpenFlow standards, thereby avoiding the necessity for additional hardware. Experimental results demonstrate that the proposed system achieves a detection accuracy of over 99.98% with an average response time of 8.58 s, significantly outperforming traditional LDoS detection approaches. This study presents a scalable, real-time methodology to enhance SDN resilience against LDoS attacks. | 10.1109/TNSM.2025.3625278 |
| Giovanni Simone Sticca, Memedhe Ibrahimi, Francesco Musumeci, Nicola Di Cicco, Massimo Tornatore | Hollow-Core Fibers for Latency-Constrained and Low-Cost Edge Data Center Networks | 2026 | Vol. 23, Issue | Optical fiber networks Costs Optical fiber communication Data centers Optical fiber devices Optical fibers Optical attenuators Network topology Fiber nonlinear optics Throughput Hollow core fiber edge data centers network cost minimization latency-constrained networks | Recent advancements in Hollow Core Fibers (HCF) production are paving the way toward new ground-breaking opportunities of HCF for 6G-and-beyond applications. While Standard Single-Mode Fibers (SSMF) have been the go-to solution in optical communications for the past 50 years, HCF is expected to be a turning point in how next-generation optical networks are planned and designed. Compared to SSMF, in which the optical signal is transmitted in a silica core, in HCF, the optical signal is transmitted in a hollow, i.e., air, core, significantly reducing latency (by 30%), while also decreasing attenuation (as low as 0.11 dB/km) and non-linearities. In this study, we investigate the optimal placement of HCF in latency-constrained optical networks to minimize the number of edge Data Centers (edgeDCs), while also ensuring physical-layer validation. Given the optimized placement of HCF and edgeDCs, we minimize the overall network cost in terms of transponders (TXPs) and Wavelength Selective Switches (WSSes) by optimizing the type, number, and transmission mode of TXPs, and the type and number of WSSes. We develop a Mixed Integer Nonlinear Programming (MINLP) model and a Genetic Algorithm (GA) to solve these problems. We validate the GA against the MINLP model in four synthetically generated topologies and perform extensive numerical evaluations in a realistic 25-node metro aggregation topology and a 22-node national topology. We show that by upgrading 25% of the links to HCF, we can significantly reduce the number of edgeDCs by up to 40%, while also reducing network equipment cost by up to 38%, compared to an SSMF-only network. | 10.1109/TNSM.2025.3625391 |
| Abdurrahman Elmaghbub, Bechir Hamdaoui | HEEDFUL: Leveraging Sequential Transfer Learning for Robust WiFi Device Fingerprinting Amid Hardware Warm-Up Effects | 2026 | Vol. 23, Issue | Fingerprint recognition Radio frequency Hardware Wireless fidelity Accuracy Performance evaluation Training Wireless communication Estimation Transfer learning WiFi device fingerprinting hardware warm-up consideration hardware impairment estimation sequential transfer learning temporal-domain adaptation | Deep Learning-based RF fingerprinting approaches struggle to perform well in cross-domain scenarios, particularly during hardware warm-up. This often-overlooked vulnerability has been jeopardizing their reliability and their adoption in practical settings. To address this critical gap, in this work, we first dive deep into the anatomy of RF fingerprints, revealing insights into the temporal fingerprinting variations during and post hardware stabilization. Introducing HEEDFUL, a novel framework harnessing sequential transfer learning and targeted impairment estimation, we then address these challenges with remarkable consistency, eliminating blind spots even during challenging warm-up phases. Our evaluation showcases HEEDFUL’s efficacy, achieving remarkable classification accuracies of up to 96% during the initial device operation intervals—far surpassing traditional models. Furthermore, cross-day and cross-protocol assessments confirm HEEDFUL’s superiority, achieving and maintaining high accuracy during both the stable and initial warm-up phases when tested on WiFi signals. Additionally, we release WiFi type B and N RF fingerprint datasets that, for the first time, incorporate both the time-domain representation and real hardware impairments of the frames. This underscores the importance of leveraging hardware impairment data, enabling a deeper understanding of fingerprints and facilitating the development of more robust RF fingerprinting solutions. | 10.1109/TNSM.2025.3624126 |
| Akhila Rao, Magnus Boman | Self-Supervised Pretraining for User Performance Prediction Under Scarce Data Conditions | 2026 | Vol. 23, Issue | Generators Training Self-supervised learning Predictive models Noise Data models Data augmentation Base stations Vectors Adaptation models User performance prediction telecom networks mobile networks machine learning self-supervised learning structured data tabular data generalizability sample efficiency | Predicting user performance at the base station in telecom networks is a critical task that can significantly benefit from advanced machine learning techniques. However, labeled data for user performance are scarce and costly to collect, while unlabeled data, consisting of base station metrics, are more readily accessible. Self-supervised learning provides a means to leverage this unlabeled data, and has seen remarkable success in the domains of computer vision and natural language processing, with unstructured data. Recently, these methods have been adapted to structured data as well, making them particularly relevant to the telecom domain. We apply self-supervised learning to predict user performance in telecom networks. Our results demonstrate that even with simple self-supervised approaches, the percentage of variance of the target values explained by the model in low-labeled scenarios (e.g., only 100 labeled samples) can be improved fourfold, from 15% to 60%. Moreover, to promote reproducibility and further research in the domain, we open-source a dataset creation framework and a specific dataset created from it that captures scenarios that have been deemed to be challenging for future networks. | 10.1109/TNSM.2025.3622892 |
| Bing Shi, Zhifeng Chen, Zhuohan Xu | A Deep Reinforcement Learning Based Approach for Optimizing Trajectory and Frequency in Energy Constrained Multi-UAV Assisted MEC System | 2026 | Vol. 23, Issue | Autonomous aerial vehicles Task analysis Trajectory Optimization Servers Computer architecture Computational modeling Mobile edge computing uncrewed aerial vehicle multi-agent deep reinforcement learning | Mobile Edge Computing (MEC) is a technology that shows great promise in enhancing the computational power of smart devices (SDs) in the Internet of Things (IoT). However, the fixed location and limited coverage of MEC servers constrain their performance. To overcome this issue, this paper explores a multiple uncrewed aerial vehicle (UAV) assisted MEC system. The proposed system considers a scenario where multiple UAVs work together to provide computing services while dynamically adjusting their frequency based on the task size, under the constraint of limited energy. This paper aims to maximize computation bits, SDs’ fairness, and UAVs’ load balancing in multi-UAV assisted MEC system by jointly optimizing the trajectory and frequency. To address this challenge, we model it as a Partially Observable Markov Decision Process and propose a joint optimization strategy based on multi-agent deep reinforcement learning. The effectiveness of the proposed strategy is evaluated on both synthetic and realistic datasets. The results demonstrate that our strategy outperforms other benchmark strategies. | 10.1109/TNSM.2024.3362949 |
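
The DPoS entry above (Chen et al.) studies strategy evolution with a four-party evolutionary game. As a rough illustration of the underlying machinery only, the sketch below iterates plain two-strategy replicator dynamics for an honest/malicious node population; the 2x2 payoff matrix, the step size, and the collapse to a single population are assumptions for illustration, not the paper's model.

```python
# Hedged sketch: two-strategy replicator dynamics (honest vs. malicious nodes).
# The payoff matrix and step size are illustrative assumptions; the paper
# models a richer four-party game (agent, voting, malicious, supervisory nodes).
import numpy as np

def replicator_step(x: float, payoff: np.ndarray, dt: float = 0.01) -> float:
    """Advance the honest fraction x by one Euler step of x' = x * (f_honest - f_bar)."""
    p = np.array([x, 1.0 - x])
    f = payoff @ p            # expected payoff of each pure strategy
    f_bar = float(p @ f)      # population-average payoff
    return float(np.clip(x + dt * x * (f[0] - f_bar), 0.0, 1.0))

# Assumed payoffs: honest play earns more against either opponent type,
# e.g., because incentives reward cooperation and punishments tax attacks.
A = np.array([[4.0, 1.0],
              [3.0, 0.5]])
x = 0.3
for _ in range(2000):
    x = replicator_step(x, A)
print(round(x, 3))  # fraction of honest nodes after the dynamics settle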
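
The TrafficAudio entry maps raw traffic to an audio-style representation and extracts MFCCs from it. The snippet below shows one plausible way to do this with librosa; the byte-to-waveform mapping, sample rate, frame parameters, and the packet_bytes_to_mfcc helper are illustrative assumptions, not the authors' published pipeline.

```python
# Hedged sketch: treat a packet/flow payload as a 1-D waveform and compute
# MFCCs with librosa. All settings (sample rate, n_mfcc, n_fft, hop_length)
# are assumed values for illustration, not TrafficAudio's configuration.
import numpy as np
import librosa

def packet_bytes_to_mfcc(payload: bytes, sr: int = 8000, n_mfcc: int = 13) -> np.ndarray:
    # Map byte values 0..255 into a float waveform in [-1, 1].
    signal = (np.frombuffer(payload, dtype=np.uint8).astype(np.float32) - 127.5) / 127.5
    # Short analysis windows, since individual flows are short.
    return librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc, n_fft=256, hop_length=64)

mfcc = packet_bytes_to_mfcc(b"\x16\x03\x01" + bytes(500))
print(mfcc.shape)  # (n_mfcc, number_of_frames)
```

The resulting MFCC matrix would then feed a downstream classifier, e.g., the parallel 1-D CNN and bidirectional GRU architecture described in the abstract.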
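
The two clustering entries (GGOC and HO-CHS) both score cluster-head candidates with fitness criteria such as residual energy, distance to the sink, sensing range, neighborhood size, and energy consumption rate. The sketch below is one hedged way such a weighted-sum fitness could be written; the weights, normalizations, and the Node/ch_fitness names are assumptions, not either paper's formulation.

```python
# Hedged sketch: weighted-sum fitness for cluster-head (CH) candidates in a
# heterogeneous WSN. A metaheuristic (greylag goose, hippopotamus, ...) would
# search over candidate CH sets scored by a function of roughly this shape.
from dataclasses import dataclass

@dataclass
class Node:
    residual_energy: float  # J remaining
    initial_energy: float   # J at deployment
    dist_to_sink: float     # metres
    sensing_range: float    # metres
    neighbors: int          # nodes within communication range
    drain_rate: float       # J consumed per round

def ch_fitness(n: Node, max_dist: float, max_neighbors: int,
               w=(0.35, 0.25, 0.15, 0.15, 0.10)) -> float:
    # Higher is better: high residual energy, short distance to sink, wide
    # sensing coverage, dense neighborhood, and low energy drain.
    energy    = n.residual_energy / n.initial_energy
    proximity = 1.0 - min(n.dist_to_sink / max_dist, 1.0)
    coverage  = min(n.sensing_range / max_dist, 1.0)
    density   = min(n.neighbors / max_neighbors, 1.0)
    frugality = 1.0 - min(n.drain_rate / n.initial_energy, 1.0)
    return sum(wi * v for wi, v in zip(w, (energy, proximity, coverage, density, frugality)))

candidate = Node(0.8, 1.0, 40.0, 25.0, 6, 0.01)
print(round(ch_fitness(candidate, max_dist=100.0, max_neighbors=10), 3))
```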
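
The event-triggered tracking-control entry models deception attacks on the feedback channel as a Bernoulli process that injects state-dependent malicious signals. The snippet below simulates that measurement corruption in the simplest possible form; the attack probability, the bounded tanh-shaped deception signal, and the function names are illustrative assumptions, not the paper's attack model.

```python
# Hedged sketch: Bernoulli-distributed deception attack on a scalar feedback
# measurement. attack_prob and the deception signal are assumed values.
import numpy as np

rng = np.random.default_rng(seed=0)

def received_measurement(x: float, attack_prob: float = 0.2) -> float:
    alpha = rng.random() < attack_prob   # Bernoulli attack indicator
    deception = -0.5 * np.tanh(x)        # assumed bounded, state-dependent signal
    return x + deception if alpha else x

samples = [received_measurement(1.0) for _ in range(10)]
print(samples)  # a mix of clean (1.0) and corrupted (about 0.62) measurements
```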