Last updated: 2026-01-04 05:01 UTC
All documents
Number of pages: 154
| Author(s) | Title | Year | Publication | Keywords | Abstract | DOI |
|---|---|---|---|---|---|---|
| Yeryeong Cho, Sungwon Yi, Soohyun Park | Joint Multi-Agent Reinforcement Learning and Message-Passing for Resilient Multi-UAV Networks | 2026 | Early Access | | This paper introduces a novel resilient algorithm designed for distributed unmanned aerial vehicles (UAVs) in dynamic and unreliable network environments. The UAVs are first trained via multi-agent reinforcement learning (MARL) for autonomous mission-critical operations, grounded in centralized training and decentralized execution (CTDE) using a centralized MARL server. In this setting, it is crucial to consider the case where several UAVs cannot receive CTDE-based MARL learning parameters under unreliable network conditions. To tackle this issue, a communication graph is used whose edges are established when two UAVs/nodes can communicate. The edge-connected UAVs can then share their training data if one of them cannot reach the CTDE-based MARL server under unreliable network conditions. Additionally, the edge cost accounts for power efficiency. Based on this communication graph, message-passing is used to elect the UAVs that can provide their MARL learning parameters to their edge-connected peers. Lastly, performance evaluations demonstrate the superiority of the proposed algorithm in terms of power efficiency and resilient UAV task management, outperforming existing benchmark algorithms. | 10.1109/TNSM.2025.3650697 |
| Seyed Soheil Johari, Massimo Tornatore, Nashid Shahriar, Raouf Boutaba, Aladdin Saleh | Active Learning for Transformer-Based Fault Diagnosis in 5G and Beyond Mobile Networks | 2026 | Vol. 23, Issue | Transformers; Fault diagnosis; Data models; Labeling; Training; 5G mobile communication; Costs; Computer architecture; Active learning; Complexity theory; fault diagnosis; active learning; transformers | As 5G and beyond mobile networks evolve, their increasing complexity necessitates advanced, automated, and data-driven fault diagnosis methods. While traditional data-driven methods falter with modern network complexities, Transformer models have proven highly effective for fault diagnosis through their efficient processing of sequential and time-series data. However, these Transformer-based methods demand substantial labeled data, which is costly to obtain. To address the lack of labeled data, we propose a novel active learning (AL) approach designed for Transformer-based fault diagnosis, tailored to the time-series nature of network data. AL reduces the need for extensive labeled datasets by iteratively selecting the most informative samples for labeling. Our AL method exploits the interpretability of Transformers, using their attention weights to create dependency graphs that represent processing patterns of data points. By formulating a one-class novelty detection problem on these graphs, we identify whether an unlabeled sample is processed differently from labeled ones in the previous training cycle and designate novel samples for expert annotation. Extensive experiments on real-world datasets show that our AL method achieves higher F1-scores than state-of-the-art AL algorithms with 50% fewer labeled samples and surpasses existing methods by up to 150% in identifying samples related to unseen fault types. | 10.1109/TNSM.2025.3622149 |
| Anurag Dutta, Sangita Roy, Rajat Subhra Chakraborty | RISK-4-Auto: Residually Interconnected and Superimposed Kolmogorov-Arnold Networks for Automotive Network Traffic Classification | 2026 | Vol. 23, Issue | Telecommunication traffic; Accuracy; Visualization; Controller area networks; Intrusion detection; Histograms; Generative adversarial networks; Convolutional neural networks; Automobiles; Training; Controller area network (CAN); in-vehicle security; Kolmogorov-Arnold Network (KAN); network forensics; network traffic classification | In modern automobiles, a Controller Area Network (CAN) bus facilitates communication among all electronic control units for critical safety functions, including steering, braking, and fuel injection. However, due to the lack of security features, it may be vulnerable to malicious bus traffic-based attacks that cause the automobile to malfunction. Such malicious bus traffic can be the result of either external fabricated messages or direct injection through the on-board diagnostic port, highlighting the need for an effective intrusion detection system to efficiently identify suspicious network flows and potential intrusions. This work introduces Residually Interconnected and Superimposed Kolmogorov-Arnold Networks (RISK-4-Auto), a set of four deep neural network architectures for intrusion detection targeting in-vehicle network traffic classification. RISK-4-Auto models, when applied on three hexadecimally identifiable sequence-based open-source datasets (collected through direct injection in the on-board diagnostic port), outperform six state-of-the-art vehicular network intrusion detection systems (in terms of accuracy) by $\approx 1.0163$ % for all-class classification and $\approx 2.5535$ % on focused (single-class) malicious flow detection. Additionally, RISK-4-Auto enjoys a significantly lower overhead than existing state-of-the-art models, and is suitable for real-time deployment in resource-constrained automotive environments. | 10.1109/TNSM.2025.3625404 |
| Abdurrahman Elmaghbub, Bechir Hamdaoui | HEEDFUL: Leveraging Sequential Transfer Learning for Robust WiFi Device Fingerprinting Amid Hardware Warm-Up Effects | 2026 | Vol. 23, Issue | Fingerprint recognition; Radio frequency; Hardware; Wireless fidelity; Accuracy; Performance evaluation; Training; Wireless communication; Estimation; Transfer learning; WiFi device fingerprinting; hardware warm-up consideration; hardware impairment estimation; sequential transfer learning; temporal-domain adaptation | Deep Learning-based RF fingerprinting approaches struggle to perform well in cross-domain scenarios, particularly during hardware warm-up. This often-overlooked vulnerability has been jeopardizing their reliability and their adoption in practical settings. To address this critical gap, in this work, we first dive deep into the anatomy of RF fingerprints, revealing insights into the temporal fingerprinting variations during and post hardware stabilization. Introducing HEEDFUL, a novel framework harnessing sequential transfer learning and targeted impairment estimation, we then address these challenges with remarkable consistency, eliminating blind spots even during challenging warm-up phases. Our evaluation showcases HEEDFUL's efficacy, achieving remarkable classification accuracies of up to 96% during the initial device operation intervals—far surpassing traditional models. Furthermore, cross-day and cross-protocol assessments confirm HEEDFUL's superiority, achieving and maintaining high accuracy during both the stable and initial warm-up phases when tested on WiFi signals. Additionally, we release WiFi type B and N RF fingerprint datasets that, for the first time, incorporate both the time-domain representation and real hardware impairments of the frames. This underscores the importance of leveraging hardware impairment data, enabling a deeper understanding of fingerprints and facilitating the development of more robust RF fingerprinting solutions. | 10.1109/TNSM.2025.3624126 |
| Giovanni Simone Sticca, Memedhe Ibrahimi, Francesco Musumeci, Nicola Di Cicco, Massimo Tornatore | Hollow-Core Fibers for Latency-Constrained and Low-Cost Edge Data Center Networks | 2026 | Vol. 23, Issue | Optical fiber networks; Costs; Optical fiber communication; Data centers; Optical fiber devices; Optical fibers; Optical attenuators; Network topology; Fiber nonlinear optics; Throughput; Hollow core fiber; edge data centers; network cost minimization; latency-constrained networks | Recent advancements in Hollow Core Fibers (HCF) production are paving the way toward new ground-breaking opportunities of HCF for 6G-and-beyond applications. While Standard Single-Mode Fibers (SSMF) have been the go-to solution in optical communications for the past 50 years, HCF is expected to be a turning point in how next-generation optical networks are planned and designed. Compared to SSMF, in which the optical signal is transmitted in a silica core, in HCF, the optical signal is transmitted in a hollow, i.e., air, core, significantly reducing latency (by 30%), while also decreasing attenuation (as low as 0.11 dB/km) and non-linearities. In this study, we investigate the optimal placement of HCF in latency-constrained optical networks to minimize the number of edge Data Centers (edgeDCs), while also ensuring physical-layer validation. Given the optimized placement of HCF and edgeDCs, we minimize the overall network cost in terms of transponders (TXPs) and Wavelength Selective Switches (WSSes) by optimizing the type, number, and transmission mode of TXPs, and the type and number of WSSes. We develop a Mixed Integer Nonlinear Programming (MINLP) model and a Genetic Algorithm (GA) to solve these problems. We validate the GA against the MINLP model in four synthetically generated topologies and perform extensive numerical evaluations in a realistic 25-node metro aggregation topology and a 22-node national topology. We show that by upgrading 25% of the links to HCF, we can significantly reduce the number of edgeDCs by up to 40%, while also reducing network equipment cost by up to 38%, compared to an SSMF-only network. | 10.1109/TNSM.2025.3625391 |
| Manjuluri Anil Kumar, Balaprakasa Rao Killi, Eiji Oki | Generative Adversarial Networks Based Low-Rate Denial of Service Attack Detection and Mitigation in Software-Defined Networks | 2026 | Vol. 23, Issue | Protocols; Prevention and mitigation; Real-time systems; Software defined networking; Generative adversarial networks; Anomaly detection; Denial-of-service attack; TCP; Routing; Training; LDoS; SDN; GAN; attack detection and mitigation; OpenFlow | Low-rate Denial of Service (LDoS) attacks use short, regular bursts of traffic to exploit vulnerabilities in network protocols. They are a major threat to network security, especially in Software-Defined Networking (SDN) frameworks. These attacks are challenging to detect and mitigate because of their low traffic volume, which makes them difficult to distinguish from normal traffic. We propose a real-time LDoS attack detection and mitigation framework that can protect SDN. The framework incorporates a detection module that uses a deep learning model, such as a Generative Adversarial Network (GAN), to identify the attack. An efficient mitigation module follows detection, employing mechanisms to identify and filter harmful flows in real time. Deploying the framework into SDN controllers guarantees compliance with OpenFlow standards, thereby avoiding the necessity for additional hardware. Experimental results demonstrate that the proposed system achieves a detection accuracy of over 99.98% with an average response time of 8.58 s, significantly outperforming traditional LDoS detection approaches. This study presents a scalable, real-time methodology to enhance SDN resilience against LDoS attacks. | 10.1109/TNSM.2025.3625278 |
| Samayveer Singh, Aruna Malik, Vikas Tyagi, Rajeev Kumar, Neeraj Kumar, Shakir Khan, Mohd Fazil | Dynamic Energy Management in Heterogeneous Sensor Networks Using Hippopotamus-Inspired Clustering | 2026 | Vol. 23, Issue | Wireless sensor networks; Clustering algorithms; Optimization; Heuristic algorithms; Routing; Energy efficiency; Protocols; Scalability; Genetic algorithms; Batteries; Internet of Things; energy efficiency; cluster head; network-lifetime | The rapid expansion of smart technologies and IoT has made Wireless Sensor Networks (WSNs) essential for real-time applications such as industrial automation, environmental monitoring, and healthcare. Despite advances in sensor node technology, energy efficiency remains a key challenge due to the limited battery life of nodes, which often operate in remote environments. Effective clustering, where Cluster Heads (CHs) manage data aggregation and transmission, is crucial for optimizing energy use. Motivated by this, in this paper, we introduce a novel metaheuristic approach called Hippopotamus Optimization-Based Cluster Head Selection (HO-CHS), designed to enhance CH selection by dynamically considering factors such as residual energy, node location, and network topology. Inspired by natural behaviors, HO-CHS effectively balances energy loads, reduces communication distances, and boosts network scalability and reliability. The proposed scheme achieves a 35% increase in network lifetime and a 40% improvement in stability period in comparison to the other existing schemes in the literature. Simulation results demonstrate that HO-CHS significantly reduces energy consumption and enhances data transmission efficiency, making it ideal for IoT-enabled consumer electronics networks requiring consistent performance and energy conservation. | 10.1109/TNSM.2025.3618766 |
| Fabian Graf, David Pauli, Michael Villnow, Thomas Watteyne | Management of 6TiSCH Networks Using CORECONF: A Clustering Use Case | 2026 | Vol. 23, Issue | Protocols; IEEE 802.15 Standard; Reliability; Wireless sensor networks; Runtime; Wireless communication; Interference; Wireless fidelity; Monitoring; Job shop scheduling; 6TiSCH; CORECONF; IEEE 802.15.4; clustering | Industrial low-power wireless sensor networks demand high reliability and adaptability to cope with dynamic environments and evolving network requirements. While the 6TiSCH protocol stack provides reliable low-power communication, the CoAP Management Interface (CORECONF) for runtime management remains underutilized. In this work, we implement CORECONF and introduce clustering as a practical use case. We implement a cluster formation mechanism aligned with the Routing Protocol for Low-Power and Lossy Networks (RPL) and adjust the TSCH channel-hopping sequence within the established clusters. Two use cases are presented. First, CORECONF is used to mitigate external Wi-Fi interference by forming a cluster with a modified channel set that excludes the affected frequencies. Second, CORECONF is employed to create a priority cluster of sensor nodes that require higher reliability and reduced latency, such as those monitoring critical infrastructure in industrial settings. Simulation results show significant improvements in latency, while practical experiments demonstrate a reduction in overall network charge consumption from approximately 50 mC per hour to 23 mC per hour, by adapting the channel set within the interference-affected cluster. | 10.1109/TNSM.2025.3627112 |
| Andrea Detti, Alessandro Favale | Cost-Effective Cloud-Edge Elasticity for Microservice Applications | 2026 | Vol. 23, Issue | Microservice architectures; Cloud computing; Data centers; Load management; Costs; Frequency modulation; Delays; Analytical models; Edge computing; Telemetry; Edge computing; microservices applications; service meshes | Microservice applications, composed of independent containerized components, are well-suited for hybrid cloud–edge deployments. In such environments, placing microservices at the edge can reduce latency but incurs significantly higher resource costs compared to the cloud. This paper addresses the problem of selectively replicating microservices at the edge to ensure that the average user-perceived delay remains below a configurable threshold, while minimizing total deployment cost under a pay-per-use model for CPU, memory, and network traffic. We propose a greedy placement strategy based on a novel analytical model of delay and cost, tailored to synchronous request/response applications in cloud–edge topologies with elastic resource availability. The algorithm leverages telemetry and load balancing capabilities provided by service mesh frameworks to guide edge replication decisions. The proposed approach is implemented in an open-source Kubernetes controller, the Geographical Microservice Autoplacer (GMA), which integrates seamlessly with Istio and Horizontal Pod Autoscalers. GMA automates telemetry collection, cost-aware decision making, and geographically distributed placement. Its effectiveness is demonstrated through simulation and real testbed deployment. | 10.1109/TNSM.2025.3627155 |
| Shaocong Feng, Baojiang Cui, Junsong Fu, Meiyi Jiang, Shengjia Chang | Adaptive Target Device Model Identification Attack in 5G Mobile Network | 2026 | Vol. 23, Issue | Object recognition; Adaptation models; 5G mobile communication; Atmospheric modeling; Security; Communication channels; Mobile handsets; Radio access networks; Feature extraction; Baseband; 5G; device model; GUTI; EPSFB; UE capability | Enhanced system capacity is one of the goals of 5G, which will lead to massive heterogeneous devices in mobile networks. Mobile devices that lack basic security capabilities often contain chipset, operating system, or software vulnerabilities, and attackers can mount Advanced Persistent Threat (APT) attacks against specific device models. In this paper, we propose an Adaptive Target Device Model Identification Attack (ATDMIA) that provides the prior knowledge for exploiting baseband vulnerabilities to perform targeted attacks. We discovered Globally Unique Temporary Identity (GUTI) Reuse in Evolved Packet Switching Fallback (EPSFB) and a Leakage of User Equipment (UE) Capability vulnerability. Utilizing silent calls, an attacker can capture and correlate the signaling traces of the target subscriber from the air interface within a specific geographic area. In addition, we design an adaptive identification algorithm which utilizes both invisible and explicit features of UE capability information to efficiently identify device models. We conducted an empirical study using 105 commercial devices, including network configuration, attack efficiency, time overhead and open-world evaluation experiments. The experimental results showed that ATDMIA can accurately correlate the EPSFB signaling traces of the target victim and effectively identify the device model or manufacturer. | 10.1109/TNSM.2025.3626804 |
| Zhengge Yi, Tengyao Li, Meng Zhang, Xiaoyun Yuan, Shaoyong Du, Xiangyang Luo | An Efficient Website Fingerprinting for New Websites Emerging Based on Incremental Learning | 2026 | Vol. 23, Issue | Incremental learning; Fingerprint recognition; Data models; Monitoring; Accuracy; Deep learning; Adaptation models; Training; Telecommunication traffic; Feature extraction; Website fingerprinting; Tor anonymous network; traffic analysis; incremental learning | Website fingerprinting (WF) attacks leverage encrypted traffic features to identify specific services accessed by users within anonymity networks such as Tor. Although existing WF methods achieve high accuracy on static datasets using deep learning techniques, they struggle in dynamic environments where anonymous websites continually evolve. These methods typically require full retraining on composite datasets, resulting in substantial computational and storage burdens, and are particularly vulnerable to classification bias caused by data imbalance and concept drift. To address these challenges, we propose EIL-WF, a dynamic WF framework based on incremental learning that enables efficient adaptation to newly emerging websites without the need for full retraining. EIL-WF incrementally trains lightweight, independent classifiers for new website classes and integrates them through classifier normalization and energy alignment strategies grounded in energy-based model theory, thereby constructing a unified and robust classification model. Comprehensive experiments on two public Tor traffic datasets demonstrate that EIL-WF outperforms existing incremental learning methods by 6.2%–20.2% in identifying new websites and reduces catastrophic forgetting by 5.4%–20%. Notably, EIL-WF exhibits strong resilience against data imbalance and concept drift, maintaining stable classification performance across evolving distributions. Furthermore, EIL-WF decreases training time during model updates by 2–3 orders of magnitude, demonstrating substantial advantages over conventional full retraining paradigms. | 10.1109/TNSM.2025.3627441 |
| Yaqing Zhu, Liquan Chen, Suhui Liu, Bo Yang, Shang Gao | Blockchain-Based Lightweight Key Management Scheme for Secure UAV Swarm Task Allocation | 2026 | Vol. 23, Issue | Autonomous aerial vehicles; Encryption; Protocols; Resource management; Receivers; Controllability; Dynamic scheduling; Blockchains; Authentication; Vehicle dynamics; Lightweight certificateless pairing-free key management; UAV swarm; task allocation | Uncrewed Aerial Vehicle (UAV) swarms are a cornerstone technology in the rapidly growing low-altitude economy, with significant applications in logistics, smart cities, and emergency response. However, their deployment is constrained by challenges in secure communication, dynamic group coordination, and resource constraints. Among the various cryptographic techniques, efficient and scalable group key management plays a critical role in secure task allocation in UAV swarms. Existing group key agreement schemes, both symmetric and asymmetric, often fail to adequately address these challenges due to their reliance on centralized control, high computational overhead, sender restrictions, and insufficient protection against physical attacks. To address these issues, we propose PCDCB (Pairing-free Certificateless Dynamic Contributory Broadcast encryption), a blockchain-assisted lightweight key management scheme designed for UAV swarm task allocation. PCDCB is particularly suitable for swarm operations as it supports efficient one-to-many broadcast of task commands, enables dynamic node join/leave, and eliminates key escrow by combining certificateless cryptography with Physical Unclonable Functions (PUFs) for hardware-bound key regeneration. Blockchain is used to maintain tamper-resistant update tables and ensure auditability, while a privacy-preserving mechanism with pseudonyms and a round mapping table provides task anonymity and unlinkability. Comprehensive security analysis confirms that PCDCB is secure and resistant to multiple attacks. Performance evaluation shows that, in large-scale swarm scenarios (n = 100), PCDCB reduces the cost of group key computation by 54.4% (up to 96.9%) and reduces the time to generate the decryption keys by at least 29.7%. In addition, PCDCB achieves the lowest communication cost among all compared schemes and demonstrates strong scalability with increasing group size. | 10.1109/TNSM.2025.3636562 |
| Shih-Chun Chien, You-Cheng Chang, Ming-Wei Su, Kate Ching-Ju Lin | Enabling Differentiated Monitoring for Sketch-Based Network Measurements | 2026 | Vol. 23, Issue | Accuracy; Monitoring; Pipelines; Resource management; Memory management; Probabilistic logic; Quality of service; Hardware; Data structures; Traffic control; Sketch-based measurements; multi-class monitoring; differentiated performance guarantee | With the advent of programmable switches, sketch-based measurements have become a powerful tool for traffic monitoring, offering high accuracy with minimal resource overhead. Conventional sketch designs provide uniform accuracy across all traffic classes, failing to address the diverse needs of different network applications. Recent efforts in priority-aware sketch-based measurements have enhanced accuracy for large flows. However, providing explicit guarantees for differentiated accuracy across multiple traffic categories under limited memory resources remains a challenge. To address this limitation, we introduce DiffSketch, a sketch-based measurement system that guarantees differential accuracy across traffic classes. DiffSketch employs a block-based biased hashing design, which dynamically adjusts block sizes and leverages biased hashing techniques to enable probabilistic block access. This design ensures that measurement accuracy aligns with operator-defined differentiation levels while optimizing memory usage. We implement DiffSketch on both bmv2 and Tofino switches, demonstrating that its block-based approach not only guarantees differential performance but also improves overall memory efficiency, reducing measurement errors compared to existing priority-aware solutions. | 10.1109/TNSM.2025.3636478 |
| Josef Koumar, Timotej Smoleň, Kamil Jeřábek, Tomáš Čejka | Comparative Analysis of Deep Learning Models for Real-World ISP Network Traffic Forecasting | 2026 | Vol. 23, Issue | Forecasting; Telecommunication traffic; Deep learning; Predictive models; Time series analysis; Measurement; Monitoring; Transformers; Analytical models; Smoothing methods; Neural networks; deep learning; network traffic forecasting; network traffic prediction; network monitoring | Accurate network traffic forecasting is crucial for Internet service providers to optimize resources, improve user experience, and detect anomalies. Until recently, the lack of large-scale, real-world datasets limited the fair evaluation of forecasting methods. The newly released CESNET-TimeSeries24 dataset addresses this gap by providing multivariate traffic data from thousands of devices over 40 weeks at multiple aggregation granularities and hierarchy levels. In this study, we leverage the CESNET-TimeSeries24 dataset to conduct a systematic evaluation of state-of-the-art deep learning models and provide practical insights. Moreover, our analysis reveals trade-offs between prediction accuracy and computational efficiency across different levels of granularity. Beyond model comparison, we establish a transparent and reproducible benchmarking framework, releasing source code and experiments to encourage standardized evaluation and accelerate progress in network traffic forecasting research. | 10.1109/TNSM.2025.3636557 |
| Remi Hendriks, Mattijs Jonker, Roland van Rijswijk-Deij, Raffaele Sommese | Load-Balancing Versus Anycast: A First Look at Operational Challenges | 2026 | Vol. 23, Issue | Routing; Internet; Routing protocols; Probes; IP networks; Costs; Tunneling; Time measurement; Source address validation; Servers; Anycast; load balancing; routing stability | Load Balancing (LB) is a routing strategy that increases performance by distributing traffic over multiple outgoing paths. In this work, we introduce a novel methodology to detect the influence of LB on anycast routing, which can be used by operators to detect networks that experience anycast site flipping, where traffic from a single client reaches multiple anycast sites. We use our methodology to measure the effects of LB-behavior on anycast routing at a global scale, covering both IPv4 and IPv6. Our results show that LB-induced anycast site flipping is widespread. The results also show our method can detect LB implementations on the global Internet, including detection and classification of Points-of-Presence (PoP) and egress selection techniques deployed by hypergiants, cloud providers, and network operators. We observe LB-induced site flipping directs distinct flows to different anycast sites with significant latency inflation. In cases with two paths between an anycast instance and a load-balanced destination, we observe an average RTT difference of 30 ms with 8% of load-balanced destinations seeing RTT differences of over 100 ms. Being able to detect these cases can help anycast operators significantly improve their service for affected clients. | 10.1109/TNSM.2025.3636785 |
| Aruna Malik, Sandeep Verma, Samayveer Singh, Rajeev Kumar, Neeraj Kumar | Greylag Goose-Based Optimized Cluster Routing for IoT-Based Heterogeneous Wireless Sensor Networks | 2026 | Vol. 23, Issue | Wireless sensor networks; Energy consumption; Clustering algorithms; Energy efficiency; Routing; Internet of Things; Heuristic algorithms; Sensors; Genetic algorithms; Throughput; Internet of Things; energy efficiency; greylag goose optimization; cluster head; network-lifetime | Optimization algorithms are crucial for energy-efficient routing in Internet of Things (IoT)-based Wireless Sensor Networks (WSNs) because they help minimize energy consumption, reduce communication overhead, and improve overall network performance. By optimizing the routing paths and scheduling data transmission, these algorithms can prolong network lifetime by efficiently managing the limited energy resources of sensor nodes, ensuring reliable data delivery while conserving energy. In this work, we present Greylag Goose-based Optimized Clustering (GGOC), which aids in selecting the Cluster Head (CH) using the proposed critical fitness parameters. These parameters include residual energy, sensor sensing range, distance of a candidate node from the sink, number of neighboring nodes, and energy consumption rate. Simulation analysis shows that the proposed approach improves various performance metrics, namely network lifetime, stability period, throughput, the network’s remaining energy, and the number of clusters formed. | 10.1109/TNSM.2025.3627535 |
| Seokwon Kang, Md. Shirajum Munir, Choong Seon Hong | Intelligent Supply Chain for Communication Navigation and Caching in Multi-UAV Wireless Network | 2026 | Vol. 23, Issue | Autonomous aerial vehicles; Resource management; Wireless networks; Decision making; Collaboration; Base stations; Real-time systems; Three-dimensional displays; Prevention and mitigation; Navigation; Multi-agent deep reinforcement learning; joint optimization; resource allocation; caching prediction | The emergence of uncrewed aerial vehicles (UAVs) in wireless communication has opened up new prospects for improving network performance and user experience. This study aims to create a smart supply chain that enables collaborative communication, navigation, and caching in multi-UAV wireless networks. To achieve this, a user-UAV clustering strategy using the Whale Optimization Algorithm (WOA) optimizes resource distribution and network management. Additionally, Multi-Agent Deep Deterministic Policy Gradients (MADDPG) jointly optimize UAV trajectory planning, bandwidth allocation, and caching decisions, enabling UAVs to enhance trajectory, allocate resources, and manage cached content efficiently. Experimental results show a 10% reduction in average response time and a 12% decrease in system energy consumption, demonstrating the efficiency of the proposed approach in improving data delivery and extending UAV lifespan for sustainable network operations. | 10.1109/TNSM.2025.3629802 |
| Jiachen Liang, Yang Du, He Huang, Yu-E Sun, Guoju Gao, Yonglong Luo | Memory-Efficient and Hardware-Friendly Sketches for Hierarchical Heavy Hitter Detection | 2026 | Vol. 23, Issue | IP networks; Memory management; Detectors; Accuracy; High-speed networks; Throughput; Hardware; Electronic mail; Telecommunication traffic; Periodic structures; Sketch; hierarchical heavy hitter; network traffic measurement | Identifying the hierarchical heavy hitters (HHHs), i.e., the frequent aggregated flows based on common IP prefixes, is a vital task in network traffic measurement and security. Existing methods typically employ dynamic trie structures to track numerous prefixes or utilize multiple separate sketch instances, one for each hierarchical level, to capture HHHs across different levels, while both approaches suffer from low memory efficiency and limited compatibility with programmable switches. In this paper, we introduce two novel HHH detection solutions, the Hierarchical Heavy Detector (HHD) and the Compressed Hierarchical Heavy Detector (CHHD), to achieve high memory efficiency and enhanced hardware compatibility. The key idea of HHD is a shared bucket array structure that identifies and records HHHs from all hierarchical levels, which avoids the memory wastage of maintaining separate sketches and allows feasible deployment of both byte-hierarchy and bit-hierarchy HHH detection on programmable switches using minimal processing stage resources. Additionally, HHD utilizes a sampling-based update strategy to effectively balance packet processing speed and detection accuracy. Furthermore, we present CHHD, which enhances HHH detection in bit hierarchies through a more compact cell structure that compresses several ancestor and descendant prefixes within a single cell, further boosting memory efficiency and accuracy. We have implemented HHD and CHHD on a P4-based programmable switch with limited switch resources. Experimental results based on real-world Internet traces demonstrate that HHD and CHHD outperform the state-of-the-art by achieving up to 56 percentage points higher detection precision and $2.6\times $ higher throughput. | 10.1109/TNSM.2025.3635692 |
| Deqiang Zhou, Xinsheng Ji, Wei You, Hang Qiu, Yu Zhao, Mingyan Xu | Intent-Based Automatic Security Enhancement Method Toward Service Function Chain | 2026 | Vol. 23, Issue | Security Translation Servers Adaptation models Automation Virtual private networks Firewalls (computing) Quality of service Network security Network function virtualization SFC security intent automatic security enhancement network security function diverse requirements | The reliance on Network Function Virtualization (NFV) and Software-Defined Networking (SDN) introduces a wide variety of security risks in Service Function Chains (SFCs), necessitating automated security measures to safeguard ongoing service delivery. To address the security risks faced by online SFCs and the shortcomings of traditional manual configuration, we introduce Intent-Based Networking (IBN) for the first time to propose an automatic security enhancement method that embeds Network Security Functions (NSFs). However, the diverse security and performance requirements of SFCs pose significant challenges to translating intents into NSF embedding schemes, in two main respects. In the logical orchestration stage, the NSF composition, consisting of NSF sets and their logical embedding locations, significantly impacts the security effect; a security intent language model, a formalized method, is therefore proposed to express security intents. Additionally, an NSF Embedding Model Generation Algorithm (EMGA) is designed to determine the NSF composition using an NSF capability label model and an NSF collaboration model, and this composition is further formulated as an NSF embedding model. In the physical embedding stage, the differentiated service requirements among SFCs make the NSF embedding model obtained by EMGA a multi-objective optimization problem with variable objectives. Therefore, an Adaptive Security-aware Embedding Algorithm (ASEA) featuring an adaptive link-weight mapping mechanism is proposed to find optimal NSF embedding schemes. This enables the automatic translation of security intents into NSF embedding schemes, ensuring that security requirements are met and service performance is guaranteed. We develop a system instance to verify the feasibility of the intent translation solution, and extensive evaluations demonstrate that the ASEA algorithm outperforms existing works in diverse requirement scenarios. | 10.1109/TNSM.2025.3635228 |
| Ru Huo, Xiangfeng Cheng, Chuang Sun, Tao Huang | A Cluster-Based Data Transmission Strategy for Blockchain Network in the Industrial Internet of Things | 2026 | Vol. 23, Issue | Blockchains Industrial Internet of Things Edge computing Data communication Computer architecture Topology Cloud computing Industrial Internet of Things (IIoT) blockchain edge computing clustering data transmission strategy | The proliferation of devices and data in the Industrial Internet of Things (IIoT) has rendered the traditional centralized cloud model unable to meet the stringent wide-scale and low-latency requirements of IIoT scenarios. As an emerging technology, edge computing enables real-time processing and analysis on devices situated closer to the data source while reducing bandwidth requirements. Blockchain, being decentralized, can enhance data security. Therefore, edge computing and blockchain are integrated in the IIoT to reduce latency and improve security. However, the inefficient data transmission of blockchain leads to increased transmission latency in the IIoT. To address this issue, we propose a cluster-based data transmission strategy (CDTS) for blockchain networks. Initially, an improved weighted label propagation algorithm (WLPA) is proposed for clustering blockchain nodes. Subsequently, a spanning tree topology construction (STTC) is designed to simplify the blockchain network topology based on these node clustering results. Additionally, leveraging the clustered nodes and tree topology, we propose a data transmission strategy to speed up data transmission. Simulation experiments show that CDTS effectively reduces data transmission time and better supports large-scale IIoT scenarios. | 10.1109/TNSM.2024.3387120 |