Last updated: 2026-01-07 05:01 UTC
Number of pages: 154
| Author(s) | Title | Year | Publication | Keywords | Abstract | DOI |
|---|---|---|---|---|---|---|
| Wencheng Chen, Jun Wang, Jeng-Shyang Pan, R. Simon Sherratt, Jin Wang | Enhancing the Delegated Proof of Stake Consensus Mechanism for Secure and Efficient Data Storage in the Industrial Internet of Things | 2026 | Early Access | Industrial Internet of Things; Security; Games; Consensus protocol; Optimization; Analytical models; Proof of stake; Memory; Fifth Industrial Revolution; Reliability; Industrial Internet of Things (IIoT); Blockchain; Data storage; Delegated Proof of Stake (DPoS); Consensus mechanism | The rapid advancement of Industry 5.0 has accelerated the adoption of the Industrial Internet of Things (IIoT). However, challenges such as data privacy breaches, malicious attacks, and the absence of trustworthy mechanisms continue to hinder its secure and efficient operation. To overcome these issues, this paper proposes an enhanced blockchain-based data storage framework and systematically improves the Delegated Proof of Stake (DPoS) consensus mechanism. A four-party evolutionary game model is developed, involving agent nodes, voting nodes, malicious nodes, and supervisory nodes, to comprehensively analyze the dynamic effects of key factors, including bribery intensity, malicious costs, supervision, and reputation mechanisms, on system stability. Furthermore, novel incentive and punishment strategies are introduced to foster node collaboration and suppress malicious behaviors. The simulation results show that the improved DPoS mechanism achieves significant enhancements across multiple performance dimensions. Under high-load conditions, the system increases transaction throughput by approximately 5%, reduces consensus latency, and maintains stable operation even as the network scale expands. In adversarial scenarios, the double-spending attack success rate decreases to about 2.6%, indicating strengthened security resilience. In addition, the convergence of strategy evolution is notably accelerated, enabling the system to reach cooperative and stable states more efficiently. These results demonstrate that the proposed mechanism effectively improves the efficiency, security, and dynamic stability of IIoT data storage systems, providing strong support for reliable operation in complex industrial environments. | 10.1109/TNSM.2025.3650612 |
| Yeryeong Cho, Sungwon Yi, Soohyun Park | Joint Multi-Agent Reinforcement Learning and Message-Passing for Resilient Multi-UAV Networks | 2026 | Early Access | Servers; Heuristic algorithms; Autonomous aerial vehicles; Training; Surveillance; Reliability; Training data; Reinforcement learning; Resource management; Resilience; Multi-Agent System (MAS); Reinforcement Learning (RL); Communication Graph; Message Passing; Resilient Communication Network; Unmanned Aerial Vehicle (UAV); UAVs Networks | This paper introduces a novel resilient algorithm designed for distributed unmanned aerial vehicles (UAVs) in dynamic and unreliable network environments. Initially, the UAVs are trained via multi-agent reinforcement learning (MARL) for autonomous mission-critical operations and are fundamentally grounded in centralized training and decentralized execution (CTDE) using a centralized MARL server. In this situation, it is crucial to consider the case where several UAVs cannot receive CTDE-based MARL learning parameters for resilient operations in unreliable network conditions. To tackle this issue, a communication graph is used whose edges are established when two UAVs/nodes are communicable. Then, the edge-connected UAVs can share their training data if one of the UAVs cannot be connected to the CTDE-based MARL server under unreliable network conditions. Additionally, the edge cost accounts for power efficiency. Based on this communication graph, message-passing is used to elect the UAVs that can provide their MARL learning parameters to their edge-connected peers. Lastly, performance evaluations demonstrate the superiority of our proposed algorithm in terms of power efficiency and resilient UAV task management, outperforming existing benchmark algorithms. | 10.1109/TNSM.2025.3650697 |
| Yilu Chen, Ye Wang, Ruonan Li, Yujia Xiao, Lichen Liu, Jinlong Li, Yan Jia, Zhaoquan Gu | TrafficAudio: Audio Representation for Lightweight Encrypted Traffic Classification in IoT | 2026 | Early Access | | Encrypted traffic classification has become a crucial task for network management and security with the widespread adoption of encrypted protocols across the Internet and the Internet of Things. However, existing methods often rely on discrete representations and complex models, which leads to incomplete feature extraction, limited fine-grained classification accuracy, and high computational costs. To this end, we propose TrafficAudio, a novel encrypted traffic classification method based on audio representation. TrafficAudio comprises three modules: audio representation generation (ARG), audio feature extraction (AFE), and spatiotemporal traffic classification (STC). Specifically, the ARG module first represents raw network traffic as audio to preserve the temporal continuity of the traffic. Then, the audio is processed by the AFE module to compute low-dimensional Mel-frequency cepstral coefficients (MFCC), encoding both temporal and spectral characteristics. Finally, spatiotemporal features are extracted from MFCC through a parallel architecture of one-dimensional convolutional neural network and bidirectional gated recurrent unit layers, enabling fine-grained traffic classification. Experiments on five public datasets across six classification tasks demonstrate that TrafficAudio consistently outperforms ten state-of-the-art baselines, achieving accuracies of 99.74%, 98.40%, 99.76%, 99.25%, 99.77%, and 99.74%. Furthermore, TrafficAudio significantly reduces computational complexity, achieving reductions of 86.88% in floating-point operations and 43.15% in model parameters over the best-performing baseline. | 10.1109/TNSM.2026.3651599 |
| Dongbin He, Aiqun Hu, Xiaochuan He | A Novel Reciprocal Signal Generating Method Based on Network Delay of Random Routing Protocols | 2026 | Early Access | | The growing popularity of Internet of Things (IoT) devices raises significant challenges for secure key distribution in wide-area networks. Traditional solutions often face high deployment costs or distance limitations. This paper proposes a novel method that leverages the inherent reciprocity of Internet transmission delay to achieve lightweight symmetric key distribution. The core of the method lies in the generation of reciprocal delay signals, and then in the enhancement of their randomness through randomized routing protocols and additional artificial delays. Moreover, an eavesdropping model is proposed to analyze single attackers, and a probabilistic framework is established to evaluate security limits against collusion attacks. Furthermore, error correction codes are implemented to allow the raw key to be directly used for encryption, eliminating additional communication overhead. Experimental results demonstrate a correlation coefficient of 0.97 for delay signals in a local area network (LAN), confirming strong reciprocity. On the public Internet, the proposed randomness enhancement improves the entropy by 2 bits. Similarly, the correlation coefficient between signals obtained by an eavesdropper and the legitimate party in this wide-area environment ranges from 0.02 to 0.26, indicating that the method is resilient to eavesdropping. This work demonstrates the feasibility of utilizing public network characteristics for secure key distribution among wide-area terminals. | 10.1109/TNSM.2026.3651520 |
| Hai Anh Tran, Nam-Thang Hoang | Towards Efficient and Adaptive Traffic Classification: A Knowledge Distillation-Based Personalized Federated Learning Framework | 2026 | Vol. 23, Issue | Adaptation models; Training; Federated learning; Data models; Telecommunication traffic; Computational modeling; Accuracy; Knowledge engineering; Heterogeneous networks; Knowledge transfer; Personalized federated learning; knowledge distillation; traffic classification; heterogeneous network systems; adaptive model personalization | Traffic classification plays a crucial role in optimizing network management, enhancing security, and enabling intelligent resource allocation in distributed network systems. However, traditional Federated Learning (FL) approaches struggle with domain heterogeneity, as network traffic characteristics vary significantly across different domains due to diverse infrastructure, applications, and usage patterns. This results in degraded performance when applying a single global model across all domains. To overcome this challenge, we propose KD-PFL-TC, a Knowledge Distillation-based Personalized Federated Learning framework for Traffic Classification, designed to balance global knowledge sharing with personalized model adaptation in heterogeneous network environments. Our approach leverages knowledge distillation to enable collaborative learning without directly sharing raw data, preserving privacy while mitigating the negative effects of domain shifts. Each domain refines its local model by integrating insights from a global model and peer domains while maintaining its unique traffic distribution. To further enhance performance, we introduce an adaptive distillation strategy that dynamically adjusts the influence of global, peer, and local knowledge based on the similarity between traffic distributions, ensuring optimal knowledge transfer tailored to each domain’s characteristics. Extensive experiments on real-world traffic datasets show that KD-PFL-TC maintains 88.0% accuracy under high heterogeneity (vs. 75.0% for FedAvg) while reducing communication overhead by ~60%, delivering an efficient and robust solution for large-scale, heterogeneous networks. | 10.1109/TNSM.2025.3629241 |
| Ning Zhao, Dongke Zhao, Huiyan Zhang, Yongchao Liu, Liang Zhang | Resilient Dynamic Event-Triggered Fuzzy Tracking Control for Nonlinear Systems Under Hybrid Attacks | 2026 | Vol. 23, Issue | Event detection; Fuzzy systems; Denial-of-service attack; Stability analysis; Nonlinear systems; Communication channels; Wireless networks; Resists; Multi-agent systems; Fuzzy sets; Takagi–Sugeno fuzzy systems; deception attacks; denial-of-service attacks; tracking control; resilient event-triggered strategy | This article investigates the issue of event-triggered tracking control for Takagi–Sugeno fuzzy systems subject to hybrid attacks. First, the deception attacks occurring on the feedback channel are considered using a Bernoulli process, in which an attacker injects state-dependent malicious signals. Next, the minimal ‘silent’ and maximal ‘active’ periods are defined to describe the duration of aperiodic denial-of-service (DoS) attacks. To take advantage of communication bandwidth and resist DoS attacks, a sampled data-based resilient dynamic event-triggered strategy is designed. Then, an event-based fuzzy tracking controller is designed to guarantee the stability of the error system under hybrid attacks. Subsequently, sufficient conditions for the stability analysis are proposed by utilizing a fuzzy-basis-dependent Lyapunov-Krasovskii functional. Meanwhile, the control gains and event-triggering parameters are co-designed by applying linear matrix inequalities. Furthermore, the proposed method is extended to address the tracking control problem of multi-agent systems. Finally, the feasibility of the presented approach is validated by two examples. | 10.1109/TNSM.2025.3625395 |
| F. Busacca, L. Galluccio, S. Palazzo, A. Panebianco, R. Raftopoulos | Bandits Under the Waves: A Fully-Distributed Multi-Armed Bandit Framework for Modulation Adaptation in the Internet of Underwater Things | 2026 | Vol. 23, Issue | Throughput; Scalability; Training; Propagation losses; Mathematical models; Energy consumption; Adaptation models; Absorption; Support vector machines; Internet; Underwater communications; underwater modulation adaptation; reinforcement learning; multi-player multi-armed bandit | Acoustic communications are the most exploited technology in the so-called Internet of Underwater Things (IoUT). UnderWater (UW) environments are often characterized by harsh propagation features, limited bandwidth, fast-varying channel conditions, and long propagation delay. On the other hand, IoUT nodes are usually battery-powered devices with limited processing capabilities. Accordingly, it is necessary to design optimization algorithms to address the challenging propagation features while balancing them with the limited device capabilities. To address the constraints of the nodes in energy and processing resources, it is crucial to adjust the transmission parameters based on the channel conditions while also developing communication procedures that are both lightweight and energy-efficient. In this work, we introduce a novel Multi-Player Multi-Armed Bandit (MP-MAB) framework for modulation adaptation in Multi-Hop IoUT Acoustic Networks. As opposed to widely used, computation-demanding Deep Reinforcement Learning (DRL) techniques, MP-MAB algorithms are simple and lightweight and allow decisions to be made iteratively by selecting one among multiple choices, or arms. The framework is fully-distributed and is able to dynamically select the best modulation technique at each IoUT node by leveraging high-level statistics (e.g., network throughput), without the need to exploit hard-to-extract channel features (e.g., channel state). We evaluate the performance of the proposed framework using the DESERT UW simulator and compare it with state-of-the-art centralized DRL-based solutions for cognitive and heterogeneous networks, namely DRL-MCS, DRL-AM, PPO, SAC, as well as with a multi-agent, distributed version of PPO. The results highlight that, despite its simplicity and fully-distributed nature, the proposed framework achieves superior performance in UW networks in terms of throughput, convergence speed, and energy efficiency. Compared to DRL-MCS and DRL-AM, our approach improves network throughput by up to 33% and 20%, respectively, and reduces energy consumption by up to 18% and 16%. When compared to PPO, SAC, and Multi-PPO, the proposed solution achieves up to 11%, 34%, and 38% higher throughput, and up to 7%, 17%, and 33% lower energy consumption, respectively. | 10.1109/TNSM.2025.3629240 |
| Leonardo Lo Schiavo, Genoveva García, Marco Gramaglia, Marco Fiore, Albert Banchs, Xavier Costa-Perez | The TES Framework: Joint Statistical Modeling and Machine Learning for Network KPI Forecasting | 2026 | Vol. 23, Issue | Predictive models; Forecasting; Time series analysis; Adaptation models; Load modeling; Deep learning; Autonomous networks; Context modeling; Accuracy; Transformers; Forecasting; prediction; mobile traffic; network KPI; network management; neural networks; statistical modeling | The vision of intelligent networks capable of automatically configuring crucial parameters for tasks such as resource provisioning, anomaly detection or load balancing largely hinges upon efficient AI-based algorithms. Time series forecasting is a fundamental building block for network-oriented AI and current trends lean towards the systematic adoption of models based on deep learning approaches. In this paper, we pave the way for a different strategy for the design of predictors for mobile network environments, and we propose the Thresholded Exponential Smoothing (TES) framework, a hybrid Statistical Modeling and Deep Learning tool that allows for improving the performance of network Key Performance Indicator (KPI) forecasting. We adapt our framework to two state-of-the-art deep learning tools for time series forecasting, based on Recurrent Neural Networks and Transformer architectures. We experiment with TES by showcasing its superior support for three practical network management use cases, i.e., (i) anticipatory allocation of network resources, (ii) mobile traffic anomaly prediction, and (iii) mobile traffic load balancing. Our results, derived from traffic measurements collected in operational mobile networks, demonstrate that the TES framework can yield substantial performance gains over current state-of-the-art predictors in the applications considered. | 10.1109/TNSM.2025.3628788 |
| Hojjat Navidan, Cristian Martín, Vasilis Maglogiannis, Dries Naudts, Manuel Díaz, Ingrid Moerman, Adnan Shahid | An End-to-End Digital Twin Framework for Dynamic Traffic Analytics in O-RAN | 2026 | Vol. 23, Issue | Open RAN; Adaptation models; Real-time systems; Biological system modeling; 5G mobile communication; Predictive models; Traffic control; Incremental learning; Anomaly detection; Data models; Digital twin; generative AI; open radio access networks; incremental learning; traffic analytics; traffic prediction; anomaly detection | Dynamic traffic patterns and shifts in traffic distribution in Open Radio Access Networks (O-RAN) pose a significant challenge for real-time network optimization in 5G and beyond. Traditional traffic analytics methods struggle to remain accurate under such non-stationary conditions, where models trained on historical data quickly degrade as traffic evolves. This paper introduces AIDITA, an AI-driven Digital Twin for Traffic Analytics framework designed to solve this problem through autonomous model adaptation. AIDITA creates a digital replica of the live analytics models running in the RAN Intelligent Controller (RIC) and continuously updates them within the digital twin using incremental learning. These updates use real-time Key Performance Metrics (KPMs) from the live network, augmented with synthetic data from a Generative AI (GenAI) component to simulate diverse network scenarios. Combining GenAI-driven augmentation with incremental learning enables traffic analytics models, such as prediction or anomaly detection, to adapt continuously without the need for full retraining, preserving accuracy and efficiency in dynamic environments. Implemented and validated on a real-world 5G testbed, our AIDITA framework demonstrates significant improvements in traffic prediction and anomaly detection use cases under distribution shifts, showcasing its practical effectiveness and adaptability for real-time network optimization in O-RAN deployments. | 10.1109/TNSM.2025.3628756 |
| Guiyun Liu, Hao Li, Lihao Xiong, Zhongwei Liang, Xiaojing Zhong | Attention-Model-Based Multiagent Reinforcement Learning for Combating Malware Propagation in Internet of Underwater Things | 2026 | Vol. 23, Issue | Malware; Mathematical models; Predictive models; Optimal control; Prediction algorithms; Adaptation models; Wireless communication; Optimization; Network topology; Vehicle dynamics; Internet of Underwater Things (IoUT); malware; fractional-order model; model-based reinforcement learning (MBRL) | Malware propagation in the Internet of Underwater Things (IoUT) can disrupt stable communications among wireless devices. Timely control over its spread is beneficial for the stable operation of IoUT. Notably, the instability of the underwater environment causes the propagation effects of malware to vary continuously. Traditional control methods cannot quickly adapt to these abrupt changes. In recent years, the rapid development of reinforcement learning (RL) has significantly advanced control schemes. However, previous RL methods relied on long-term interactions to obtain a large amount of interaction data in order to form effective strategies. Given the particularity of underwater communication media, data collection for RL in IoUT is challenging. Therefore, improving sample efficiency has become a critical issue that current RL methods need to address urgently. In this study, the Attention-Model-Based Multiagent Policy Optimization (AMBMPO) algorithm is proposed to achieve efficient use of data samples. First, the algorithm employs an explicit prediction model to reduce the dependence on a precise model. Second, an attention mechanism network is designed to capture high-dimensional state sequences, thereby reducing the compound errors during policy training. Finally, the proposed method is validated on optimal control problems and compared with verified benchmarks. The experimental results show that, compared with existing advanced RL algorithms, AMBMPO demonstrates significant advantages in sample efficiency and stability. This work effectively controls the spread of malware in underwater systems through an interactive-evolution-based approach. It provides a new implementation approach for ensuring the safety of underwater systems in deep-sea exploration and environmental monitoring applications. | 10.1109/TNSM.2025.3628881 |
| Ke Gu, Jiaqi Lei, Jingjing Tan, Xiong Li | A Verifiable Federated Learning Scheme With Privacy-Preserving in MCS | 2026 | Vol. 23, Issue | Federated learning; Sensors; Servers; Security; Training; Protocols; Privacy; Homomorphic encryption; Computational modeling; Mobile computing; Mobile crowd sensing; verifiable federated learning; privacy-preserving; sampling verification | The popularity of edge smart devices and the explosive growth of generated data have driven the development of mobile crowd sensing (MCS). Also, federated learning (FL), as a new paradigm of privacy-preserving distributed machine learning, integrates with MCS to offer a novel approach for processing large-scale edge device data. However, it also brings about many security risks. In this paper, we propose a verifiable federated learning scheme with privacy preservation for mobile crowd sensing. In our federated learning scheme, the double-layer random mask partition method combined with homomorphic encryption is constructed to protect the local gradients and enhance system security (strong anti-collusion ability) based on the multi-cluster structure of federated learning. Also, a sampling verification mechanism is proposed to allow the mobile sensing clients to quickly and efficiently verify the correctness of their received gradient aggregation results. Further, a dropout handling mechanism is constructed to improve the robustness of mobile crowd sensing-based federated learning. Related experimental results demonstrate that our verifiable federated learning scheme is effective and efficient in mobile crowd sensing environments. | 10.1109/TNSM.2025.3627581 |
| Aruna Malik, Sandeep Verma, Samayveer Singh, Rajeev Kumar, Neeraj Kumar | Greylag Goose-Based Optimized Cluster Routing for IoT-Based Heterogeneous Wireless Sensor Networks | 2026 | Vol. 23, Issue | Wireless sensor networks; Energy consumption; Clustering algorithms; Energy efficiency; Routing; Internet of Things; Heuristic algorithms; Sensors; Genetic algorithms; Throughput; Internet of Things; energy efficiency; greylag goose optimization; cluster head; network-lifetime | Optimization algorithms are crucial for energy-efficient routing in Internet of Things (IoT)-based Wireless Sensor Networks (WSNs) because they help minimize energy consumption, reduce communication overhead, and improve overall network performance. By optimizing the routing paths and scheduling data transmission, these algorithms can prolong network lifetime by efficiently managing the limited energy resources of sensor nodes, ensuring reliable data delivery while conserving energy. In this work, we present Greylag Goose-based Optimized Clustering (GGOC), which aids in selecting the Cluster Head (CH) using the proposed critical fitness parameters. These parameters include residual energy, sensor sensing range, distance of a candidate node from the sink, number of neighboring nodes, and energy consumption rate. Simulation analysis shows that the proposed approach improves various performance metrics, namely network lifetime, stability period, throughput, the network’s remaining energy, and the number of clusters formed. | 10.1109/TNSM.2025.3627535 |
| Zhengge Yi, Tengyao Li, Meng Zhang, Xiaoyun Yuan, Shaoyong Du, Xiangyang Luo | An Efficient Website Fingerprinting for New Websites Emerging Based on Incremental Learning | 2026 | Vol. 23, Issue | Incremental learning; Fingerprint recognition; Data models; Monitoring; Accuracy; Deep learning; Adaptation models; Training; Telecommunication traffic; Feature extraction; Website fingerprinting; Tor anonymous network; traffic analysis; incremental learning | Website fingerprinting (WF) attacks leverage encrypted traffic features to identify specific services accessed by users within anonymity networks such as Tor. Although existing WF methods achieve high accuracy on static datasets using deep learning techniques, they struggle in dynamic environments where anonymous websites continually evolve. These methods typically require full retraining on composite datasets, resulting in substantial computational and storage burdens, and are particularly vulnerable to classification bias caused by data imbalance and concept drift. To address these challenges, we propose EIL-WF, a dynamic WF framework based on incremental learning that enables efficient adaptation to newly emerging websites without the need for full retraining. EIL-WF incrementally trains lightweight, independent classifiers for new website classes and integrates them through classifier normalization and energy alignment strategies grounded in energy-based model theory, thereby constructing a unified and robust classification model. Comprehensive experiments on two public Tor traffic datasets demonstrate that EIL-WF outperforms existing incremental learning methods by 6.2%–20.2% in identifying new websites and reduces catastrophic forgetting by 5.4%–20%. Notably, EIL-WF exhibits strong resilience against data imbalance and concept drift, maintaining stable classification performance across evolving distributions. Furthermore, EIL-WF decreases training time during model updates by 2–3 orders of magnitude, demonstrating substantial advantages over conventional full retraining paradigms. | 10.1109/TNSM.2025.3627441 |
| Shaocong Feng, Baojiang Cui, Junsong Fu, Meiyi Jiang, Shengjia Chang | Adaptive Target Device Model Identification Attack in 5G Mobile Network | 2026 | Vol. 23, Issue | Object recognition; Adaptation models; 5G mobile communication; Atmospheric modeling; Security; Communication channels; Mobile handsets; Radio access networks; Feature extraction; Baseband; 5G; device model; GUTI; EPSFB; UE capability | Enhanced system capacity is one of the goals of 5G. This will lead to massive heterogeneous devices in mobile networks. Mobile devices that lack basic security capabilities have chipset, operating system, or software vulnerabilities. Attackers can perform Advanced Persistent Threat (APT) attacks against specific device models. In this paper, we propose an Adaptive Target Device Model Identification Attack (ATDMIA) that provides the prior knowledge for exploiting baseband vulnerabilities to perform targeted attacks. We discovered the Globally Unique Temporary Identity (GUTI) Reuse in Evolved Packet Switching Fallback (EPSFB) and Leakage of User Equipment (UE) Capability vulnerabilities. Utilizing silent calls, an attacker can capture and correlate the signaling traces of the target subscriber from the air interface within a specific geographic area. In addition, we design an adaptive identification algorithm which utilizes both invisible and explicit features of UE capability information to efficiently identify device models. We conducted an empirical study using 105 commercial devices, including network configuration, attack efficiency, time overhead and open-world evaluation experiments. The experimental results showed that ATDMIA can accurately correlate the EPSFB signaling traces of the target victim and effectively identify the device model or manufacturer. | 10.1109/TNSM.2025.3626804 |
| Andrea Detti, Alessandro Favale | Cost-Effective Cloud-Edge Elasticity for Microservice Applications | 2026 | Vol. 23, Issue | Microservice architectures; Cloud computing; Data centers; Load management; Costs; Frequency modulation; Delays; Analytical models; Edge computing; Telemetry; Edge computing; microservices applications; service meshes | Microservice applications, composed of independent containerized components, are well-suited for hybrid cloud–edge deployments. In such environments, placing microservices at the edge can reduce latency but incurs significantly higher resource costs compared to the cloud. This paper addresses the problem of selectively replicating microservices at the edge to ensure that the average user-perceived delay remains below a configurable threshold, while minimizing total deployment cost under a pay-per-use model for CPU, memory, and network traffic. We propose a greedy placement strategy based on a novel analytical model of delay and cost, tailored to synchronous request/response applications in cloud–edge topologies with elastic resource availability. The algorithm leverages telemetry and load balancing capabilities provided by service mesh frameworks to guide edge replication decisions. The proposed approach is implemented in an open-source Kubernetes controller, the Geographical Microservice Autoplacer (GMA), which integrates seamlessly with Istio and Horizontal Pod Autoscalers. GMA automates telemetry collection, cost-aware decision making, and geographically distributed placement. Its effectiveness is demonstrated through simulation and real testbed deployment. | 10.1109/TNSM.2025.3627155 |
| Fabian Graf, David Pauli, Michael Villnow, Thomas Watteyne | Management of 6TiSCH Networks Using CORECONF: A Clustering Use Case | 2026 | Vol. 23, Issue | Protocols; IEEE 802.15 Standard; Reliability; Wireless sensor networks; Runtime; Wireless communication; Interference; Wireless fidelity; Monitoring; Job shop scheduling; 6TiSCH; CORECONF; IEEE 802.15.4; clustering | Industrial low-power wireless sensor networks demand high reliability and adaptability to cope with dynamic environments and evolving network requirements. While the 6TiSCH protocol stack provides reliable low-power communication, the CoAP Management Interface (CORECONF) for runtime management remains underutilized. In this work, we implement CORECONF and introduce clustering as a practical use case. We implement a cluster formation mechanism aligned with the Routing Protocol for Low-Power and Lossy Networks (RPL) and adjust the TSCH channel-hopping sequence within the established clusters. Two use cases are presented. First, CORECONF is used to mitigate external Wi-Fi interference by forming a cluster with a modified channel set that excludes the affected frequencies. Second, CORECONF is employed to create a priority cluster of sensor nodes that require higher reliability and reduced latency, such as those monitoring critical infrastructure in industrial settings. Simulation results show significant improvements in latency, while practical experiments demonstrate a reduction in overall network charge consumption from approximately 50 mC per hour to 23 mC per hour, by adapting the channel set within the interference-affected cluster. | 10.1109/TNSM.2025.3627112 |
| Samayveer Singh, Aruna Malik, Vikas Tyagi, Rajeev Kumar, Neeraj Kumar, Shakir Khan, Mohd Fazil | Dynamic Energy Management in Heterogeneous Sensor Networks Using Hippopotamus-Inspired Clustering | 2026 | Vol. 23, Issue | Wireless sensor networks; Clustering algorithms; Optimization; Heuristic algorithms; Routing; Energy efficiency; Protocols; Scalability; Genetic algorithms; Batteries; Internet of Things; energy efficiency; cluster head; network-lifetime | The rapid expansion of smart technologies and IoT has made Wireless Sensor Networks (WSNs) essential for real-time applications such as industrial automation, environmental monitoring, and healthcare. Despite advances in sensor node technology, energy efficiency remains a key challenge due to the limited battery life of nodes, which often operate in remote environments. Effective clustering, where Cluster Heads (CHs) manage data aggregation and transmission, is crucial for optimizing energy use. Motivated by the above, in this paper, we introduce a novel metaheuristic approach called Hippopotamus Optimization-Based Cluster Head Selection (HO-CHS), designed to enhance CH selection by dynamically considering factors such as residual energy, node location, and network topology. Inspired by natural behaviors, HO-CHS effectively balances energy loads, reduces communication distances, and boosts network scalability and reliability. The proposed scheme achieves a 35% increase in network lifetime and a 40% improvement in stability period in comparison to the other existing schemes in the literature. Simulation results demonstrate that HO-CHS significantly reduces energy consumption and enhances data transmission efficiency, making it ideal for IoT-enabled consumer electronics networks requiring consistent performance and energy conservation. | 10.1109/TNSM.2025.3618766 |
| Haiyuan Li, Yuelin Liu, Hari Madhukumar, Amin Emami, Xueqing Zhou, Yulei Wu, Xenofon Vasilakos, Shuangyi Yan, Dimitra Simeonidou | Incremental DRL-Based Resource Management for Dynamic Network Slicing in an Urban-Wide Testbed | 2026 | Vol. 23, Issue | Resource management Energy consumption Servers Network slicing Heuristic algorithms Load modeling 5G mobile communication Training Dynamic scheduling Quality of service Multi-access edge computing network slicing incremental learning MADDPG testbed deployment | Multi-access edge computing provides localized resources within mobile networks to address the requirements of emerging latency-sensitive and computing-intensive applications. At the edge, dynamic requests necessitate sophisticated resource management for adaptive network slicing. This involves optimizing resource allocations, scaling functions, and load balancing to utilize only essential resources under constrained network scenarios. However, existing solutions largely assume static slice counts, ignoring the re-optimization overhead associated with management algorithms when slices fluctuate. Moreover, many approaches rely on simplified energy models that overlook intertemporal resource scheduling and are predominantly evaluated through simulations, neglecting critical practical considerations. This paper presents an incremental cooperative Multi-Agent Deep Deterministic Policy Gradient (MADDPG) algorithm for resource management in dynamic edge slicing. The proposed approach optimizes long-term slicing benefits by reducing delay and energy consumption while minimizing retraining overhead in response to slice variations. Furthermore, we implement an urban-wide edge computing testbed based on OpenStack and Kubernetes to validate the algorithm’s performance. Experimental results demonstrate that our incremental MADDPG method outperforms benchmark strategies in aggregated slicing utility and reduces training energy consumption by up to 50% compared to the re-optimization approach. | 10.1109/TNSM.2025.3633927 |
| Meixia Miao, Peihong Qiang, Siqi Zhao, Jiawei Li, Guohua Tian, Jianghong Wei | Verifiable Data Streaming Protocol Supporting Keyword Queries | 2026 | Vol. 23, Issue | Protocols Cloud computing Trees (botanical) Security Servers Indexes Cryptography Outsourcing Hash functions Databases Verifiable data streaming prefix tree chameleon authentication tree keyword query | The rapid deployment of emerging networks, such as the Internet of Things and cloud computing, has generated massive amounts of data. Data streaming is significant among these various data types due to its widespread use in many critical applications, such as gene sequencing, network intrusion detection, and stock trading. On the other hand, the continuously increasing size of data streaming makes it impractical to store and manage the data locally, especially for resource-constrained devices. Outsourcing the data streaming to cloud servers provides an ideal solution to the above storage issue. However, this raises the problem of how to guarantee the integrity of the outsourced data, as cloud servers may maliciously modify the data. To this end, the primitive of verifiable data streaming (VDS) was introduced to preserve the integrity of the outsourced data streaming, enabling data users to ensure that queried data items, including the contents and corresponding positions, are correct. Although many VDS protocols have been proposed, most can only use the position index to query outsourced data streaming. Consequently, they fail to fulfill the requirements of those practical applications that need keyword queries. For example, in the setting of network intrusion detection, the data analyst would like to query all access records from the same IP address. In this paper, we extend the original VDS protocol to support keyword queries, i.e., allowing data users to retrieve outsourced data items with particular keywords. Specifically, we use a prefix tree to maintain keywords and another chameleon authentication tree to store data items. The two trees are bound together with cryptographic query proofs, ensuring the consistency between the position index and keyword queries. The proposed VDS protocol, which supports keyword queries, is proven secure in the standard model and outperforms previous VDS protocols in terms of functionality. The experimental results indicate that our proposal is also efficient and practical. | 10.1109/TNSM.2025.3629071 |
| Haftay Gebreslasie Abreha, Ilora Maity, Youssouf Drif, Christos Politis, Symeon Chatzinotas | Revenue-Aware Seamless Content Distribution in Satellite-Terrestrial Integrated Networks | 2026 | Vol. 23, Issue | Satellites Topology User experience Network topology Delays Real-time systems Optimization Low earth orbit satellites Collaboration Servers Satellite edge computing (SEC) content caching content distribution dynamic ad insertion | With the surging demand for data-intensive applications, ensuring seamless content delivery in Satellite-Terrestrial Integrated Networks (STINs) is crucial, especially for remote users. Dynamic Ad Insertion (DAI) enhances monetization and user experience, while Mobile Edge Computing (MEC) in STINs enables distributed content caching and ad insertion. However, satellite mobility and time-varying topologies cause service disruptions, while excessive or poorly placed ads risk user disengagement, impacting revenue. This paper proposes a novel framework that jointly addresses three challenges: (i) service continuity- and topology-aware content caching to adapt to STIN dynamics, (ii) Distributed DAI (D-DAI) that minimizes feeder link load and storage overhead by avoiding redundant ad-variant content storage through distributed ad stitching, and (iii) revenue-aware content distribution that explicitly models user disengagement due to ad overload to balance monetization and user satisfaction. We formulate the problem as two hierarchical Integer Linear Programming (ILP) optimizations: one optimizing content caching to maximize the cache hit rate and another optimizing content distribution with DAI to maximize revenue, minimize end-user costs, and enhance user experience. We develop greedy algorithms for fast initialization and a Binary Particle Swarm Optimization (BPSO)–based strategy for enhanced performance. Simulation results demonstrate that the proposed approach achieves over a 4.5% increase in revenue and reduces cache retrieval delay by more than 39% compared to the benchmark algorithms. | 10.1109/TNSM.2025.3629810 |
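Several entries above rely on swarm-based metaheuristics for placement and clustering decisions; the last entry, for instance, applies Binary Particle Swarm Optimization (BPSO) to content distribution. As a minimal sketch of the generic BPSO technique only, the toy below selects which items to cache under a capacity limit; the function name, the popularity-weighted objective, the penalty term, and all parameter values are illustrative assumptions and not taken from any of the listed papers.

```python
import math
import random

def bpso_cache_placement(popularity, capacity, n_particles=20, iters=100, seed=0):
    """Toy Binary PSO: evolve binary vectors (bit d = 1 means item d is cached)
    to maximize popularity-weighted cache hits under a capacity constraint.
    All parameter choices here are illustrative, not tuned."""
    rng = random.Random(seed)
    n = len(popularity)

    def fitness(x):
        hits = sum(p for p, b in zip(popularity, x) if b)
        over = sum(x) - capacity
        # Penalize placements that exceed the cache capacity.
        return hits if over <= 0 else hits - 10.0 * over

    # Random initial positions, zero velocities, personal/global bests.
    X = [[rng.randint(0, 1) for _ in range(n)] for _ in range(n_particles)]
    V = [[0.0] * n for _ in range(n_particles)]
    pbest = [x[:] for x in X]
    pbest_f = [fitness(x) for x in X]
    g = max(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]

    w, c1, c2 = 0.7, 1.5, 1.5  # inertia and acceleration coefficients
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(n):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (pbest[i][d] - X[i][d])
                           + c2 * rng.random() * (gbest[d] - X[i][d]))
                # Sigmoid transfer: velocity sets the probability of bit = 1.
                prob = 1.0 / (1.0 + math.exp(-V[i][d]))
                X[i][d] = 1 if rng.random() < prob else 0
            f = fitness(X[i])
            if f > pbest_f[i]:
                pbest[i], pbest_f[i] = X[i][:], f
                if f > gbest_f:
                    gbest, gbest_f = X[i][:], f
    return gbest, gbest_f
```

For example, `bpso_cache_placement([5.0, 1.0, 4.0, 2.0, 3.0], capacity=2)` searches for the two most popular items to cache; a real deployment would replace the toy objective with the paper-specific revenue or hit-rate model and add the corresponding constraints.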