Last updated: 2025-11-12 05:01 UTC
All documents
Number of pages: 150
| Author(s) | Title | Year | Publication | Keywords | Abstract | DOI |
|---|---|---|---|---|---|---|
| Dániel Unyi, Ernő Rigó, Bálint Gyires-Tóth, Róbert Lovas | Explainable GNN-Based Approach to Fault Forecasting in Cloud Service Debugging | 2025 | Early Access | Debugging; Microservice architectures; Cloud computing; Reliability; Observability; Computer architecture; Graph neural networks; Monitoring; Probabilistic logic; Fault diagnosis; Software debugging; Deep learning; Explainable AI; Fault prediction | Debugging cloud services is increasingly challenging due to their distributed, dynamic, and scalable nature. Traditional methods struggle to handle large state spaces and the complex interactions between microservices, making it difficult to diagnose failures and identify critical components. This paper presents a Graph Neural Network (GNN)-based approach that enhances cloud service debugging by predicting system-level fault probabilities and providing interpretable insights into failure propagation. Our method models microservice interactions as graphs, where failures propagate probabilistically. Using Markov Decision Processes (MDPs), we simulate failure behaviors, capturing the probabilistic dependencies that influence system reliability. The trained GNN not only predicts fault probabilities but also identifies the most failure-prone microservices and explains their impact. We evaluate our approach on various service mesh structures, including feature-enriched, tree-structured, and general directed acyclic graph (DAG) architectures. Results indicate that our method is effective in the operational phase of cloud services, enabling proactive debugging and targeted optimization. This work represents a step toward more interpretable, reliable, and maintainable cloud infrastructures. | 10.1109/TNSM.2025.3602223 |
| Frkei Saleh, Abraham O. Fapojuwo, Diwakar Krishnamurthy | eSlice: Elastic Inter-Slice Resource Allocation for Smart City Applications | 2025 | Early Access | Resource management; Smart cities; 5G mobile communication; Ultra-reliable low-latency communication; Dynamic scheduling; Substrates; Network slicing; Heuristic algorithms; Real-time systems; Vehicle dynamics; Inter-slice; Smart city applications; Dynamic resource allocation | Network slicing is a fundamental enabler for the advancement of fifth generation (5G) and beyond 5G (B5G) networks, offering customized service-level agreements (SLAs) for distinct slices such as enhanced mobile broadband (eMBB), massive machine-type communications (mMTC), and ultra-reliable low-latency communication (URLLC). However, smart city applications often require multiple slices concurrently, posing significant challenges in resource allocation, service isolation, and maintaining performance guarantees. This paper presents eSlice, an elastic inter-slice resource allocation mechanism specifically designed to address the dynamic requirements of smart city applications. eSlice organizes applications into hierarchical slices, leveraging cloud-native resource scaling to dynamically adapt to real-time demands. It integrates two novel algorithms: the Proactive eSlice Allocation Algorithm (PeSAA), which ensures the fair distribution of resources across the substrate network, and the Reactive eSlice Allocation Algorithm (ReSAA), which employs Multi-Agent Reinforcement Learning (MARL) to dynamically coordinate, reallocate, and recover unused resources as network conditions evolve. Experimental results demonstrate that eSlice significantly outperforms existing methods, achieving 94.3% resource utilization in simulation-based experiments under constrained urban-scale scenarios, providing a robust solution for dynamic resource management in 5G-enabled smart city networks. | 10.1109/TNSM.2025.3604352 |
| Cheng Long, Haoming Zhang, Zixiao Wang, Yiming Zheng, Zonghui Li | FastScheduler: Polynomial-Time Scheduling for Time-Triggered Flows in TSN | 2025 | Early Access | Job shop scheduling; Network topology; Dynamic scheduling; Real-time systems; Heuristic algorithms; Delays; Ethernet; Schedules; Deep reinforcement learning; Training; Time-Sensitive Networking; Online scheduling algorithm; Industrial control | Time-Sensitive Networking (TSN) has emerged as a promising network paradigm for time-critical applications, such as industrial control, where flow scheduling is crucial to ensure low latency and determinism. As production flexibility demands increase, network topology and flow requirements may change, necessitating more efficient TSN scheduling algorithms to guarantee real-time and deterministic data transmission. In this work, we present FastScheduler, a polynomial-time, deterministic TSN scheduler, which can schedule thousands of Time-Triggered (TT) flows within arbitrary network topologies. The key innovations of FastScheduler include an Equivalent Reduction Technique to simplify the generic model while preserving the feasible scheduling space, a Deterministic Heuristic Strategy to ensure a consistent and reproducible scheduling process, and a Polynomial-Time Scheduling Algorithm to perform dynamic and real-time scheduling of periodic TT flows. Extensive experiments on various topologies show that FastScheduler can effectively simplify the model, reducing variables/constraints by 35%/62%, and schedule 1,000 TT flows in sub-second time. Furthermore, it runs 2/3 orders of magnitude faster and improves the schedulability by 12%/20% compared to heuristic/deep reinforcement learning-based methods. FastScheduler is well-suited for the dynamic requirements of industrial control networks. | 10.1109/TNSM.2025.3603844 |
| Wenjun Fan, Na Fan, Junhui Zhang, Jia Liu, Yifan Dai | Securing VNDN With Multi-Indicator Intrusion Detection Approach Against the IFA Threat | 2025 | Early Access | Monitoring; Prevention and mitigation; Electronic mail; Threat modeling; Telecommunication traffic; Fans; Blocklists; Security; Road side unit; Intrusion detection; Interest Flooding Attack; Named Data Networking; Network traffic monitoring; Denial of service | In a vehicular named data network (VNDN), an Interest Flooding Attack (IFA) can exhaust computing resources by sending a large number of malicious Interest packets, causing legitimate requests to go unsatisfied and seriously endangering the operation of the Internet of Vehicles (IoV). To solve this problem, this paper proposes a distributed network traffic monitoring-enabled multi-indicator detection and prevention approach for VNDN to detect and resist IFA attacks. To facilitate this approach, a distributed network traffic monitoring layer based on road side units (RSUs) is constructed. On top of this monitoring layer, a multi-indicator detection approach is designed around three indicators: information entropy, self-similarity, and singularity, whose thresholds are tuned according to the real-time density of traffic flow. Beyond detection, a blacklisting-based prevention approach is realized to mitigate the attack impact. We validate the proposed approach by prototyping it on our VNDN experimental platform with realistic parameter settings, leveraging the original NDN packet structure to support the Source ID required for identifying the source of each Interest packet, which consolidates the practicability of the approach. The experimental results show that our multi-indicator detection approach achieves considerably higher detection performance than any of the indicators used individually, and that the blacklisting-based prevention effectively mitigates the attack impact as well. | 10.1109/TNSM.2025.3603630 |
| Huaide Liu, Fanqin Zhou, Yikun Zhao, Lei Feng, Zhixiang Yang, Yijing Lin, Wenjing Li | Autonomous Deployment of Aerial Base Station without Network-Side Assistance in Emergency Scenarios Based on Multi-Agent Deep Reinforcement Learning | 2025 | Early Access | Heuristic algorithms; Disasters; Optimization; Estimation; Wireless communication; Collaboration; Base stations; Autonomous aerial vehicles; Adaptation models; Sensors; Aerial base station; Deep reinforcement learning; Autonomous deployment; Emergency scenarios; Multi-agent systems | The aerial base station (AeBS) is a promising technology for providing wireless coverage to ground user equipment. Traditional methods of optimizing AeBS networks often rely on pre-known distribution models of ground user equipment. However, in practical scenarios such as natural disasters or temporary large-scale public events, the distribution of user clusters is often unknown, posing challenges for the deployment and application of AeBS. To adapt to complex and unknown user environments, this paper studies a method for estimating global information from local observations and proposes a multi-agent autonomous AeBS deployment algorithm based on deep reinforcement learning (DRL). This method dynamically deploys AeBSs that autonomously identify hotspots by sensing user equipment signals without network-side assistance, providing a more comprehensive and intelligent solution for AeBS deployment. Simulation results indicate that our method effectively guides the autonomous deployment of AeBS in emergency scenarios, addressing the challenge of the lack of network-side assistance. | 10.1109/TNSM.2025.3603875 |
| Menna Helmy, Alaa Awad Abdellatif, Naram Mhaisen, Amr Mohamed, Aiman Erbad | Slicing for AI: An Online Learning Framework for Network Slicing Supporting AI Services | 2025 | Early Access | Artificial intelligence; Training; Resource management; Network slicing; Computational modeling; Optimization; Quality of service; Ultra-reliable low-latency communication; 6G mobile communication; Heuristic algorithms; Online learning; Resource allocation; 6G networks | The forthcoming 6G networks will embrace a new realm of AI-driven services that requires innovative network slicing strategies, namely slicing for AI, which involves the creation of customized network slices to meet Quality of Service (QoS) requirements of diverse AI services. This poses challenges due to time-varying dynamics of users’ behavior and mobile networks. Thus, this paper proposes an online learning framework to determine the allocation of computational and communication resources to AI services, to optimize their accuracy as one of their unique key performance indicators (KPIs), while abiding by resource, learning latency, and cost constraints. We define a problem of optimizing the total accuracy while balancing conflicting KPIs, prove its NP-hardness, and propose an online learning framework for solving it in dynamic environments. We present a basic online solution and two variations employing a pre-learning elimination method for reducing the decision space to expedite the learning. Furthermore, we propose a biased decision space subset selection by incorporating prior knowledge to enhance the learning speed without compromising performance and present two alternatives of handling the selected subset. Our results demonstrate the efficiency of the proposed solutions in converging to the optimal decisions, while reducing the decision space and improving time complexity. Additionally, our solution outperforms state-of-the-art techniques in adapting to diverse environmental dynamics and excels under varying levels of resource availability. | 10.1109/TNSM.2025.3603391 |
| Erhe Yang, Zhiwen Yu, Yao Zhang, Helei Cui, Zhaoxiang Huang, Hui Wang, Jiaju Ren, Bin Guo | Joint Semantic Extraction and Resource Optimization in Communication-Efficient UAV Crowd Sensing | 2025 | Early Access | Sensors; Autonomous aerial vehicles; Optimization; Semantic communication; Data mining; Feature extraction; Resource management; Accuracy; Data models; Data communication; UAV crowd sensing; Multi-scale dilated fusion attention; Reinforcement learning | With the integration of IoT and 5G technologies, UAV crowd sensing has emerged as a promising solution to overcome the limitations of traditional Mobile Crowd Sensing (MCS) in terms of sensing coverage. As a result, UAV crowd sensing has been widely adopted across various domains. However, existing UAV crowd sensing methods often overlook the semantic information within sensing data, leading to low transmission efficiency. To address the challenges of semantic extraction and transmission optimization in UAV crowd sensing, this paper decomposes the problem into two sub-problems: semantic feature extraction and task-oriented sensing data transmission optimization. To tackle the semantic feature extraction problem, we propose a semantic communication module based on Multi-Scale Dilated Fusion Attention (MDFA), which aims to balance data compression, classification accuracy, and feature reconstruction under noisy channel conditions. For transmission optimization, we develop a reinforcement learning-based joint optimization strategy that effectively manages UAV mobility, bandwidth allocation, and semantic compression, thereby enhancing transmission efficiency and task performance. Extensive experiments conducted on real-world datasets and simulated environments demonstrate the effectiveness of the proposed method, showing significant improvements in communication efficiency and sensing performance under various conditions. | 10.1109/TNSM.2025.3603194 |
| Andrea Detti, Ludovico Funari | Critical Limitations of the Least Outstanding Request Load Balancing Policy in Service Meshes for Large-Scale Microservice Applications | 2025 | Early Access | Microservice architectures; Servers; Load management; Load modeling; Analytical models; Degradation; Collaboration; Containers; Training; Security; Cloud computing; Microservices applications; Service meshes; Load balancing | Service meshes are becoming pivotal software frameworks for managing communication among microservices in distributed applications. Each microservice in a service mesh is paired with an L7 sidecar proxy, which intercepts incoming and outgoing requests to provide enhanced observability, traffic management, and security. These sidecar proxies use application-level load balancing policies to route outgoing requests to available replicas of destination microservices. A widely adopted policy is the Least Outstanding Request (LOR), which directs requests to the replica with the fewest outstanding requests. While LOR effectively reduces latency in applications with a small number of replicas, our comprehensive investigation, combining analytical, simulation, and experimental methods, uncovers a novel and critical issue for large-scale microservice applications: the performance of LOR significantly degrades as the number of microservice replicas increases, eventually converging to the performance of a random load balancing policy. To recover LOR performance at scale, we propose an open-source solution named Proxy-Service, tailored for microservice applications where load balancing incurs significantly lower resource demands than microservice execution. The core idea is to consolidate load balancing decisions per microservice into one or a few reverse proxies, transparently injected into the application. | 10.1109/TNSM.2025.3593870 |
| José Santos, Bibin V. Ninan, Bruno Volckaert, Filip De Turck, Mays Al-Naday | A Comprehensive Benchmark of Flannel CNI in SDN/non-SDN Enabled Cloud-Native Environments | 2025 | Early Access | Containers; Benchmark testing; IP networks; Microservice architectures; Encapsulation; Complexity theory; Software-defined networking; Packet loss; Overlay networks; Network interfaces; Container Network Interfaces; Network Function Virtualization; Benchmark; Cloud-native | The emergence of cloud computing has driven advancements in software virtualization, particularly microservice containerization. This in turn led to the development of Container Network Interfaces (CNIs) such as Flannel to connect microservices over a network. Despite their objective to provide connectivity, CNIs have not been adequately benchmarked when containers are connected over an external network. This creates uncertainty about the operational reliability of CNIs in distributed edge-cloud ecosystems. Given the multitude of available CNIs and the complexity of comparing different ones, this paper focuses on the widely adopted CNI, Flannel. It proposes the design of novel benchmarks of Flannel across external networks, Software Defined Networking (SDN)-based and non-SDN, characterizing two of the key backend types of Flannel: User Datagram Protocol (UDP) and Virtual Extensible LAN (VXLAN). Unlike existing benchmarks, this study analyses the overhead introduced by the external network and the impact of network disruptions. The paper outlines the systematic approach to benchmarking a set of Key Performance Indicators (KPIs), including: speed, latency and throughput. A variety of network disruptions have been induced to analyse their impact on these KPIs, including: delay, packet loss, and packet corruption. The results show that VXLAN consistently outperforms UDP, offering superior bandwidth with efficient resource consumption, making it more suitable for production environments. In contrast, the UDP backend is suitable for real-time video streaming applications due to its higher data rate and lower jitter, though it requires higher resource utilization. Moreover, the results show less variation in KPIs over SDN, compared to non-SDN. The benchmark data are made publicly available in an open-source repository, enabling researchers to replicate the experiments, and potentially extend the study to other CNIs. This work contributes to the network management domain by providing an extensive benchmark study on container networking, highlighting the main advantages and disadvantages of current technologies. | 10.1109/TNSM.2025.3602607 |
| Lu Cao, Lin Yao, Weizhe Zhang, Yao Wang | HeavyFinder: A Lightweight Network Measurement Framework for Detecting High-Frequency Elements in Skewed Data Streams | 2025 | Early Access | Accuracy; Memory management; Resource management; Frequency measurement; Real-time systems; Arrays; Radio spectrum management; Optimization; Hash functions; Data mining; Network measurements; High-frequency elements; Skewed data streams; Sketch; Heavy entries | Skewed data streams are characterized by uneven distributions in which a small fraction of elements occur with much higher frequency than others. The detection of these high-frequency elements presents significant practical challenges, particularly under stringent memory constraints, as existing detection techniques have typically relied on predefined thresholds that require significant memory usage. However, this approach is highly inefficient since not all elements require equal storage space. To address these limitations, we introduce HeavyFinder (HF), a novel lightweight network measurement architecture designed to detect high-frequency elements in skewed data. HF employs a threshold-free update strategy that enables dynamic adaptation to variable data, thereby providing greater flexibility for tracking high-frequency elements without requiring fixed thresholds. Furthermore, an included memory-light strategy enables high accuracy for non-uniform distributions, even with limited memory allocation. Experimental results showed that HF significantly improved performance in four query tasks, producing an accuracy of 99.81% when identifying the top-k elements. The average absolute error (AAE) was also reduced to 10^-4 using only 100KB of memory, which was significantly lower than that of conventional methods. | 10.1109/TNSM.2025.3604523 |
| Ehsan Nowroozi, Mohammadreza Mohammadi, Ahmad Rahdari, Rahim Taheri, Mauro Conti | A Random Deep Feature Selection Approach to Mitigate Transferable Adversarial Attacks | 2025 | Early Access | Training; Resource description framework; Data models; Vectors; Robustness; Computational modeling; Training data; Feature extraction; Computer vision; Computer architecture; Adversarial machine learning; Poisoning attacks; Backdoor attacks; Exploratory attacks; Transferability; Deep learning; Network security | Machine learning and deep learning are transformative forces reshaping our networks, industries, services, and ways of life. However, the susceptibility of these intelligent systems to adversarial attacks remains a significant issue. On the one hand, recent studies have demonstrated the potential transferability of adversarial attacks across diverse models. On the other hand, existing defense mechanisms are vulnerable to advanced attacks or are often limited to certain attack types. This study proposes a random deep feature selection approach to mitigate such transferability and improve the robustness of models against adversarial manipulations. Our approach is designed to strengthen deep models against poisoning (e.g., label flipping) and exploratory (e.g., DeepFool, BIM, FGSM, I-FGSM, L-BFGS, C&W, JSMA, and PGD) attacks that are applied in both the training and testing stages, and Transfer Learning-Based Adversarial Attacks. We consider scenarios involving perfect and semi-knowledgeable attackers. The performance of our approach is evaluated through extensive experiments on the renowned UNSW-NB15 dataset, including both real-world and synthetic data, covering a wide range of modern attack behaviors and benign activities. The results indicate that our approach boosts the effectiveness of the target network to over 80% against label-flipping poisoning attacks and over 60% against all major types of exploratory attacks. | 10.1109/TNSM.2025.3594253 |
| Mario Di Mauro | Performance Assessment of Multi-Class 5G Chains: A Non-Product-Form Queueing Networks Approach | 2025 | Early Access | Delays; 5G mobile communication; Queueing analysis; MONOS devices; Load modeling; Data models; Calculus; Resource management; Quality of service; Optimization; Performance assessment of 5G chains; Queueing networks; Multi-class SFC models | This work presents a performance assessment of 5G Service Function Chains (SFCs) by examining and comparing two architectural models. The first is the Mono chain model, which relies on a single path for data processing through a series of 5G nodes, ensuring straightforward and streamlined service delivery. The second is the Poly (or sliced) chain model, which leverages multiple paths for data flow, enhancing load balancing and resource distribution across nodes to improve network resilience. To evaluate the performance of these models, we introduce a performance indicator that captures two critical stages: the time required for user registration to the 5G infrastructure and the time needed for Protocol Data Unit (PDU) session establishment. From a performance standpoint, these stages are deemed crucial by the European Telecommunications Standards Institute (ETSI), as they can adversely affect both objective and subjective network parameters. Using a non-product-form queueing network approach, we develop an algorithm named ChainPerfEval, which accurately estimates the proposed performance indicator. This approach outperforms standard queueing network models, where the exponential assumption of inter-arrival and/or service times may lead to an inaccurate estimation of the performance indicator. An extensive experimental campaign is conducted using an Open5GS testbed to simulate real-world traffic scenarios, categorizing 5G flows into three priority classes: gold (high priority), silver (moderate priority), and bronze (low priority). The results provide significant insights into the trade-offs between the Mono and Poly chain models, particularly in terms of resource allocation strategies and their impact on SFC performance. Ultimately, this comprehensive analysis offers valuable and actionable recommendations for network operators seeking to optimize service delivery in multi-class 5G environments, ensuring enhanced user experience and efficient resource utilization. | 10.1109/TNSM.2025.3588304 |
| Beibei Li, Wei Hu, Yiwei Li, Lemei Da | Modeling and Maximizing Network Reliability in Large Scale Infrastructure Networks: A Heat Conduction Model Perspective | 2025 | Early Access | | Large infrastructure networks play a crucial role in modern society, supporting various aspects of our daily lives. The reliability of such networks is a pivotal research problem that has attracted intensive interest in recent years. However, most existing studies focus on protecting critical nodes or optimizing the network topology through linear models to measure reliability, while nonlinear models for improving network reliability are rarely investigated. The major challenges are the significant computational complexity and damage to the original network structure caused by nonlinear methods. Inspired by the similarity in dynamics between heat conduction systems and infrastructure networks, we propose a nonlinear model that maps an infrastructure network to a nonlinear heat conduction system for the purpose of measuring and enhancing network reliability. We introduce a new evaluating indicator of network reliability based on community irrelevance. Additionally, we propose a new Edge Addition (EA) method called Modularity Addition (MA) that maximizes network reliability by adding multiple edges during each iteration and substantially reduces computational overhead. Experimental results have demonstrated that our MA method outperforms existing algorithms. Specifically, in comparison to the widely used EA and Posteriorly Adding (PA) algorithms, the proposed MA method improves network reliability by up to 13.2%. It reduces the number of edges added to the network by 72%. Moreover, the MA method offers a 6.8-fold reduction in time complexity compared to existing methods, highlighting its efficiency and scalability. Our approach is validated on both synthetic and real-world networks, showcasing its significant value in enhancing the robustness of complex infrastructure systems. | 10.1109/TNSM.2025.3596212 |
| Ahan Kak, Van-Quan Pham, Huu-Trung Thieu, Nakjung Choi | HexRAN: A Programmable Approach to Open RAN Base Station System Design | 2025 | Early Access | Open RAN; Base stations; Protocols; 3GPP; Computer architecture; Telemetry; Cellular networks; Network slicing; Wireless networks; Prototypes; Network architecture; Cellular systems; Radio access networks; O-RAN; Network programmability | In recent years, the radio access network (RAN) domain has seen significant changes with increased virtualization and softwarization, driven by the Open RAN (O-RAN) movement. However, the fundamental building block of the cellular network, i.e., the base station, remains unchanged and ill-equipped to handle this architectural evolution. In particular, there exists a general lack of programmability and composability along with a protocol stack that grapples with the intricacies of the 3GPP and O-RAN specifications. Recognizing the need for an “O-RAN-native” approach to base station design, this paper introduces HexRAN, a novel base station architecture characterized by key features relating to RAN disaggregation and composability, 3GPP and O-RAN protocol integration and programmability, robust controller interactions, and customizable RAN slicing. Furthermore, the paper also includes a concrete systems-level prototype and comprehensive experimental evaluation of HexRAN on an over-the-air testbed. The results demonstrate that HexRAN uses only 8% more computing resources compared to the baseline, while managing twice the user plane traffic, delivering control plane processing latency of under 120μs, and achieving 100% processing reliability. This underscores the scalability and performance advantages of the proposed architecture. | 10.1109/TNSM.2025.3600587 |
| Xin Tong, Shike Li, Ni Jin, Baojiang Cui | Fed-RWM: A Robust WaterMarking Approach for Federated Learning Model Ownership Protection | 2025 | Early Access | | The interconnected nature of the Internet of Things (IoT) significantly enhances the efficiency of industries such as smart manufacturing, but it also raises concerns about data privacy. Federated learning (FL) utilizes an edge-cloud collaborative mode that shares models in the cloud instead of data, effectively mitigating data privacy leakage in edge IoT devices. However, FL suffers from the risk of model leakage, and both the cloud and the edge may illegally copy and sell the model, infringing on model ownership. To prevent such misbehavior, it is essential to design a robust method for verifying the model ownership. In this paper, we propose Fed-RWM, a novel watermarking method for FL models that provides robust ownership verification and avoids both edge and cloud leakage of watermark information. Fed-RWM trains the watermark by sharing parameters with an additional model and incorporates a watermark recovery method during verification to counter complex watermark removal attacks, which enhances the verification robustness. Fed-RWM introduces a new training paradigm that performs continuous watermarking training at the FL task initiator, preventing access to watermark information at both the edge and the cloud. The experimental results demonstrate that Fed-RWM performs well in model ownership verification and fidelity, is robust to different watermark removal attacks, and can provide reliable protection for federated learning models. | 10.1109/TNSM.2025.3596692 |
| Maruthi V, Kunwar Singh | Enhancing Security and Privacy of IoMT Data for Unconscious Patient With Blockchain | 2025 | Early Access | Cryptography; Security; Polynomials; Public key; Medical services; Interpolation; Encryption; Data privacy; Blockchains; Privacy; Internet of Medical Things; Inter-Planetary File System; Proxy re-encryption+; Threshold proxy re-encryption+; Blockchain; Non-interactive zero-knowledge proof; Schnorr ID protocol | IoMT enables continuous monitoring through connected medical devices, producing real-time health data that must be protected from unauthorised access and tampering. Blockchain ensures this security with its decentralised, tamper-resistant, and access-controlled ledger. A critical challenge arises when patients are unconscious, making timely access to their IoMT data essential for emergency treatment. To address this, we design a novel Threshold Proxy Re-Encryption+ (TPRE+) framework that integrates threshold cryptography and unidirectional, non-transitive proxy re-encryption (PRE), using Shamir’s secret sharing to distribute re-encryption capabilities among multiple proxies, reducing single-point-of-failure and collusion risks. Our contributions are threefold: (i) a semantically secure TPRE+ scheme with Shamir secret sharing, (ii) the construction of an IND-CCA secure TPRE+ scheme, and (iii) the development of a secure, distributed medical record storage system for unconscious patients, combining blockchain infrastructure, IPFS-based encrypted storage, and our proposed TPRE+ schemes. This integration ensures confidentiality, integrity, and fault-tolerant access to critical patient data, enabling secure and efficient deployment in real-world emergency healthcare scenarios. | 10.1109/TNSM.2025.3602117 |
| Amit Kumar Bhuyan, Hrishikesh Dutta, Subir Biswas | Top-k Multi-Armed Bandit Learning for Content Dissemination in Swarms of Micro-UAVs | 2025 | Early Access | | This paper presents a Micro-Unmanned Aerial Vehicle (UAV)-enhanced content management system for disaster scenarios where communication infrastructure is generally compromised. Utilizing a hybrid network of stationary and mobile Micro-UAVs, this system aims to provide crucial content access to isolated communities. In the developed architecture, stationary anchor UAVs, equipped with vertical and lateral links, serve users in individual disaster-affected communities, while mobile micro-ferrying UAVs, with enhanced mobility, extend coverage across multiple such communities. The primary goal is to devise a content dissemination system that dynamically learns caching policies to maximize content accessibility to users left without communication infrastructure. The core contribution is an adaptive content dissemination framework that employs a decentralized Top-k Multi-Armed Bandit learning approach for efficient UAV caching decisions. This approach accounts for geo-temporal variations in content popularity and diverse user demands. Additionally, a Selective Caching Algorithm is proposed to minimize redundant content copies by leveraging inter-UAV information sharing. Through functional verification and performance evaluation, the proposed framework demonstrates improved system performance and adaptability across varying network sizes, micro-UAV swarms, and content popularity distributions. | 10.1109/TNSM.2025.3602646 |
| Chongxiang Yao, Chen Guo, Weiguang Zhang, Shengbo Chen | Efficient Optimization Algorithm for Virtual Backbone in Wireless Sensor Networks by Removing Redundant Dominators | 2025 | Early Access | Approximation algorithms Wireless sensor networks Optimization Upper bound Classification algorithms Redundancy Energy consumption Electronic mail Storms Simulation Virtual backbone connected dominating set redundant dominator wireless sensor network approximation algorithm | Wireless sensor networks (WSNs) often utilize virtual backbones (VBs) to optimize routing and reduce energy consumption. The effectiveness of this optimization largely depends on the size of the VB, with smaller VBs offering better performance. In WSNs, VBs are typically modeled as connected dominating sets (CDSs) within unit disk graphs (UDGs). However, existing approximation algorithms for constructing the minimum connected dominating set (MCDS) often introduce redundant dominators, leading to inflated CDSs. To tackle this issue, in this paper, we propose a general CDS optimization algorithm named OP-CDS, designed specifically to minimize redundancies. Theoretical analysis shows that the size of the optimized CDS is bounded by α∙opt+δ-k+1, where α∙opt+δ represents the upper bound of the unoptimized CDS, and k denotes the number of OP-CDS iterations. Additionally, extensive simulations demonstrate that OP-CDS can effectively optimize the CDS generated by state-of-the-art algorithms with minimal time consumption. | 10.1109/TNSM.2025.3606864 |
| Anna Karanika, Rui Yang, Xiaojuan Ma, Jiangran Wang, Shalni Sundram, Indranil Gupta | There is More Control in Egalitarian Edge IoT Meshes | 2025 | Early Access | Internet of Things Smart devices Intelligent sensors Smart agriculture Smart buildings Monitoring Mesh networks Clouds Costs Thermostats mesh IoT edge control plane routines fault tolerance | While mesh networking for edge settings (e.g., smart buildings, farms, battlefields, etc.) has received much attention, the layer of control over such meshes remains largely centralized and cloud-based. This paper focuses on applications with commonplace sense-trigger-actuate (STA) workloads—like the abstraction of routines popular now in smart homes, but applied to larger-scale edge IoT deployments. We present CoMesh, which tackles the challenge of building a decentralized mesh-based control plane for local, non-cloud, and hubless management of sense-trigger-actuate applications. CoMesh builds atop an abstraction called the coterie, which spreads STA load in a fine-grained way both across space and across time. A coterie uses a novel combination of techniques such as zero-message-exchange protocols (for fast proactive member selection), quorum-based agreement, and locality-sensitive hashing. We analyze and theoretically prove safety and liveness properties of CoMesh. Our evaluation with both a Raspberry Pi-4 deployment and larger-scale simulations, using real building maps and real routine workloads, shows that CoMesh is load-balanced, fast, fault-tolerant, and scalable. | 10.1109/TNSM.2025.3608796 |
| M. Wasim Abbas Ashraf, Shivanshu Shrivastava, Om Jee Pandey, Arvind R. Singh, Arif Raza | DRL-Driven Optimal User Association and Load Balancing in Hybrid RF/LiFi Based IoT Systems | 2025 | Early Access | Radio frequency Internet of Things Light fidelity Load management Throughput Resource management Real-time systems Heuristic algorithms Optimization Deep reinforcement learning Light Fidelity (LiFi) Hybrid RF/LiFi Internet of Things (IoT) Deep Reinforcement Learning (DRL) Deep Joint Hybrid System (DJHS) User Association Transmission Power Allocation Load Balancing Throughput | The proliferation of Internet of Things (IoT) devices has raised considerable difficulties in user association, power optimization, and load balancing in hybrid RF/LiFi networks. As interconnection among devices increases, ensuring optimal performance while managing network resources efficiently becomes quite complex. This complexity arises from accommodating diverse user needs, fluctuating channel conditions, and varying interference levels, all of which necessitate sophisticated management solutions to provide seamless connectivity and dependable communication. To tackle these issues, a deep joint hybrid system (DJHS) technique is presented, which employs proximal policy optimization (PPO), a cutting-edge deep reinforcement learning (DRL) algorithm. DJHS aims to effectively handle the intricate problems surrounding user association and load balancing while optimizing power usage in dynamic contexts. DJHS continuously updates its approach based on real-time network data through adaptive learning methods, allowing it to make intelligent decisions that improve overall system performance in terms of data throughput and power optimization. Simulation results demonstrate that DJHS outperforms existing approaches such as SAC, A2C, TD3, and TRPO on crucial metrics, including data rate and transmission power. Notably, DJHS's ability to adjust to variations in signal-to-interference-plus-noise ratio (SINR) allows for enhanced resource allocation and network stability. This flexibility ensures that users receive optimal service even in changing conditions, enhancing the overall user experience. | 10.1109/TNSM.2025.3607433 |