Last updated: 2025-11-07 05:01 UTC
| Author(s) | Title | Year | Publication | Keywords | Abstract | DOI |
|---|---|---|---|---|---|---|
| Menna Helmy, Alaa Awad Abdellatif, Naram Mhaisen, Amr Mohamed, Aiman Erbad | Slicing for AI: An Online Learning Framework for Network Slicing Supporting AI Services | 2025 | Early Access | Artificial intelligence Training Resource management Network slicing Computational modeling Optimization Quality of service Ultra reliable low latency communication 6G mobile communication Heuristic algorithms Network slicing online learning resource allocation 6G networks optimization | The forthcoming 6G networks will embrace a new realm of AI-driven services that requires innovative network slicing strategies, namely slicing for AI, which involves the creation of customized network slices to meet the Quality of Service (QoS) requirements of diverse AI services. This poses challenges due to the time-varying dynamics of users’ behavior and mobile networks. Thus, this paper proposes an online learning framework that determines the allocation of computational and communication resources to AI services, optimizing their accuracy, one of their unique key performance indicators (KPIs), while abiding by resource, learning-latency, and cost constraints. We define a problem of optimizing the total accuracy while balancing conflicting KPIs, prove its NP-hardness, and propose an online learning framework for solving it in dynamic environments. We present a basic online solution and two variations employing a pre-learning elimination method that reduces the decision space to expedite the learning. Furthermore, we propose a biased decision space subset selection that incorporates prior knowledge to enhance the learning speed without compromising performance, and present two alternatives for handling the selected subset. Our results demonstrate the efficiency of the proposed solutions in converging to the optimal decisions while reducing the decision space and improving time complexity. Additionally, our solution outperforms state-of-the-art techniques in adapting to diverse environmental dynamics and excels under varying levels of resource availability. | 10.1109/TNSM.2025.3603391 |
| Kai Cheng, Weidong Tang, Lintao Tan, Jing Yang, Jia Chen | SLNALog: A Log Anomaly Detection Scheme Based on Swift Layer Normalization Attention Mechanism for Next-Generation Power Communication Networks | 2025 | Early Access | Anomaly detection Semantics Smart grids Feature extraction Next generation networking Data models Maintenance Computational modeling Vectors Power system stability Log anomaly detection deep learning binary classification smart grid security | Log anomaly detection is a critical first line of defense for securing next-generation power communication networks against malicious attacks. However, in industrial settings, limited computational resources on edge devices result in long inference times for anomaly detection models, hindering the timely detection of anomalous log activities. To address these challenges, we propose SLNALog, an anomaly detection workflow centered around a Swift Layer Normalization Attention module. This module leverages linear attention to optimize the key-value interactions found in traditional attention mechanisms, thereby reducing the computational complexity of log anomaly detection; as a result, the model’s receptive field for log data is expanded and detection efficiency is improved. Experimental results on the HDFS and BGL datasets demonstrate the superiority of our approach: SLNALog achieves higher accuracy, with F1-scores increasing by 0.08 and 0.04, respectively, while reducing detection time by 5.7% and 28.3%. Furthermore, the workflow incorporates an LLM-based log template analysis module and an Adapter-based model tuning module to enhance the model’s generalization in real-world scenarios. The proposed model provides an effective solution for enhancing the cybersecurity of smart grids. | 10.1109/TNSM.2025.3605764 |
| Runze Wu, Kai Wang, Haobo Guo, Yuxin Liu, Wenting Wang, Jing Liu | Joint Optimization Algorithm for Multi-Dimensional Wireless Communication Resources Adapt to Deterministic Monitoring of Power Distribution Areas | 2025 | Early Access | Ultra reliable low latency communication Resource management Wireless communication Monitoring Quality of service Optimization Power distribution Power systems Data communication Radio spectrum management progressive superposition QoS resources allocation wireless communication new power system | With the growing penetration of distributed renewable energy, it is becoming increasingly evident that the monitoring service of power distribution areas is characterized by massive concurrency and delay sensitivity. Correspondingly, massive amounts of differentiated data need to be collected, which dramatically increases the demand for technologies like ultra-reliable low-latency communication (URLLC) and enhanced mobile broadband (eMBB). However, due to limited wireless communication resources in power distribution areas, the major difficulty of improving monitoring certainty from the delay perspective lies in the efficient and reasonable joint allocation of multi-dimensional resources. To this end, a joint optimization algorithm for multi-dimensional wireless communication resources adapted to deterministic monitoring of power distribution areas is proposed. Firstly, the joint optimization problem for spectrum resource blocks (RBs) and power scheduling is constructed; it is then decoupled into an eMBB-RB scheduling layer, a URLLC-RB scheduling layer, and a URLLC power allocation layer, which are solved by a greedy strategy, an improved plant growth simulation algorithm, and rigorous theoretical analysis combined with block coordinate descent, respectively. The simulation results validate that the proposed algorithm swiftly and rationally allocates multi-dimensional wireless communication resources to massive and concurrent services, which satisfies the diverse QoS demands and improves the monitoring certainty of the power distribution areas. | 10.1109/TNSM.2025.3605601 |
| Bo Mi, Hangcheng Zou, Darong Huang | FedPP: Privacy-Enhanced Federated Learning for Parameter Aggregation in Heterogeneous Intelligent Connected Vehicles | 2025 | Early Access | Federated learning Training Data models Computational modeling Accuracy Homomorphic encryption Privacy Differential privacy Autonomous vehicles Reliability ICVs Federated Learning Heterogeneity Privacy Preserving Poisoning Attack Resistant | With the popularization of intelligent connected vehicles (ICVs), traffic information sources are becoming ubiquitous and diverse. Given the inherent conflict between data value extraction and privacy protection, federated learning (FL) has emerged as a powerful tool for developing application models with a certain degree of generalization capability. Although FL ensures that data remains local, the parameters used for aggregation are still vulnerable to attacks such as reverse engineering or membership inference. Methods based on homomorphic encryption or differential privacy can alleviate this issue to some extent; however, they also lead to a reduction in training performance. Furthermore, since the data collected by ICVs generally exhibit non-independent and identically distributed (non-IID) characteristics, ensuring model reliability becomes quite challenging. This paper presents a private-parameter-based federated learning method, FedPP, which integrates a Gaussian mechanism with multi-key homomorphic encryption to prevent parameter leakage while eliminating noise disturbance. By sorting and selecting the parameters to be aggregated, this approach not only demonstrates improved generalization capability under heterogeneous conditions but also effectively resists poisoning attacks. To evaluate the model, we constructed two non-IID traffic datasets using the Dirichlet distribution, which comprise a traffic sign dataset and a vehicle image dataset generated through the DALL-E model. Theoretical analysis and experimental results demonstrate that FedPP not only meets provable security under collaborative attacks but also exhibits higher model accuracy in heterogeneous vehicular network environments. | 10.1109/TNSM.2025.3605336 |
| Lu Cao, Lin Yao, Weizhe Zhang, Yao Wang | HeavyFinder: A Lightweight Network Measurement Framework for Detecting High-Frequency Elements in Skewed Data Streams | 2025 | Early Access | Accuracy Memory management Resource management Frequency measurement Real-time systems Arrays Radio spectrum management Optimization Hash functions Data mining Network measurements high-frequency elements skewed data streams sketch heavy entries | Skewed data streams are characterized by uneven distributions in which a small fraction of elements occur with much higher frequency than others. The detection of these high-frequency elements presents significant practical challenges, particularly under stringent memory constraints, as existing detection techniques have typically relied on predefined thresholds that require significant memory usage. However, this approach is highly inefficient since not all elements require equal storage space. To address these limitations, we introduce HeavyFinder (HF), a novel lightweight network measurement architecture designed to detect high-frequency elements in skewed data. HF employs a threshold-free update strategy that enables dynamic adaptation to variable data, thereby providing greater flexibility for tracking high-frequency elements without requiring fixed thresholds. Furthermore, an included memory-light strategy enables high accuracy for non-uniform distributions, even with limited memory allocation. Experimental results showed that HF significantly improved performance in four query tasks, producing an accuracy of 99.81% when identifying the top-k elements. The average absolute error (AAE) was also reduced to 10⁻⁴ using only 100KB of memory, which was significantly lower than that of conventional methods. | 10.1109/TNSM.2025.3604523 |
| Frkei Saleh, Abraham O. Fapojuwo, Diwakar Krishnamurthy | eSlice: Elastic Inter-Slice Resource Allocation for Smart City Applications | 2025 | Early Access | Resource management Smart cities 5G mobile communication Ultra reliable low latency communication Dynamic scheduling Substrates Network slicing Heuristic algorithms Real-time systems Vehicle dynamics Inter-slice Network slicing Smart city applications Dynamic Resource allocation | Network slicing is a fundamental enabler for the advancement of fifth generation (5G) and beyond 5G (B5G) networks, offering customized service-level agreements (SLAs) for distinct slices such as enhanced mobile broadband (eMBB), massive machine-type communications (mMTC), and ultra-reliable low-latency communication (URLLC). However, smart city applications often require multiple slices concurrently, posing significant challenges in resource allocation, service isolation, and maintaining performance guarantees. This paper presents eSlice, an elastic inter-slice resource allocation mechanism specifically designed to address the dynamic requirements of smart city applications. eSlice organizes applications into hierarchical slices, leveraging cloud-native resource scaling to dynamically adapt to real-time demands. It integrates two novel algorithms: the Proactive eSlice Allocation Algorithm (PeSAA), which ensures the fair distribution of resources across the substrate network, and the Reactive eSlice Allocation Algorithm (ReSAA), which employs Multi-Agent Reinforcement Learning (MARL) to dynamically coordinate, reallocate, and recover unused resources as network conditions evolve. Experimental results demonstrate that eSlice significantly outperforms existing methods, achieving 94.3% resource utilization in simulation-based experiments under constrained urban-scale scenarios, providing a robust solution for dynamic resource management in 5G-enabled smart city networks. | 10.1109/TNSM.2025.3604352 |
| Cheng Long, Haoming Zhang, Zixiao Wang, Yiming Zheng, Zonghui Li | FastScheduler: Polynomial-Time Scheduling for Time-Triggered Flows in TSN | 2025 | Early Access | Job shop scheduling Network topology Dynamic scheduling Real-time systems Heuristic algorithms Delays Ethernet Schedules Deep reinforcement learning Training Time-Sensitive Network Online Scheduling Algorithm Industrial Control | Time-Sensitive Networking (TSN) has emerged as a promising network paradigm for time-critical applications, such as industrial control, where flow scheduling is crucial to ensure low latency and determinism. As production flexibility demands increase, network topology and flow requirements may change, necessitating more efficient TSN scheduling algorithms to guarantee real-time and deterministic data transmission. In this work, we present FastScheduler, a polynomial-time, deterministic TSN scheduler that can schedule thousands of Time-Triggered (TT) flows within arbitrary network topologies. The key innovations of FastScheduler include an Equivalent Reduction Technique to simplify the generic model while preserving the feasible scheduling space, a Deterministic Heuristic Strategy to ensure a consistent and reproducible scheduling process, and a Polynomial-Time Scheduling Algorithm to perform dynamic and real-time scheduling of periodic TT flows. Extensive experiments on various topologies show that FastScheduler effectively simplifies the model, reducing variables by 35% and constraints by 62%, and schedules 1,000 TT flows in sub-second time. Furthermore, it runs two and three orders of magnitude faster than heuristic and deep reinforcement learning-based methods, respectively, while improving schedulability by 12% and 20%. FastScheduler is well-suited for the dynamic requirements of industrial control networks. | 10.1109/TNSM.2025.3603844 |
| Wenjun Fan, Na Fan, Junhui Zhang, Jia Liu, Yifan Dai | Securing VNDN With Multi-Indicator Intrusion Detection Approach Against the IFA Threat | 2025 | Early Access | Monitoring Prevention and mitigation Electronic mail Threat modeling Telecommunication traffic Fans Blocklists Security Road side unit Intrusion detection Interest Flooding Attack Named Data Network Network Traffic Monitoring Denial of Service Road Side Unit | On vehicular named data networks (VNDN), the Interest Flooding Attack (IFA) can exhaust computing resources by sending a large number of malicious Interest packets, which leads to failures in satisfying legitimate requests and seriously endangers the operation of the Internet of Vehicles (IoV). To solve this problem, this paper proposes a distributed network traffic monitoring-enabled multi-indicator detection and prevention approach for VNDN to detect and resist IFA attacks. To facilitate this approach, a distributed network traffic monitoring layer based on road side units (RSUs) is constructed. On top of this monitoring layer, a multi-indicator detection approach is designed, consisting of three indicators: information entropy, self-similarity, and singularity, with thresholds tuned by the real-time traffic flow density. Beyond detection, a blacklisting-based prevention approach is realized to mitigate the attack impact. We validate the proposed approach by prototyping it on our VNDN experimental platform with realistic parameter settings, leveraging the original NDN packet structure to carry the Source ID required for identifying the source of each Interest packet, which consolidates the practicability of the approach. The experimental results show that our multi-indicator detection approach achieves substantially higher detection performance than using any indicator individually, and the blacklisting-based prevention can effectively mitigate the attack impact as well. | 10.1109/TNSM.2025.3603630 |
| Huaide Liu, Fanqin Zhou, Yikun Zhao, Lei Feng, Zhixiang Yang, Yijing Lin, Wenjing Li | Autonomous Deployment of Aerial Base Station without Network-Side Assistance in Emergency Scenarios Based on Multi-Agent Deep Reinforcement Learning | 2025 | Early Access | Heuristic algorithms Disasters Optimization Estimation Wireless communication Collaboration Base stations Autonomous aerial vehicles Adaptation models Sensors Aerial base station deep reinforcement learning autonomous deployment emergency scenarios multi-agent systems | The aerial base station (AeBS) is a promising technology for providing wireless coverage to ground user equipment. Traditional methods of optimizing AeBS networks often rely on pre-known distribution models of ground user equipment. However, in practical scenarios such as natural disasters or temporary large-scale public events, the distribution of user clusters is often unknown, posing challenges for the deployment and application of AeBSs. To adapt to complex and unknown user environments, this paper studies a method for estimating global information from local observations and proposes a multi-agent autonomous AeBS deployment algorithm based on deep reinforcement learning (DRL). The method dynamically deploys AeBSs that autonomously identify hotspots by sensing user equipment signals without network-side assistance, providing a more comprehensive and intelligent solution for AeBS deployment. Simulation results indicate that our method effectively guides the autonomous deployment of AeBSs in emergency scenarios, addressing the challenge of the lack of network-side assistance. | 10.1109/TNSM.2025.3603875 |
| Saif Eddine Khelifa, Miloud Bagaa, Oussama Bekkouche, Messaoud Ahmed Ouameur, Adlen Ksentini | Extending WebAssembly for Deep-Learning Inference Across the Cloud Continuum | 2025 | Early Access | Webassembly Runtime Hardware Performance evaluation Biological system modeling Serverless computing Pipelines Computational modeling Training Optimization Cloud Edge Continuum (CECC) WebAssembly (WASM) Cloud Computing Edge Computing Edge DL | Recent advancements in serverless computing and the cloud-edge continuum (CECC) have increased interest in WebAssembly (WASM). This technology enables portability and interoperability across diverse computing environments while achieving near-native execution speeds. Currently, WASM supports Single Instruction Multiple Data (SIMD), which allows for data-level parallelism that is particularly beneficial for vectorizable operations such as general matrix-matrix multiplication (GEMM) and convolutional layers. However, WASM lacks native integration with specialized hardware accelerators like GPUs, TPUs, and NPUs, as well as the ability to benefit from multi-core processing, both of which are critical for meeting the latency and throughput requirements of modern Deep-Learning (DL) inference services. To bridge this gap, WASI-NN was developed, enabling WASM to integrate with external runtimes such as OpenVINO and ONNX Runtime, which leverage hardware acceleration. However, these current integrations often introduce performance overhead on certain devices, restricting their usability across the CECC. To address these challenges, we propose a new integration focusing on TVM as an external runtime for WASI-NN to enhance WASM’s performance and expand support to a broader range of devices. Additionally, we integrate this solution into Knative, a serverless framework, to provide a scalable and flexible platform for DL deployment. Using WASM technology, we evaluate our TVM-based solution through comparative studies. Results on AMD CPUs demonstrate the effectiveness of our approach, achieving a 58% overall gain over other WASI-NN integrations (e.g., ONNX Runtime and OpenVINO) for CNN-based models while also achieving optimal performance on different platforms, such as Intel GPUs. These findings highlight the effectiveness of our solution. | 10.1109/TNSM.2025.3606343 |
| Erhe Yang, Zhiwen Yu, Yao Zhang, Helei Cui, Zhaoxiang Huang, Hui Wang, Jiaju Ren, Bin Guo | Joint Semantic Extraction and Resource Optimization in Communication-Efficient UAV Crowd Sensing | 2025 | Early Access | Sensors Autonomous aerial vehicles Optimization Semantic communication Data mining Feature extraction Resource management Accuracy Data models Data communication UAV crowd sensing semantic communication multi-scale dilated fusion attention reinforcement learning | With the integration of IoT and 5G technologies, UAV crowd sensing has emerged as a promising solution to overcome the limitations of traditional Mobile Crowd Sensing (MCS) in terms of sensing coverage. As a result, UAV crowd sensing has been widely adopted across various domains. However, existing UAV crowd sensing methods often overlook the semantic information within sensing data, leading to low transmission efficiency. To address the challenges of semantic extraction and transmission optimization in UAV crowd sensing, this paper decomposes the problem into two sub-problems: semantic feature extraction and task-oriented sensing data transmission optimization. To tackle the semantic feature extraction problem, we propose a semantic communication module based on Multi-Scale Dilated Fusion Attention (MDFA), which aims to balance data compression, classification accuracy, and feature reconstruction under noisy channel conditions. For transmission optimization, we develop a reinforcement learning-based joint optimization strategy that effectively manages UAV mobility, bandwidth allocation, and semantic compression, thereby enhancing transmission efficiency and task performance. Extensive experiments conducted on real-world datasets and simulated environments demonstrate the effectiveness of the proposed method, showing significant improvements in communication efficiency and sensing performance under various conditions. | 10.1109/TNSM.2025.3603194 |
| José Santos, Bibin V. Ninan, Bruno Volckaert, Filip De Turck, Mays Al-Naday | A Comprehensive Benchmark of Flannel CNI in SDN/non-SDN Enabled Cloud-Native Environments | 2025 | Early Access | Containers Benchmark testing IP networks Microservice architectures Encapsulation Complexity theory Software defined networking Packet loss Overlay networks Network interfaces Containers Container Network Interfaces Network Function Virtualization Benchmark Cloud-Native Software-Defined Networking | The emergence of cloud computing has driven advancements in software virtualization, particularly microservice containerization. This in turn led to the development of Container Network Interfaces (CNIs) such as Flannel to connect microservices over a network. Despite their objective to provide connectivity, CNIs have not been adequately benchmarked when containers are connected over an external network. This creates uncertainty about the operational reliability of CNIs in distributed edge-cloud ecosystems. Given the multitude of available CNIs and the complexity of comparing different ones, this paper focuses on the widely adopted CNI, Flannel. It proposes the design of novel benchmarks of Flannel across external networks, Software Defined Networking (SDN)-based and non-SDN, characterizing two of the key backend types of Flannel: User Datagram Protocol (UDP) and Virtual Extensible LAN (VXLAN). Unlike existing benchmarks, this study analyses the overhead introduced by the external network and the impact of network disruptions. The paper outlines a systematic approach to benchmarking a set of Key Performance Indicators (KPIs), including speed, latency, and throughput. A variety of network disruptions have been induced to analyse their impact on these KPIs, including delay, packet loss, and packet corruption. The results show that VXLAN consistently outperforms UDP, offering superior bandwidth with efficient resource consumption, making it more suitable for production environments. In contrast, the UDP backend is suitable for real-time video streaming applications due to its higher data rate and lower jitter, though it requires higher resource utilization. Moreover, the results show less variation in KPIs over SDN, compared to non-SDN. The benchmark data are made publicly available in an open-source repository, enabling researchers to replicate the experiments and potentially extend the study to other CNIs. This work contributes to the network management domain by providing an extensive benchmark study on container networking, highlighting the main advantages and disadvantages of current technologies. | 10.1109/TNSM.2025.3602607 |
| Dániel Unyi, Ernő Rigó, Bálint Gyires-Tóth, Róbert Lovas | Explainable GNN-Based Approach to Fault Forecasting in Cloud Service Debugging | 2025 | Early Access | Debugging Microservice architectures Cloud computing Reliability Observability Computer architecture Graph neural networks Monitoring Probabilistic logic Fault diagnosis Cloud computing Software debugging Microservice architectures Deep learning Graph neural networks Explainable AI Fault prediction | Debugging cloud services is increasingly challenging due to their distributed, dynamic, and scalable nature. Traditional methods struggle to handle large state spaces and the complex interactions between microservices, making it difficult to diagnose failures and identify critical components. This paper presents a Graph Neural Network (GNN)-based approach that enhances cloud service debugging by predicting system-level fault probabilities and providing interpretable insights into failure propagation. Our method models microservice interactions as graphs, where failures propagate probabilistically. Using Markov Decision Processes (MDPs), we simulate failure behaviors, capturing the probabilistic dependencies that influence system reliability. The trained GNN not only predicts fault probabilities but also identifies the most failure-prone microservices and explains their impact. We evaluate our approach on various service mesh structures, including feature-enriched, tree-structured, and general directed acyclic graph (DAG) architectures. Results indicate that our method is effective in the operational phase of cloud services, enabling proactive debugging and targeted optimization. This work represents a step toward more interpretable, reliable, and maintainable cloud infrastructures. | 10.1109/TNSM.2025.3602223 |
| Ahan Kak, Van-Quan Pham, Huu-Trung Thieu, Nakjung Choi | HexRAN: A Programmable Approach to Open RAN Base Station System Design | 2025 | Early Access | Open RAN Base stations Protocols 3GPP Computer architecture Telemetry Cellular networks Network slicing Wireless networks Prototypes Network Architecture Cellular Systems Radio Access Networks O-RAN Network Slicing Network Programmability | In recent years, the radio access network (RAN) domain has seen significant changes with increased virtualization and softwarization, driven by the Open RAN (O-RAN) movement. However, the fundamental building block of the cellular network, i.e., the base station, remains unchanged and ill-equipped to handle this architectural evolution. In particular, there exists a general lack of programmability and composability, along with a protocol stack that grapples with the intricacies of the 3GPP and O-RAN specifications. Recognizing the need for an “O-RAN-native” approach to base station design, this paper introduces HexRAN, a novel base station architecture characterized by key features relating to RAN disaggregation and composability, 3GPP and O-RAN protocol integration and programmability, robust controller interactions, and customizable RAN slicing. Furthermore, the paper also includes a concrete systems-level prototype and comprehensive experimental evaluation of HexRAN on an over-the-air testbed. The results demonstrate that HexRAN uses only 8% more computing resources compared to the baseline, while managing twice the user plane traffic, delivering control plane processing latency of under 120μs, and achieving 100% processing reliability. This underscores the scalability and performance advantages of the proposed architecture. | 10.1109/TNSM.2025.3600587 |
| Maruthi V, Kunwar Singh | Enhancing Security and Privacy of IoMT Data for Unconscious Patient With Blockchain | 2025 | Early Access | Cryptography Security Polynomials Public key Medical services Interpolation Encryption Data privacy Blockchains Privacy Internet of Medical Things Inter-Planetary File System Proxy re-encryption+ Threshold Proxy re-encryption+ Blockchain Non-Interactive Zero Knowledge Proof Schnorr ID protocol | The Internet of Medical Things (IoMT) enables continuous monitoring through connected medical devices, producing real-time health data that must be protected from unauthorised access and tampering. Blockchain ensures this security with its decentralised, tamper-resistant, and access-controlled ledger. A critical challenge arises when patients are unconscious, making timely access to their IoMT data essential for emergency treatment. To address this, we design a novel Threshold Proxy Re-Encryption+ (TPRE+) framework that integrates threshold cryptography, realized via Shamir’s secret sharing, with unidirectional, non-transitive proxy re-encryption (PRE) to distribute re-encryption capabilities among multiple proxies, reducing single-point-of-failure and collusion risks. Our contributions are threefold: (i) a semantically secure TPRE+ scheme based on Shamir secret sharing, (ii) the construction of an IND-CCA secure TPRE+ scheme, and (iii) the development of a secure, distributed medical record storage system for unconscious patients, combining blockchain infrastructure, IPFS-based encrypted storage, and our proposed TPRE+ schemes. This integration ensures confidentiality, integrity, and fault-tolerant access to critical patient data, enabling secure and efficient deployment in real-world emergency healthcare scenarios. | 10.1109/TNSM.2025.3602117 |
| Yadi He, Zhou Wu, Linfeng Liu | Deep Learning Based Link Prediction Method Against Strong Sparsity for Mobile Social Networks | 2025 | Early Access | Feature extraction Social networking (online) Predictive models Deep learning Accuracy Network topology Data mining Recurrent neural networks Sparse matrices Computational modeling link prediction mobile social networks strong sparsity deep learning | Link prediction refers to predicting potential relationships between nodes by exploring the evolution of historical network topologies. In mobile social networks, the topologies change frequently due to the appearance and disappearance of nodes over time, and the links between nodes are typically very sparse (i.e., mobile social networks exhibit strong sparsity), which can seriously degrade the accuracy of link prediction. Therefore, this paper proposes a deep learning based Link Prediction Method against Strong Sparsity (LPMSS). LPMSS integrates the graph convolutional network output with encounter matrices to mitigate the negative impact of strong sparsity. Additionally, LPMSS employs random negative sampling to alleviate the impact of imbalanced link distributions. We also adopt a Times module to capture the temporal topological changes in mobile social networks to enhance the prediction accuracy. Extensive experimental results on three datasets with different sparsity levels demonstrate that LPMSS can significantly improve AUC values while reducing MAE values, confirming its effectiveness in handling link prediction in mobile social networks with strong sparsity. | 10.1109/TNSM.2025.3601389 |
| Zhi-Bin Zuo, De-Min Wang, Mi-Mi Ma, Miao-Lei Deng, Chun Wang | An Adaptive Contention Window Backoff Scheme Differentiating Network Conditions Based on Deep Q-Learning Network | 2025 | Early Access | Throughput Data communication Wireless sensor networks Wireless networks Optimization Information science Multiaccess communication IEEE 802.11ax Standard Analytical models Wireless fidelity IEEE 802.11 Deep Q-Learning Network Wireless Networks Deep Reinforcement Learning | In IEEE 802.11 networks, the Contention Window (CW) is a crucial parameter for wireless channel sharing among numerous stations, directly influencing overall network performance. To mitigate the performance degradation caused by the increasing number of stations in the network, we propose a novel adaptive CW backoff scheme, termed the ACWB-DQN algorithm. This algorithm leverages the Deep Q-Learning Network (DQN) to explore a CW threshold, which is utilized as a boundary to differentiate network load conditions and learn the best configurations for each. When stations transmit data frames, different CW optimization strategies are employed based on the station transmission status and the CW threshold. This approach aims to enhance network performance by adjusting the CW to increase transmission efficiency when there are fewer competing stations, and to lower collision probabilities when there are more competing stations. Simulation results indicate that this approach can optimize the station CW, reduce network collision rates, maintain constant throughput, and significantly enhance the performance of Wi-Fi networks by adjusting the CW threshold according to real-time network conditions. | 10.1109/TNSM.2025.3600861 |
| Ziwang Wang, Huili Yan, Zhize Wu | Efficient Cross-Shard Blockchain Atomic Submission Scheme Based on Pledge Transactions | 2025 | Early Access | Blockchains Security Synchronization Protocols Throughput Scalability Complexity theory Batch production systems Sharding Relays Multi-shard Blockchains Atomic Commit Protocol Cross-shard Transactions Pledge Transactions | To address the substantial coordination overhead and communication latency inherent in cross-shard transaction commits within contemporary multi-shard blockchain architectures, this paper presents an efficient cross-shard atomic commit scheme (PledgeACS), grounded in the use of pledge transactions. The proposed scheme introduces a pledge transaction mechanism that employs a null recipient address and synchronizes these transactions across shards to an auxiliary chain through a global consensus protocol. Additionally, a cross-shard transaction protocol is developed, securing recipient funds via pledge transactions within the global consensus framework. Furthermore, a batch pledge transaction record and settlement protocol tailored for shard blockchains is designed, followed by rigorous feasibility analysis and performance evaluation. Experimental results indicate that the proposed scheme markedly decreases the user-perceived latency in cross-shard transactions and enhances security relative to current solutions, providing an innovative approach for achieving high throughput and scalability in blockchain systems. | 10.1109/TNSM.2025.3607349 |
| Zhuolun Li, Srijoni Majumdar, Evangelos Pournaras | Send Message to the Future? Blockchain-Based Time Machines for Decentralized Reveal of Locked Information | 2025 | Early Access | Cryptography Proposals Proof of Work Smart contracts Encryption Electronic voting Delays Robustness Accuracy Training Blockchain timed release cryptography secret sharing e-voting distributed system | Conditional information reveal systems automate the release of information upon meeting specific predefined conditions, such as a designated time in the future. This paper presents a new practical timed-release cryptography system that “sends messages in the future” with highly accurate decryption times. The core of the proposed system is a novel secret sharing scheme with verifiable information reveal, and a data sharing system is devised on smart contracts. This paper also introduces a breakthrough in the understanding, design, and application of conditional information reveal systems that are highly secure and decentralized. A complete evaluation portfolio is provided for this pioneering paradigm, including analytical results, a validation of its robustness in the Tamarin Prover, and a performance evaluation of a real-world, open-source system prototype deployed across the globe. Using real-world election data, we also demonstrate the applicability of this innovative system in e-voting, illustrating its capacity to secure and ensure fair elections. | 10.1109/TNSM.2025.3604833 |
| Mounir Bensalem, Admela Jukan | Signaling Rate and Performance of RIS Reconfiguration and Handover Management in Next Generation Mobile Networks | 2025 | Early Access | Handover Protocols Analytical models Base stations Standards Reconfigurable intelligent surfaces Servers Long Term Evolution Closed-form solutions 3GPP RIS handover stochastic geometry static blockages self-blockage mobility models mmWave communications signaling protocols network management | We consider the problem of signaling rate and performance for the control and management of reconfigurable intelligent surfaces (RISs) in next-generation mobile networks. To this end, we first analytically determine the rates of RIS reconfigurations and handovers using a stochastic geometry network model. We derive closed-form expressions for these rates, while taking into account static obstacles (both known and unknown), self-blockage, RIS location density, and variations in the angle and direction of user mobility. Based on the derived rates, we analyze the signaling rates of a sample novel signaling protocol, which we propose as an extension of the current handover signaling protocol. We evaluate the signaling overhead due to RIS reconfigurations and the related energy consumption. We also provide a capacity planning analysis of the related RIS control plane server for its dimensioning in the network management system. The results quantify the impact of known and unknown obstacles on the RIS reconfiguration rate and the handover rate as a function of device density and mobility. We evaluate the scalability of the model, the related signaling overhead, energy efficiency, and server capacity in the control plane. To the best of our knowledge, this is the first analytical model to derive closed-form expressions for RIS reconfiguration rates, along with handover rates, and relate their statistical properties to the signaling rate and performance in next-generation mobile networks. | 10.1109/TNSM.2025.3608077 |
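
Note: two entries above lean on threshold secret sharing: the TPRE+ framework (Maruthi V and Singh) uses Shamir’s scheme to distribute re-encryption capabilities among proxies, and the time-machine system (Li et al.) centers on a novel secret sharing scheme with verifiable reveal. As a reading aid, the following is a minimal sketch of classic (t, n) Shamir sharing, the textbook primitive behind threshold schemes of this kind; it is not either paper’s actual construction, which layers proxy re-encryption and smart contracts, respectively, on top.

```python
# Minimal (t, n) Shamir secret sharing over a prime field.
# Generic textbook primitive only, not the TPRE+ or timed-release schemes
# above. Uses `random` for brevity; real systems need the `secrets` module.
import random

P = 2**127 - 1  # Mersenne prime defining the field GF(P)

def split(secret: int, n: int, t: int) -> list[tuple[int, int]]:
    """Hide `secret` as f(0) of a random degree-(t-1) polynomial f,
    then hand out the shares (x, f(x)) for x = 1..n."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares: list[tuple[int, int]]) -> int:
    """Lagrange-interpolate the polynomial at x = 0 from any t shares."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = split(secret=424242, n=5, t=3)   # e.g., 5 proxies, threshold 3
assert reconstruct(shares[:3]) == 424242  # any 3 shares recover the secret
assert reconstruct(shares[2:]) == 424242
```

Any t shares recover the secret while t − 1 reveal nothing about it, which is what lets threshold designs like the two above tolerate unavailable or compromised parties up to the threshold.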