Last updated: 2025-08-29 05:01 UTC
Number of pages: 146
Author(s) | Title | Year | Publication | Keywords | Abstract | DOI |
---|---|---|---|---|---|---|
Hamza Mokhtar, Xiaoqiang Di, Zhengang Jiang, Jing Chen, Abdelrhman Hassan | Efficient Spatiotemporal Prediction Transformer for Cooperative Satellite Remote Sensing | 2025 | Early Access | Satellites Remote sensing Data communication Telecommunication traffic Delays Real-time systems Network topology Accuracy Topology Spatiotemporal phenomena Remote sensing data Spatiotemporal Prediction Satellites network traffic Attention mechanism Encoder–decoder | Satellite remote sensing cooperation is essential for ensuring efficient data transmission in real-time applications. Network traffic prediction plays a crucial role in optimizing data transmission strategies, managing congestion, and reducing network latency. However, current research work on network traffic prediction frequently fails to fully exploit the complex spatial-temporal dependencies inherent in satellite network traffic. To address this limitation and improve the accuracy of long-term network traffic prediction, we propose an Efficient Spatiotemporal Prediction Transformer (ESPformer) for dynamic data transmission in cooperative satellite remote sensing. The proposed scheme not only considers propagation delays but also captures the temporal and spatial relationships among the network traffic. In particular, we design a spatial-temporal multi-head attention mechanism within an encoder-decoder transformer to capture the dynamic spatial dependencies and predict the network topology and its parameters, including traffic flow and bandwidth. By leveraging historical traffic data and the network traffic conditions, the model estimates expected queuing delays. Finally, based on the volume of traffic predicted and the changes in network conditions, we dynamically adjust the transmission strategies to maintain an efficient relaying mechanism. Therefore, our model enables an adaptive transmission strategy and offers an optimal delay reduction in real-time satellite data transmission. 
Extensive experiments conducted on four well-known traffic datasets demonstrate that the ESPformer significantly outperforms state-of-the-art baselines across all key performance metrics. | 10.1109/TNSM.2025.3580444 |
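The spatial-temporal multi-head attention at the core of ESPformer builds on standard scaled dot-product attention. Below is a dependency-free, single-head sketch for illustration only; the multi-head structure and the spatial-temporal factorization are the paper's contribution and are not reproduced here.

```python
import math

def attention(queries, keys, values):
    """Scaled dot-product attention on plain lists of vectors:
    each query attends over all keys and returns a weighted sum of values."""
    d = len(keys[0])
    out = []
    for q in queries:
        # similarity of the query to every key, scaled by sqrt(d)
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]  # numerically stable softmax
        z = sum(exps)
        weights = [e / z for e in exps]
        out.append([sum(w * v[i] for w, v in zip(weights, values))
                    for i in range(len(values[0]))])
    return out
```

A query that is more similar to the first key receives an output dominated by the first value vector; in ESPformer's setting the keys/values would come from traffic observations at other satellites and time steps.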
Boubakr Nour, Makan Pourzandi, Mourad Debbabi | THREATIFY: APT Threat Variant Generation Using Graph-Based Machine Learning | 2025 | Early Access | Security Planning Malware Vectors US Department of Homeland Security Machine learning Knowledge based systems Electronic mail Training Resilience Cybersecurity Cyber Threat Intelligence Variant Attack Generation Security Automation Threat Hunting | Ensuring cybersecurity in an ever-evolving threat landscape requires proactive identification and understanding of potential threats. Conventional detection and prediction solutions often fall short as they predominantly focus on known attack vectors. Advanced Persistent Threats (APTs) are becoming increasingly sophisticated and stealthy, resulting in new threat variants that are undetectable by these detection solutions. This paper introduces THREATIFY, a novel approach to predicting the most probable threat variants from existing APTs and previously seen attack campaigns. Our approach automates the generation of threat variants using graph-based machine learning based on the attack definition, past attack campaigns, and the security context between different techniques. THREATIFY leverages a security knowledge base of realistic attack scenarios and cybersecurity expertise to model, generate, and predict new forms of potential future threats by combining intra- (i.e., within the same APT attack) and inter- (i.e., between different APTs) techniques used by threat actors. It is crucial to emphasize that THREATIFY does not merely mix techniques from different APTs; rather, it constructs a logical and pragmatic kill chain based on their security context. THREATIFY is able to predict new attack steps, identify relevant substitute techniques, and merge APT techniques in the current security context, thus creating previously unexplored threat variants. 
Our extensive experimental results demonstrate the efficacy of our approach in generating relevant and novel threat variants with a similarity score of 92%, uniqueness of 82%, validity of 95%, and reduction rate of 96%, including those that have never occurred before. | 10.1109/TNSM.2025.3581463 |
Yukun Zhu, Ruhui Ma, Luyao Liu, Jin Cao, Hui Li, Xiaosong Zhang | Ultra-High-Speed Terminal Secure Access and Intra-Group Authentication Scheme in Satellite Networks | 2025 | Early Access | Authentication Satellites Protocols Cryptography Switches Trajectory Performance evaluation Communication channels Polynomials Eavesdropping Ultra-high-speed terminal access authentication intra-group authentication satellite network | Ultra-High-Speed Terminals (UHSTs) can transport multiple Load Equipment (LEs) to precise locations, such as in a space station resupply mission scenario. In these scenarios, UHSTs need to access ground networks via satellite networks. However, since the connections between UHSTs and ground networks are established through insecure air interface channels, they are susceptible to attacks such as eavesdropping, impersonation, and others. Furthermore, owing to the high-speed mobility of UHSTs, they may not be able to connect successfully to the ground network through a single access point, which is possible for regular terminals. Additionally, a UHST may also need to communicate with multiple LEs, which are also connected via insecure air interface channels. Therefore, this paper proposes a secure access and intra-group authentication scheme for UHSTs in satellite network scenarios. In the proposed scheme, based on pre-shared keys and trajectory prediction mechanisms, the UHST can successfully access the ground network through multiple access points and complete key establishment with the access points along its trajectory in advance. Using Shamir’s (t, n) Secret Sharing mechanism, the UHST and multiple LEs can share a group key, ensuring secure intra-group data communication. Additionally, when one LE detaches from the UHST, the UHST can authorize the LE to access the ground network. Security and efficiency analysis shows that the proposed scheme achieves comprehensive security features with low overhead. | 10.1109/TNSM.2025.3581219 |
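The group-key step in the abstract above relies on standard Shamir (t, n) secret sharing, which can be sketched independently of the paper's protocol. The prime field, share indices, and function names below are illustrative choices, not details from the scheme itself.

```python
import random

PRIME = 2**127 - 1  # a Mersenne prime, large enough for a 128-bit group key

def _eval_poly(coeffs, x):
    """Evaluate a polynomial (coeffs[0] is the secret) at x, mod PRIME."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % PRIME
    return acc

def split_secret(secret, t, n):
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(t - 1)]
    return [(x, _eval_poly(coeffs, x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

group_key = random.randrange(PRIME)
shares = split_secret(group_key, t=3, n=5)  # e.g., a UHST plus four LEs
assert reconstruct(shares[:3]) == group_key
assert reconstruct(shares[1:4]) == group_key
```

Any t of the n shares recover the group key via Lagrange interpolation at x = 0, while any t − 1 shares reveal nothing about it, which is what makes the construction suitable for intra-group key agreement.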
Erhe Yang, Zhiwen Yu, Yao Zhang, Helei Cui, Zhaoxiang Huang, Hui Wang, Jiaju Ren, Bin Guo | Joint Semantic Extraction and Resource Optimization in Communication-Efficient UAV Crowd Sensing | 2025 | Early Access | Sensors Autonomous aerial vehicles Optimization Semantic communication Data mining Feature extraction Resource management Accuracy Data models Data communication UAV crowd sensing semantic communication multi-scale dilated fusion attention reinforcement learning | With the integration of IoT and 5G technologies, UAV crowd sensing has emerged as a promising solution to overcome the limitations of traditional Mobile Crowd Sensing (MCS) in terms of sensing coverage. As a result, UAV crowd sensing has been widely adopted across various domains. However, existing UAV crowd sensing methods often overlook the semantic information within sensing data, leading to low transmission efficiency. To address the challenges of semantic extraction and transmission optimization in UAV crowd sensing, this paper decomposes the problem into two sub-problems: semantic feature extraction and task-oriented sensing data transmission optimization. To tackle the semantic feature extraction problem, we propose a semantic communication module based on Multi-Scale Dilated Fusion Attention (MDFA), which aims to balance data compression, classification accuracy, and feature reconstruction under noisy channel conditions. For transmission optimization, we develop a reinforcement learning-based joint optimization strategy that effectively manages UAV mobility, bandwidth allocation, and semantic compression, thereby enhancing transmission efficiency and task performance. Extensive experiments conducted on real-world datasets and simulated environments demonstrate the effectiveness of the proposed method, showing significant improvements in communication efficiency and sensing performance under various conditions. | 10.1109/TNSM.2025.3603194 |
Ke Chen, Li Zhang, Jihai Zhong | Space-Air-Ground Integrated Network (SAGIN) in Disaster Management: A Survey | 2025 | Early Access | Disasters Ad hoc networks Surveys Communication systems Satellites Disaster management Wireless communication Space-air-ground integrated networks Autonomous aerial vehicles Wireless sensor networks Search and Rescue (SAR) emergency communication network Space-Air-Ground Integrated Network (SAGIN) | Large-scale natural disasters or public security incidents frequently cause substantial damage to both human life and property, as well as terrestrial communication infrastructure. As a result, this disruption often cuts off communication, leaving the victims isolated from the outside world. Timely completion of Search and Rescue (SAR) operations within the first 72 hours following a disaster is of critical importance, as it can significantly protect human lives and reduce property damage. Note that conducting SAR operations in post-disaster areas requires not only communication support but also computing support. In light of this, it is particularly important to rapidly establish an emergency communication system with computing resources, which offers high reliability, low latency, and high capacity. Such a system is crucial for reducing the threat posed by disasters to human lives. Given the challenges in rapidly restoring terrestrial networks, flexible aerial networks and existing satellite networks emerge as optimal candidates for emergency communications. Meanwhile, the integration of aerial platforms, such as High Altitude Platforms (HAPs) and Low Altitude Platforms (LAPs), can effectively reduce the transmission latency associated with satellite networks and alleviate capacity constraints in terrestrial emergency communication networks. 
The Space-Air-Ground Integrated Network (SAGIN)-based emergency communication system can utilize the advantages of each segment, including the extensive coverage provided by the space network, the flexibility of the air network, and the high transmission data rates and low latency of the ground network. Consequently, this represents an exemplary paradigm for supporting SAR operations in the future. In this paper, we provide a comprehensive survey of SAGIN-based emergency communication systems, identify key challenges, and discuss promising technologies. Furthermore, future research directions are outlined from multiple perspectives. | 10.1109/TNSM.2025.3580965 |
Sergi Alcalà-Marín, Dario Bega, Marco Gramaglia, Albert Banchs, Xavier Costa-Perez, Marco Fiore | AZTEC+: Long and Short Term Resource Provisioning for Zero-Touch Network Management | 2025 | Early Access | Costs Resource management Biological system modeling Virtual machines Network slicing Forecasting Cloud computing Autonomous networks Training Software defined networking Mobile networks Slicing Resource Provisioning Zero-Touch Management Deep Learning Traffic prediction | In the past few years, network infrastructures have transitioned from prominently hardware-based models to networks of functions, where software components provide the required functionalities with unprecedented scalability and flexibility. However, this new vision entails a completely new set of problems related to resource provisioning and network function operation, making it difficult to manage the network function lifecycle with traditional, human-in-the-loop approaches. Novel zero-touch management solutions promise autonomous network operation with limited human interaction. However, modeling network function behavior into suitable variables and algorithms is an aspect that such solutions must take into account. In this paper, we propose AZTEC+, a data-driven solution for anticipatory resource provisioning in network slicing scenarios. By leveraging a hybrid and modular deep learning architecture, AZTEC+ not only forecasts the future demands for target services but also identifies the best trade-offs to balance the costs due to the instantiation and reconfiguration of such resources. Our experimental evaluation, based on real-world network data, shows how AZTEC+ can outperform state-of-the-art management solutions for a large set of metrics. | 10.1109/TNSM.2025.3580706 |
Xiaowei Zhao, Mingshu He, Xiaojuan Wang | Semi-Supervised Learning with Interpolation and Pseudo-Labeling for Few-Label Intrusion Detection | 2025 | Early Access | Interpolation Training Intrusion detection Data models Telecommunication traffic Predictive models Feature extraction Hands Generative adversarial networks Encryption Semi-supervised learning intrusion detection consistency regularization pseudo-labeling few labels | Given the scarcity of labels in network traffic data, traditional supervised learning methods are limited by their dependence on large amounts of labeled data. While semi-supervised learning (SeSL) offers potential solutions, existing SeSL-based intrusion detection systems (IDS) still require substantial labeled samples for effective training, severely constraining their adaptability to emerging cyber threats under extreme label scarcity scenarios (e.g., 3-5 labels per class). This paper proposes IPL-SeSL, a novel SeSL framework that synergistically integrates Interpolation and Pseudo-Labeling mechanisms to enhance IDS’s performance under severe label constraints. IPL-SeSL consists of a supervised branch, a pseudo-labeling branch, and an interpolation branch. In the pseudo-labeling branch, we propose a data augmentation method specifically designed for network traffic data, which enhances the model’s robustness and generalization ability. The interpolation mechanism introduces a novel sample generation strategy that reinforces decision boundaries through geometrically meaningful feature space transformations. Comprehensive evaluations on the CICIoMT2024 benchmark demonstrate the framework’s exceptional performance, achieving 91% detection accuracy with merely 5 labeled instances per class. | 10.1109/TNSM.2025.3580740 |
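The interpolation branch described above generates samples between labeled points; a common way to realize this is mixup-style convex interpolation of features and one-hot labels. The sketch below is a generic illustration under that assumption, not the paper's exact generation strategy.

```python
import random

def interpolate(x1, y1, x2, y2, alpha=0.4):
    """Mix two labeled flow-feature vectors (mixup-style).
    y1/y2 are one-hot label lists; the weight lam ~ Beta(alpha, alpha)."""
    lam = random.betavariate(alpha, alpha)
    x = [lam * a + (1 - lam) * b for a, b in zip(x1, x2)]
    y = [lam * a + (1 - lam) * b for a, b in zip(y1, y2)]
    return x, y

# two hypothetical labeled flows from different classes
x, y = interpolate([0.0, 2.0], [1.0, 0.0], [4.0, 0.0], [0.0, 1.0])
```

Each synthetic sample lies on the segment between the two originals, with a soft label in proportion, which pushes the classifier toward smooth behavior between classes and thereby reinforces decision boundaries even with very few labels.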
Dan Tang, Chenguang Zuo, Jiliang Zhang, Keqin Li, Qiuwei Yang, Zheng Qin | MARS: Defending TCP Protocol Abuses in Programmable Data Plane | 2025 | Early Access | Protocols Prevention and mitigation Denial-of-service attack Monitoring Training Switches Receivers Programming Computer languages Bandwidth attack mitigation TCP protocol abuse machine learning heuristic rule programmable data plane | The TCP protocol’s inherent lack of built-in security mechanisms has rendered it susceptible to various network attacks. Conventional defense approaches face dual challenges: insufficient line-rate processing capacity and impractical online deployment requirements. The emergence of P4-based programmable data planes now enables line-speed traffic processing at the hardware level, creating new opportunities for protocol protection. In this context, we present MARS, a data-plane-native TCP abuse detection and mitigation system that synergistically combines the Beaucoup traffic monitoring algorithm with artificial neural network (ANN) based anomaly detection, enhanced by adaptive heuristic mitigation rules. Through comprehensive benchmarking against existing TCP defense mechanisms, our solution demonstrates 12.95% higher throughput maintenance and 25.93% improved congestion window recovery ratio during attack scenarios. Furthermore, the proposed framework establishes several novel evaluation metrics specifically for TCP protocol protection systems. | 10.1109/TNSM.2025.3580467 |
Qiangqiang Shi, Jin Liu, Lai Wei, Jiajia Jiao, Bing Han, Zhongdai Wu | SegCoT: Dependable Intrusion Detection System based on Segment-wise CoTransformer for Ship Communication Networks | 2025 | Early Access | Marine vehicles Feature extraction Intrusion detection Accuracy Artificial intelligence Security Maritime communications Data mining Training Threat modeling Intrusion detection deep learning intrusion detection system ship communication networks | Modern vessels integrate a massive digital infrastructure and navigation-dependent operating systems, allowing for ship-to-shore and ship-to-ship collaborative communication. However, the heightened interconnection of various maritime infrastructures inevitably amplifies the risk of vessel navigation and communication. Existing intrusion detection techniques are usually built on individual network events and fail to account for the multi-event long-term dependency problem caused by the high latency and low bandwidth of ship communication networks; they therefore cannot tackle sophisticated cyber-ship attacks, resulting in lower detection accuracy. In this paper, we propose a dependable Intrusion Detection System (IDS) based on a Segment-wise CoTransformer (SegCoT) to detect cyber-ship intrusion events, which primarily contains a two-stage Network Pattern Extraction Component (NPEC) and an Intrusion Event Identification Component (IEIC). The NPEC automates the extraction of long-term dependencies across massive intrusion events employing a SegEvent-wise Attention (SEA). Furthermore, the extracted dependencies are leveraged by the IEIC for specific intrusion type detection from a spatio-temporal feature fusion perspective. Based on a cyber-ship dataset collected from real ocean-going vessels, the proposed model achieves 99% intrusion detection accuracy, outperforming the existing state-of-the-art approaches. | 10.1109/TNSM.2025.3580471 |
Kiymet Kaya, Elif Ak, Eren Ozaltun, Leandros Maglaras, Trung Q. Duong, Berk Canberk, Sule Gunduz Oguducu | Black Hole Prediction in Backbone Networks: A Comprehensive and Type-Independent Forecasting Model | 2025 | Early Access | Telecommunication traffic Forecasting Time series analysis Predictive models Training Mobile computing Data models Routing protocols Electronic mail Computational modeling black hole anomaly forecasting backbone networks convolutional autoencoder unsupervised learning multi-head self-attention | Network backbone black holes (BH) pose significant challenges in the Internet by causing disruptions and data loss as routers silently drop packets without notification. These silent BH failures, stemming from issues like hardware malfunctions or misconfigurations, uniquely affect point-to-point packet flows without disrupting the entire network. Unlike cyber attacks and network intrusions, BHs are often untraceable, making early detection vital and challenging. This study addresses the need for an effective forecasting solution for BH occurrences, especially in environments with unlabeled traffic data where traditional anomaly detection methods fall short. The Type-Independent Black Hole Forecasting Model is introduced to predict BH occurrences with high precision across various anomalies, including contextual and collective anomaly types. The three-stage methodology processes unlabeled time-series network data, where the data is not pre-labeled as anomaly or normal, using machine learning and deep learning techniques to identify and forecast potential BH occurrences. The ’Point BH Identification and Segregation’ stage segregates point BH traffic using Density-Based Spatial Clustering of Applications with Noise (DBSCAN), followed by Reintegration and Time Series Smoothing. The final stage, Advanced Contextual and Collective BH Detection, leverages a Convolutional AutoEncoder (Conv-AE) with window sliding for advanced anomaly detection. 
Evaluation using a dual-dataset approach, including real backbone network traffic and a time-series adapted public dataset, demonstrates the adaptability of the model to real backbone BH detection systems. Experimental results show superior performance compared to state-of-the-art unsupervised anomaly forecasting models, with a 98% detection rate and 90% F-1 score, outperforming models like MultiHeadSelfAttention, which is the main building block of Transformers. | 10.1109/TNSM.2025.3581557 |
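The "Point BH Identification and Segregation" stage above uses DBSCAN to separate sparse point anomalies from dense normal traffic. A self-contained toy version on 1-D traffic rates follows; a real deployment would use a library implementation on multivariate features, and the eps/min_pts values here are illustrative.

```python
def dbscan(points, eps, min_pts):
    """Toy 1-D DBSCAN: returns a cluster id (>= 0) per point, -1 = noise."""
    labels = [None] * len(points)
    cid = -1

    def neighbors(i):
        return [j for j, p in enumerate(points) if abs(p - points[i]) <= eps]

    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:
            labels[i] = -1          # point anomaly (candidate point BH)
            continue
        cid += 1                    # i is a core point: start a new cluster
        labels[i] = cid
        seeds = [j for j in nbrs if j != i]
        while seeds:                # grow the dense cluster
            j = seeds.pop()
            if labels[j] == -1:     # previously noise: absorb as border point
                labels[j] = cid
                continue
            if labels[j] is not None:
                continue
            labels[j] = cid
            jn = neighbors(j)
            if len(jn) >= min_pts:  # j is itself a core point, keep expanding
                seeds.extend(jn)
    return labels

# hypothetical per-interval packet rates: one sudden drop among normal traffic
labels = dbscan([100, 102, 98, 101, 99, 5, 100], eps=5, min_pts=3)
```

The rate of 5 is left unclustered (label -1) and would be segregated as point-BH traffic, while the dense run of values near 100 forms one cluster and proceeds to the smoothing and Conv-AE stages.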
Jiali Zheng, Jiawen Li | MGRS-PBFT: An Optimized Consensus Algorithm Based on Multi-Group Ring Signatures for Blockchain Privacy Protection | 2025 | Early Access | Blockchains Privacy Protection Consensus algorithm Security Cryptography Fault tolerant systems Fault tolerance Scalability Public key blockchain consensus protocol ring signature practical Byzantine fault tolerance (PBFT) privacy protection | In the realm of blockchain systems, the prominence of privacy and security issues is steadily increasing, thereby necessitating greater focus on privacy protection technologies. The conventional practical Byzantine fault tolerance (PBFT) consensus algorithm is characterized by limited scalability and an absence of privacy protection. Consequently, this study introduces ring signature privacy protection technology and proposes an enhanced PBFT consensus algorithm, named MGRS-PBFT, based on multi-group ring signatures, aimed at bolstering blockchain privacy protection. Firstly, a credit score mechanism is introduced to incentivize or penalize nodes’ behavior and assess their performance. The selection of the primary node is determined through a voting process utilizing the nodes’ credit scores, thereby mitigating the influence of malicious nodes within the system and enhancing overall system security. Secondly, nodes are stratified into multiple groups based on inter-node response speed, thus streamlining the consensus protocol and diminishing system communication complexity. Finally, an identity-based ring signature algorithm is implemented to protect the privacy data of nodes. Experimental results demonstrate that the average consensus delay of MGRS-PBFT is reduced by 46.73% compared to PBFT, 20.87% compared to double-layer PBFT, 18.61% compared to SG-PBFT, and 27.58% compared to CRBFT. 
Additionally, MGRS-PBFT achieves an average throughput that is 2.44 times that of PBFT, 1.27 times that of double-layer PBFT, 1.23 times that of SG-PBFT, and 1.45 times that of CRBFT. Through security experiments, the analysis demonstrates that MGRS-PBFT outperforms PBFT in terms of its resistance to Byzantine nodes and fault tolerance. | 10.1109/TNSM.2025.3580403 |
Zhichao Zhang, Yanan Cheng, Zhaoxin Zhang, Xinran Liu, Ning Li | 6Hound: An Efficient IPv6 DNS Resolver Discovery Model Based on Reinforcement Learning | 2025 | Early Access | Domain Name System Internet Heuristic algorithms Protocols Deep learning 6G mobile communication Training Resource management Reinforcement learning Probes IPv6 scanning DNS resolver Internet-wide scanning reinforcement learning target generation | DNS resolvers are important measurement targets in the IPv4/IPv6 Internet for cybersecurity and network management. However, due to the vast address space of IPv6, it is infeasible to discover IPv6 DNS resolvers using brute-force Internet-wide scanning as in IPv4. To address this issue, researchers have developed target generation algorithms (TGAs) to discover active targets in the IPv6 address space. However, most TGAs utilize ICMP as the probing protocol and depend on large, high-quality ICMP seed address datasets. When the same TGA methods are applied to the UDP/53 protocol, which has a limited number of seed addresses, the efficiency of discovering DNS resolvers is low. To solve this issue, we developed 6Hound to efficiently discover DNS resolvers in the IPv6 Internet. To mitigate the scarcity of UDP/53 seed addresses, we proposed the Pattern-merged Tree, which strategically expands the scanning space by utilizing ICMP seed addresses. To efficiently discover de-aliased active addresses within these merged patterns, we proposed a hierarchical multi-armed bandit to control the distribution of probe packets. We introduced the Sliced Address Generation algorithm and a dynamic alias detection mechanism to enhance the hit rate of each detection round and avoid the misleading effects of aliased addresses. In the experiments conducted in the native IPv6 Internet, we discovered about a million de-aliased active DNS resolver addresses under a budget scale of 50M, which is 110% to 465% higher than the state-of-the-art baseline methods. 
| 10.1109/TNSM.2025.3580281 |
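The bandit-driven probe allocation above can be pictured with a flat UCB1 allocator; 6Hound's bandit is hierarchical over address patterns, so the arms, reward signal, and budget below are hypothetical stand-ins for that structure.

```python
import math

def ucb1_probe(hit_rate, n_arms, budget):
    """UCB1: spend `budget` probes across address patterns (arms),
    favoring patterns whose probes hit active resolvers more often."""
    counts = [0] * n_arms
    hits = [0.0] * n_arms
    for t in range(1, budget + 1):
        if t <= n_arms:
            arm = t - 1          # probe every pattern once first
        else:
            # exploit the empirical hit rate plus an exploration bonus
            arm = max(range(n_arms),
                      key=lambda a: hits[a] / counts[a]
                      + math.sqrt(2 * math.log(t) / counts[a]))
        hits[arm] += hit_rate(arm)   # fraction of probes answering on UDP/53
        counts[arm] += 1
    return counts

# three hypothetical patterns with different (noise-free) hit rates
counts = ucb1_probe(lambda a: [0.05, 0.60, 0.20][a], n_arms=3, budget=300)
```

With these rates, the bulk of the 300-probe budget ends up on the most productive pattern while the others are still revisited occasionally, which is the behavior needed to keep the hit rate high under a fixed scanning budget.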
Kashif Mehmood, Katina Kralevska, David Palma | Knowledge-Driven Intent Life-Cycle Management for Cellular Networks | 2025 | Early Access | Knowledge graphs Translation Cellular networks Stakeholders Optimization Ontologies 5G mobile communication Resource description framework Monitoring Quality of service IBN closed-loop control service model knowledge graph learning service and network management optimization | The management of cellular networks and services has evolved due to the rapidly changing demands and complexity of service modeling and management. This paper uses intent-based networking (IBN) as a solution and couples it with contextual information from knowledge graphs (KGs) of network and service components to achieve the objective of service orchestration in cellular networks. Fusing IBN with KGs facilitates an intelligent, flexible, and resilient service orchestration process. We propose an intent completion approach using knowledge graph learning and a mapping model capable of inferring and validating the service intents in the network. Subsequently, these service intents are deployed using available network resources in a simulated fifth generation (5G) non-standalone (NSA) network. The compliance of the deployed intents is monitored, and mutual optimization against their required service key performance indicators is performed using Simultaneous Perturbation Stochastic Approximation (SPSA) and Multiple Gradient Descent Algorithm (MGDA). The numerical results show that the knowledge graph with Gaussian embedding (KG2E) model outperforms other distance-based embedding models for the proposed service KG. Different combinations of strict latency (SL) and non-strict latency (NSL) intents are deployed, and compliance is evaluated for increasing numbers of deployed intents against baseline deployment scenarios. 
The results show a higher level of compliance for SL intents to target latencies in comparison to NSL intents for the proposed intent deployment and optimization algorithm. | 10.1109/TNSM.2025.3579547 |
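SPSA, used above for compliance optimization, needs only two loss evaluations per step to estimate a full gradient, which is what makes it attractive when each evaluation means probing a live network. Below is a generic single-step sketch with constant gains (practical SPSA decays the gains a and c over iterations); the toy quadratic loss is a stand-in for a latency-compliance objective, not the paper's.

```python
import random

def spsa_step(loss, theta, a=0.1, c=0.1):
    """One SPSA iteration: a single random +/-1 perturbation per coordinate
    yields a full-gradient estimate from only two loss evaluations."""
    delta = [random.choice((-1.0, 1.0)) for _ in theta]
    plus = [t + c * d for t, d in zip(theta, delta)]
    minus = [t - c * d for t, d in zip(theta, delta)]
    diff = loss(plus) - loss(minus)          # two evaluations, any dimension
    return [t - a * diff / (2 * c * d) for t, d in zip(theta, delta)]

# toy stand-in objective with optimum at (3, -2)
loss = lambda th: (th[0] - 3.0) ** 2 + (th[1] + 2.0) ** 2
theta = [0.0, 0.0]
for _ in range(400):
    theta = spsa_step(loss, theta)
```

Because the same two-sided difference serves every coordinate, the per-step cost is independent of the number of tunable parameters, unlike finite-difference gradients that need two evaluations per dimension.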
Chi Guo, Cong Wang, Qiuzhan Zhou, Juan Li | A Bi-level Scheme for Mixed-Motive and Energy-Efficient Task Offloading in Vehicular Edge Computing Systems | 2025 | Early Access | Servers Optimization Games Mobile handsets Computational efficiency Energy consumption Computational modeling Performance evaluation Resource management Energy efficiency Vehicular Edge Computing Bi-level Optimization Task Offloading Stackelberg Game Multi-Agent Actor-Critic | Edge computing is considered a promising paradigm to support vehicular applications in the upcoming sixth-generation (6G) vehicular networks. In the context of vehicular edge computing (VEC), the self-interested vehicular users and edge servers work towards incongruous goals. Such a mixed-motive setting is detrimental to the collective good, sometimes leading to social dilemmas. To resolve such a conflict, we first formulate a bi-level optimization problem to model mixed-motive task offloading. In this case, vehicular users aim to improve energy efficiency under strict low-latency requirements, whereas edge servers attempt to increase serving efficiency. To address it, we propose a scheme based on bi-level reinforcement learning, i.e., a bi-level multi-agent actor-critic (BLMAAC) framework. Specifically, upper-level edge servers perform iterative optimization under the best responses of lower-level vehicular users, which can be regarded as a Stackelberg game. Theoretically, we identify the conditions for convergence and prove that the framework reaches a Stackelberg equilibrium strategy. Numerical evaluation of the high-utilization edge servers and energy-efficient vehicular users demonstrates the superiority of the bi-level structure. Moreover, the proposed scheme outperforms other actor-critic based learning algorithms and two-stage methods exploring a Nash equilibrium strategy. | 10.1109/TNSM.2025.3579598 |
Wei-Kuo Chiang, Ting-Yu Wang, Yun-Fan Huang, Kun-Ting Liao | A Quantitative Approach to Optimize 5GC Refactoring for Minimum Signaling Latency and Resource Allocation | 2025 | Early Access | Heuristic algorithms 5G mobile communication Microservice architectures Delays Multiuser detection Merging Resource management Clustering algorithms Optimization Computer architecture Network function virtualization 5G core network (5GC) refactoring merging string matching algorithm queuing delay | This article proposes a quantitative approach to refactoring optimization, taking the 5G core (5GC) network as an example. Our previous study formulated the refactoring problem to minimize queuing delay and resource allocation cost directly in the M/M/k model and utilized the optimization tool GUROBI to derive an optimal refactored 5GC architecture, abbreviated as GUR-5GC. However, this approach is time-consuming and thus not feasible for dynamic scaling design. We design two quantitative indicators, message exchange reduction (MER) and merging utilization degradation (MUD), to evaluate the impacts of merging certain network functions. Moreover, the problem of calculating the two quantitative indicators can be reduced to a string-matching problem. We then reconstructed the optimization model formulation by using the MER and MUD indicators in the objective functions instead of the queuing delay and resource allocation cost, since the CPLEX Mathematical Programming (MP) and Constraint Programming (CP) models could not solve the original problem. Next, we utilized the CPLEX MP Model optimizer integrated with Pareto optimality to derive the CPLEX-based Refactored 5GC (CPR-5GC). In addition, we use a CURE-based clustering algorithm with MER and MUD, performing the quantitative analysis to derive the CURE-based Refactored 5GC (CUR-5GC) architecture. Finally, we analyzed the performance of the 5GC, GUR-5GC, CPR-5GC, and CUR-5GC. 
Moreover, we evaluate them in terms of queuing delay and scaling side effects. The performance results show that CPR-5GC and CUR-5GC outperform the original 5GC and are close to the GUR-5GC; the two heuristic algorithms for 5GC refactoring are feasible and practical. | 10.1109/TNSM.2025.3602492 |
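The reduction of the MER/MUD computation to string matching can be pictured by encoding each signaling message as a symbol and counting occurrences of the exchange pattern that a candidate merge would eliminate. The encoding, trace, and pattern below are hypothetical illustrations; only the overlapping-count routine itself is standard.

```python
def count_overlapping(sequence, pattern):
    """Count (possibly overlapping) occurrences of a message pattern."""
    count, start = 0, 0
    while True:
        idx = sequence.find(pattern, start)
        if idx < 0:
            return count
        count += 1
        start = idx + 1

# hypothetical trace: 'A' = request NF1 -> NF2, 'B' = its response;
# merging NF1 and NF2 would turn each 'AB' exchange into a local call
trace = "ABCABDAB"
saved = count_overlapping(trace, "AB")  # exchanges a merge would remove
```

An MER-style indicator could then be the ratio saved / len(trace) for each candidate merge, letting a heuristic such as the CURE-based algorithm compare merges by counting matches rather than solving the full queuing model.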
José Santos, Bibin V. Ninan, Bruno Volckaert, Filip De Turck, Mays Al-Naday | A Comprehensive Benchmark of Flannel CNI in SDN/non-SDN Enabled Cloud-Native Environments | 2025 | Early Access | Containers Benchmark testing IP networks Microservice architectures Encapsulation Complexity theory Software defined networking Packet loss Overlay networks Network interfaces Containers Container Network Interfaces Network Function Virtualization Benchmark Cloud-Native Software-Defined Networking | The emergence of cloud computing has driven advancements in software virtualization, particularly microservice containerization. This in turn led to the development of Container Network Interfaces (CNIs) such as Flannel to connect microservices over a network. Despite their objective to provide connectivity, CNIs have not been adequately benchmarked when containers are connected over an external network. This creates uncertainty about the operational reliability of CNIs in distributed edge-cloud ecosystems. Given the multitude of available CNIs and the complexity of comparing different ones, this paper focuses on the widely adopted CNI, Flannel. It proposes the design of novel benchmarks of Flannel across external networks, Software Defined Networking (SDN)-based and non-SDN, characterizing two of the key backend types of Flannel: User Datagram Protocol (UDP) and Virtual Extensible LAN (VXLAN). Unlike existing benchmarks, this study analyses the overhead introduced by the external network and the impact of network disruptions. The paper outlines the systematic approach to benchmarking a set of Key Performance Indicators (KPIs), including speed, latency, and throughput. A variety of network disruptions have been induced to analyse their impact on these KPIs, including delay, packet loss, and packet corruption. The results show that VXLAN consistently outperforms UDP, offering superior bandwidth with efficient resource consumption, making it more suitable for production environments. 
In contrast, the UDP backend is suitable for real-time video streaming applications due to its higher data rate and lower jitter, though it requires higher resource utilization. Moreover, the results show less variation in KPIs over SDN than over non-SDN. The benchmark data are made publicly available in an open-source repository, enabling researchers to replicate the experiments and potentially extend the study to other CNIs. This work contributes to the network management domain by providing an extensive benchmark study on container networking, highlighting the main advantages and disadvantages of current technologies. | 10.1109/TNSM.2025.3602607 |
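The VXLAN-versus-UDP comparison above comes down to summarizing each backend's KPI samples by their mean and variability. A minimal sketch of such a summary follows; the backend names match the paper's two Flannel backends, but the throughput samples are invented for illustration and are not the published dataset:

```python
import statistics

def summarize_kpis(samples):
    """Summarize per-backend KPI samples (e.g., throughput in Mbit/s).

    `samples` maps backend name -> list of measured values. The coefficient
    of variation (cv) captures KPI stability: lower cv means steadier runs,
    the kind of property the study compares across SDN and non-SDN setups.
    """
    return {
        backend: {
            "mean": statistics.mean(vals),
            "cv": statistics.stdev(vals) / statistics.mean(vals),
        }
        for backend, vals in samples.items()
    }

# Hypothetical throughput samples (Mbit/s) for the two Flannel backends.
throughput = {
    "vxlan": [940, 935, 948, 942],
    "udp":   [610, 580, 655, 600],
}
report = summarize_kpis(throughput)
```

With these illustrative numbers, the VXLAN summary shows both a higher mean and a lower coefficient of variation, mirroring the qualitative conclusion of the benchmark.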
Antonio Calagna, Yenchia Yu, Paolo Giaccone, Carla Fabiana Chiasserini | MOSE: A Novel Orchestration Framework for Stateful Microservice Migration at the Edge | 2025 | Early Access | Containers Microservice architectures Protocols Image restoration Quality of experience Autonomous aerial vehicles Kernel Iterative methods Autopilot Source coding Edge computing Service migration Mobile networks Computer vision Machine learning | Stateful migration has emerged as the dominant technology to support microservice mobility at the network edge while ensuring a satisfying experience to mobile end users. This work addresses two pivotal challenges, namely, the implementation and the orchestration of the migration process. We first introduce a novel framework that efficiently implements stateful migration and effectively orchestrates the migration process by fulfilling both network and application KPI targets. Through experimental validation using realistic microservices, we then show that our solution (i) greatly improves migration performance, yielding up to 77% decrease of the migration downtime with respect to the state of the art, and (ii) successfully addresses the strict user QoE requirements of critical scenarios featuring latency-sensitive microservices. Further, we consider two practical use cases, featuring, respectively, a UAV autopilot microservice and a multi-object tracking task, and demonstrate how our framework outperforms current state-of-the-art approaches in configuring the migration process and in meeting KPI targets. | 10.1109/TNSM.2025.3579051 |
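The migration downtime that MOSE minimizes is commonly modeled with iterative pre-copy rounds: state is shipped while the microservice keeps running (and keeps dirtying memory), and the service is paused only for the final residue. The sketch below is a generic textbook-style model under assumed parameters, not MOSE's actual orchestration logic:

```python
def precopy_downtime(mem_mb, dirty_rate_mbps, bw_mbps, stop_threshold_mb, max_rounds=10):
    """Estimate downtime (seconds) of iterative pre-copy stateful migration.

    Each round copies the current dirty set at `bw_mbps` while the running
    service dirties memory at `dirty_rate_mbps`. When the dirty set falls
    below `stop_threshold_mb` (or rounds run out, or dirtying outpaces the
    link), the service is paused and the residue is sent: that final
    stop-and-copy phase is the downtime. All parameters are illustrative.
    """
    dirty = mem_mb
    for _ in range(max_rounds):
        if dirty <= stop_threshold_mb or dirty_rate_mbps >= bw_mbps:
            break
        copy_time = dirty / bw_mbps          # seconds to ship this round
        dirty = dirty_rate_mbps * copy_time  # memory dirtied meanwhile
    return dirty / bw_mbps                   # final stop-and-copy phase
```

The model makes the orchestration trade-off concrete: more bandwidth (or fewer dirty pages per second) shrinks the residue geometrically across rounds, so downtime targets constrain when a migration should be triggered.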
Yuhao Hou, Jiazheng Zou, Licheng Wang, Xijie Lu, Xiuhua Lu, Maoli Wang | PRBCP: Publicly Redactable Blockchain with Off-Chain Reputation-Based Consensus Protocol | 2025 | Early Access | Blockchains Decision making Hash functions Consensus protocol Evaluation models Technological innovation Industrial Internet of Things Electronic medical records Training Symbols Redactable blockchain reputation-based consensus dynamic group incentive electronic medical records | Blockchain is renowned for its immutability, a feature that ensures recorded data cannot be modified or deleted. However, malicious entities can exploit this immutability to permanently embed objectionable data. Moreover, the immutability of blockchain can conflict with the “right to be forgotten” provision in privacy protection laws such as the GDPR. Redactable blockchain solutions were therefore proposed to address this problem. Recently, a primary focus of redactable blockchain research has been the design of global editing permission control. Such designs leverage consensus voting, with the core objective of preventing excessive centralization of editing power and thereby preserving the decentralized nature of blockchain technology. However, these schemes exhibit two shortcomings: (i) the membership of the editorial decision-making group is fixed, and (ii) blockchain nodes display lazy voting behavior during the editorial voting process. Considering these factors, we introduce a Publicly Redactable Blockchain scheme with an off-chain reputation-based Consensus Protocol (PRBCP). In this scheme, any node in the blockchain network has the opportunity to become an editorial node and perform editing operations. 
We design a reputation-based off-chain editorial voting consensus protocol leveraging the threshold signature scheme, which enables dynamic updates to the editorial decision-making group membership and enhances nodes’ participation in editorial voting. In addition, we conduct rigorous security proofs and experimental efficiency analyses for our scheme. The results demonstrate that the PRBCP is both secure and efficient. Finally, we instantiate our scheme as a redactable medical blockchain (RMB) system for storing electronic medical records (EMRs). | 10.1109/TNSM.2025.3578659 |
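The two mechanisms PRBCP targets, reputation-weighted approval of edits and a penalty for lazy voters, can be illustrated with a toy model. This is a simplified stand-in for the paper's off-chain threshold-signature consensus: the node names, weights, 2/3 quorum, and decay factor are all assumptions for illustration:

```python
def editing_approved(votes, reputations, threshold=2/3):
    """Decide an edit request by reputation-weighted voting.

    `votes` maps node id -> True/False; `reputations` maps node id -> weight.
    The edit passes when approving nodes hold at least `threshold` of the
    total reputation among the voters (an assumed quorum rule).
    """
    total = sum(reputations[n] for n in votes)
    approve = sum(reputations[n] for n, v in votes.items() if v)
    return total > 0 and approve / total >= threshold

def update_reputation(reputations, voters, decay=0.9):
    """Penalize lazy voting: nodes that skipped the round lose reputation."""
    return {n: (r if n in voters else r * decay) for n, r in reputations.items()}

# Hypothetical three-node editorial group.
reps = {"a": 5.0, "b": 3.0, "c": 2.0}
passed = editing_approved({"a": True, "b": True, "c": False}, reps)
reps = update_reputation(reps, voters={"a", "b"})
```

Because reputations update every round, the decision-making group's effective composition drifts toward active voters, which is the dynamic-membership behavior the protocol is designed to incentivize.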
Marwan Dhuheir, Aiman Erbad, Ala Al-Fuqaha, Bechir Hamdaoui, Mohsen Guizani | AoI-Aware Intelligent Platform for Energy and Rate Management in Multi-UAV Multi-RIS System | 2025 | Early Access | Autonomous aerial vehicles Internet of Things Optimization Reconfigurable intelligent surfaces Energy consumption Path planning Power system dynamics Heuristic algorithms Energy efficiency Data collection energy harvesting age of information (AoI) multi-UAV path planning RIS PSO reinforcement learning | Recently, unmanned aerial vehicles (UAVs) have demonstrated exemplary performance in various scenarios, such as search and rescue, smart city services, and disaster response applications. UAVs can facilitate wireless power transfer (WPT), resource offloading, and data collection from ground IoT devices. However, employing UAVs for such applications poses several challenges, including limited flight duration, constrained energy resources, and the age of information of the data collected. To address these challenges, we employ a UAV swarm to maximize energy harvesting (EH) and data rates for IoT devices by optimizing UAV paths and integrating reconfigurable intelligent surfaces (RIS) technology. We tackle critical constraints, including UAV energy consumption, flight duration, and data collection deadlines, by formulating an optimization problem to find optimal UAV paths and RIS phase shifts. Given the complexity of the problem, its combinatorial nature, and the challenges of obtaining an optimal solution through conventional optimization methods, we decompose the problem into two sub-problems, employing deep reinforcement learning (DRL) to optimize EH and particle swarm optimization (PSO) to optimize RIS phase shifts. Our extensive simulations show that the proposed solution outperforms competitive algorithms, including Brute-Force-PSO, AC-PSO, and PPO-PSO algorithms, providing a robust solution for modern IoT applications. | 10.1109/TNSM.2025.3584883 |
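The RIS phase-shift sub-problem that the paper hands to PSO can be sketched on a toy objective: maximize the combined signal magnitude |Σₖ e^{j(φₖ+θₖ)}|, which peaks when each RIS phase φₖ cancels its channel phase θₖ. The PSO hyperparameters and the objective below are illustrative assumptions, not the paper's system model:

```python
import cmath
import math
import random

def pso_phase_shifts(channel_phases, n_particles=30, iters=200, seed=0):
    """Tune RIS phase shifts with a standard particle swarm optimizer."""
    rng = random.Random(seed)
    n = len(channel_phases)

    def gain(phi):
        # Combined signal magnitude; maximum is n, reached when phases align.
        return abs(sum(cmath.exp(1j * (p + t))
                       for p, t in zip(phi, channel_phases)))

    # Random initial phase vectors, zero initial velocities.
    pos = [[rng.uniform(-math.pi, math.pi) for _ in range(n)]
           for _ in range(n_particles)]
    vel = [[0.0] * n for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = max(pbest, key=gain)[:]
    for _ in range(iters):
        for i in range(n_particles):
            for k in range(n):
                r1, r2 = rng.random(), rng.random()
                # Inertia 0.7, cognitive/social weights 1.5 (assumed values).
                vel[i][k] = (0.7 * vel[i][k]
                             + 1.5 * r1 * (pbest[i][k] - pos[i][k])
                             + 1.5 * r2 * (gbest[k] - pos[i][k]))
                pos[i][k] += vel[i][k]
            if gain(pos[i]) > gain(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = max(pbest, key=gain)[:]
    return gbest, gain(gbest)
```

On a four-element toy channel the swarm quickly approaches the ideal gain of 4, which is why the paper can afford to re-run such a search as the UAV topology changes; the DRL component handles the path-planning half of the decomposition.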
Seyed Salar Sefati, Bahman Arasteh, Simona Halunga, Octavian Fratu | Adaptive Service Recommendation in Internet of Things Using a Reinforcement Learning and Optimization Algorithm | 2025 | Early Access | Internet of Things Real-time systems Optimization Filtering Accuracy Reinforcement learning Recommender systems Telecommunications Technological innovation Social Internet of Things Black Widow Optimization (BWO) algorithm Internet of Things Reinforcement learning Service recommendations | The Internet of Things (IoT) is a recent technology trend in which devices such as smartphones, smart TVs, medical and healthcare equipment, and home appliances generate data. This paper introduces a novel framework, Reinforcement Learning with Black Widow Optimization (RL-BWO), to enhance IoT service recommendations through responsiveness to evolving service requests and optimized resource usage. Unlike prior hybrid approaches that rely on static recommendation strategies or single-pass learning, RL-BWO uniquely integrates incremental Reinforcement Learning (RL) with evolutionary optimization, enabling continuous policy refinement in dynamic environments. The framework features a multi-batch data partitioning mechanism and a service-request interactive simulator based on Markov Decision Processes (MDP) to support real-time adaptation. The Black Widow Optimization (BWO) algorithm is used to fine-tune service selection through fitness-based ranking, ensuring high-quality recommendations under resource constraints. Experimental results in a smart city simulation show that RL-BWO improves the solved request rate by up to 12.8%, reduces latency by 17%, and enhances reliability by 9.6% compared to leading methods such as Genetic Algorithm–Simulated Annealing–Particle Swarm Optimization (GASAPSO), Time Correlation Coefficient with Cuckoo Search–K-means (TCCF), and Artificial Bee Colony with Genetic Algorithm (ABCGA). 
These results demonstrate RL-BWO’s superior scalability, accuracy, and responsiveness, making it a robust solution for large-scale, real-time IoT service recommendation. | 10.1109/TNSM.2025.3585995 |
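The incremental-RL half of RL-BWO rests on learning action values from a service-request simulator. A toy version of that loop is sketched below as a one-state MDP with an epsilon-greedy Q-update; the per-service success rates, epsilon, and learning rate are invented for illustration and are not the paper's configuration:

```python
import random

def train_recommender(n_services=4, episodes=2000, eps=0.1, alpha=0.2, seed=1):
    """Q-learning on a toy service-request simulator (one-state MDP).

    Each step the agent recommends a service; the simulator returns reward 1
    with that service's hidden success probability, else 0. The Q-values
    converge toward the hidden rates, so the greedy policy ends up
    recommending the most reliable service.
    """
    rng = random.Random(seed)
    success = [0.2, 0.5, 0.9, 0.4]  # hidden per-service quality (assumed)
    q = [0.0] * n_services
    for _ in range(episodes):
        # Epsilon-greedy: explore with probability eps, else exploit.
        if rng.random() < eps:
            a = rng.randrange(n_services)
        else:
            a = max(range(n_services), key=q.__getitem__)
        r = 1.0 if rng.random() < success[a] else 0.0
        q[a] += alpha * (r - q[a])  # incremental Q-value update
    return q

q = train_recommender()
best = max(range(len(q)), key=q.__getitem__)
```

In the full framework this learned ranking would then be refined by BWO's fitness-based selection under resource constraints; the sketch covers only the RL side of that pipeline.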