Last updated: 2025-12-12 05:01 UTC
All documents
Number of pages: 152
| Author(s) | Title | Year | Publication | Keywords | Abstract | DOI |
|---|---|---|---|---|---|---|
| Yan Dong, Bin Cao, Zhiyu Wang, Menglan Hu, Chao Cai, Tianyue Zheng, Kai Peng | A Joint Game-Theoretic Approach for Multicast Routing and Load Balancing in LEO Satellite Networks | 2025 | Early Access | Satellites; Low earth orbit satellites; Multicast algorithms; Routing; Heuristic algorithms; Videos; Trees (botanical); Optimization; Bandwidth; Steiner trees; Low Earth Orbit satellites; Software Defined Multicast; Multicast Routing; Game Theory; Nash Equilibrium | Low Earth Orbit (LEO) satellite networks, with their low latency, high bandwidth, and global coverage, are becoming key technologies for applications like real-time video transmission. As satellite networks expand, effectively managing multicast traffic and optimizing bandwidth utilization have become major challenges for efficient video distribution. Although Software-Defined Multicast (SDM) technology has made progress in bandwidth optimization, existing SDM methods still focus on constructing Steiner trees, making it difficult to address the dynamic changes and high-load conditions in LEO satellite networks. This paper frames the multicast tree construction problem as a Joint Path Optimization Game (JPOG). We propose a Cooperative Game-Theoretic Routing (CGMR) algorithm that optimizes multicast path selection and achieves load balancing by introducing a link cost-sharing mechanism. Additionally, we propose a two-stage A* path generation algorithm to improve path search efficiency. Theoretically, this paper proves that JPOG is a potential game and can converge to a pure strategy Nash equilibrium (PSNE) within a finite number of iterations (a toy best-response sketch of this convergence argument appears after the table). The results show that JPOG outperforms other algorithms, achieving lower link load, lower path cost, and superior load balancing, demonstrating its effectiveness in optimizing multicast routing and resource management in large-scale LEO satellite networks. | 10.1109/TNSM.2025.3632925 |
| Remi Hendriks, Mattijs Jonker, Roland van Rijswijk-Deij, Raffaele Sommese | Load-Balancing Versus Anycast: A First Look at Operational Challenges | 2025 | Early Access | Routing; Internet; Routing protocols; Probes; IP networks; Costs; Tunneling; Time measurement; Source address validation; Servers; Anycast; Load Balancing; Routing Stability | Load Balancing (LB) is a routing strategy that increases performance by distributing traffic over multiple outgoing paths. In this work, we introduce a novel methodology to detect the influence of LB on anycast routing, which operators can use to identify networks that experience anycast site flipping, where traffic from a single client reaches multiple anycast sites. We use our methodology to measure the effects of LB behavior on anycast routing at a global scale, covering both IPv4 and IPv6. Our results show that LB-induced anycast site flipping is widespread. The results also show our method can detect LB implementations on the global Internet, including detection and classification of Points-of-Presence (PoP) and egress selection techniques deployed by hypergiants, cloud providers, and network operators. We observe that LB-induced site flipping directs distinct flows to different anycast sites with significant latency inflation. In cases with two paths between an anycast instance and a load-balanced destination, we observe an average RTT difference of 30 ms, with 8% of load-balanced destinations seeing RTT differences of over 100 ms. Being able to detect these cases can help anycast operators significantly improve their service for affected clients. | 10.1109/TNSM.2025.3636785 |
| Yunwu Wang, Ruoyu Li, Min Zhu, Jiahua Gu, Yuancheng Cai, Jiao Zhang, Yongming Huang | Dynamic End-to-End Optical-Wireless Network Slicing Mapping Based on Deep Reinforcement Learning | 2025 | Early Access | Optical fiber networks; Network slicing; Wireless networks; Integrated optics; Resource management; Simulation; Heuristic algorithms; Computer architecture; Microprocessors; Decision making; End-to-end optical-wireless network slicing; radio access network (RAN) slicing; deep reinforcement learning; converged optical-wireless access networks | Network slicing has emerged as a promising solution for end-to-end (E2E) resource management and orchestration, enabled by software-defined networking (SDN) and network function virtualization (NFV) technologies. In this paper, we investigate the dynamic E2E optical-wireless network slicing mapping problem in converged optical-wireless access networks. To address user data rate requirements in wireless networks and radio access network (RAN) slicing scheduling in optical networks, we first formulate an E2E optical-wireless network slicing mapping model with its associated constraints. Subsequently, to provide feasible solutions for real-world applications, we propose a dynamic E2E optical-wireless network slicing mapping (D-E2E-OW-NSM) algorithm based on deep reinforcement learning (DRL). To facilitate the decision-making process of the DRL agent, we decompose the intricate E2E optical-wireless network slicing request into several sub-requests, solving them sequentially. Simulation results demonstrate that our proposed method reduces the request blocking probability by up to 18.2% in a small-scale network and 11.3% in a large-scale network compared to baseline methods. Our analyses provide valuable insights into the modeling and design of efficient converged optical-wireless access networks for 5G and beyond. | 10.1109/TNSM.2025.3636525 |
| Yanshang Yin, Tiantian Zhu, Tieming Chen, Mingqi Lv | MADGuard: A High-Performance Microservice Anomaly Detection System With Multidimensional Data Fusion and Temporal Causal Analysis | 2025 | Early Access | Anomaly detection; Microservice architectures; Forensics; Feature extraction; Accuracy; Java; Security; Databases; Data models; Data integration; Microservices; Security; Provenance Graph; Temporal Graph Network; Anomaly Forensics; Anomaly Detection | With the widespread adoption of microservice architectures, the security threats they face have become increasingly sophisticated. Existing anomaly detection methods based on system calls exhibit significant limitations in three key aspects: multidimensional data fusion, temporal causality modeling, and forensic analysis of anomalies. This paper proposes MADGuard, a provenance graph-based anomaly detection system for microservices. MADGuard addresses these challenges through three key innovations: (1) It constructs a native provenance graph by integrating multisource services and multidimensional data, employing feature hashing and positional encoding for efficient graph representation; (2) The system introduces a Temporal Graph Network (TGN) model combined with edge reconstruction error and Inverse Document Frequency (IDF) weighting, achieving a 15.07% improvement in the F1 score compared to existing methods; (3) For the first time in microservice security, an integrated forensic analysis module is implemented, allowing rapid anomaly path reconstruction through aggregated anomaly subgraphs. Comprehensive evaluations on typical microservice benchmarks (TeaStore, RobotShop, SockShop) demonstrate MADGuard’s superior performance: 94.08% detection accuracy, significantly outperforming state-of-the-art approaches while maintaining practical operational efficiency. | 10.1109/TNSM.2025.3634590 |
| Seokwon Kang, Md. Shirajum Munir, Choong Seon Hong | Intelligent Supply Chain for Communication Navigation and Caching in Multi-UAV Wireless Network | 2025 | Early Access | Autonomous aerial vehicles; Resource management; Wireless networks; Decision making; Collaboration; Base stations; Real-time systems; Three-dimensional displays; Prevention and mitigation; Navigation; Multi-agent deep reinforcement learning; joint optimization; resource allocation; caching prediction | The emergence of unmanned aerial vehicles (UAVs) in wireless communication has opened up new prospects for improving network performance and user experience. This study aims to create a smart supply chain that enables collaborative communication, navigation, and caching in multi-UAV wireless networks. To achieve this, a user-UAV clustering strategy using the Whale Optimization Algorithm (WOA) optimizes resource distribution and network management. Additionally, Multi-Agent Deep Deterministic Policy Gradients (MADDPG) jointly optimize UAV trajectory planning, bandwidth allocation, and caching decisions, enabling UAVs to refine trajectories, allocate resources, and manage cached content efficiently. Experimental results show a 10% reduction in average response time and a 12% decrease in system energy consumption, demonstrating the efficiency of the proposed approach in improving data delivery and extending UAV lifespan for sustainable network operations. | 10.1109/TNSM.2025.3629802 |
| Samayveer Singh, Vikas Tyagi, Aruna Malik, Rajeev Kumar, Ankur, Neeraj Kumar | Intelligent Energy-Aware Routing via Protozoa Behavior in IoT-Enabled WSNs | 2025 | Early Access | Protocols; Wireless sensor networks; Routing; Energy consumption; Optimization; Genetic algorithms; Energy efficiency; Relays; Clustering algorithms; Heuristic algorithms; Wireless Sensor Networks (WSNs); Cluster Head Selection; Bio-Inspired Optimization; Heterogeneous IoT Networks; Energy Consumption | Energy efficiency and minimization of redundant transmissions are critical challenges in Wireless Sensor Networks (WSNs), especially in heterogeneous IoT environments where sensor nodes (SNs) are resource-constrained and deployed in remote or inaccessible areas. This paper aims to address the dual problem of uneven energy distribution and limited network lifespan by proposing a novel Artificial Protozoa Optimizer-based Cluster Head Selection (APO-CHS) algorithm. The proposed APO-CHS is inspired by the adaptive behavior of Euglena, integrating foraging, dormancy, and reproduction mechanisms to optimize cluster head and relay node selection through a multi-objective fitness function. The function incorporates residual energy, node density, neighbor distance, and energy consumption rate to guide the selection process effectively. Additionally, to tackle communication inefficiency, a lightweight data aggregation scheme is employed. This scheme reduces redundant transmissions by introducing a multi-level aggregation model that eliminates full, partial, and duplicate data in both intra- and inter-cluster communication. The simulation results demonstrate that the proposed framework improves network stability by 29.24%, extends network lifetime by 283.96%, and increases throughput by over 60% compared to baseline methods, thus making it a highly efficient and scalable solution for energy-aware IoT-enabled WSN applications. | 10.1109/TNSM.2025.3636202 |
| Naohide Wakuda, Ryuta Shiraki, Eiji Oki | Analysis of Unavailability in Middleboxes With Double-Capacity Multiple Backup Servers Under Shared Protection | 2025 | Early Access | Servers; Protection; Middleboxes; Computational modeling; Mathematical models; Analytical models; Numerical models; Load modeling; Resource management; Cloud computing; Analytical model; middleboxes; unavailability; shared protection; Markov chain | Middleboxes play a critical role in network operations, providing various network service functions, and can be implemented as software on general-purpose servers through network function virtualization technology. The unavailability of middlebox functions is an essential metric of the overall quality of network services. This paper proposes an analytical model that calculates the unavailability of middlebox functions, where multiple backup servers protect one or more functions, and one backup server can recover at most two functions simultaneously, which we call the double-capacity multiple-backup model. While the single-capacity multiple-backup model, where each backup server can recover at most one function, has been studied, the double-capacity multiple-backup model has not been addressed. The proposed double-capacity multiple-backup model allows for load balancing among backup servers. The proposed model can have two different workload strategies: load-persistent (LP) and load-distributing (LB). We use a Markov chain to analyze state transitions and develop equilibrium-state equations, providing a method to compute function unavailability for each strategy (a simplified numerical sketch of this approach appears after the table). Numerical results show that these two strategies have almost the same unavailability. Since the LB strategy incurs additional operational costs compared to the LP strategy, the LP strategy is preferable. We also find that the double-capacity multiple-backup model reduces unavailability by 7.21–81.9% compared to the single-backup model in the examined cases. | 10.1109/TNSM.2025.3636232 |
| Jiachen Liang, Yang Du, He Huang, Yu-E Sun, Guoju Gao, Yonglong Luo | Memory-Efficient and Hardware-Friendly Sketches for Hierarchical Heavy Hitter Detection | 2025 | Early Access | IP networks; Memory management; Detectors; Accuracy; High-speed networks; Throughput; Hardware; Electronic mail; Telecommunication traffic; Periodic structures; Sketch; Hierarchical heavy hitter; Network traffic measurement | Identifying the hierarchical heavy hitters (HHHs), i.e., the frequent aggregated flows based on common IP prefixes, is a vital task in network traffic measurement and security (a brute-force prefix rollup illustrating this definition appears after the table). Existing methods typically employ dynamic trie structures to track numerous prefixes or utilize multiple separate sketch instances, one for each hierarchical level, to capture HHHs across different levels; both approaches suffer from low memory efficiency and limited compatibility with programmable switches. In this paper, we introduce two novel HHH detection solutions, namely the Hierarchical Heavy Detector (HHD) and the Compressed Hierarchical Heavy Detector (CHHD), to achieve high memory efficiency and enhanced hardware compatibility. The key idea of HHD is a shared bucket array that identifies and records HHHs from all hierarchical levels, avoiding the memory wastage of maintaining separate sketches and allowing feasible deployment of both byte-hierarchy and bit-hierarchy HHH detection on programmable switches using minimal processing-stage resources. Additionally, HHD utilizes a sampling-based update strategy to effectively balance packet processing speed and detection accuracy. Furthermore, CHHD enhances HHH detection in bit hierarchies through a more compact cell structure that compresses several ancestor and descendant prefixes into a single cell, further boosting memory efficiency and accuracy. We have implemented HHD and CHHD on a P4-based programmable switch with limited switch resources. Experimental results based on real-world Internet traces demonstrate that HHD and CHHD outperform the state-of-the-art by achieving up to 56 percentage points higher detection precision and 2.6× higher throughput. | 10.1109/TNSM.2025.3635692 |
| Ke Ding, Xiaoyan Hu, Weicheng Zhou, Guang Cheng, Ruidong Li, Hua Wu | VeCroToken: An Efficient, Verifiable, and Privacy-Preserving Cross-Chain Model for Consortium Blockchains Based on zk-SNARKs | 2025 | Early Access | Privacy; Blockchains; Cryptography; Relays; Protocols; Circuits; Data privacy; Law; Fabrics; Distributed ledger; Consortium blockchain; privacy preservation; verifiability; cross-chain; zero-knowledge proof | Consortium blockchains enable secure economic applications through privacy-preserving architectures and efficient processing. Growing cross-chain demands require value-exchange mechanisms, yet expose privacy risks during external interactions. Encrypting cross-chain information is therefore necessary, which in turn requires third-party verification of relations within the encrypted content. Existing privacy-preserving cross-chain research for consortium chains struggles to balance transaction efficiency, transaction validity verification, and complex trust assumptions for relays. We present VeCroToken, an efficient, verifiable, and privacy-preserving cross-chain model for consortium blockchains. VeCroToken introduces a dual-balance mechanism and designs four types of cross-chain zero-knowledge transactions based on zk-SNARKs. These transactions encrypt two types of balances and transaction amounts, effectively protecting participant privacy. The encrypted cross-chain data and zero-knowledge proof credentials are stored on participants’ consortium blockchains and the relay chain. Relay nodes and third parties can validate transaction proofs with public parameters, preserving privacy while ensuring compliance and validity. We give a security analysis in the UC framework that proves verifiability and balance safety. We also provide a privacy analysis establishing amount, balance, and fund-correlation privacy. We implement a prototype on Hyperledger Fabric. Our experimental results show that VeCroToken has a lower overall zero-knowledge proof overhead than state-of-the-art models and delivers strong transaction performance. | 10.1109/TNSM.2025.3636014 |
| Awaneesh Kumar Yadav, An Braeken | Efficient Privacy-Preserving 5G Authentication and Key Agreement for Applications (5G-AKMA) in Multi-Access Edge Computing | 2025 | Early Access | Protocols; Authentication; Security; Privacy; 5G mobile communication; Handover; Costs; Servers; Federated identity; Elliptic curve cryptography; 5G-AKMA; Authentication; Authorization; Privacy; Security | The 5G Authentication and Key Management for Applications (AKMA) protocol is a 5G standard proposed by 3GPP to standardize the authentication procedure of mobile users towards applications based on the authentication of the user to the mobile network. As pointed out by several authors, the 5G-AKMA protocol inherently poses severe security issues, including privacy, unlinkability, ephemeral secret leakage, and stolen device attacks. Also, the protocol does not offer perfect forward secrecy. In addition, the network operator is able to record all applications to which the user is subscribed, and any outsider eavesdropping on the communication channel is able to link requests to different applications coming from the same user. While the state of the art shows that various protocols have been proposed to solve the 5G-AKMA security issues, they are either vulnerable to severe attacks or computationally expensive. In this paper, we provide a new version of the protocol that solves these privacy issues in an effective manner. In addition, we extend the protocol so that it can be used for communications in multi-access edge computing (MEC) applications, taking into account handover procedures from one MEC server to another. The proposed protocol has been thoroughly compared to existing ones, revealing its efficiency in terms of communication, computation, storage, and energy costs. The comparative analysis shows that the proposed 5G-AKMA reduces computational cost by 92%, communication cost by 74%, storage cost by 38%, and energy consumption cost by 58%. The security verification has been conducted using informal and formal methods (Real-Or-Random (ROR) and Scyther Validation tools) to ensure the protocol’s security. Additionally, we conduct a comparative analysis under an unknown attack scenario. Furthermore, the simulation is carried out using NS3. | 10.1109/TNSM.2025.3635876 |
| Hoda Sedighi, Fetahi Wuhib, Roch H. Glitho | Dynamic Task Scheduling and Adaptive GPU Resource Allocation in the Cloud | 2025 | Early Access | Graphics processing units; Resource management; Dynamic scheduling; Multitasking; Training; Cloud computing; Processor scheduling; Hardware; Heuristic algorithms; Delays; fair resource sharing; dynamic resource management; task scheduling; GPU resource allocation; efficiency in cloud computing | The growing demand for computational power in cloud computing has made Graphics Processing Units (GPUs) essential for providing substantial computational capacity. Efficiently allocating GPU resources is crucial due to their high cost. Additionally, it is necessary to consider cloud environment characteristics, such as dynamic workloads, multi-tenancy, and requirements like isolation. One key challenge is efficiently allocating GPU resources while maintaining isolation and adapting to dynamic workload fluctuations. Another challenge is ensuring scheduling maintains fairness between tenants while meeting task requirements (e.g., completion deadlines). While existing approaches have addressed each challenge individually, none have tackled both challenges simultaneously. This is especially important in dynamic environments where applications continuously request and release GPU resources. This paper introduces a new dynamic GPU resource allocation method, incorporating fair and requirement-aware task scheduling. We present a novel algorithm that leverages the multitasking capabilities of GPUs supported by both hardware and software. The algorithm schedules tasks and continuously reassesses resource allocation as new tasks arrive to ensure fairness. Simultaneously, it adjusts allocations to maintain isolation and satisfy task requirements. Experimental results indicate that our proposed algorithm offers several advantages over existing state-of-the-art solutions. It reduces GPU resource usage by 88% and significantly decreases task completion times. | 10.1109/TNSM.2025.3635529 |
| Deqiang Zhou, Xinsheng Ji, Wei You, Hang Qiu, Yu Zhao, Mingyan Xu | Intent-Based Automatic Security Enhancement Method Towards Service Function Chain | 2025 | Early Access | Security; Translation; Servers; Adaptation models; Automation; Virtual private networks; Firewalls (computing); Quality of service; Network security; Network function virtualization; SFC; security intent; automatic security enhancement; network security function; diverse requirements | The reliance on Network Function Virtualization (NFV) and Software-Defined Networking (SDN) introduces a wide variety of security risks into Service Function Chains (SFCs), necessitating the implementation of automated security measures to safeguard ongoing service delivery. To address the security risks faced by online SFCs and the shortcomings of traditional manual configuration, we introduce Intent-Based Networking (IBN) for the first time to propose an automatic security enhancement method based on embedding Network Security Functions (NSFs). However, the diverse security and performance requirements of SFCs pose significant challenges to the translation from intents to NSF embedding schemes, which manifest in two main aspects. In the logical orchestration stage, the NSF composition, consisting of NSF sets and their logical embedding locations, significantly impacts the security effect. Therefore, a security intent language model, a formalized method, is proposed to express security intents. Additionally, an NSF Embedding Model Generation Algorithm (EMGA) is designed to determine the NSF composition by utilizing an NSF capability label model and an NSF collaboration model, where the NSF composition can be further formulated as an NSF embedding model. In the physical embedding stage, the differentiated service requirements among SFCs make the NSF embedding model obtained by EMGA a multi-objective optimization problem with variable objectives. Therefore, an Adaptive Security-aware Embedding Algorithm (ASEA), featuring an adaptive link-weight mapping mechanism, is proposed to compute optimal NSF embedding schemes. This enables the automatic translation of security intents into NSF embedding schemes, ensuring that security requirements are met and service performance is guaranteed. We develop a system instance to verify the feasibility of the intent translation solution, and extensive evaluations demonstrate that the ASEA algorithm outperforms existing works in diverse requirement scenarios. | 10.1109/TNSM.2025.3635228 |
| Giovanni Pettorru, Marco Martalò | A Persistent and Secure Publish-Subscriber Architecture for Low-Latency IoT Communications | 2025 | Early Access | Internet of Things; Protocols; Low latency communication; Security; HTTP; Servers; Telemetry; TCP; Standards; Logic gates; Internet of Things (IoT); security; low latency; computational complexity; QUIC; WebSocket (WS); Message Queuing Telemetry Transport (MQTT) | Secure and low-latency data exchange is attracting increasing attention in Internet of Things (IoT) applications. To achieve such stringent requirements, we propose to combine persistent connections and TLS session ticket resumption, as in WebSocket (WS) and QUIC, respectively. Considering the nodes of an IoT cluster as a single virtual entity, we propose to integrate an innovative network management strategy, which employs a publish-subscribe (Pub/Sub) architecture based on the Message Queuing Telemetry Transport (MQTT) protocol, for TLS session ticket sharing between cluster nodes to mitigate session initialization latency. The proposed system is referred to as WS over QUIC and MQTT (WSQM), and its performance is experimentally assessed with IoT-compliant devices. Our results show that WSQM reduces latency compared with similar alternatives that rely on the Transmission Control Protocol (TCP) and Transport Layer Security (TLS), as well as other QUIC-based protocols such as the HyperText Transfer Protocol version 3 (HTTP/3). Moreover, WSQM achieves minimal resource utilization in terms of RAM and CPU usage percentages, thus highlighting its ability to meet the critical requirements of IoT applications. | 10.1109/TNSM.2025.3635212 |
| Keke Zheng, Mai Zhang, Mimi Qian, WaiMing Lau, Lin Cui | sketchPro: Identifying Top-k Items Based on Probabilistic Update on Programmable Data Plane | 2025 | Early Access | Accuracy; Pipeline processing; Hardware; Telecommunication traffic; Switches; Probability; Probabilistic logic; Memory management; Random access memory; Pipelines; Top-k items; Network Measurement; P4; Programmable Data Plane | Detecting the top-k heaviest items in network traffic is fundamental to traffic engineering, congestion control, and security analytics. Controller-side solutions suffer from high communication latency and heavy resource overhead, motivating the migration of this task to programmable data planes (PDP). However, PDP hardware (e.g., Tofino ASIC) offers only a few megabytes of on-chip SRAM per pipeline stage and supports neither loops nor complex arithmetic, making accurate top-k detection highly challenging. This paper proposes sketchPro, a novel sketch-based solution that employs a probabilistic update scheme to retain large items, enabling accurate top-k identification on PDP with minimal memory. sketchPro dynamically adjusts the update probability based on the current statistical size of the items and the frequency of hash collisions, allowing it to effectively detect top-k items (a toy version of this probabilistic update appears after the table). We have implemented sketchPro on PDP, including a P4 software switch (i.e., BMv2) and a hardware switch (Intel Tofino ASIC). Extensive evaluation results demonstrate that sketchPro can achieve more than 95% precision with only 10 KB of memory. | 10.1109/TNSM.2025.3634742 |
| Nguyen Phuc Tran, Oscar Delgado, Brigitte Jaumard | Proactive Service Assurance in 5G and B5G Networks: A Closed-Loop Algorithm for End-to-End Network Slices | 2025 | Early Access | Resource management; 5G mobile communication; Quality of service; Real-time systems; Heuristic algorithms; Optimization; Dynamic scheduling; Security; Radio access networks; Network slicing; 5G; Network Slice; Resource Allocation; Virtualized Network Functions (VNFs); Quality of Service (QoS); Proactive Resource Management; Closed-Loop Control; Dynamic Scaling; Machine Learning in 5G and B5G networks | Ensuring the highest levels of performance and reliability for customized services in fifth-generation (5G) and beyond (B5G) networks requires the automation of resource management within network slices. In this paper, we propose PCLANSA, a proactive closed-loop algorithm that dynamically allocates and scales resources to meet the demands of diverse applications in real time for an end-to-end (E2E) network slice. In our experiment, PCLANSA was evaluated to ensure that each virtual network function is allocated the resources it requires, thereby maximizing efficiency and minimizing waste. This goal is achieved through the intelligent scaling of virtual network functions. The benefits of PCLANSA have been demonstrated across various network slice types, including eMBB, mMTC, uRLLC, and VoIP. This finding indicates the potential for substantial gains in resource utilization and cost savings, with the possibility of reducing over-provisioning by up to 54.85%. | 10.1109/TNSM.2025.3635028 |
| Haiyuan Li, Yuelin Liu, Hari Madhukumar, Amin Emami, Xueqing Zhou, Yulei Wu, Xenofon Vasilakos, Shuangyi Yan, Dimitra Simeonidou | Incremental DRL-Based Resource Management for Dynamic Network Slicing in an Urban-Wide Testbed | 2025 | Early Access | Resource management; Energy consumption; Servers; Network slicing; Heuristic algorithms; Load modeling; 5G mobile communication; Training; Dynamic scheduling; Quality of service; Multi-access edge computing; network slicing; incremental learning; MADDPG; testbed deployment | Multi-access edge computing provides localized resources within mobile networks to address the requirements of emerging latency-sensitive and computing-intensive applications. At the edge, dynamic requests necessitate sophisticated resource management for adaptive network slicing. This involves optimizing resource allocations, scaling functions, and load balancing to utilize only essential resources under constrained network scenarios. However, existing solutions largely assume static slice counts, ignoring the re-optimization overhead associated with management algorithms when slices fluctuate. Moreover, many approaches rely on simplified energy models that overlook intertemporal resource scheduling and are predominantly evaluated through simulations, neglecting critical practical considerations. This paper presents an incremental cooperative Multi-Agent Deep Deterministic Policy Gradient (MADDPG) algorithm for resource management in dynamic edge slicing. The proposed approach optimizes long-term slicing benefits by reducing delay and energy consumption while minimizing retraining overhead in response to slice variations. Furthermore, we implement an urban-wide edge computing testbed based on OpenStack and Kubernetes to validate the algorithm’s performance. Experimental results demonstrate that our incremental MADDPG method outperforms benchmark strategies in aggregated slicing utility and reduces training energy consumption by up to 50% compared to the re-optimization approach. | 10.1109/TNSM.2025.3633927 |
| Josef Koumar, Timotej Smoleň, Kamil Jeřábek, Tomáš Čejka | Comparative Analysis of Deep Learning Models for Real-World ISP Network Traffic Forecasting | 2025 | Early Access | Forecasting; Telecommunication traffic; Deep learning; Predictive models; Time series analysis; Measurement; Monitoring; Transformers; Analytical models; Smoothing methods; neural networks; deep learning; network traffic forecasting; network traffic prediction; network monitoring | Accurate network traffic forecasting is crucial for internet service providers to optimize resources, improve user experience, and detect anomalies. Until recently, the lack of large-scale, real-world datasets limited the fair evaluation of forecasting methods. The newly released CESNET-TimeSeries24 dataset addresses this gap by providing multivariate traffic data from thousands of devices over 40 weeks at multiple aggregation granularities and hierarchy levels. In this study, we leverage the CESNET-TimeSeries24 dataset to conduct a systematic evaluation of state-of-the-art deep learning models and provide practical insights. Moreover, our analysis reveals trade-offs between prediction accuracy and computational efficiency across different levels of granularity. Beyond model comparison, we establish a transparent and reproducible benchmarking framework, releasing source code and experiments to encourage standardized evaluation and accelerate progress in network traffic forecasting research. | 10.1109/TNSM.2025.3636557 |
| Volviane Saphir Mfogo, Alain Zemkoho, Laurent Njilla, Marcellin Nkenlifack, Charles Kamhoua | GAN-AIIPot: GAN-Based Cyber Deception for Probing Attacks on IoT Devices | 2025 | Early Access | Internet of Things; Security; Probes; Generative adversarial networks; Codes; Generators; Synthetic data; Transformers; Training; Performance evaluation; Generative Adversarial Networks (GAN); Internet of Things (IoT) Devices; Honeypot; Machine Learning; Probe Attacks | The Internet of Things (IoT) is an emerging technology that has transformed the global network by interconnecting internet-enabled devices, people, intelligent things, and valuable data, leading to significant advancements in various domains. As IoT devices become more interwoven into our daily lives, their security is a major concern: most are connected to the internet, leaving them exposed to security threats. Researchers have been exploring new methods for detecting and mitigating cyberattacks on IoT devices. One promising approach is the use of Generative Adversarial Networks (GANs) for cyber deception, a cybersecurity technique used to mislead attackers and hackers from their intended targets. GANs have shown promise in the field of cybersecurity for creating realistic synthetic data to test the security of systems. In the case of probing attacks on IoT devices, GAN-based cyber deception can create fake devices and information that mimic real IoT devices and deceive attackers into thinking they have successfully compromised a target. This paper proposes a novel GAN-based cyber deception technique called GAN-AIIPot, which is designed for probe attacks on IoT devices. GAN-AIIPot is an extended version of AIIPot that adds a GAN model on top of the Bidirectional Encoder Representations from Transformers (BERT) model used in AIIPot. We evaluate our approach using a publicly available IoT dataset and show that GAN-AIIPot captures more sophisticated attacks and improves session length with attackers, showing the effectiveness of the deception technique compared to existing honeypots. We believe that such a solution can enhance the security of IoT devices and protect them from malicious actors. | 10.1109/TNSM.2025.3632667 |
| Emilio García de la Calera Molina, Anthony J. Pogo Medina, Alejandro Molina Zarca, Pablo Fernández Saura, Antonio F. Skarmeta Gómez | Automating 5G Traffic Generation With Virtual UEs: A Scalable Network Testing Infrastructure | 2025 | Early Access | 5G mobile communication; Telecommunication traffic; Generators; Data collection; Testing; Data models; Artificial intelligence; Anomaly detection; Quality of service; Forecasting; 5G; B5G; wireless networks; network security; anomaly detection; testbed; virtual User Equipments (UEs); data collection; Simbox; dataset | 5G technology represents a transformative leap in wireless networks, promising advancements in smart cities, autonomous transport, and IoT through high data speeds, reduced latency, and support for numerous devices. However, realizing these benefits requires overcoming significant security challenges, particularly in anomaly detection, as the vast device flow increases the risk of vulnerabilities. While AI is critical for this task, the availability of models for AI-based 5G analysis remains limited. This paper addresses this gap by presenting a scalable research framework for generating realistic 5G traffic across both control and user planes without actual 5G devices. The framework enables non-5G devices, such as mobile phones or PCs, to connect to a real 5G RAN infrastructure via virtual User Equipments (UEs), allowing them to generate client traffic at reduced costs and without physical 5G devices. This innovative approach supports the generation of large amounts of 5G traffic, for subsequent collection and analysis to create various datasets, simulating diverse network behaviours for cybersecurity purposes. By leveraging this virtualized environment, our framework offers a versatile, cost-effective solution for comprehensive 5G data generation in a controlled setting. Results demonstrate that 5G traffic generated by non-5G devices through this framework is suitable for AI modelling and training, advancing research capabilities in anomaly detection and overall network security. | 10.1109/TNSM.2025.3632075 |
| Yuanpeng Zheng, Tiankui Zhang, Rong Huang, Yapeng Wang | Joint Computing Offloading and Resource Allocation for Classification Intelligence Tasks in MEC Systems | 2025 | Early Access | Resource management; Accuracy; Computational modeling; Parallel processing; Servers; Optimization; Image edge detection; Delays; Costs; Wireless communication; Computing offloading; classification intelligence tasks; mobile edge computing; resource allocation | Mobile edge computing (MEC) facilitates high-reliability and low-latency applications by bringing computation and data storage closer to end-users. Intelligent computing is an important application of MEC, where computing resources are used to solve intelligent task-related problems based on task requirements. However, efficiently offloading computing and allocating resources for intelligent tasks in MEC systems is a challenging problem due to complex interactions between task requirements and MEC resources. To address this challenge, we investigate joint computing offloading and resource allocation for classification intelligence tasks (CITs) in MEC systems. Our goal is to maximize system utility by jointly considering computing accuracy and task delay. We focus on CITs and formulate an optimization problem that considers task characteristics, including accuracy requirements and the parallel computing capabilities of MEC systems. To solve the proposed problem, we decompose it into three subproblems: subcarrier allocation, computing capacity allocation, and compression offloading. We use successive convex approximation and convex optimization methods to derive optimized feasible solutions for the subcarrier allocation, offloading variables, computing capacity allocation, and compression ratio. Based on our solutions, we design an efficient joint computing offloading and resource allocation algorithm for CITs in MEC systems. Our simulations demonstrate that the proposed algorithm improves performance by 16.4% on average and achieves a flexible trade-off between system revenue and cost for CITs compared with benchmarks. | 10.1109/TNSM.2025.3632162 |
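
A few of the entries above lend themselves to small illustrative sketches; each of the following is a hedged toy under stated assumptions, not a reimplementation of the cited work. First, the JPOG entry (Dong et al.) rests on a standard potential-game argument: when every profitable unilateral move strictly decreases a global potential function, best-response dynamics cannot cycle and must stop at a pure-strategy Nash equilibrium. The sketch below replays that argument in a deliberately simplified atomic congestion game over parallel links (Rosenthal's classic setting); the player count, link costs, and linear congestion pricing are assumptions made for illustration, and the paper's actual game is over multicast paths with cost sharing.

```python
def best_response_dynamics(n_players: int, link_costs: list[float],
                           max_rounds: int = 100) -> list[int]:
    """Best-response dynamics in a toy atomic congestion game.

    Each player picks one link; a link with base cost c and load x charges
    every user on it c * x. By Rosenthal's theorem this is a potential
    game, so the loop below must terminate at a pure-strategy Nash
    equilibrium (PSNE) -- the same finite-convergence argument the JPOG
    abstract invokes for its multicast-path game.
    """
    n_links = len(link_costs)
    choice = [0] * n_players                  # everyone starts on link 0
    for _ in range(max_rounds):
        moved = False
        for p in range(n_players):
            loads = [choice.count(e) for e in range(n_links)]

            def cost(e: int, p=p, loads=loads) -> float:
                # p's cost if it switches to (or stays on) link e
                load = loads[e] + (0 if choice[p] == e else 1)
                return link_costs[e] * load

            best = min(range(n_links), key=cost)
            if cost(best) < cost(choice[p]):  # strict improvement only
                choice[p] = best
                moved = True
        if not moved:                         # no profitable deviation: PSNE
            return choice
    raise RuntimeError("no convergence -- impossible in a potential game")


print(best_response_dynamics(n_players=6, link_costs=[1.0, 2.0, 3.0]))
```

Every strict improvement lowers Rosenthal's potential Φ = Σ_e c_e · (1 + 2 + … + x_e), so the loop can only take finitely many improving steps before it halts.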
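
Second, the middlebox entry (Wakuda et al.) derives unavailability from the equilibrium of a Markov chain. The snippet below shows the general recipe on a much smaller model: collapse the system into a one-dimensional birth-death chain over the number of failed functions, solve the global balance equations, and read off an unavailability figure. The state space, the failure and repair rates, and the `capacity * n_backups` recovery rule are simplifying assumptions; they do not reproduce the paper's double-capacity LP/LB state machines.

```python
import numpy as np


def unavailability(n_funcs: int, n_backups: int, lam: float, mu: float,
                   capacity: int = 2) -> float:
    """Toy birth-death approximation of backup-protected middleboxes.

    State k = number of failed functions. Working functions fail at rate
    lam each; recoveries complete at rate mu each, and every backup server
    can work on up to `capacity` functions at once, giving
    capacity * n_backups parallel repair slots. Illustrative only.
    """
    n_states = n_funcs + 1
    slots = capacity * n_backups
    Q = np.zeros((n_states, n_states))         # CTMC generator matrix
    for k in range(n_states):
        if k < n_funcs:
            Q[k, k + 1] = (n_funcs - k) * lam  # one more function fails
        if k > 0:
            Q[k, k - 1] = min(k, slots) * mu   # one recovery completes
        Q[k, k] = -Q[k].sum()
    # Stationary distribution: solve pi @ Q = 0 with sum(pi) = 1, by
    # replacing one redundant balance equation with the normalization.
    A = np.vstack([Q.T[:-1], np.ones(n_states)])
    b = np.zeros(n_states)
    b[-1] = 1.0
    pi = np.linalg.lstsq(A, b, rcond=None)[0]
    # Count a function as unavailable when failures exceed repair slots.
    return float(sum(pi[k] * max(0, k - slots) / n_funcs
                     for k in range(n_states)))


print(f"U ≈ {unavailability(n_funcs=4, n_backups=1, lam=0.01, mu=1.0):.3e}")
```

Richer states (per-server workloads, the LP versus LB hand-off rules) would change only the construction of the generator matrix; the balance-equation solve stays the same.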
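
Third, the HHH entry (Liang et al.) detects frequent aggregates over common IP prefixes. As a point of reference for what HHD/CHHD approximate in bounded switch memory, here is the brute-force byte-hierarchy rollup referenced in that row: count every /8, /16, /24, and /32 source prefix exactly and report those above a threshold. The packet list, the threshold, and the undiscounted HHH definition are simplifications.

```python
from collections import Counter
import ipaddress


def hierarchical_heavy_hitters(src_ips: list[str], threshold: int) -> dict:
    """Exact byte-hierarchy HHH over IPv4 sources (brute-force baseline).

    Rolls every source address up to its /8, /16, /24, and /32 prefixes
    and keeps the prefixes whose aggregated count reaches `threshold`.
    Note: the common HHH definition discounts counts already attributed
    to heavy descendants; this sketch skips that refinement for brevity.
    """
    counts: Counter = Counter()
    for src in src_ips:
        for plen in (8, 16, 24, 32):
            net = ipaddress.ip_network(f"{src}/{plen}", strict=False)
            counts[str(net)] += 1
    return {prefix: c for prefix, c in counts.items() if c >= threshold}


# 400 packets spread over 10.0.1.0/24 plus light background traffic: the
# aggregate 10.0.1.0/24 (and its ancestors) are heavy even though no
# single /32 inside it is -- exactly the case HHH detection exists for.
pkts = [f"10.0.1.{i % 40}" for i in range(400)] + ["192.168.0.1"] * 50
print(hierarchical_heavy_hitters(pkts, threshold=100))
```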
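
Finally, the sketchPro entry (Zheng et al.) turns on a probabilistic update: on a hash collision, the probability that the incoming item displaces the resident one shrinks as the resident's counter grows, so genuine top-k items become progressively harder to evict. The abstract does not give the exact rule, so the sketch below substitutes a RAP-style admission probability of 1/(count+1); the single bucket array, CRC32 hash, and takeover policy are likewise assumptions, and the real system is written in P4 for BMv2/Tofino pipelines.

```python
import random
import zlib


class ProbabilisticTopK:
    """Single-array sketch with probabilistic replacement (toy version)."""

    def __init__(self, num_buckets: int = 1024):
        self.num_buckets = num_buckets
        self.keys: list = [None] * num_buckets
        self.counts = [0] * num_buckets

    def _index(self, key: str) -> int:
        return zlib.crc32(key.encode()) % self.num_buckets

    def update(self, key: str) -> None:
        i = self._index(key)
        if self.keys[i] == key:        # hit: plain increment
            self.counts[i] += 1
        elif self.keys[i] is None:     # empty bucket: claim it
            self.keys[i], self.counts[i] = key, 1
        else:
            # Collision: the newcomer takes over with probability
            # 1/(count+1) and inherits count+1 (RAP-style rule). Large
            # residents are nearly unevictable; small flows churn.
            if random.random() < 1.0 / (self.counts[i] + 1):
                self.keys[i] = key
                self.counts[i] += 1

    def topk(self, k: int) -> list[tuple[str, int]]:
        entries = [(c, f) for f, c in zip(self.keys, self.counts)
                   if f is not None]
        return [(f, c) for c, f in sorted(entries, reverse=True)[:k]]


# Skewed stream: flow "f0" dominates, so it should surface in the top 3.
sketch = ProbabilisticTopK(num_buckets=64)
stream = [f"f{i % 50}" for i in range(10_000)] + ["f0"] * 5_000
random.shuffle(stream)
for pkt in stream:
    sketch.update(pkt)
print(sketch.topk(3))
```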