Last updated: 2025-06-10 03:01 UTC
All documents
Number of pages: 140
Author(s) | Title | Year | Publication | Keywords | Abstract | DOI |
---|---|---|---|---|---|---|
Vaishnavi Kasuluru, Luis Blanco, Engin Zeydan | Enhancing Open RAN Operations: The Role of Probabilistic Forecasting in Network Analysis | 2025 | Early Access | Probabilistic logic Open RAN Forecasting Resource management Predictive models Heuristic algorithms Computer architecture Adaptation models Dynamic scheduling Uncertainty Open RAN 6G Probabilistic Forecasting Network Analytics AI | Resource provisioning plays a crucial role in effective resource management. As we move into the 6G era, technologies such as Open Radio Access Network (O-RAN) offer the opportunity to develop intelligent and interoperable cutting-edge solutions for qualitative management of the latest communication system. Previous works have mostly used single-point forecasts like Long Short-Term Memory (LSTM) for predicting resource requirements, which makes it difficult for decision-makers to make informed decisions about resource allocation. On the other hand, probability-based forecasting techniques such as DeepAR, Transformer and Simple-Feed-Forward (SFF) offer new dimensions to the predictions by quantifying their uncertainties. This work presents a comprehensive comparison of single-point and probabilistic estimators and evaluates their effectiveness in predicting the actual number of Physical Resource Blocks (PRBs) needed in the context of O-RAN, especially for multi-tenant use cases. The results show the superiority of the probabilistic models in terms of various evaluation metrics. DeepAR achieves the highest accuracy, outperforming single-point and other probabilistic estimators. Based on these findings, a novel Dynamic Percentile Adjustment Approach (DYNp) algorithm is proposed, which utilizes probabilistic forecasting for adaptive resource allocation. After extensive analysis, the numerical results show that the DYNp algorithm applied to DeepAR predictions reduces the Service Level Agreement (SLA) violation to 8% and the over-provisioning to 0.509 through dynamic percentile adaptation. The DYNp approach ensures that resources are allocated by efficiently handling over- and under-provisioning, making it suitable for real-time scenarios in O-RAN environments. | 10.1109/TNSM.2025.3565268 |
Awaneesh Kumar Yadav, Shalitha Wijethilaka, Madhusanka Liyanage | Blockchain-Based Cross-Operator Network Slice Authentication Protocol for 5G Communication | 2025 | Early Access | Authentication Protocols Security 5G mobile communication Blockchains Base stations Switches Network slicing Hospitals Costs Authentication Network Slicing Cross-Operator Blockchain | Network slicing enables the facilitation of diverse network requirements of different applications over a single physical network. Due to concepts such as Local 5G Operators (L5GOs), Mobile Virtual Network Operators (MVNOs), and high-frequency utilization of 5G and beyond networks, users need to switch among different network slices as well as different operators more frequently than in traditional networks. Even though some research has been conducted on cross-network slice authentication, cross-operator network slice authentication is still an indeterminate research area. Also, the proposed cross-network slice authentication frameworks possess several limitations, such as vulnerability to severe attacks, high cost, a central point of failure, and the inability to support cross-operator network slice authentication. Therefore, in this research, we develop a blockchain-based cross-network-slice, cross-operator network slice authentication framework. Our framework supports authentication for different network slices in the same operator as well as in different operators. The security properties of the proposed protocols are validated through formal (using Real-Or-Random logic, Scyther, and the AVISPA validation tool) and informal security analysis. The comparative analysis is conducted for known and unknown attacks to demonstrate its efficacy in terms of communication, computational, storage, and energy consumption costs. Also, a sample prototype of the protocols is implemented along with the state-of-the-art protocols to evaluate the performance of our framework. | 10.1109/TNSM.2025.3562874 |
Kaijie Wang, Zhicheng Bao, Kaijun Liu, Haotai Liang, Chen Dong, Xiaodong Xu, Lin Li | Adaptive Bitrate Video Semantic Increment Transmission System Based on Buffer and Semantic Importance | 2025 | Early Access | Streaming media Bit rate Semantic communication Encoding Heuristic algorithms Fluctuations Feature extraction Data mining Deep learning Decoding Video semantic communication increment transmission adaptive bitrate algorithm buffer | Significant progress has been made in researching video semantic communication technology and adaptive bitrate (ABR) algorithms. However, wireless network fluctuations make it difficult for video semantic communication systems without ABR algorithms to achieve a satisfactory balance between high semantic recovery accuracy and efficient bandwidth utilization. This paper proposes an adaptive bitrate video semantic increment transmission system based on buffer and semantic importance to address this issue. Firstly, a buffer-based video semantic increment transmission system is designed to dynamically adjust the amount of video semantic data transmitted by the transmitter based on network fluctuations. Then, a novel Deep Learning and Reinforcement Learning based ABR algorithm (DR-ABR) is developed to determine the optimal video incremental ratio under the current network conditions. Furthermore, a semantic feature compression technology based on semantic importance is proposed to compress the video data according to the aforementioned ratio. Experimental results demonstrate that the proposed method outperforms traditional approaches in terms of video semantic transmission performance. | 10.1109/TNSM.2025.3563257 |
Fuhao Yang, Hua Wu, Xiangyu Zheng, Jinfeng Chen, Xiaoyan Hu, Jing Ren | A Detection Scheme for Multiplexed Asymmetric Workload DDoS Attacks in High-Speed Networks | 2025 | Early Access | Feature extraction High-speed networks Denial-of-service attack Multiplexing HTTP Cryptography Computer crime Web servers Routing Real-time systems HTTP/2 Multiplexed Asymmetric Workload DDoS high-speed network intrusion detection | The asymmetric workload attack is an application layer attack that aims to exhaust the Central Processing Unit (CPU) resources of a server. Some attackers exploit new features of the Hypertext Transfer Protocol version 2 (HTTP/2) to launch Multiplexed Asymmetric Workload DDoS (MAWD) attacks using a small number of bots, which can cause denial of service on HTTP/2 servers. Data centers in high-speed networks host a large number of web applications. However, most of the detection methods for asymmetric workload attacks rely on request semantic analysis, which cannot be applied to encrypted MAWD attack traffic in high-speed networks. Moreover, traditional rate-based DDoS detection methods are ineffective in detecting MAWD because the MAWD attacks use legitimate HTTP requests, and HTTP/2 traffic is bursty in nature. This paper proposes a practical scheme to detect MAWD attacks in high-speed networks. We construct an effective feature set based on the characteristics of MAWD attacks in high-speed networks and design MAWD-HashTable (MAWD-HT) to extract features quickly. Experimental results on real traffic traces with speeds reaching Gbps demonstrate that our scheme can detect MAWD attacks within 3 seconds, with a recall rate of more than 99%, a false positive rate (FPR) of less than 0.1%, and acceptable resource consumption. | 10.1109/TNSM.2025.3563538 |
Yating Li, Le Wang, Liang Xue, Jingwei Liu, Xiaodong Lin | Efficient and Privacy-Enhancing Non-Interactive Periocular Authentication for Access Control Services | 2025 | Early Access | Authentication Servers Privacy Homomorphic encryption Accuracy Face recognition Security Protocols Faces Vectors Secure authentication privacy preservation access control service encrypted cosine similarity matching | Periocular authentication has emerged as an increasingly prominent approach in access control services, especially in situations of face occlusion. However, its limited feature area and low information complexity make it susceptible to tampering and identity forgery. To address these vulnerabilities, we propose a practical privacy-enhancing non-interactive periocular authentication scheme for access control services. It enables encrypted cosine similarity matching on an untrusted remote authentication server by leveraging random masking (RM) and dual-trapdoor homomorphic encryption (DTHE). Additionally, a weight control matrix is introduced to enhance authentication accuracy by assigning importance to different feature dimensions. To accommodate devices with varying trust levels, we employ adaptive authentication strategies. For trusted mobile devices, we implement secure single-factor authentication based on periocular features, while for external devices with unknown security status, we enforce a two-factor authentication mechanism integrating periocular features with tokens to mitigate unauthorized access risks. Additionally, our scheme conceals users’ true identities from external devices during authentication. Security analysis demonstrates that our solution effectively mitigates tampering and replay attacks in the network while preventing privacy leakage. As validated by experimental results, the proposed scheme enables efficient authentication of obscured faces. | 10.1109/TNSM.2025.3563607 |
Junfeng Tian, Yiting Wang, Yue Shen | A Security-Enhanced Certificateless Anonymous Authentication Scheme with High Computational Efficiency for Mobile Edge Computing | 2025 | Early Access | Security Servers Authentication Computational efficiency Costs Protocols Mobile handsets Cloud computing Low latency communication Faces Authentication and key agreement (AKA) mobile edge computing (MEC) certificateless cryptography unlinkability full anonymity | Mobile Edge Computing (MEC), as a new computing paradigm, provides high-quality and low-latency services for mobile users and also reduces the load on cloud servers. However, MEC faces some security threats, such as data leakage, privacy leakage, and unauthorized access. To cope with these threats, many researchers have designed a series of identity-based authentication and key agreement (ID-AKA) schemes for MEC environments. However, these schemes have some drawbacks, such as key escrow issues, lack of unlinkability and full anonymity, use of time-consuming bilinear pairing operations and insufficiently secure static public-private key pairs. To compensate for these drawbacks, we propose a certificateless anonymous authentication scheme for MEC with enhanced security and high computational efficiency. The scheme achieves unlinkability and full anonymity by using one-time pseudonyms generated by a tamper-proof device (TPD). The scheme also solves the key escrow problem and uses one-time public-private key pairs for authentication, thus enhancing the key security and communication security. In addition, the scheme eliminates bilinear pairing operations and precomputes some time-consuming operations in the TPD each time, thus optimizing the computational efficiency. Finally, we perform the security analysis and performance evaluation of the scheme. The results show that the scheme has the optimal computational efficiency and moderate communication costs, as well as significant advantages in terms of security, compared to other competing schemes. | 10.1109/TNSM.2025.3563637 |
Wenxian Li, Pingang Cheng, Yong Feng, Nianbo Liu, Ming Liu, Yingna Li | A Blockchain-Assisted Hierarchical Data Aggregation Framework for IIoT With Computing First Networks | 2025 | Early Access | Industrial Internet of Things Blockchains Data privacy Data aggregation Cloud computing Collaboration Servers Edge computing Resource management Protection Industrial internet of things (IIoT) computing first networks (CFN) secure data aggregation blockchain | With an increasing number of sensor devices connected to industrial systems, the efficient and reliable aggregation of sensor data has become a key topic in the Industrial Internet of Things (IIoT). Computing First Networks (CFN) are emerging as a promising technology for aggregating vast quantities of IIoT data. However, existing CFN data collection frameworks are usually centralized; they rely excessively on third-party trusted authorities and fail to fully schedule and utilize limited computing resources. More critically, this makes them prone to trust and security issues. In this paper, considering the heterogeneity and data security in complex industrial scenarios, we propose a blockchain-based and multi-edge CFN collaborative IIoT data hierarchical collection framework (ME-CIDC) to collect massive IIoT data securely and efficiently. In ME-CIDC, a blockchain-driven resource allocation algorithm is proposed for inter-domain CFN, which achieves distributed and efficient task scheduling and data collection by constructing multiple blockchains. A self-incentive mechanism is designed to encourage inter-domain nodes to contribute resources and support the operation of the inter-domain CFN. We also propose an efficient double-layered data aggregation algorithm, which distributes computational tasks across two layers to ensure the efficient collection and aggregation of IIoT data. Extensive simulation and numerical results demonstrate the effectiveness of our proposed scheme. | 10.1109/TNSM.2025.3563237 |
Jin-Xian Liu, Jenq-Shiou Leu | ETCN-NNC-LB: Ensemble TCNs With L-BFGS-B Optimized No Negative Constraint-Based Forecasting for Network Traffic | 2025 | Early Access | Telecommunication traffic Forecasting Predictive models Data models Convolutional neural networks Long short term memory Ensemble learning Complexity theory Accuracy Overfitting Deep learning ensemble learning network traffic prediction temporal convolutional network (TCN) time series forecasting | With the increasing demand for internet access and the advent of technologies such as 6G and IoT, efficient and dynamic network resource management has become crucial. Accurate network traffic prediction plays a pivotal role in this context. However, existing prediction methods often struggle with challenges such as complexity-accuracy trade-offs, limited data availability, and diverse traffic patterns, especially in coarse-grained forecasting. To address these issues, this article proposes ETCN-NNC-LB, which is a novel ensemble learning method for network traffic forecasting. ETCN-NNC-LB combines Temporal Convolutional Networks (TCNs) with No Negative Constraint Theory (NNCT) weight integration in ensemble learning and is optimized using the Limited-memory Broyden-Fletcher-Goldfarb-Shanno with Box constraints (L-BFGS-B) algorithm. This method balances model complexity and accuracy, mitigates overfitting risks, and flexibly aggregates predictions. The ETCN-NNC-LB model also incorporates a pattern-handling method to forecast traffic behaviors robustly. Experiments on a real-world dataset demonstrate that ETCN-NNC-LB significantly outperforms state-of-the-art methods, achieving an approximately 22% reduction in the Root Mean Square Error (RMSE). The proposed method provides accurate and efficient network traffic prediction in dynamic, data-constrained environments. | 10.1109/TNSM.2025.3563978 |
Ying-Dar Lin, Yin-Tao Ling, Yuan-Cheng Lai, Didik Sudyana | Reinforcement Learning for AI as a Service: CPU-GPU Task Scheduling for Preprocessing, Training, and Inference Tasks | 2025 | Early Access | Artificial intelligence Graphics processing units Training Computer architecture Optimal scheduling Scheduling Real-time systems Complexity theory Resource management Inference algorithms AI as a Service CPU GPU Task Scheduling Reinforcement Learning Deep Q-Learning | The rise of AI solutions has driven the emergence of AI as a Service (AIaaS), offering cost-effective and scalable solutions by outsourcing AI functionalities to specialized providers. Within AIaaS, three key components are essential: segmenting AI services into preprocessing, training, and inference tasks; utilizing GPU-CPU heterogeneous systems where GPUs handle parallel processing and CPUs manage sequential tasks; and minimizing latency in a distributed architecture consisting of cloud, edge, and fog computing. Efficient task scheduling is crucial to optimize performance across these components. In order to enhance task scheduling in AIaaS, we propose a user-experience-and-performance-balanced reinforcement learning (UXP-RL) algorithm. The UXP-RL algorithm considers 11 factors, including queuing task information. It then estimates resource release times and observes previous action outcomes to select the optimal AI task for execution on either a GPU or CPU. This method effectively reduces the average turnaround time, particularly for rapid inference tasks. Our experimental findings show that the proposed RL-based scheduling algorithm reduces average turnaround time by 27.66% to 57.81% compared to heuristic approaches such as SJF and FCFS. In a distributed architecture, utilizing distributed RL schedulers reduces the average turnaround time by 89.07% compared to a centralized scheduler. | 10.1109/TNSM.2025.3564480 |
Pavlos S. Bouzinis, Panagiotis Radoglou-Grammatikis, Ioannis Makris, Thomas Lagkas, Vasileios Argyriou, Georgios Th. Papadopoulos, Panagiotis Sarigiannidis, George K. Karagiannidis | StatAvg: Mitigating Data Heterogeneity in Federated Learning for Intrusion Detection Systems | 2025 | Early Access | Servers Training Intrusion detection Data models Europe Feature extraction Privacy Data augmentation Convolutional neural networks Batch normalization cybersecurity intrusion detection systems federated learning data heterogeneity statistical averaging | Federated learning (FL) enables devices to collaboratively build a shared machine learning (ML) or deep learning (DL) model without exposing raw data. Its privacy-preserving nature has made it popular for intrusion detection systems (IDS) in the field of cybersecurity. However, data heterogeneity across participants poses challenges for FL-based IDS. This paper proposes the statistical averaging (StatAvg) method to alleviate non-independent and identically distributed (non-iid) features across local clients’ data in FL. In particular, StatAvg allows the FL clients to share their individual local data statistics with the server. These statistics include the mean and variance of each client’s feature vector. The server then aggregates this information to produce global statistics, which are shared with the clients and used for universal data normalization, i.e., common scaling of the input features by all clients. It is worth mentioning that StatAvg can seamlessly integrate with any FL aggregation strategy, as it occurs before the actual FL training process. The proposed method is evaluated against well-known baseline approaches that rely on batch and layer normalization, such as FedBN, and address the non-iid features issue in FL. Experiments were conducted using the TON-IoT and CIC-IoT-2023 datasets, which are relevant to the design of host and network IDS, respectively. The experimental results demonstrate the efficiency of StatAvg in mitigating non-iid feature distributions across the FL clients compared to the baseline methods, offering a gain in IDS accuracy ranging from 4% to 17%. | 10.1109/TNSM.2025.3564387 |
Yi-Huai Hsu, Chen-Fan Chang, Chao-Hung Lee | A DRL Based Spectrum Sharing Scheme for multi-MNO in 5G and Beyond | 2025 | Early Access | Resource management 5G mobile communication Long Term Evolution Games Telecommunication traffic Pricing Internet of Things Deep reinforcement learning Wireless fidelity Training 5G spectrum sharing mobile network operator deep reinforcement learning | In spectrum pooling, which is a well-known technique of spectrum sharing, the initial licensed spectrum of each Mobile Network Operator (MNO) is partitioned into reserved and shared spectrum. The reserved spectrum is for the personal use of an MNO, and the shared spectrum of all MNOs constitutes a spectrum pool that can be flexibly utilized by MNOs that require extra spectrum. Nevertheless, the spectrum pool management problem substantially impacts the spectrum efficiency among these MNOs. In this paper, we formulate this problem as a non-linear programming problem that strives to maximize the average binary scale satisfaction (BSS) of MNOs. To achieve this objective, we introduce an event-driven deep reinforcement learning-based spectrum management scheme, termed EDRL-SMS. This approach adopts a spectrum pool manager (SPM) to efficiently supervise the spectrum pool and achieve long-term optimization of network performance. The SPM smartly allocates spectrum resources by fully utilizing a DRL approach, Deep Deterministic Policy Gradient, for each stochastically arriving spectrum request event. The simulation results show that the average BSS of MNOs achieved by the proposed EDRL-SMS significantly outperforms that of our previous work, Bankruptcy Game-based Resource Allocation (BGRA), as well as the greedy, random, and no-sharing schemes. | 10.1109/TNSM.2025.3562968 |
Wenjing Gao, Jia Yu | Enabling Privacy-Preserving Top-k Hamming Distance Query on the Cloud | 2025 | Early Access | Hamming distances Protocols Cloud computing Servers Encryption Games Computational modeling Social networking (online) Privacy Social groups Cloud computing cloud security privacy preserving Hamming distance top-k query | The top-k Hamming distance query is to find the k optimal objects with the smallest Hamming distance to the query data. It has a wide range of applications in many domains such as social networks, image retrieval and biological recognition. The existing privacy-preserving protocols do not support the top-k Hamming distance query in practice. To address this issue, we consider letting the user query the top-k Hamming distance on the cloud in a secure outsourcing manner. We propose two protocols to realize the privacy-preserving top-k Hamming distance query on the cloud. In the first protocol, two cloud servers are introduced to cooperatively complete the privacy-preserving top-k Hamming distance query. To preserve data privacy, the Paillier encryption and randomization techniques are leveraged to blind the user data, and the ciphertext data is stored on the first cloud server. The second cloud server calculates the Hamming distance on the ciphertexts. After that, the encrypted query results are returned to the query user for recovering the top-k query results. In the second protocol, we adopt a data aggregation strategy to further enhance the efficiency. By packaging data, the computation overhead of each participant is reduced and the communication overhead of the protocol is decreased remarkably. Security analysis demonstrates that the data privacy is guaranteed in the proposed protocols. Experimental results evaluate the performance of the proposed protocols and confirm the superiority of the second protocol. | 10.1109/TNSM.2025.3565943 |
Ruopeng Geng, Jianyuan Lu, Chongrong Fang, Shaokai Zhang, Jiangu Zhao, Zhigang Zong, Biao Lyu, Shunmin Zhu, Peng Cheng, Jiming Chen | Enabling Stateful TCP Performance Profiling with Key Event Capturing | 2025 | Early Access | Kernel Production Probes Filtering Training Servers Packet loss Linux Electronic mail Data communication Network Measurement TCP Performance Profiling TCP Stack Probes | TCP ensures reliable transmission through its stateful implementation and remains crucial today. TCP performance profiling is essential for tasks like diagnosing network performance problems, optimizing transmission performance, and developing new TCP variants. Existing profiling methods do not pay enough attention to TCP state transitions to provide detailed insights into TCP performance. Thus, we build TcpSight, a tool focusing on TCP state transitions throughout connection lifetimes. TcpSight conducts stateful analysis by capturing key events using an efficient per-connection lock-free data management mechanism. Besides, TcpSight enhances profiling by integrating application layer information collected from the TCP stack. With the profiling results, users can identify the culprit of TCP performance degradation, and evaluate the performance of TCP algorithms. We design optional modules and filtering mechanisms to reduce TcpSight’s overhead. Our evaluation shows that TcpSight incurs additional CPU consumption of about 16.6% (without filtering) and 10.6% (with filtering) when the server’s load is 55.7%, and generates storage consumption of about 1.88 KB per connection on average. We also give application cases of TcpSight and the deployment experiences in Alibaba Cloud. TcpSight helps reveal meaningful findings and insights into exploiting TCP in production deployments. | 10.1109/TNSM.2025.3564336 |
Genxin Chen, Jin Qi, Jialin Hua, Ying Sun, Zhenjiang Dong, Yanfei Sun | AFAS: Arbitrary-Freedom Adaptive Scheduling for Multiworkflow Cloud Computing via Deep Reinforcement Learning | 2025 | Early Access | Processor scheduling Cloud computing Computational modeling Dynamic scheduling Optimization Deep reinforcement learning Adaptive scheduling Adaptation models Real-time systems Costs Cloud computing Deep reinforcement learning Multiple workflows Arbitrary freedom Adaptive scheduling | The in-depth development of artificial intelligence models has supported the high-quality allocation of cloud computing resources. The optimization of workflow scheduling issues in cloud computing has become increasingly critical due to the complexity of computing tasks, constraints on computing resources, and the growing demand for high-quality service. To address the increasingly complex workflow scheduling problems in cloud computing, this paper presents an arbitrary-freedom adaptive scheduling method for cloud computing with multiple workflows based on deep reinforcement learning (termed AFAS), with the workflow makespan and response time as the optimization objectives. First, we define the concept of degrees of freedom in the scheduling context to establish the feature space and foundational decision patterns relevant to multiworkflow scheduling. Second, an adaptive real-time scheduling strategy generation (ARS) algorithm is proposed for multiworkflow scheduling tasks. Third, a composite reward mechanism with an advanced-time-window real-time-reward (ATR) algorithm is designed for intelligent model optimization. Finally, the generation algorithm and intelligent model are fused to perform arbitrary-freedom multiworkflow adaptive scheduling. The experiments show that ATR can significantly increase the frequency of reward generation, AFAS achieves at least 6.6% better performance than existing methods, and the incorporation of intelligent models improves the performance of ARS by 2.7%. | 10.1109/TNSM.2025.3566771 |
Jinshui Wang, Yao Xin, Chongwu Dong, Lingfeng Qu, Yiming Ding | ERPC: Efficient Rule Partitioning Through Community Detection for Packet Classification | 2025 | Early Access | Decision trees Classification algorithms Partitioning algorithms Manuals Tuning Optimization Throughput IP networks Memory management Vectors Packet Classification rule partitioning decision tree community detection graph coloring | Packet classification is crucial for network security, traffic management, and quality of service by enabling efficient identification and handling of data packets. Decision tree-based rule partitioning has emerged as a prominent method in recent research. A significant challenge for decision tree algorithms is rule replication, which occurs when rules span multiple subspaces, leading to substantial memory consumption increases. Rule partitioning can effectively mitigate or eliminate this replication by separating overlapping rules. However, existing partitioning techniques heavily rely on manual parameter tuning across a wide range of possible values, making optimal solution discovery challenging. Furthermore, due to the lack of global optimization, these approaches face a critical trade-off: either the number of subsets becomes uncontrollable, resulting in diminished query speed, or rule replication becomes severe, causing substantial memory overhead. To bridge these gaps and achieve high-performance adaptive partitioning, we propose ERPC, a novel algorithm with the following key features: First, ERPC leverages graph theory to model rule sets, enabling global optimization that balances intra-group rule replication against the total number of groups. Second, ERPC advances rule set partitioning by modifying traditional community detection algorithms, strategically shifting the optimization objective from positive to negative modularity. Third, ERPC allows the rule set itself to determine the optimal number of groups, thus eliminating the need for manual parameter tuning. Experimental results demonstrate the efficacy of ERPC when applied to CutSplit, a state-of-the-art multi-tree method. It preserves 88% of CutSplit’s average classification throughput while reducing tree-building time by 89% and memory consumption by 77%. Furthermore, ERPC exhibits strong scalability, being adaptable to mainstream decision tree methods. | 10.1109/TNSM.2025.3567705 |
Karcius D. R. Assis, Raul C. Almeida, Hojjat Baghban, Alex F. Santos, Raouf Boutaba | A Two-stage Reconfiguration in Network Function Virtualization: Toward Service Function Chain Optimization | 2025 | Early Access | Resource management Optimization Service function chaining Substrates Network function virtualization Servers Scalability Real-time systems Virtual machines Routing Service Function Chain Reconfiguration Optimization Network Function Virtualization VNF Migration | Network Function Virtualization (NFV), as a promising paradigm, speeds up service deployment by separating network functions from proprietary devices and deploying them on common servers in the form of software. Any service in NFV-enabled networks is achieved as a Service Function Chain (SFC), which consists of a series of ordered Virtual Network Functions (VNFs). However, migration of VNFs for more flexible services within a dynamic NFV-enabled network is a key challenge to be addressed. Current VNF migration studies mainly focus on single VNF migration decisions without considering the sharing and concurrent migration of VNF instances. In this paper, we assume that each deployed VNF is used by multiple SFCs and deal with the optimal placement for the contemporaneous migration of VNFs based on the actual network situation. We formalize the VNF migration and SFC reconfiguration problem as a mathematical model, which aims to minimize the VNF migration between nodes or the total number of core changes per node. The approach is a two-stage MILP based on an optimal ordering to solve the reconfiguration. Extensive evaluation shows that the proposed approach can reduce the change in terms of location or number of cores per node in 6-node and 14-node networks while ensuring network latency, compared with the model without reconfiguration. | 10.1109/TNSM.2025.3567906 |
Antonino Angi, Alessio Sacco, Guido Marchetto | LLNet: An Intent-Driven Approach to Instructing Softwarized Network Devices Using a Small Language Model | 2025 | Early Access | Translation Natural language processing Codes Accuracy Training Programming Pipelines Network topology Energy consumption Data mining user intents LLM SLM network programmability intent-based networking SDN | Traditional network management requires manual coding and expertise, making it challenging for both non-specialists and experts to handle the increasing number of devices and applications. In response, Intent-Based Networking (IBN) has been proposed to simplify network operations by allowing users to express the program objective (or intent) in natural language, which is then translated into device-specific configurations. The emergence of Large Language Models (LLMs) has boosted the capabilities to interpret human intents, with recent IBN solutions embracing LLMs for a more accurate translation. However, while these solutions excel at intent comprehension, they lack a complete pipeline that can receive user intents and deploy network programs across devices programmed in multiple languages. In this paper, we present LLNet, our IBN solution that, within the context of Software-Defined Networking (SDN), can seamlessly translate intents into programs. First, leveraging LLMs, we convert network intents into an intermediate representation by extracting key information; then, using this output, the system can tailor the network code for any topology using the specific language calls. At the same time, we address the challenge of making the IBN approach more sustainable by reducing its energy consumption, and we show how even a Small Language Model (SLM) can efficiently help LLNet with input translation. Results across multiple use cases demonstrate that our solution can guarantee adequate translation accuracy while reducing operator expenses compared to other LLM-based approaches. | 10.1109/TNSM.2025.3570017 |
Zhenxing Chen, Pan Gao, Teng-Fei Ding, Zhi-Wei Liu, Ming-Feng Ge | Practical Prescribed-Time Resource Allocation of NELAs with Event-Triggered Communication and Input Saturation | 2025 | Early Access | Resource management Event detection Costs Convergence Vehicle dynamics Mathematical models Training Perturbation methods Neurons Laplace equations Practical resource allocation networked Euler-Lagrange agents prescribed-time event-triggered communication input saturation | This paper investigates the practical resource allocation of networked Euler-Lagrange agents (NELAs) with event-triggered communication and input saturation. A novel prescribed-time resource allocation control (PTRAC) algorithm is proposed, which includes a resource allocation estimator and a prescribed-time NN-based local controller. The former is designed based on the time-based generator (TBG) and event-triggered mechanism to achieve the optimal resource allocation within the prescribed time. Additionally, the prescribed-time NN-based local controller is designed using the approximation ability of the RBF neural network to force the states of NELAs to track the optimal values within the prescribed time. The most significant feature of the PTRAC algorithm is that the complex problem can be analyzed independently in chunks and converges within the prescribed time, greatly reducing the number of triggers and communication costs. Subsequently, the validity is verified by simulation and several sufficient conditions are established via the Lyapunov stability argument. | 10.1109/TNSM.2025.3570091 |
Xueqi Peng, Wenting Shen, Yang Yang, Xi Zhang | Secure Deduplication and Cloud Storage Auditing with Efficient Dynamic Ownership Management and Data Dynamics | 2025 | Early Access | Cloud computing Encryption Data integrity Vehicle dynamics Protocols Indexes Security Data privacy Servers Costs Cloud storage Integrity auditing Secure deduplication Ownership management Data dynamics | To verify the integrity of data stored in the cloud and improve storage efficiency, numerous cloud storage auditing schemes with deduplication have been proposed. In cloud storage, when users perform dynamic data operations, they should lose ownership of the original data. However, existing schemes require re-encrypting the entire ciphertext when ownership changes and recalculating the authenticators for the blocks following the updated blocks when insertion or deletion operations are performed. These processes lead to high computation overhead. To address the above issues, we construct a secure deduplication and cloud storage auditing scheme with efficient dynamic ownership management and data dynamics. We adopt the CAONT encryption method, where only a portion of the updated block is required to be re-encrypted during the ownership management phase, significantly reducing computation overhead. We also implement index switch sets to maintain the mapping between block indexes and cloud storage indexes of ciphertext blocks. By embedding cloud storage indexes within the authenticators, our scheme avoids the need to recalculate authenticators when users perform dynamic operations. Additionally, our scheme supports block-level deduplication, further improving efficiency. Through comprehensive security analysis and experiments, we validate the security and effectiveness of the proposed scheme. | 10.1109/TNSM.2025.3569833 |
Cyril Shih-Huan Hsu, Jorge Martín-Pérez, Danny De Vleeschauwer, Luca Valcarenghi, Xi Li, Chrysa Papagianni | A Deep RL Approach on Task Placement and Scaling of Edge Resources for Cellular Vehicle-to-Network Service Provisioning | 2025 | Early Access | Delays Resource management Optimization Vehicle dynamics Urban areas Real-time systems Vehicle-to-everything Forecasting Deep reinforcement learning Transportation cellular vehicle to network task placement edge resource scaling deep reinforcement learning | Cellular Vehicle-to-Everything (C-V2X) is currently at the forefront of the digital transformation of our society. By enabling vehicles to communicate with each other and with the traffic environment using cellular networks, we redefine transportation, improving road safety and transportation services, increasing the efficiency of vehicular traffic flows, and reducing environmental impact. To effectively facilitate the provisioning of Cellular Vehicular-to-Network (C-V2N) services, we tackle the interdependent problems of service task placement and scaling of edge resources. Specifically, we formulate the joint problem and prove that it is not computationally tractable. To address its complexity, we propose DHPG, a new Deep Reinforcement Learning (DRL) approach that operates in hybrid action spaces, enabling holistic decision-making and enhancing overall performance. We evaluated the performance of DHPG using simulations with a real-world C-V2N traffic dataset, comparing it to several state-of-the-art (SoA) solutions. DHPG outperforms these solutions, guaranteeing the 99th percentile of the C-V2N service delay target, while simultaneously optimizing the utilization of computing resources. Finally, a time complexity analysis is conducted to verify that the proposed approach can support real-time C-V2N services. | 10.1109/TNSM.2025.3570102 |
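The abstracts above only summarize each method, but a few of the listed mechanisms are concrete enough to illustrate. For the DYNp entry (Kasuluru et al., DOI 10.1109/TNSM.2025.3565268), the abstract reports adapting the forecast percentile used for PRB provisioning; the exact update rule is not given, so the sketch below is a minimal illustration assuming a simple rule: raise the percentile after an SLA violation (under-provisioning) and lower it otherwise. The function name, step size, and bounds are all hypothetical.

```python
import numpy as np

def dynp_allocate(forecast_samples, actual_prbs, p_init=0.9,
                  step=0.05, p_min=0.5, p_max=0.99):
    """Toy dynamic-percentile PRB allocator (hypothetical rule, for illustration).

    forecast_samples: array of shape (T, S), S forecast samples per slot
                      drawn from a probabilistic model such as DeepAR.
    actual_prbs:      array of shape (T,), PRBs actually needed per slot.
    """
    p = p_init
    allocations = []
    violations = 0
    for t in range(len(actual_prbs)):
        alloc = np.quantile(forecast_samples[t], p)   # provision at percentile p
        allocations.append(alloc)
        if alloc < actual_prbs[t]:       # under-provisioned -> SLA violation
            violations += 1
            p = min(p_max, p + step)     # become more conservative
        else:                            # over-provisioned -> reclaim headroom
            p = max(p_min, p - step)
    allocations = np.array(allocations)
    sla_violation_rate = violations / len(actual_prbs)
    over_provisioning = np.mean(np.maximum(allocations - actual_prbs, 0))
    return allocations, sla_violation_rate, over_provisioning

# Example with synthetic data standing in for forecast samples and demand.
rng = np.random.default_rng(0)
samples = rng.poisson(50, size=(100, 200))
demand = rng.poisson(50, size=100)
alloc, sla_rate, over = dynp_allocate(samples, demand)
```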
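For the ETCN-NNC-LB entry (Liu and Leu, DOI 10.1109/TNSM.2025.3563978), the abstract states that the predictions of several TCNs are combined with weights optimized by L-BFGS-B under a no-negative constraint. The sketch below shows that weight-fitting step only, assuming an RMSE objective on a validation series and plain non-negativity bounds; the paper's NNCT formulation may differ, and the base TCN models are not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize

def fit_nonnegative_ensemble_weights(member_preds, targets):
    """Fit non-negative combination weights for ensemble members with L-BFGS-B.

    member_preds: array of shape (M, T), predictions of M base forecasters.
    targets:      array of shape (T,), the observed traffic series.
    The RMSE objective and uniform starting point are assumptions.
    """
    n_members = member_preds.shape[0]

    def rmse(w):
        combined = w @ member_preds          # weighted sum of member forecasts
        return np.sqrt(np.mean((combined - targets) ** 2))

    x0 = np.full(n_members, 1.0 / n_members)            # start from a plain average
    result = minimize(rmse, x0, method="L-BFGS-B",
                      bounds=[(0.0, None)] * n_members)  # "no negative" constraint
    return result.x

# Toy usage: three noisy versions of one signal act as ensemble members.
rng = np.random.default_rng(1)
truth = np.sin(np.linspace(0, 10, 200))
members = np.stack([truth + rng.normal(0, s, truth.size) for s in (0.05, 0.1, 0.2)])
weights = fit_nonnegative_ensemble_weights(members, truth)
```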
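For the StatAvg entry (Bouzinis et al., DOI 10.1109/TNSM.2025.3564387), the abstract describes the mechanism directly: clients share per-feature means and variances, the server aggregates them into global statistics, and every client then normalizes its inputs with the global values before FL training. The sketch below follows that description; the sample-weighted pooling rule is a standard choice and is an assumption as to the exact aggregation the paper uses.

```python
import numpy as np

def client_statistics(X):
    """Statistics a client shares with the server: sample count,
    per-feature mean, and per-feature variance."""
    return X.shape[0], X.mean(axis=0), X.var(axis=0)

def aggregate_statistics(stats):
    """Server side: pool client statistics into a global mean and variance,
    using Var(X) = E[X^2] - E[X]^2 over the pooled population
    (an assumed aggregation rule)."""
    counts = np.array([n for n, _, _ in stats], dtype=float)
    means = np.stack([m for _, m, _ in stats])
    second_moments = np.stack([v + m ** 2 for _, m, v in stats])
    total = counts.sum()
    global_mean = (counts[:, None] * means).sum(axis=0) / total
    global_var = (counts[:, None] * second_moments).sum(axis=0) / total - global_mean ** 2
    return global_mean, global_var

def normalize(X, global_mean, global_var, eps=1e-8):
    """Client side: universal scaling with the broadcast global statistics."""
    return (X - global_mean) / np.sqrt(global_var + eps)

# Toy usage with two clients holding non-iid feature distributions.
rng = np.random.default_rng(2)
clients = [rng.normal(0, 1, (500, 8)), rng.normal(3, 2, (300, 8))]
g_mean, g_var = aggregate_statistics([client_statistics(X) for X in clients])
normalized = [normalize(X, g_mean, g_var) for X in clients]
```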