Last updated: 2025-06-11 03:01 UTC
Author(s) | Title | Year | Publication | Keywords | Abstract | DOI
---|---|---|---|---|---|---
Jian An, Siyu Tang, Ruyuan Ping, Ran Li, Xin He, Xiaolong Jin | Fed-RDP: A Robust Federated Learning Framework for Multi-Party Decision-Making | 2025 | Early Access | Training; Federated learning; Blockchains; Data models; Privacy; Differential privacy; Peer-to-peer computing; Security; Noise; Decision making; Robust Federated Learning; Participant Evaluation; DPoS Consensus | In the field of intelligent manufacturing, the vast and heterogeneous nature of data across different departments poses significant challenges for collaborative decision-making. Direct data sharing often leads to severe privacy and security concerns. Although federated learning offers a promising solution, issues such as participant selection and privacy protection remain inadequately addressed. During model training, it is crucial to minimize the influence of low-quality participants and prevent the inference of sensitive information through gradient analysis, which could threaten model performance or data privacy. To address these challenges, this paper proposes a robust federated learning model, Fed-RDP, based on participant contribution evaluation and personalized differential privacy. The model leverages blockchain technology for secure data flow and storage, implemented through smart contracts. Historical parameters are submitted to evaluate contributions based on the concept of the least core, with real-time updates to reputation scores. When participants upload their local models, the scores are used as weights to aggregate and update the global model. Additionally, personalized differential noise is added to the uploaded gradients based on participant scores, preserving privacy while maximizing the utility of the data. Experimental results demonstrate that this approach effectively identifies low-quality participants, optimizes evaluation time, and protects privacy through personalized differential noise. [An illustrative code sketch of score-weighted aggregation with personalized noise appears after the table.] | 10.1109/TNSM.2025.3557850
Shashank Motepalli, Hans-Arno Jacobsen | Decentralization in PoS Blockchain Consensus: Quantification and Advancement | 2025 | Early Access | Consensus protocol; Blockchains; Measurement; Safety; Indexes; Adaptation models; Security; Probabilistic logic; Bitcoin; Analytical models; Decentralized applications | Decentralization is a foundational principle of permissionless blockchains, with consensus mechanisms serving a critical role in its realization. This study quantifies the decentralization of consensus mechanisms in proof-of-stake (PoS) blockchains using a comprehensive set of metrics, including Nakamoto coefficients, Gini, Herfindahl-Hirschman Index (HHI), Shapley values, and Zipf’s coefficient. Our empirical analysis across ten prominent blockchains reveals significant concentration of stake among a few validators, posing challenges to fair consensus. To address this, we introduce two alternative weighting models for PoS consensus: Square Root Stake Weight (SRSW) and Logarithmic Stake Weight (LSW), which adjust validator influence through non-linear transformations. Results demonstrate that the SRSW and LSW models improve decentralization metrics by an average of 51% and 132%, respectively, supporting more equitable and resilient blockchain systems. [An illustrative sketch of these metrics and the SRSW/LSW transforms appears after the table.] | 10.1109/TNSM.2025.3561098
Xiaojie Zhang, Saptarshi Debroy, Peng Wang, Keqin Li | DeepRB: Deep Resource Broker Based on Clustered Federated Learning for Edge Video Analytics | 2025 | Early Access | Internet of Things; Resource management; Visual analytics; Heuristic algorithms; Edge computing; Real-time systems; Smart cities; Optimization; Computational modeling; Scalability; Energy efficiency; service placement; real-time video analytics | Edge computing plays a crucial role in large-scale and real-time video analytics for smart cities, particularly in environments with massive machine-type communications (mMTC) among IoT devices. Due to the dynamic nature of mMTC, one of the main challenges is to achieve energy-efficient resource allocation and service placement in resource-constrained edge computing environments. In this paper, we introduce DeepRB, a deep learning-based resource broker framework designed for real-time video analytics in edge-native environments. DeepRB develops a two-stage algorithm to address both resource allocation and service placement efficiently. First, it uses a Residual Multilayer Perceptron (ResMLP) network to approximate traditional iterative resource allocation policies for IoT devices that frequently transition between active and idle states. Second, for service placement, DeepRB leverages a multi-agent federated deep reinforcement learning (DRL) approach that incorporates clustering and knowledge-aware model aggregation. Through extensive simulations, we demonstrate the effectiveness of DeepRB in improving schedulability and scalability compared to baseline edge resource management algorithms. Our results highlight the potential of DeepRB for optimizing resource allocation and service placement for real-time video analytics in dynamic and resource-constrained edge computing environments. | 10.1109/TNSM.2025.3560657
Karcius D. R. Assis, Raul C. Almeida, Hojjat Baghban, Alex F. Santos, Raouf Boutaba | A Two-stage Reconfiguration in Network Function Virtualization: Toward Service Function Chain Optimization | 2025 | Early Access | Resource management; Optimization; Service function chaining; Substrates; Network function virtualization; Servers; Scalability; Real-time systems; Virtual machines; Routing; Service Function Chain; Reconfiguration; VNF Migration | Network Function Virtualization (NFV), as a promising paradigm, speeds up service deployment by separating network functions from proprietary devices and deploying them on common servers in the form of software. Any service in NFV-enabled networks is realized as a Service Function Chain (SFC), which consists of a series of ordered Virtual Network Functions (VNFs). However, migration of VNFs for more flexible services within a dynamic NFV-enabled network is a key challenge to be addressed. Current VNF migration studies mainly focus on single VNF migration decisions without considering the sharing and concurrent migration of VNF instances. In this paper, we assume that each deployed VNF is used by multiple SFCs and deal with the optimal placement for the contemporaneous migration of VNFs based on the actual network situation. We formalize the VNF migration and SFC reconfiguration problem as a mathematical model, which aims to minimize the VNF migration between nodes or the total number of core changes per node. The approach is a two-stage MILP that solves the reconfiguration in an optimal order. Extensive evaluation shows that the proposed approach can reduce the change in terms of location or number of cores per node in 6-node and 14-node networks while ensuring network latency compared with the model without reconfiguration. | 10.1109/TNSM.2025.3567906
Antonino Angi, Alessio Sacco, Guido Marchetto | LLNet: An Intent-Driven Approach to Instructing Softwarized Network Devices Using a Small Language Model | 2025 | Early Access | Translation; Natural language processing; Codes; Accuracy; Training; Programming; Pipelines; Network topology; Energy consumption; Data mining; user intents; LLM; SLM; network programmability; intent-based networking; SDN | Traditional network management requires manual coding and expertise, making it challenging for non-specialists and experts alike to manage the growing number of devices and applications. In response, Intent-Based Networking (IBN) has been proposed to simplify network operations by allowing users to express the program objective (or intent) in natural language, which is then translated into device-specific configurations. The emergence of Large Language Models (LLMs) has boosted the capabilities to interpret human intents, with recent IBN solutions embracing LLMs for a more accurate translation. However, while these solutions excel at intent comprehension, they lack a complete pipeline that can receive user intents and deploy network programs across devices programmed in multiple languages. In this paper, we present LLNet, our IBN solution that, within the context of Software-Defined Networking (SDN), can seamlessly translate intents into programs. First, leveraging LLMs, we convert network intents into an intermediate representation by extracting key information; then, using this output, the system can tailor the network code for any topology using the specific language calls. At the same time, we address the challenge of a more sustainable IBN approach to reduce its energy consumption, and we show how even a Small Language Model (SLM) can efficiently help LLNet with input translation. Results across multiple use cases demonstrate how our solution can guarantee adequate translation accuracy while reducing operator expenses compared to other LLM-based approaches. | 10.1109/TNSM.2025.3570017
Zhenxing Chen, Pan Gao, Teng-Fei Ding, Zhi-Wei Liu, Ming-Feng Ge | Practical Prescribed-Time Resource Allocation of NELAs with Event-Triggered Communication and Input Saturation | 2025 | Early Access | Resource management; Event detection; Costs; Convergence; Vehicle dynamics; Mathematical models; Training; Perturbation methods; Neurons; Laplace equations; Practical resource allocation; networked Euler-Lagrange agents; prescribed-time; event-triggered communication; input saturation | This paper investigates the practical resource allocation of networked Euler-Lagrange agents (NELAs) with event-triggered communication and input saturation. A novel prescribed-time resource allocation control (PTRAC) algorithm is proposed, which includes a resource allocation estimator and a prescribed-time NN-based local controller. The former is designed based on the time-based generator (TBG) and an event-triggered mechanism to achieve the optimal resource allocation within the prescribed time. Additionally, the prescribed-time NN-based local controller is designed using the approximation ability of an RBF neural network to force the states of the NELAs to track the optimal values within the prescribed time. The most significant feature of the PTRAC algorithm is that the complex problem can be analyzed independently in chunks and converges within the prescribed time, greatly reducing the number of triggers and communication costs. Finally, several sufficient conditions are established via a Lyapunov stability argument, and the validity of the algorithm is verified by simulation. | 10.1109/TNSM.2025.3570091
Xueqi Peng, Wenting Shen, Yang Yang, Xi Zhang | Secure Deduplication and Cloud Storage Auditing with Efficient Dynamic Ownership Management and Data Dynamics | 2025 | Early Access | Cloud computing; Encryption; Data integrity; Vehicle dynamics; Protocols; Indexes; Security; Data privacy; Servers; Costs; Cloud storage; Integrity auditing; Secure deduplication; Ownership management; Data dynamics | To verify the integrity of data stored in the cloud and improve storage efficiency, numerous cloud storage auditing schemes with deduplication have been proposed. In cloud storage, when users perform dynamic data operations, they should lose ownership of the original data. However, existing schemes require re-encrypting the entire ciphertext when ownership changes and recalculating the authenticators for the blocks following the updated blocks when insertion or deletion operations are performed. These processes lead to high computation overhead. To address the above issues, we construct a secure deduplication and cloud storage auditing scheme with efficient dynamic ownership management and data dynamics. We adopt the CAONT encryption method, where only a portion of the updated block is required to be re-encrypted during the ownership management phase, significantly reducing computation overhead. We also implement index switch sets to maintain the mapping between block indexes and cloud storage indexes of ciphertext blocks. By embedding cloud storage indexes within the authenticators, our scheme avoids the need to recalculate authenticators when users perform dynamic operations. Additionally, our scheme supports block-level deduplication, further improving efficiency. Through comprehensive security analysis and experiments, we validate the security and effectiveness of the proposed scheme. | 10.1109/TNSM.2025.3569833
Jaime Galán-Jiménez, Marco Polverini, Juan Luis Herrera, Francesco G. Lavacca, Javier Berrocal | ELTO: Energy Efficiency-Load balancing Trade-Off Solution to Handle With Conflicting Metrics in Hybrid IP/SDN Scenarios | 2025 | Early Access | Switches; Energy consumption; Energy efficiency; Load management; IP networks; Routing; Control systems; Optimization; Heating systems; Telecommunication traffic; Load balancing; IP; SDN; ILP | Next-generation applications, marked by their critical nature, need to cope with stringent Quality of Service (QoS) requirements, such as low response time and high throughput. Moreover, the increasing number of devices connected to the Internet and the need to provide a consistent network infrastructure to serve the applications requested by users create a tradeoff between improving the QoS of such applications and reducing the energy consumption of the infrastructure. To address this challenge, this paper proposes ELTO (Energy-Load Trade-Off), a system designed for the joint optimization of energy efficiency and traffic load balancing during the transition from IP networks to Software-Defined Networks (SDN). Leveraging the SDN and Network Function Virtualization (NFV) paradigms, ELTO introduces an Integer Linear Programming multi-objective formulation, and a Genetic Algorithm heuristic to tackle the optimization problem in large-scale scenarios. ELTO encompasses a holistic approach to network configuration, including network equipment status and routing, to strike a balance between network traffic load balancing and energy efficiency. Results over realistic topologies show the effectiveness of the proposed solution, which outperforms other state-of-the-art approaches and is able to switch off nearly half of the links in the network while also reducing the Maximum Link Utilization. | 10.1109/TNSM.2025.3559422
Cyril Shih-Huan Hsu, Jorge Martín-Pérez, Danny De Vleeschauwer, Luca Valcarenghi, Xi Li, Chrysa Papagianni | A Deep RL Approach on Task Placement and Scaling of Edge Resources for Cellular Vehicle-to-Network Service Provisioning | 2025 | Early Access | Delays; Resource management; Optimization; Vehicle dynamics; Urban areas; Real-time systems; Vehicle-to-everything; Forecasting; Deep reinforcement learning; Transportation; cellular vehicle to network; task placement; edge resource scaling | Cellular Vehicle-to-Everything (C-V2X) is currently at the forefront of the digital transformation of our society. By enabling vehicles to communicate with each other and with the traffic environment using cellular networks, we redefine transportation, improving road safety and transportation services, increasing the efficiency of vehicular traffic flows, and reducing environmental impact. To effectively facilitate the provisioning of Cellular Vehicle-to-Network (C-V2N) services, we tackle the interdependent problems of service task placement and scaling of edge resources. Specifically, we formulate the joint problem and prove that it is not computationally tractable. To address its complexity, we propose DHPG, a new Deep Reinforcement Learning (DRL) approach that operates in hybrid action spaces, enabling holistic decision-making and enhancing overall performance. We evaluated the performance of DHPG using simulations with a real-world C-V2N traffic dataset, comparing it to several state-of-the-art (SoA) solutions. DHPG outperforms these solutions, guaranteeing the 99th-percentile C-V2N service delay target while simultaneously optimizing the utilization of computing resources. Finally, a time complexity analysis is conducted to verify that the proposed approach can support real-time C-V2N services. | 10.1109/TNSM.2025.3570102
Qingwei Tang, Wei Sun, Zhi Liu, Yang Xiao, Qiyue Li, Xiaohui Yuan, Qian Zhang | Multi-agent Reinforcement Learning Based Delay and Power Optimization for UAV-WMN Substation Inspection | 2025 | Early Access | Inspection; Network topology; Optimization; Autonomous aerial vehicles; Topology; Substations; Stability analysis; Delays; Heuristic algorithms; Real-time systems; Multi-agent reinforcement learning; Wireless mesh networks; Neural network; Lyapunov function; RNN; Substation inspection | Unmanned aerial vehicles (UAVs), due to their flexibility and extensive coverage, have gradually become essential for substation inspections. Wireless mesh networks (WMNs) provide a scalable and resilient network environment for UAVs, where each node can serve as either an access point or a relay point, thereby enhancing the network’s fault tolerance and overall resilience. However, the combined UAV-WMN system is complex and dynamic, facing the challenge of dynamically adjusting node transmission power to minimize end-to-end (E2E) delay while ensuring channel utilization efficiency. Real-time topology changes, high-dimensional state spaces, and large solution spaces make it difficult for traditional algorithms to guarantee convergence and stability. Generic reinforcement learning (RL) methods also struggle with stable convergence. This paper introduces a new Lyapunov function-based proof to address these issues and provide a stability condition for dynamic control strategies. Then, we develop a specialized neural network power controller and combine it with the MATD3 algorithm, effectively enhancing the system’s convergence and E2E performance. Simulation experiments validate the effectiveness of this method and demonstrate its superior performance in complex scenarios compared to other algorithms. | 10.1109/TNSM.2025.3558823
Yi-Han Xu, Rong-Xing Ding, Xiao-Ren Xu, Ding Zhou, Wen Zhou | A DDPG Hybrid with Graph Scheme for Resource Management in Digital Twin-assisted Biomedical Cyber-Physical Systems | 2025 | Early Access | Resource management; Reliability; Biomedical monitoring; Wireless communication; Optimization; Body area networks; Real-time systems; Monitoring; Energy efficiency; Vehicle dynamics; digital twins; biomedical cyber-physical system; WBANs; Markov decision process | In this paper, we focus on the resource optimization issue in the Wireless Body Area Networks (WBANs) section of a novel Biomedical Cyber-Physical System (BM-CPS) in which the Digital Twin (DT) technique is adopted concurrently. We propose a scenario where multiple physiological sensor nodes continuously monitor the electrophysiology signals from patients and wirelessly connect to a Cyber Managing Center (Cyb-MC) to transmit the physiological indices to the paramedic reliably. Specifically, to optimize the energy efficiency of physiological sensors while ensuring the reliability of emergency-critical electrophysiology signal transmissions and modeling the uncertain stochastic environments, the optimization issue is transformed into a Markov Decision Process (MDP) that jointly considers transmission mode, relay assignment, transmission power, and time slot scheduling. Subsequently, we propose a Random Graph-enabled Deep Deterministic Policy Gradient (RG-DDPG) scheme to tackle the challenge while reducing computing complexity. In addition, as an innovative paradigm for reshaping cyberspace applications, the DT of the WBAN is created to capture the time-varying resource status, where virtualized and intelligent resource management can be performed uniformly. Finally, extensive simulation studies verify the advancement of the proposed scheme and demonstrate that the DT can mirror the topology, predict the behavior, and administer the resources of the WBANs. | 10.1109/TNSM.2025.3570252
Jinshui Wang, Yao Xin, Chongwu Dong, Lingfeng Qu, Yiming Ding | ERPC: Efficient Rule Partitioning Through Community Detection for Packet Classification | 2025 | Early Access | Decision trees; Classification algorithms; Partitioning algorithms; Manuals; Tuning; Optimization; Throughput; IP networks; Memory management; Vectors; Packet Classification; rule partitioning; decision tree; community detection; graph coloring | Packet classification is crucial for network security, traffic management, and quality of service by enabling efficient identification and handling of data packets. Decision tree-based rule partitioning has emerged as a prominent method in recent research. A significant challenge for decision tree algorithms is rule replication, which occurs when rules span multiple subspaces, leading to substantial memory consumption increases. Rule partitioning can effectively mitigate or eliminate this replication by separating overlapping rules. However, existing partitioning techniques heavily rely on manual parameter tuning across a wide range of possible values, making optimal solution discovery challenging. Furthermore, due to the lack of global optimization, these approaches face a critical trade-off: either the number of subsets becomes uncontrollable, resulting in diminished query speed, or rule replication becomes severe, causing substantial memory overhead. To bridge these gaps and achieve high-performance adaptive partitioning, we propose ERPC, a novel algorithm with the following key features: First, ERPC leverages graph theory to model rule sets, enabling global optimization that balances intra-group rule replication against the total number of groups. Second, ERPC advances rule set partitioning by modifying traditional community detection algorithms, strategically shifting the optimization objective from positive to negative modularity. Third, ERPC allows the rule set itself to determine the optimal number of groups, thus eliminating the need for manual parameter tuning. Experimental results demonstrate the efficacy of ERPC when applied to CutSplit, a state-of-the-art multi-tree method. It preserves 88% of CutSplit’s average classification throughput while reducing tree-building time by 89% and memory consumption by 77%. Furthermore, ERPC exhibits strong scalability, being adaptable to mainstream decision tree methods. | 10.1109/TNSM.2025.3567705
Dana Haj Hussein, Mohamed Ibnkahla | Towards Intelligent Intent-based Network Slicing for IoT Systems: Enabling Technologies, Challenges, and Vision | 2025 | Early Access | Internet of Things; Surveys; Resource management; Translation; Network slicing; Ecosystems; Data mining; Autonomous networks; Training; Standardization; Intent-based Networking; Artificial Intelligence; Machine Learning; Data Management | The rapid integration of intelligence and automation into future Internet of Things (IoT) systems, empowered by Intent-based Networking (IBN) and Network Slicing (NS) technologies, is transforming the way novel services are envisioned and delivered. The automation capabilities of IBN depend significantly on key facilitators, including data management and resource management. A robust data management methodology is essential for leveraging large-scale data, encompassing service-specific and network-specific data, enabling IBN systems to extract insights and facilitate real-time decision-making. Another critical enabler involves deploying intent-based mechanisms within an NS system that translate and ensure user intents by mapping them to precise Management and Orchestration (MO) commands. Nevertheless, data management in IoT systems faces significant security and operational challenges due to the diverse range of services and technologies involved. Furthermore, intent-based resource management demands intelligent, proactive, and adaptive MO mechanisms that can fulfill a wide range of intent requirements. Existing surveys within the field have focused on technology-specific advancements, often overlooking these challenges. In response, this paper defines Intelligent Intent-Based Network Slicing (I-IBNS) systems, exemplifying the integration of intelligent IBN and NS for the MO of IoT systems. Furthermore, the paper surveys I-IBNS systems, focusing on two critical domains: resource management and data management. The resource management segment examines recent developments in IBN mechanisms within an NS system. Meanwhile, the second segment explores data management complexities within IoT networks. Moreover, the paper envisions the roles of intent, NS, and the IoT ecosystem, thereby laying the foundation for future research directions. | 10.1109/TNSM.2025.3570052
Lingling Wang, Zhengyin Zhang, Mei Huang, Keke Gai, Jingjing Wang, Yulong Shen | RoPA: Robust Privacy-Preserving Forward Aggregation for Split Vertical Federated Learning | 2025 | Early Access | Vectors; Protocols; Privacy; Training; Data models; Cryptography; Resists; Arithmetic; Protection; Federated learning; Vertical federated learning; Robust; Privacy-preserving; Integrity; SNIP | Split Vertical Federated Learning (Split VFL) is an increasingly popular framework for collaborative machine learning on vertically partitioned data. However, it is vulnerable to various attacks, resulting in privacy leakage and robust aggregation issues. Recent works have explored the privacy protection of raw data samples and labels while neglecting malicious attacks launched by dishonest passive parties. Since these parties may deviate from the protocol and launch embedding poisoning and free-riding attacks, model performance inevitably suffers. To address this issue, we propose a Robust Privacy-preserving forward Aggregation (RoPA) protocol, which can resist embedding poisoning attacks and free-riding attacks and protect the privacy of embedding vectors. Specifically, we first present a modified Secret-shared Non-Interactive Proofs (SNIP) algorithm to guarantee the integrity verification of embedding vectors. To prevent free-riding attacks, we also present a validity verification protocol using matrix commitments. In particular, we utilize probability checking and batch verification to improve the verification efficiency of the protocol. Moreover, we adopt arithmetic secret sharing to protect data privacy. Finally, we conduct rigorous theoretical analysis to prove the security of RoPA and evaluate its performance. The experimental results show that the proof verification overhead of RoPA is approximately 8× lower than that of the original SNIP, and the model accuracy is improved by 3% to 15% under the above two malicious attacks. | 10.1109/TNSM.2025.3569228
Jing Zhu, Dan Wang, Jiao Xing, Shuxin Qin, Gaofeng Tao, Pingping Chen, Zuqing Zhu | Bilevel Optimization for Provisioning Heterogeneous Traffic in Deterministic Networks | 2025 | Early Access | Optimization; Bandwidth; Routing; Quality of service; Bars; Channel allocation; Job shop scheduling; IP networks; Approximation algorithms; Switches; Deterministic networking; Normal and deterministic traffic; Bandwidth allocation, routing and scheduling; Bilevel optimization | Due to the capabilities of providing extremely low packet loss and bounded end-to-end latency, deterministic networking (DetNet) has been considered as a promising technology for emerging time-sensitive applications (e.g., industrial control and smart grids) in IP networks. To provide deterministic services, the operator needs to address the routing and scheduling problem. In this work, we study the problem from a novel perspective, i.e., the problem should be optimized not only for deterministic traffic, but also for normal traffic to coexist with the former. Specifically, we redefine the problem as bandwidth allocation, routing and scheduling (BaRS), and model this problem as a bilevel optimization which consists of an upper-level optimization and a lower-level optimization. The upper-level optimization allocates link bandwidth between deterministic and normal traffic to maximize the available bandwidth for normal traffic on the premise of accepting a certain portion of deterministic bandwidth; the lower-level optimization determines specific routing and scheduling solutions for deterministic traffic to maximize the number of accepted deterministic flows. We first formulate the bilevel optimization as a bilevel mixed integer linear programming (BMILP) model. Then, we propose an exact algorithm based on cutting planes, and an approximation algorithm based on two-level relaxations and randomized rounding to solve the problem effectively and time-efficiently. Extensive simulations are conducted and the results verify the effectiveness of our proposals in balancing the tradeoff between the available bandwidth for normal traffic and the number of accepted deterministic flows. | 10.1109/TNSM.2025.3570284
Cristian Zilli, Alessio Sacco, Flavio Esposito, Guido Marchetto | ClearNET: Enhancing Transparency in Opaque Network Models using eXplainable AI (XAI) for Efficient Traffic Engineering | 2025 | Early Access | Computational modeling; Data models; Feature extraction; Explainable AI; Telemetry; Predictive models; Training; Telecommunication traffic; Routing; Deep learning; XAI; traffic engineering; machine learning | AI/ML has enhanced computer networking, aiding administrators in decision-making and automating tasks for optimized performance. Despite such advances in network automation, there remains limited trust in these uninterpretable models due to their inherent complexity. To this end, eXplainable AI (XAI) has emerged as a critical area to demystify (deep) neural network models and to provide more transparent decision-making processes. While other fields have embraced XAI more prominently, the use of these techniques in computer network management remains largely unexplored. In this paper, we shed some light by presenting ClearNET, an XAI-based approach designed to clarify the opaque nature of data-driven traffic engineering solutions in general, and efficient network telemetry in particular. It does so by examining the intrinsic behavior of the adopted models, thereby reducing the volume of data needed for effective learning. Our extensive evaluation revealed how our approach not only reduces training time and overhead in network telemetry models but also maintains or improves model accuracy, leading, in turn, to more efficient and clearer ML models for network management. | 10.1109/TNSM.2025.3567654
Hao Yang, Liangmin Guo, Taochun Wang, Chengmei Lv | A Novel Lightweight Dynamic Trust Evaluation Model for Edge Computing | 2025 | Early Access | Edge computing; Computational modeling; Collaboration; Evaluation models; Cloud computing; Reliability; Servers; Accuracy; Vehicle dynamics; Heuristic algorithms; Truth discovery; Dynamic trust; Lightweight | The temporal decay of trust data in dynamic edge computing environments leads to inaccurate evaluation, and the recommended trust values from heterogeneous nodes are affected by subjective bias and are vulnerable to malicious attacks. To address these issues, this paper proposes a novel lightweight dynamic trust evaluation model. First, a time decay function is derived based on Newton’s law of cooling to effectively reflect the impact of trust timeliness on trust evaluation. On this basis, the true value is iteratively calculated through the truth discovery algorithm and used as the recommended trust value, enhancing the accuracy of trust evaluation. Then, the weight of the recommended trust value in the comprehensive trust value is determined based on the standard deviation of the weight of perceived data from each recommending node, balancing the influence of subjective and objective factors on trust evaluation results. Lastly, the comprehensive trust value is dynamically weighted, and an incentive mechanism is employed to update trust data based on feedback, reflecting the dynamic nature of trust. Theoretical analysis demonstrates that the model presented in this paper exhibits low time and space complexities, meeting the lightweight requirements of the edge computing environment. Experimental results indicate high recommendation trust accuracy and interaction success rates, as well as effective resistance against malicious attacks. [An illustrative sketch of a cooling-law decay function appears after the table.] | 10.1109/TNSM.2025.3570290
Haiyuan Li, Peizheng Li, Karcius Day Assis, Juan Marcelo Parra, Adnan Aijaz, Shuangyi Yan, Dimitra Simeonidou | NetMind+: Adaptive Baseband Function Placement With GCN Encoding and Incremental Maze-Solving DRL for Dynamic and Heterogeneous RANs | 2025 | Early Access | Baseband; 5G mobile communication; Power demand; Training; Resource management; Costs; Ultra reliable low latency communication; Servers; Routing; Optimization; Advanced RAN; MEC; deep reinforcement learning; graph neural network; incremental learning; topology variation | The disaggregated architecture of advanced Radio Access Networks (RANs) with diverse X-haul latencies, in conjunction with resource-limited multi-access edge computing networks, presents significant challenges in designing a general model for placing baseband and user plane functions to accommodate versatile 5G services. This paper proposes a novel approach, NetMind+, which leverages Deep Reinforcement Learning (DRL) to determine the function placement strategies in diverse and evolving RAN topologies, aiming at minimizing power consumption. NetMind+ resolves the problem with a maze-solving strategy, enabling a Markov Decision Process with standardized action space scales across different networks. Additionally, a Graph Convolutional Network (GCN) based encoding and an incremental learning mechanism are introduced, allowing features from different and dynamic networks to be aggregated into a single DRL agent. This facilitates the generalization capability of DRL and minimizes the negative impact of retraining. In an example with three sub-networks, NetMind+ demonstrates a substantial 32.76% improvement in power savings and a 41.67% increase in service stability compared to benchmarks from the existing literature. Compared to traditional methods necessitating a dedicated DRL agent for each network, NetMind+ attains comparable performance while saving 70% of the training cost. Furthermore, it demonstrates robust adaptability during network variations, accelerating training speed by 50%. | 10.1109/TNSM.2025.3570490
Shuaibing Lu, Xin Jin, Jie Wu, Shuyang Zhou, Jackson Yang, Ran Yan, Haiming Liu, Zhi Cai | Enhanced Multi-Stage Optimization of Dynamic QoS-Aware Service Caching and Updating in Mobile Edge Computing | 2025 | Early Access | Costs; Servers; Quality of service; Delays; Multi-access edge computing; Optimization; Dynamic scheduling; Resource management; Heuristic algorithms; Computational modeling; QoS-aware; dynamic service caching; service updating; mobile edge computing | In the context of mobile edge computing, achieving dynamic service caching and updating to guarantee the QoS of users and reduce system costs is a challenging problem. However, existing research still has certain deficiencies in considering the dynamic behavior of users and the limited storage resources of edge servers. To address this problem, this paper investigates the multi-stage optimization of service caching and updating, and proposes a novel framework with three strategies for the different stages to jointly optimize delay and cost. At the initial service caching stage, we propose a basic caching strategy based on dynamic programming for the single-area scenario, taking into account the constraint of limited memory resources. To improve the caching strategy, we extend our consideration to the multiple-area scenario and design an improved algorithm based on tabu search. Given the dynamic behavior of users, we formulate the joint optimization problem as a Markov Decision Process (MDP) and design a reinforcement learning-based service extension strategy for the service updating decision-making stage, as well as a replacement strategy for the service updating replacement stage that takes both the distribution of service replications and service access frequency into account, to guarantee the QoS of users. We effectively tackle the challenges arising from the dynamic behavior of users and limited storage resources. Through extensive comparative experiments, our approach outperforms traditional strategies by significantly reducing user latency and system cost. | 10.1109/TNSM.2025.3570427
Shengxiang Hu, Guobing Zou, Bofeng Zhang, Shaogang Wu, Shiyi Lin, Yanglan Gan, Yixin Chen | GACL: Graph Attention Collaborative Learning for Temporal QoS Prediction | 2025 | Early Access | Quality of service; Feature extraction; Ecosystems; Predictive models; Measurement; Adaptation models; Transformers; Collaboration; Tensors; Market research; Web Service; Temporal QoS Prediction; Dynamic User-Service Invocation Graph; Target-Prompt Graph Attention Network; User-Service Temporal Feature Evolution | Accurate prediction of temporal QoS is crucial for maintaining service reliability and enhancing user satisfaction in dynamic service-oriented environments. However, current methods often neglect high-order latent collaborative relationships and fail to dynamically adjust feature learning for specific user-service invocations, which are critical for precise feature extraction within each time slice. Moreover, the prevalent use of RNNs for modeling temporal feature evolution patterns is constrained by their inherent difficulty in managing long-range dependencies, thereby limiting the detection of long-term QoS trends across multiple time slices. These shortcomings dramatically degrade the performance of temporal QoS prediction. To address these two issues, we propose a novel Graph Attention Collaborative Learning (GACL) framework for temporal QoS prediction. Building on a dynamic user-service invocation graph to comprehensively model historical interactions, it designs a target-prompt graph attention network to extract deep latent features of users and services at each time slice, considering implicit target-neighboring collaborative relationships and historical QoS values. Additionally, a multi-layer Transformer encoder is introduced to uncover temporal feature evolution patterns, enhancing temporal QoS prediction. Extensive experiments on the WS-DREAM dataset demonstrate that GACL significantly outperforms state-of-the-art methods for temporal QoS prediction across multiple evaluation metrics, achieving improvements of up to 38.80%. | 10.1109/TNSM.2025.3570464
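
For readers who want a concrete picture of the reputation-weighted aggregation with personalized differential noise described in the Fed-RDP entry above, the following Python sketch shows one plausible shape of the idea. It is not the paper's mechanism: the function name, the inverse score-to-noise mapping, and all parameter values are assumptions made purely for illustration.

```python
import numpy as np

def aggregate_with_personalized_dp(gradients, scores, clip=1.0, base_sigma=0.5):
    """Score-weighted aggregation with per-participant Gaussian noise (illustrative).

    gradients: list of 1-D numpy arrays, one local update per participant.
    scores:    reputation scores in (0, 1]; here, higher reputation means
               less added noise and a larger aggregation weight.
    """
    noisy = []
    for g, s in zip(gradients, scores):
        # Clip each update to bound its L2 sensitivity before adding noise.
        g = g * min(1.0, clip / (np.linalg.norm(g) + 1e-12))
        sigma = base_sigma / s  # assumed mapping: lower reputation -> more noise
        noisy.append(g + np.random.normal(0.0, sigma * clip, size=g.shape))
    # Reputation scores double as aggregation weights for the global update.
    weights = np.asarray(scores, dtype=float)
    weights /= weights.sum()
    return sum(w * g for w, g in zip(weights, noisy))

# Toy usage: three participants, the third with a low reputation score.
grads = [np.ones(4), np.ones(4), 5 * np.ones(4)]
print(aggregate_with_personalized_dp(grads, scores=[0.9, 0.8, 0.2]))
```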
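
Similarly, the decentralization metrics and the SRSW/LSW re-weightings from the Motepalli and Jacobsen entry lend themselves to a minimal sketch. The threshold used for the Nakamoto coefficient and the `log1p` normalization for LSW are assumptions; the paper's exact definitions may differ.

```python
import numpy as np

def nakamoto_coefficient(weights, threshold=1/3):
    """Smallest number of validators whose combined weight reaches `threshold`."""
    w = np.sort(np.asarray(weights, dtype=float))[::-1]
    cum = np.cumsum(w) / w.sum()
    return int(np.searchsorted(cum, threshold) + 1)

def gini(weights):
    """Gini coefficient of the weight distribution (0 = perfectly equal)."""
    w = np.sort(np.asarray(weights, dtype=float))
    n = len(w)
    return float((2 * np.arange(1, n + 1) - n - 1) @ w / (n * w.sum()))

# SRSW and LSW: consensus weight becomes a concave transform of raw stake,
# damping the influence of the largest validators.
def srsw(stakes):
    return np.sqrt(np.asarray(stakes, dtype=float))

def lsw(stakes):
    return np.log1p(np.asarray(stakes, dtype=float))  # log1p keeps weights positive

stakes = np.array([5000, 3000, 1000, 500, 250, 250])
for name, w in [("raw", stakes), ("SRSW", srsw(stakes)), ("LSW", lsw(stakes))]:
    print(f"{name:4s}  Nakamoto={nakamoto_coefficient(w)}  Gini={gini(w):.3f}")
```

On this toy distribution the concave transforms visibly flatten the weights, which is the qualitative effect behind the improved decentralization figures the abstract reports.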
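
Finally, the trust model of Yang et al. derives its time decay from Newton's law of cooling, which for a cooling body gives T(t) = T_env + (T_0 - T_env)e^(-kt). A minimal sketch of a cooling-style decay toward a neutral baseline is below; the baseline value, the half-life parameterization, and the function name are illustrative assumptions rather than the paper's formulation.

```python
import math

def decayed_trust(trust_value, age_seconds, half_life=3600.0, baseline=0.5):
    """Decay an old trust observation toward a neutral baseline (illustrative).

    Mirrors Newton's law of cooling: the gap between the observation and the
    baseline shrinks exponentially, halving every `half_life` seconds.
    """
    k = math.log(2) / half_life
    return baseline + (trust_value - baseline) * math.exp(-k * age_seconds)

print(decayed_trust(0.9, age_seconds=0))     # 0.9: a fresh rating keeps full weight
print(decayed_trust(0.9, age_seconds=3600))  # 0.7: one half-life toward neutral 0.5
```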