Last updated: 2025-07-04 03:01 UTC
All documents
Number of pages: 142
Author(s) | Title | Year | Publication | Keywords | Abstract | DOI
---|---|---|---|---|---|---
Kun Lan, Gaolei Li, Wenkai Huang, Jianhua Li | HFL-RD: Heterogeneous Federated Learning-Empowered Ransomware Detection via APIs and Traffic Features | 2025 | Early Access | Ransomware; Encryption; Market research; Federated learning; Feature extraction; Cryptography; Telecommunication traffic; Servers; Organizations; Monitoring; Ransomware detection; heterogeneous federated learning; convolutional neural networks | Ransomware has evolved into a more organized attack threat with stronger anti-detection and anti-analysis capabilities, resulting in significant global losses. However, traditional methods separate the external and internal behaviors of ransomware infiltration into attack targets, making it difficult to discover the complex and covert evolution and iteration characteristics of advanced ransomware. The main contribution of this study lies in three aspects: a) the integration of command-and-control (C&C) traffic behavior analysis and local API call operation analysis can effectively discern and capture the concealed characteristics of ransomware; b) the non-IID problem in aggregating ransomware features using federated learning can be resolved using dynamic regularization methods and penalty terms; c) by preprocessing the original ransomware traffic data through one-dimensional convolution, the structural characteristics of network traffic during attack operations can be retained to the greatest extent. Comprehensive experiments are conducted to validate the effectiveness of this model; specifically, the heterogeneous federated learning-empowered ransomware detection (HFL-RD) scheme outperforms existing methods. The experimental dataset gathers runnable ransomware from three public websites, comprising 300 ransomware samples from 30 families and 200 benign software samples from 7 categories. HFL-RD achieves an accuracy of over 95%. In detecting unknown ransomware variants, it demonstrates superior capabilities in terms of detection time and the number of corrupted files. | 10.1109/TNSM.2025.3574716
Qingwei Tang, Wei Sun, Zhi Liu, Yang Xiao, Qiyue Li, Xiaohui Yuan, Qian Zhang | Multi-agent Reinforcement Learning Based Delay and Power Optimization for UAV-WMN Substation Inspection | 2025 | Early Access | Inspection; Network topology; Optimization; Autonomous aerial vehicles; Topology; Substations; Stability analysis; Delays; Heuristic algorithms; Real-time systems; Multi-agent reinforcement learning; Wireless mesh networks; Neural network; Lyapunov function; RNN; Substation inspection | Unmanned aerial vehicles (UAVs), due to their flexibility and extensive coverage, have gradually become essential for substation inspections. Wireless mesh networks (WMNs) provide a scalable and resilient network environment for UAVs, where each node can serve as either an access point or a relay point, thereby enhancing the network’s fault tolerance and overall resilience. However, the combined UAV-WMN system is complex and dynamic, and faces the challenge of dynamically adjusting node transmission power to minimize end-to-end (E2E) delay while ensuring channel utilization efficiency. Real-time topology changes, high-dimensional state spaces, and large solution spaces make it difficult for traditional algorithms to guarantee convergence and stability, and generic reinforcement learning (RL) methods also struggle to converge stably. This paper introduces a new Lyapunov function-based proof to address these issues and provide a stability condition for dynamic control strategies. We then develop a specialized neural network power controller and combine it with the MATD3 algorithm, effectively enhancing the system’s convergence and E2E performance. Simulation experiments validate the effectiveness of this method and demonstrate its superior performance in complex scenarios compared to other algorithms. | 10.1109/TNSM.2025.3558823
Hao Yin, Changling Zhou, Weiping Wen, Yiwei Liu | Fountain: DAG-Based Separate BFT Consensus Made Hashgraph Practical | 2025 | Early Access | Blockchains; Consensus protocol; Consensus algorithm; Synchronization; Indexes; Throughput; Symbols; Directed acyclic graph; Data mining; Training; Blockchain; Parallel Chains; Hashgraph; Consensus Algorithm; Low Latency | Limited transaction throughput is the primary challenge for single-chain blockchains in practical applications. Directed acyclic graph (DAG) technology offers a novel topological structure that significantly improves a blockchain’s ability to process massive numbers of transactions by containing numerous blocks in parallel. However, existing parallel-chain schemes embed the DAG into the Practical Byzantine Fault Tolerance (PBFT) protocol stages to expand the instance, which requires a degree of overall synchronization between multiple chains. Synchronous overall consensus among all parallel chains may delay the system’s progress, particularly when the graph structure is unevenly distributed. This paper introduces a new DAG-based blockchain named Fountain, which provides a loosely coupled consensus algorithm based on the classic parallel-chain scheme Hashgraph. The Fountain scheme changes consensus finality from an overall behavior to a separate behavior, thereby reducing block commit latency from the perspective of each parallel chain. Theoretical analysis reveals that our scheme maintains the same security level as Hashgraph, while experimental simulations demonstrate its effectiveness in optimizing commit latency. | 10.1109/TNSM.2025.3576128
Zhang Sheng, Liangliang Song, Yanbin Wang | Dynamic Feature Fusion: Combining Global Graph Structures and Local Semantics for Blockchain Phishing Detection | 2025 | Early Access | Blockchains; Feature extraction; Semantics; Fraud; Phishing; Data models; Data mining; Time series analysis; Robustness; Accuracy; Blockchain; Fraud Detection; Multimodal Fusion; Security | The advent of blockchain technology has facilitated the widespread adoption of smart contracts in the financial sector. However, current phishing detection methodologies exhibit limitations in capturing both global structural patterns within transaction networks and local semantic relationships embedded in transaction data. Most existing models focus on either structural information or semantic features individually, leading to suboptimal performance in detecting complex phishing patterns. In this paper, we propose a dynamic feature fusion model that combines graph-based representation learning and semantic feature extraction for blockchain phishing detection. Specifically, we construct global graph representations to model account relationships and extract local contextual features from transaction data. A dynamic multimodal fusion mechanism is introduced to adaptively integrate these features, enabling the model to capture both structural and semantic phishing patterns effectively. We further develop a comprehensive data processing pipeline, including graph construction, temporal feature enhancement, and text preprocessing. Experimental results on large-scale real-world blockchain datasets demonstrate that our method outperforms existing benchmarks across accuracy, F1 score, and recall metrics. This work highlights the importance of integrating structural relationships and semantic similarities for robust phishing detection and offers a scalable solution for securing blockchain systems. Our code is available at https://github.com/dcszhang/DynamicFeature. | 10.1109/TNSM.2025.3576130
Alexandros Papadopoulos, Dimitrios Tyrovolas, Antonios Lalas, Konstantinos Votis, Stefan Schmid, Sotiris Ioannidis, George K. Karagiannidis, Christos K. Liaskos | On Modeling the RIS as a Resource: Multi-User Allocation and Efficiency-Proportional Pricing | 2025 | Early Access | Reconfigurable intelligent surfaces; Resource management; Pricing; Artificial intelligence; Radio access networks; Training; Software; Real-time systems; Network slicing; Game theory; RIS; resource allocation; multiplexing; pricing | Programmable Wireless Environments aim to render the communication environment a controllable, software-defined medium. Reconfigurable Intelligent Surfaces (RISes) are the key enabling technology, offering the real-time capability to manipulate impinging waves. RISes are expected to be widely deployed in B5G/6G networks to serve a large number of users simultaneously. Despite numerous analyses highlighting the benefits of utilizing previously unexploitable propagation factors through the use of RISes, there is a lack of analysis regarding their relation to the concept of a network resource, their allocation to users/stakeholders, and their fair pricing. Thus, this paper models RISes as networked resources. Based on this definition, the PRIME algorithm is proposed: the first algorithm for RIS resource allocation and joint pricing. PRIME strives for proportionality between the offered end-user performance level and the corresponding resource pricing, promoting fairness. The algorithm is validated via full-wave electromagnetic simulations and applies to multiple RIS functionalities and frequency bands. | 10.1109/TNSM.2025.3576038
Md Shahbaz Akhtar, Mohit Kumar, Md Iftekhar Alam, Aneek Adhya | XGS-PON-Standard Compliant DBA Algorithm for Option 7.x Functional Split-Based 5G C-RAN | 2025 | Early Access | Bandwidth; Delays; Optical network units; Resource management; Prediction algorithms; Heuristic algorithms; Channel allocation; 5G mobile communication; Throughput; Passive optical networks; 5G C-RAN fronthaul; Dynamic bandwidth allocation; Functional splitting; XGS-PON | A 10-Gigabit Capable Symmetrical Passive Optical Network (XGS-PON) is considered a cost-efficient fronthaul network solution for the Fifth Generation (5G) Centralized Radio Access Network (C-RAN). However, meeting the stringent latency requirements of C-RAN fronthaul with XGS-PON is challenging, as its upstream capacity is shared in the time domain and a Dynamic Bandwidth Allocation (DBA) mechanism is employed to manage upstream traffic. The major issue with conventional DBA algorithms is that data arriving in the Optical Network Unit (ONU) buffer must wait at least one DBA cycle before being scheduled, leading to poor delay performance. To address this, we propose a novel DBA algorithm named Traffic Prediction-based Enhanced Residual Bandwidth Utilization (TP-ERBU) that integrates a traffic prediction mechanism with enhanced residual bandwidth utilization to optimize delay performance in Option 7.x functional split-based C-RAN fronthaul over XGS-PON. The algorithm predicts future traffic to reduce delays in ONUs and reallocates residual bandwidth from lightly loaded ONUs to heavily loaded ones. Additionally, we develop an XGS-PON-based C-RAN simulation module named xCRAN-SimModule, using the OMNeT++ network simulator. Simulation results demonstrate that, compared to existing algorithms, TP-ERBU reduces packet delay by 20.59%, improves upstream channel utilization by 38.33%, reduces packet loss by 25.00% and jitter by 5.71%, and increases throughput by 15.56%. | 10.1109/TNSM.2025.3575938
Panagiotis Michalopoulos, Odunayo Olowookere, Nadia Pocher, Johannes Sedlmeir, Andreas Veneris, Poonam Puri | Privacy and Compliance Design Options in Offline Central Bank Digital Currencies | 2025 | Early Access | Privacy; Security; Online banking; Random access memory; Hardware; Microprocessors; Memory management; User experience; Training; Software; Anonymity; CBDC; compliance by design; offline payments; privacy; secure computation; secure hardware | Many central banks are researching and piloting digital versions of fiat money, specifically retail central bank digital currencies (CBDCs). Core to many discussions revolving around these systems’ design is the ability to perform transactions even without network connectivity. While this approach is generally believed to provide additional degrees of freedom for user privacy, the lack of direct involvement of third parties in these offline transfers also interferes with key regulatory requirements that need to be accommodated in the financial space. This paper presents a compliance-by-design approach to evaluate technologies that can balance privacy with anti-money laundering and counter-terrorism financing (AML/CFT) measures. It classifies privacy design options and corresponding technical building blocks for offline CBDCs, along with their impact on AML/CFT measures, and outlines commonalities and differences between offline and online solutions. As such, it provides a conceptual framework for further techno-legal assessments and implementations. | 10.1109/TNSM.2025.3575367
Numidia Zaidi, Sylia Zenadji, Mohamed Amine Ouamri, Daljeet Singh, F. Hamida Abdelhak | CARRAS: Combined AOA and RBFNN for Resource Allocation in Single Site 5G Network | 2025 | Early Access | Resource management; Throughput; 5G mobile communication; Artificial neural networks; Optimization; Wireless communication; Signal to noise ratio; Quality of service; Delays; Wireless networks; Resource allocation; 5G; Archimedes Optimization Algorithm; Radial Basis Function Neural Network; Throughput | 5G and future networks must manage flows with varying Quality of Service (QoS) requirements, even under unpredictable traffic conditions. As user requirements for network capacity evolve over time, it is crucial to allocate resources appropriately to maximize the efficiency with which they are used. Consequently, these demands are driving the creation of new resource management policies, as conventional methods are no longer sufficient to meet them effectively. Thus, we propose a framework for resource allocation at the radio access network (RAN) level that takes into consideration throughput and delay probability. To solve the formulated problem and make it more tractable, the Archimedes Optimization Algorithm (AOA) combined with an artificial neural network (ANN) is introduced to explore the search space and find optimal solutions. Nevertheless, in a 5G environment, the interactions between users, services, and network resources are inherently complex and non-linear. To this end, a radial basis function (RBF) network is then used to predict user needs and reallocate resources according to expected results. The simulation results show that the proposed approach has a significant advantage over traditional approaches such as Particle Swarm Optimization (PSO). To the best of our knowledge, this paper is the first attempt to study 5G resource allocation using a combination of the AOA and RBFNN algorithms, and it describes the approach in detail. | 10.1109/TNSM.2025.3573797
Jaime Galán-Jiménez, Marco Polverini, Juan Luis Herrera, Francesco G. Lavacca, Javier Berrocal | ELTO: Energy Efficiency-Load balancing Trade-Off Solution to Handle With Conflicting Metrics in Hybrid IP/SDN Scenarios | 2025 | Early Access | Switches; Energy consumption; Energy efficiency; Load management; IP networks; Routing; Control systems; Optimization; Heating systems; Telecommunication traffic; Load balancing; energy efficiency; IP; SDN; ILP | Next-generation applications, marked by their critical nature, must cope with stringent Quality of Service (QoS) requirements, such as low response time and high throughput. Moreover, the increasing number of devices connected to the Internet and the need to provide a consistent network infrastructure to serve the applications requested by users raise the trade-off of jointly considering QoS improvement for such applications and the reduction of the energy consumption of the infrastructure. To address this challenge, this paper proposes ELTO (Energy-Load Trade-Off), a system designed for the joint optimization of energy efficiency and traffic load balancing during the transition from IP networks to Software-Defined Networks (SDN). Leveraging the SDN and Network Function Virtualization (NFV) paradigms, ELTO introduces an Integer Linear Programming multi-objective formulation and a Genetic Algorithm heuristic to tackle the optimization problem in large-scale scenarios. ELTO encompasses a holistic approach to network configuration, including network equipment status and routing, to strike a balance between network traffic load balancing and energy efficiency. Results over realistic topologies show the effectiveness of the proposed solution, which outperforms other state-of-the-art approaches and is able to switch off nearly half of the links in the network while also reducing the Maximum Link Utilization. | 10.1109/TNSM.2025.3559422
Yaoxu He, Hongyan Li, Peng Wang | Enhancing Throughput for TTEthernet via Co-optimizing Routing and Scheduling: An Online Time-Varying Graph-based Method | 2025 | Early Access | Routing; Job shop scheduling; Dynamic scheduling; Schedules; Standards; Delays; Vehicle dynamics; Optimization; Throughput; Resource management; Time-Triggered Ethernet; online joint routing and scheduling; time-slot expanded graph; dynamic weighting | Time-Triggered Ethernet (TTEthernet) has been widely applied in many scenarios such as the industrial internet, automotive electronics, and aerospace, and offline routing and scheduling for TTEthernet has been extensively investigated. However, predetermined routes and schedules cannot meet the demands of agile scenarios, such as smart factories, autonomous driving, and satellite network switching, where transmission requests frequently join and leave the network. Thus, we study the online joint routing and scheduling problem for TTEthernet. However, balancing efficiency and effectiveness of routing and scheduling in an online environment is quite challenging. To ensure high-quality and fast routing and scheduling, we first design a time-slot expanded graph (TSEG) to model the available resources of TTEthernet over time. The fine-grained representation of the TSEG allows us to select a time slot by selecting an edge, thus transforming the scheduling problem into a simple routing problem. Next, we design a dynamic weighting method for each edge in the TSEG and further propose an algorithm to co-optimize routing and scheduling. Our scheme enhances TTEthernet throughput by co-optimizing routing and scheduling to eliminate potential conflicts among flow requests, as compared to existing methods. Extensive simulation results show that our scheme runs more than 400 times faster than standard solutions (i.e., an ILP solver), while the gap to the optimal number of scheduled flow requests is only 2%. Moreover, compared to existing schemes, our method improves the number of successfully scheduled flows by more than 18%. | 10.1109/TNSM.2025.3576578
Elham Amini, Jelena Mišić, Vojislav B. Mišić | Paxos With Priorities for Blockchain Applications | 2025 | Early Access | Proposals; Protocols; Voting; Fault tolerant systems; Fault tolerance; Consensus algorithm; Consensus protocol; Computer crashes; Queueing analysis; Delays; Paxos; consensus; preemptive queues with priorities; blockchain technology; decentralized consensus; aging mechanism | Paxos is a well-known protocol for state machine replication and consensus in the face of crash faults. However, it suffers from inefficiencies in request handling, particularly in scenarios requiring preemptive prioritization. To address this, we propose a priority-aware extension similar to MultiPaxos, evaluate its performance using a queuing model, and show the improvement in performance metrics such as mean completion and waiting times. Our results demonstrate that integrating prioritization mechanisms into Paxos reduces latency for high-priority requests while ensuring fairness. The aging-based approach maintains the correctness of the consensus process while adding the flexibility to manage time-sensitive distributed applications such as permissioned blockchains. | 10.1109/TNSM.2025.3574581
Nicola Di Cicco, Gaetano Francesco Pittalà, Gianluca Davoli, Davide Borsatti, Walter Cerroni, Carla Raffaelli, Massimo Tornatore | Scalable and Energy-Efficient Service Orchestration in the Edge-Cloud Continuum With Multi-Objective Reinforcement Learning | 2025 | Early Access | Energy consumption; Training; Resource management; Servers; Computational modeling; Optimization; Scalability; Delays; Numerical models; Energy efficiency; Service Orchestration; Edge-Cloud Continuum; Multi-Objective Reinforcement Learning; Energy Profiling | The Edge-Cloud Continuum represents a paradigm shift in distributed computing, seamlessly integrating resources from cloud data centers to edge devices. However, orchestrating services across this heterogeneous landscape poses significant challenges, as it requires finding a delicate balance between different (and competing) objectives, including service acceptance probability, offered Quality-of-Service, and network energy consumption. To address this challenge, we propose leveraging Multi-Objective Reinforcement Learning (MORL) to approximate the full Pareto Front of service orchestration policies. In contrast to conventional solutions based on single-objective RL, a MORL approach allows a network operator to inspect all possible “optimal” trade-offs, and then decide a posteriori on the orchestration policy that best satisfies the system’s operational requirements. Specifically, we first conduct an extensive measurement study to accurately model the energy consumption of heterogeneous edge devices and servers under various workloads, alongside the resource consumption of popular cloud services. Then, we develop a set-based MORL policy for service orchestration that can adapt to arbitrary network topologies without the need for retraining. Illustrative numerical results against selected heuristics show that our MORL policy outperforms baselines by 30% on average over a broad set of objective preferences, and generalizes to network topologies up to 5x larger than those seen in training. | 10.1109/TNSM.2025.3574131
Jiasong Li, Yunhe Cui, Yi Chen, Guowei Shen, Chun Guo, Qing Qian | The DUDFTO Attack: Towards Down-to-UP Timeout Probing and Dynamically Flow Table Overflowing in SDN | 2025 | Early Access | Probes; Heuristic algorithms; Inference algorithms; Accuracy; Delays; Statistical analysis; Interference; Streams; Process control; Training; SDN switches; flow table overflow attack; information probing | The decoupling of the control plane and the forwarding plane has made Software-Defined Networking (SDN), a new network architecture, widely used in large-scale network scenarios. However, this decoupled architecture also brings new vulnerabilities. The flow table overflow attack is an attack strategy that can overwhelm SDN switches. Nevertheless, existing flow table overflow attacks may fail to probe the timeouts and match fields of flow entries due to link failures, the need to measure the round-trip time (RTT) of different packets, and interference between hard-timeout and idle-timeout. Meanwhile, the stealthiness of existing attacks may also be reduced, as these attacks use a fixed attack rate. To improve timeout probing accuracy and the stealthiness of the attack, a new flow table overflow attack strategy, DUDFTO, is proposed to accurately probe timeout settings and match fields, and then stealthily overflow SDN flow tables. Firstly, it probes the match fields by measuring the one-sided transmission delay of packets. After that, DUDFTO uses a down-to-up feedback-based timeout probing algorithm to eliminate the issues caused by high RTT, link failures, and interference between hard-timeout and idle-timeout. Then, DUDFTO uses a dynamic attack-packet sending algorithm to improve its stealthiness. Finally, DUDFTO probes the flow table state to stop sending new attack packets. The evaluation results demonstrate that DUDFTO outperforms existing attacks in terms of match field probing ability, timeout probing relative error, the number of packet_in and flow_mod messages generated by the attack, the rate distribution of packet_in and flow_mod messages generated during the attack, and the number of detected attack packets. | 10.1109/TNSM.2025.3574260
Soosan Naderi Mighan, Jelena Mišić, Vojislav B. Mišić | Probabilistic Analysis of Validator Lifecycle and Fork Resolution in Ethereum 2.0-Like PoS System | 2025 | Early Access | Blockchains; Delays; Proposals; Protocols; Probabilistic logic; Peer-to-peer computing; Economics; Proof of stake; Analytical models; Consensus protocol; Ethereum 2.0; proof of stake (PoS); consensus; forking; performance analysis | Ethereum 2.0 uses a Proof-of-Stake-based consensus which aims to minimize the impact of malicious validators by decentralizing the voting protocol. In this paper we investigate the lifecycle of a validator in a consensus protocol similar to Ethereum 2.0 but with simplifications introduced for tractability. In particular, the protocol operates with near-single slot finality and includes the impact of behaviors such as truthful and false voting, abstention from voting, voluntary exit from the validator committee, and return to the committee upon depositing the required stake. Using probabilistic techniques and a Markov chain model, we examine the impact of all those factors on consensus probability. Our results indicate that the probability of truthful voting has a predominant effect on consensus, although the interplay between the probabilities of voluntary exit and waiting before returning to the committee also plays an important role. We also investigate the process of fork resolution and model the behavior of the blockchain in the presence of multiple tips, and we show that the probability of truthful voting is equally important in this case, as higher values accelerate fork resolution. | 10.1109/TNSM.2025.3573246
Yuan He, Yaqun Liu, Jun Xie | A Distributed Approach for User Association and UAV Deployment in QoE-aware Multi-UAV Networks | 2025 | Early Access | Quality of experience; Wireless networks; Training; Throughput; Heuristic algorithms; Autonomous aerial vehicles; Optimization; Distributed algorithms; Games; Base stations; UAV-aided wireless network; user association; deployment design; QoE optimization | Using unmanned aerial vehicles (UAVs) as aerial base stations (BSs) has gained increasing attention recently. In this paper, we investigate the problems of ground user (GU) association and UAV-BS deployment in a multi-UAV-assisted wireless network using a distributed approach, aiming to maximize the total quality of experience (QoE) of GUs. Since the two problems are coupled with each other, we adopt an alternating optimization approach to optimize the GU association and UAV-BS deployment alternately. We model the GU association problem as a coalition formation game (CFG), and a distributed algorithm based on switch and swap rules is proposed to find a stable coalition partition of the GU association CFG. Besides, we propose a distributed algorithm based on the particle swarm optimization (PSO) algorithm to solve for the deployment position of each UAV-BS. The proposed algorithms can run online on UAV-BSs in a distributed manner. We evaluate the performance of the proposed algorithms through simulation experiments. Simulation results demonstrate that the proposed algorithms can achieve a high total QoE of GUs within a limited number of iterations. | 10.1109/TNSM.2025.3572890
Yasaman Haghbin, Mohammad Hossein Badiei, Nguyen H. Tran, Md. Jalil Piran | Resilient Federated Adversarial Learning With Auxiliary-Classifier GANs and Probabilistic Synthesis for Heterogeneous Environments | 2025 | Early Access | Data models; Training; Internet of Things; Computational modeling; Servers; Synthetic data; Robustness; Resilience; Probabilistic logic; Generative adversarial networks; Auxiliary; Federated learning; Adversarial robustness; Auxiliary classifier generative adversarial network; Heterogeneous Internet of Things; Probabilistic data synthesis | Recently, collaborative learning paradigms like Federated Learning (FL) have been gaining significant attention as a means of deploying artificial intelligence (AI)-based Internet of Things (IoT) applications, since participants keep their heterogeneous data on their local devices and share only model updates with the central server. However, FL raises new challenges, such as vulnerabilities to unknown data and adversarial samples, as well as security risks associated with inference, which may expose the system to potential evasion attacks. In this article, we introduce Auxiliary Federated Adversarial Learning (AuxiFed) as a solution to these challenges. AuxiFed synthesizes data by using pre-trained auxiliary-classifier generative adversarial networks (AC-GANs) and probabilistic logic, enhancing model resilience and promoting accurate predictions while safeguarding against adversarial attacks. By leveraging locally trained models, AuxiFed provides representative and diverse synthetic samples for model updates during FL based on the pre-trained AC-GAN generators of individual clients. By merging these synthetic samples with real data during training, we foster data diversity and improve the model’s ability to generalize to unknown data. In two distinct environments, with homogeneous and heterogeneous data, we train the model on two datasets, MNIST and EMNIST. Different adversarial evasion attacks are tested, as well as scenarios without attacks. The AuxiFed algorithm is also bolstered using robust adversarial techniques and subsequently compared with the baseline algorithms. AuxiFed generally outperforms Federated Averaging (FedAvg), FL with Variational Autoencoders (FedAvg+VAE), and FL with Conditional Generative Adversarial Networks (FedAvg+C-GAN) in terms of accuracy, generalization, and robustness, showing better convergence during training and better performance on unknown data; its adversarially trained variants, such as AuxiFed-PGD and AuxiFed-FGSM, likewise outperform the robust variants of these baselines. As a result, AuxiFed enhances model performance, provides resilience against adversarial attacks, and generalizes to unknown data. | 10.1109/TNSM.2025.3571688
Alessandro Tundo, Federica Filippini, Francesco Regonesi, Michele Ciavotta, Marco Savi | Decentralized Edge Workload Forecasting With Gossip Learning | 2025 | Early Access | Peer-to-peer computing; Computational modeling; Forecasting; Training; Biological system modeling; Adaptation models; Edge computing; Robustness; Protocols; Data models; Edge Computing; Workload Forecasting; Gossip Learning; Function-as-a-Service; Machine Learning | Edge computing has emerged as a crucial paradigm for addressing the growing demands of interconnected devices and large-scale mobile applications by relocating computation and storage services closer to end-users. Edge workloads are inherently volatile and challenging to forecast due to their dependence on factors such as human mobility patterns and geographically-distributed infrastructure, combined with the dynamic nature of edge nodes. Traditional centralized approaches to workload forecasting are inadequate in the context of decentralized and failure-prone edge environments. To address this challenge, this paper investigates workload forecasting using Gossip Learning (GL), an asynchronous peer-to-peer learning protocol. GL allows for the training of forecasting models in a fully-decentralized manner, thereby mitigating single point of failure risks and enhancing overall system robustness. We extended the original protocol across multiple dimensions to improve convergence, reduce communication overhead, and enhance resilience to failures. We evaluated the proposed approach through extensive simulations; the obtained results demonstrate its effectiveness with respect to classical methods, rendering it a promising solution to enhance load balancing and task offloading strategies at the edge, thereby ensuring Quality-of-Service (QoS) and reducing Service Level Agreement (SLA) violations. | 10.1109/TNSM.2025.3570450
Shengxiang Hu, Guobing Zou, Bofeng Zhang, Shaogang Wu, Shiyi Lin, Yanglan Gan, Yixin Chen | GACL: Graph Attention Collaborative Learning for Temporal QoS Prediction | 2025 | Early Access | Quality of service; Feature extraction; Ecosystems; Predictive models; Measurement; Adaptation models; Transformers; Collaboration; Tensors; Market research; Web Service; Temporal QoS Prediction; Dynamic User-Service Invocation Graph; Target-Prompt Graph Attention Network; User-Service Temporal Feature Evolution | Accurate prediction of temporal QoS is crucial for maintaining service reliability and enhancing user satisfaction in dynamic service-oriented environments. However, current methods often neglect high-order latent collaborative relationships and fail to dynamically adjust feature learning for specific user-service invocations, which are critical for precise feature extraction within each time slice. Moreover, the prevalent use of RNNs for modeling temporal feature evolution patterns is constrained by their inherent difficulty in managing long-range dependencies, thereby limiting the detection of long-term QoS trends across multiple time slices. These shortcomings dramatically degrade the performance of temporal QoS prediction. To address these two issues, we propose a novel Graph Attention Collaborative Learning (GACL) framework for temporal QoS prediction. Building on a dynamic user-service invocation graph to comprehensively model historical interactions, it designs a target-prompt graph attention network to extract deep latent features of users and services at each time slice, considering implicit target-neighboring collaborative relationships and historical QoS values. Additionally, a multi-layer Transformer encoder is introduced to uncover temporal feature evolution patterns, enhancing temporal QoS prediction. Extensive experiments on the WS-DREAM dataset demonstrate that GACL significantly outperforms state-of-the-art methods for temporal QoS prediction across multiple evaluation metrics, achieving improvements of up to 38.80%. | 10.1109/TNSM.2025.3570464
Chang Xing, Ronald G. Addie, Moshe Zukerman | GoS-Aware Optimization of a Multi-Layered Network for Cost Effectiveness and Fault Tolerance | 2025 | Early Access | Fault tolerant systems; Fault tolerance; Costs; Optimization; Resource management; Heuristic algorithms; Bit rate; Streams; Routing; Resilience; Integer Linear Programming; Network Fault-tolerance; Multi-layered Network; Network Optimization; Variable Bit Rate Traffic | This paper introduces two new algorithms for the fault-tolerant design of multi-layered networks, both of which extend the previously published multi-layered market algorithm (MMA) by including the provision of additional resources to be used during network failure events. The new algorithms are called resilient MMA (RMMA) and failure-traffic MMA (FTMMA). RMMA runs MMA iteratively and independently for each failure scenario. FTMMA treats each failure event as a type of traffic, which enables more efficient sharing of network resources. Both RMMA and FTMMA consider a range of single physical link failures and aim to maximize earnings before interest and tax (EBIT). The costs considered in the EBIT evaluation include amortized capital and operational expenditures and penalties (compensation to customers when service is degraded). Both focus on optimizing resource provisioning, in particular capacity assignment, for the fault-tolerant and cost-effective design of multi-layered networks. The novel aspects of RMMA and FTMMA include the incorporation of variable bit rate traffic streams in fault-tolerant multi-layered network design, together with the aim of maximizing EBIT. RMMA and FTMMA are validated by comparing their designs with those produced by an integer linear programming benchmark for small networks. Numerical results show that FTMMA can allocate capacity for failures more efficiently by sharing these resources across different failure events. | 10.1109/TNSM.2025.3577567
Kashif Mehmood, Katina Kralevska, David Palma | Knowledge-Driven Intent Life-Cycle Management for Cellular Networks | 2025 | Early Access | Knowledge graphs; Translation; Cellular networks; Stakeholders; Optimization; Ontologies; 5G mobile communication; Resource description framework; Monitoring; Quality of service; IBN; closed-loop control; service model; knowledge graph learning; service and network management optimization | The management of cellular networks and services has evolved due to rapidly changing demands and the complexity of service modeling and management. This paper uses intent-based networking (IBN) as a solution and couples it with contextual information from knowledge graphs (KGs) of network and service components to achieve the objective of service orchestration in cellular networks. Fusing IBN with KGs facilitates an intelligent, flexible, and resilient service orchestration process. We propose an intent completion approach using knowledge graph learning and a mapping model capable of inferring and validating the service intents in the network. Subsequently, these service intents are deployed using available network resources in a simulated fifth-generation (5G) non-standalone (NSA) network. The compliance of the deployed intents is monitored, and mutual optimization against their required service key performance indicators is performed using Simultaneous Perturbation Stochastic Approximation (SPSA) and the Multiple Gradient Descent Algorithm (MGDA). The numerical results show that the knowledge graph with Gaussian embedding (KG2E) model outperforms other distance-based embedding models for the proposed service KG. Different combinations of strict latency (SL) and non-strict latency (NSL) intents are deployed, and compliance is evaluated for increasing numbers of deployed intents against baseline deployment scenarios. The results show a higher level of compliance with target latencies for SL intents than for NSL intents under the proposed intent deployment and optimization algorithm. | 10.1109/TNSM.2025.3579547
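
The HFL-RD entry above attributes its handling of non-IID clients to dynamic regularization and penalty terms. The paper's exact formulation is not reproduced here; the following is a minimal sketch of the proximal-penalty idea (FedProx-style) that this family of methods builds on. The toy quadratic objectives, `grad_fn`, and all constants are illustrative assumptions, not values from the paper.

```python
import numpy as np

def local_update_prox(w_global, grad_fn, mu=0.1, lr=0.05, steps=20):
    """One client's local update with a proximal penalty (FedProx-style).

    Minimizes f_k(w) + (mu/2) * ||w - w_global||^2, so local models trained
    on non-IID data are discouraged from drifting apart.
    """
    w = w_global.copy()
    for _ in range(steps):
        g = grad_fn(w) + mu * (w - w_global)  # gradient of loss + penalty
        w -= lr * g
    return w

def aggregate(client_weights, sizes):
    """Size-weighted server-side averaging of the client models."""
    return np.average(np.stack(client_weights), axis=0,
                      weights=np.asarray(sizes, dtype=float))

# Toy demo: two clients whose local optima differ (a stand-in for clients
# holding different API-call vs. traffic feature distributions).
targets = [np.array([1.0, 0.0]), np.array([0.0, 2.0])]
w_global = np.zeros(2)
for _ in range(30):
    updates = [local_update_prox(w_global, lambda w, t=t: w - t) for t in targets]
    w_global = aggregate(updates, sizes=[100, 100])
print("global model:", w_global)  # settles between the two local optima
```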
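
The Dynamic Feature Fusion entry describes adaptively integrating a global graph embedding with local semantic features. As a rough illustration of what a dynamic (gated) multimodal fusion step can look like, here is a NumPy sketch; the gate parameterization `Wg` and the dimensions are assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(h_graph, h_text, Wg):
    """Blend a structural and a semantic embedding with a learned gate.

    g in (0,1)^d decides, per dimension, how much of the graph-side
    feature vs. the text-side feature survives into the fused vector.
    """
    g = sigmoid(Wg @ np.concatenate([h_graph, h_text]))
    return g * h_graph + (1.0 - g) * h_text

d = 8
h_graph = rng.normal(size=d)             # stand-in for a graph-encoder output
h_text = rng.normal(size=d)              # stand-in for a text-encoder output
Wg = 0.1 * rng.normal(size=(d, 2 * d))   # gate weights (would be trained)
print(gated_fusion(h_graph, h_text, Wg))
```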
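
For the PRIME entry, the abstract's core idea is proportionality between offered end-user performance and pricing. A minimal sketch of efficiency-proportional price splitting, assuming a hypothetical per-user performance metric `perf`:

```python
def proportional_prices(perf, total_price):
    """Split a RIS's total price among its users in proportion to the
    performance each user obtains from it (efficiency-proportional
    pricing in the spirit of the abstract)."""
    total = sum(perf.values())
    return {user: total_price * p / total for user, p in perf.items()}

# Hypothetical per-user gains (e.g., achieved SNR improvement).
perf = {"u1": 12.0, "u2": 6.0, "u3": 2.0}
print(proportional_prices(perf, total_price=100.0))
# {'u1': 60.0, 'u2': 30.0, 'u3': 10.0}
```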
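
The TP-ERBU entry combines traffic prediction with residual-bandwidth reallocation within a DBA cycle. The sketch below is one plausible reading of such a grant computation; the fair-share cap, the proportional redistribution, and all numbers are assumptions, and a real XGS-PON DBA would additionally quantize grants and respect standard framing.

```python
def dba_grants(queued, predicted, capacity):
    """One DBA cycle: grant = min(demand, fair share), then redistribute
    the residual left by lightly loaded ONUs to overloaded ones in
    proportion to their unmet demand. Demand includes predicted arrivals,
    so traffic arriving mid-cycle need not wait a full extra cycle."""
    n = len(queued)
    demand = [q + p for q, p in zip(queued, predicted)]
    fair = capacity / n
    grants = [min(d, fair) for d in demand]
    residual = capacity - sum(grants)
    deficit = [d - g for d, g in zip(demand, grants)]
    total_deficit = sum(deficit)
    if residual > 0 and total_deficit > 0:
        grants = [g + residual * df / total_deficit
                  for g, df in zip(grants, deficit)]
    return grants

# Two lightly loaded ONUs leave residual capacity for two heavy ones.
print(dba_grants(queued=[100, 200, 900, 1500],
                 predicted=[20, 30, 100, 200], capacity=2000))
```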
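
The CARRAS entry relies on a radial basis function network to predict user needs. A self-contained RBFNN in its classic form (Gaussian features plus least-squares output weights) looks as follows; the features, targets, and hyperparameters are stand-ins, not the paper's.

```python
import numpy as np

def rbf_design(X, centers, sigma):
    """Gaussian features: phi_ij = exp(-||x_i - c_j||^2 / (2 sigma^2))."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def rbf_fit(X, y, centers, sigma):
    """Classic RBFNN training: solve the output layer by least squares."""
    w, *_ = np.linalg.lstsq(rbf_design(X, centers, sigma), y, rcond=None)
    return w

def rbf_predict(X, centers, sigma, w):
    return rbf_design(X, centers, sigma) @ w

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(200, 2))      # hypothetical (load, SNR) inputs
y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1]   # stand-in for observed demand
centers = X[rng.choice(len(X), size=20, replace=False)]
w = rbf_fit(X, y, centers, sigma=0.2)
print("mean abs error:", np.abs(rbf_predict(X, centers, 0.2, w) - y).mean())
```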
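
For the TTEthernet entry, the key construction is the time-slot expanded graph (TSEG), where choosing an edge simultaneously chooses a time slot, reducing joint routing and scheduling to path finding. A toy version under assumed slot semantics (one slot per hop, unit-cost wait edges; the paper's dynamic edge weighting is not reproduced):

```python
import heapq

def build_tseg(links, horizon, occupied):
    """A node is (switch, slot); sending a frame over link (u, v) in slot t
    leads to (v, t + 1) unless (u, v, t) is already reserved; waiting at u
    leads to (u, t + 1)."""
    switches = {x for link in links for x in link}
    adj = {}
    for u in switches:
        for t in range(horizon - 1):
            adj.setdefault((u, t), []).append(((u, t + 1), 1))  # wait edge
            for (a, b) in links:
                if a == u and (a, b, t) not in occupied:
                    adj[(u, t)].append(((b, t + 1), 1))          # send edge
    return adj

def earliest_arrival(adj, src, dst):
    """Dijkstra over the expanded graph; returns the earliest slot in which
    the frame can reach dst, or None if unreachable within the horizon."""
    dist = {(src, 0): 0}
    pq = [(0, (src, 0))]
    while pq:
        d, node = heapq.heappop(pq)
        if node[0] == dst:
            return node[1]
        if d > dist.get(node, float("inf")):
            continue
        for nxt, w in adj.get(node, []):
            if d + w < dist.get(nxt, float("inf")):
                dist[nxt] = d + w
                heapq.heappush(pq, (d + w, nxt))
    return None

links = [("A", "B"), ("B", "C"), ("A", "C")]
occupied = {("A", "C", 0)}  # the direct link is reserved in slot 0
adj = build_tseg(links, horizon=6, occupied=occupied)
print(earliest_arrival(adj, "A", "C"))  # 2: wait a slot, or route via B
```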
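
The Paxos-with-priorities entry mentions an aging mechanism that keeps prioritization fair. A minimal sketch of priority aging in a request queue, assuming a linear aging function and illustrative rates (the paper's queuing model is not reproduced):

```python
class AgingPriorityQueue:
    """Request queue with aging (lower number = higher priority): a
    request's effective priority improves the longer it waits, so
    low-priority requests are not starved by high-priority arrivals."""

    def __init__(self, aging_rate=0.5):
        self.items = []          # (base_priority, arrival_time, payload)
        self.aging_rate = aging_rate

    def push(self, priority, arrival_time, payload):
        self.items.append((priority, arrival_time, payload))

    def _effective(self, entry, now):
        prio, arrival, _ = entry
        return prio - self.aging_rate * (now - arrival)  # linear aging

    def pop(self, now):
        best = min(self.items, key=lambda e: self._effective(e, now))
        self.items.remove(best)
        return best[2]

q = AgingPriorityQueue(aging_rate=0.5)
q.push(priority=1, arrival_time=9, payload="high priority, just arrived")
q.push(priority=5, arrival_time=0, payload="low priority, waiting since t=0")
print(q.pop(now=10))  # the aged request wins: 5 - 0.5*10 = 0 < 1 - 0.5*1
```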
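
The MORL orchestration entry approximates the full Pareto front of policies so the operator can pick a trade-off a posteriori. The underlying filtering step, keeping only non-dominated objective vectors, can be sketched as follows; the policy scores are invented for illustration.

```python
def pareto_front(points):
    """Keep the non-dominated objective vectors (all objectives maximized).

    q dominates p if q >= p in every objective and q != p (hence strictly
    better in at least one).
    """
    return [p for p in points
            if not any(q != p and all(a >= b for a, b in zip(q, p))
                       for q in points)]

# Hypothetical (acceptance probability, negated energy) scores of candidate
# orchestration policies; energy is negated so both axes are maximized.
policies = [(0.90, -50.0), (0.80, -30.0), (0.85, -55.0), (0.95, -70.0)]
print(pareto_front(policies))
# [(0.9, -50.0), (0.8, -30.0), (0.95, -70.0)] — (0.85, -55.0) is dominated
```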
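
The Ethereum 2.0 validator entry analyzes the validator lifecycle with a Markov chain. A generic sketch of how such a chain's long-run state occupancy is computed; the states and transition probabilities below are illustrative, not the paper's calibrated model.

```python
import numpy as np

# Hypothetical validator life-cycle chain (rows sum to 1).
states = ["active", "exited", "waiting"]
P = np.array([
    [0.95, 0.05, 0.00],   # active: keeps validating, or voluntarily exits
    [0.00, 0.60, 0.40],   # exited: lingers, or queues to return
    [0.70, 0.00, 0.30],   # waiting: re-deposits stake and re-activates
])

# Stationary distribution: the left eigenvector of P for eigenvalue 1,
# i.e., pi = pi @ P, normalized to sum to 1.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
pi = pi / pi.sum()
print(dict(zip(states, np.round(pi, 3))))
```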
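
The multi-UAV QoE entry solves GU association with a coalition formation game driven by a switch rule. A toy switch-rule loop under an assumed congestion-style QoE model (the paper's actual QoE function and its swap rule are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(2)
n_gu, n_uav = 12, 3
# Hypothetical per-link QoE, later discounted by coalition size.
base_qoe = rng.uniform(1.0, 5.0, size=(n_gu, n_uav))

def qoe(assign, g, u):
    load = int(np.sum(assign == u))   # crowding reduces per-GU QoE
    return base_qoe[g, u] / max(load, 1)

def switch_rule(assign, max_rounds=100):
    """Each GU unilaterally switches to a UAV-BS that strictly improves its
    own QoE; stop when a full pass makes no switch (a stable partition),
    with max_rounds as a safeguard against cycling."""
    for _ in range(max_rounds):
        changed = False
        for g in range(n_gu):
            current = qoe(assign, g, assign[g])
            for u in range(n_uav):
                if u == assign[g]:
                    continue
                old = assign[g]
                assign[g] = u                # tentative switch
                if qoe(assign, g, u) > current:
                    changed = True           # keep the switch
                    break
                assign[g] = old              # revert
        if not changed:
            break
    return assign

assign = rng.integers(0, n_uav, size=n_gu)
print("stable association:", switch_rule(assign))
```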
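
The gossip learning entry trains forecasting models peer-to-peer without a central server. Stripped of the local training step, the core merge protocol amounts to randomized pairwise model averaging, sketched here with assumed message-loss and network-size parameters:

```python
import numpy as np

rng = np.random.default_rng(3)
n_nodes, dim = 8, 4
models = rng.normal(size=(n_nodes, dim))   # each edge node's local model

def gossip_round(models, p_fail=0.2):
    """One gossip step per node: send your model to a random peer; the
    receiver merges by averaging. Lost messages (failures, churn) are
    simply skipped — no global coordination is needed."""
    for sender in range(n_nodes):
        if rng.random() < p_fail:
            continue  # message lost; the protocol tolerates it
        receiver = rng.choice([i for i in range(n_nodes) if i != sender])
        models[receiver] = 0.5 * (models[receiver] + models[sender])
    return models

for _ in range(50):
    models = gossip_round(models)
print("spread after gossip:", np.ptp(models, axis=0))  # near zero: consensus
```

In the real protocol each node would also take local gradient steps on its own workload traces between merges; the averaging above is only the dissemination half.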
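
Finally, the intent life-cycle entry optimizes intent compliance with SPSA, which estimates a gradient from only two noisy objective evaluations per step, regardless of dimension. A standard textbook-style SPSA loop (gain schedules follow Spall's common choices; the toy loss is a stand-in for an intent-compliance objective):

```python
import numpy as np

def spsa(loss, theta0, a=0.1, c=0.1, alpha=0.602, gamma=0.101,
         iters=200, seed=0):
    """Simultaneous Perturbation Stochastic Approximation: perturb all
    parameters at once with a random +/-1 vector and estimate the gradient
    from just two loss evaluations per iteration."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float).copy()
    for k in range(1, iters + 1):
        ak = a / k ** alpha            # step-size schedule
        ck = c / k ** gamma            # perturbation-size schedule
        delta = rng.choice([-1.0, 1.0], size=theta.shape)
        g_hat = (loss(theta + ck * delta) - loss(theta - ck * delta)) \
            / (2.0 * ck * delta)       # elementwise SPSA gradient estimate
        theta -= ak * g_hat
    return theta

# Hypothetical stand-in for an intent-compliance penalty (e.g., latency).
loss = lambda th: np.sum((th - np.array([2.0, -1.0])) ** 2)
print(spsa(loss, theta0=[0.0, 0.0]))  # approaches [2, -1]
```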