Last updated: 2025-04-26 03:01 UTC
Author(s) | Title | Year | Publication | Keywords | Abstract | DOI |
---|---|---|---|---|---|---|
Shoya Imanaka, Akio Kawabata, Bijoy Chand Chatterjee, Eiji Oki | Distributed Server Allocation for Internet-of-Things Monitoring Services With Preventive Start-Time Optimization Against Server Failure | 2025 | Early Access | Servers; Resource management; Internet of Things; Delays; Monitoring; Fault tolerant systems; Fault tolerance; Optimization; Data communication; Databases; server allocation problem; Internet of things monitoring service; preventive start-time optimization; polynomial-time algorithm | Internet-of-Things (IoT) services require both low delay and fault tolerance. Distributed server allocation is well-suited to meeting these requirements in IoT monitoring services. Previous work focused on reducing delay but overlooked the need for fault tolerance in distributed server allocation. This paper proposes a distributed server allocation model based on preventive start-time optimization (PSO) for IoT monitoring services against server failure. The proposed model preventively determines the server allocation that minimizes the largest maximum delay between IoT devices and application servers and between database and application servers among all failure patterns. We formulate the proposed model as an integer linear programming (ILP) problem. We introduce a PSO-based server allocation algorithm to accelerate the computation of an optimal server allocation relative to the ILP approach. We prove that the introduced algorithm obtains a PSO-based optimal allocation in polynomial time. Numerical results show that the introduced algorithm outputs an optimal server allocation faster than the ILP approach. We compare the PSO-based server allocation with allocations based on start-time and run-time optimization. We observe that the PSO-based allocation reduces the largest maximum delay by 5.5% for a network model with eleven servers compared to start-time optimization, and avoids unnecessary network disconnections while increasing the maximum delay by 5.1% compared to run-time optimization. | 10.1109/TNSM.2025.3555277 |
Kai Peng, Jialu Guo, Hao Wang, Jintao He, Zhiqing Zou, Tianping Deng, Menglan Hu | Delay-Aware Joint Microservice Deployment and Request Routing in Multi-Edge Environments Based on Reinforcement Learning | 2025 | Early Access | Microservice architectures; Routing; Cloud computing; Optimization; Delays; Vehicle-to-everything; Servers; Training; Resource management; Containers; Artificial intelligence and machine learning; cloud computing services; mobile edge computing; microservice deployment; request routing | The service modules of traditional Mobile Edge Computing (MEC) are difficult to deploy, extend, and maintain in real networks because the underlying systems are highly sophisticated. To promote the generality, openness, and flexibility of the network edge environment, an increasing number of studies are exploring the integration of microservices with MEC. However, existing work usually treats microservice deployment and request routing as two separate issues, ignoring the interaction between them. Therefore, this paper focuses on the joint optimization of microservice deployment and request routing in multi-edge cloud scenarios. We establish a problem model for minimizing the average response latency, considering the transmission of requests across edge clouds. Then, given the complexity of the scenario, this paper proposes a joint training strategy for microservice deployment and request routing based on deep reinforcement learning and the Best Fit Decreasing algorithm. The algorithm treats changes to the microservice deployment scheme as the agent's actions, uses the Best Fit Decreasing algorithm to construct request routing from the deployment scheme, and computes rewards from the complete joint deployment-and-routing scheme for subsequent network training. Finally, experimental results show that the proposed algorithm can effectively reduce response latency and system power consumption compared with other algorithms. | 10.1109/TNSM.2025.3543568 |
Takanori Hara, Masahiro Sasabe | eBPF-Based Ordered Proof of Transit for Trustworthy Service Function Chaining | 2025 | Early Access | Security; Routing; Kernel; Metadata; Polynomials; Software; Relays; Linux; Hardware; Vectors; Service Function Chaining (SFC); extended Berkeley Packet Filter (eBPF); Ordered Proof-of-Transit (OPoT); Segment Routing over IPv6 Data Plane (SRv6); SFC proxy | Service function chaining (SFC) establishes a service path along which a sequence of functions is executed according to service requirements. However, SFC lacks a mechanism to ensure proper traversal of relay nodes in the data plane. Misconfigurations and the presence of attackers can lead to forwarding anomalies and path deviation, potentially allowing packets to bypass security network functions in the service path. To mitigate potential security breaches, ordered proof of transit (OPoT) has been proposed as a mechanism to verify whether traffic adheres to the designated path. In this paper, we realize lightweight OPoT-based path verification using the extended Berkeley Packet Filter (eBPF) for trustworthy SFC. Furthermore, by integrating it with the existing SFC proxy, we extend the proposed approach to accommodate both SFC-aware and SFC-unaware virtual network functions (VNFs) in the segment routing over IPv6 data plane (SRv6) domain. Through experiments, we demonstrate the capability of the proposed approach to detect path deviations, and we also characterize its performance limitations. | 10.1109/TNSM.2025.3550333 |
Giuseppe Ruggeri, Marica Amadeo, Claudia Campolo, Antonella Molinaro | Optimal Placement of the Virtualized Federated Learning Aggregation Function at the Edge | 2025 | Early Access | Training; Computational modeling; Servers; Proposals; Optimization; Convergence; Heuristic algorithms; Accuracy; Virtual machines; Federated learning; Federated Learning; Aggregation; Placement; Edge | Federated Learning (FL) enables multiple devices (clients) to train a shared machine learning (ML) model on local datasets and then send the updated models to a central server, which aggregates the locally computed updates and shares the learned global model with the clients again, in an iterative process. The population of clients may change at each round, whereas the node executing the aggregation function is typically placed in an edge domain and remains static until the end of the overall FL training process. Yet the computing capabilities of the edge node hosting the aggregation function and the distance (latency) of that node from the selected clients can strongly affect the convergence rate of the FL training procedure. Moreover, the heterogeneous, time-varying capabilities of edge nodes, coupled with the dynamic client population selected at each round, call for the optimal dynamic placement of the aggregation function across the available nodes in an edge domain. In this work, we formulate an optimization problem for the placement of the FL aggregation function, which selects at each round the edge node that minimizes the overall per-round training time, encompassing the aggregation time, the local training time at the clients, and the time for exchanging the global model and the model updates. A time-efficient greedy heuristic is proposed and shown to closely approximate the optimal solution and to outperform the considered benchmark solutions. (A hedged code sketch of such a greedy placement step appears after the table.) | 10.1109/TNSM.2025.3551257 |
Dev Gurung, Shiva Raj Pokhrel, Gang Li | Quantum Federated Learning for Metaverse: Analysis, Design and Implementation | 2025 | Early Access | Metaverse; Blockchains; Quantum computing; Federated learning; Training; Peer-to-peer computing; Quantum state; Quantum circuit; Observers; Security; Quantum Federated Learning; Metaverse; Blockchain | We present a novel decentralized and trustworthy Quantum Federated Learning (QFL) framework tailored for the emerging Metaverse. This virtual environment, enabling social interaction, gaming, and commerce, demands secure and transparent systems. By integrating blockchain, our QFL framework ensures integrity, resilience, and transparency. Comparative analysis with classical Federated Learning (CFL) highlights its practicality and advantages in distributed settings. The insights developed emphasize the importance of decentralized systems for the Metaverse's evolution, with a blockchain-based QFL application demonstrated in a hybrid model. Our evaluation, implementation details, and code are publicly available. | 10.1109/TNSM.2025.3552307 |
Yu-Fang Chen, Frank Yeong-Sung Lin, Sheng-Yung Hsu, Tzu-Lung Sun, Yennun Huang, Chiu-Han Hsiao | Adaptive Traffic Control: OpenFlow-Based Prioritization Strategies for Achieving High Quality of Service in Software-Defined Networking | 2025 | Early Access | Resource management; Quality of service; Delays; Protocols; Heuristic algorithms; Dynamic scheduling; 6G mobile communication; Servers; IP networks; Optimization; Lagrangian Relaxation (LR); Network Management; OpenFlow; Priority Scheduling; Quality of Service (QoS); Resource Allocation; Software-Defined Networking (SDN) | This paper tackles key challenges in Software-Defined Networking (SDN) by proposing a novel approach for optimizing resource allocation and dynamic priority assignment using OpenFlow's priority field. The proposed Lagrangian relaxation (LR)-based algorithms significantly reduce network delay, achieving performance management with dynamic priority levels while demonstrating adaptability and efficiency in a sliced network. The algorithms' effectiveness was validated through computational experiments, highlighting their strong potential for QoS management across diverse industries. Compared to the Same Priority baseline, the proposed methods (RPA, AP-1, and AP-2) exhibited notable performance improvements, particularly under strict delay constraints. For future applications, the study recommends extending the algorithm to handle larger networks and integrating it with artificial intelligence technologies for proactive resource optimization. Additionally, the proposed methods lay a solid foundation for addressing the unique demands of 6G networks, particularly in areas such as base station mobility (Low-Earth Orbit, LEO), ultra-low latency, and multi-path transmission strategies. | 10.1109/TNSM.2025.3540012 |
Luqi Wang, Shanchen Pang, Haiyuan Gui, Xiao He, Nuanlai Wang, Sibo Qiao, Zhiyuan Zhao | Sustainable Energy-Efficient Multi-Objective Task Processing Based on Edge Computing | 2025 | Early Access | Servers; Energy efficiency; Data privacy; Computer architecture; Energy consumption; Sustainable development; Real-time systems; Low latency communication; Computational modeling; Adaptation models; Edge computing; energy-efficient; intelligent reflective surface; dynamic voltage and frequency scaling; differential privacy | As smart cities evolve, rising computational demands strain existing infrastructures. Offloading tasks to edge cloud data centers offers potential but faces challenges such as high latency, energy use, and data leakage, especially in dense urban areas. This paper presents a low-latency, energy-efficient digital twin (DT) architecture tailored for smart cities, integrating edge computing (EC) and multiple intelligent reflective surfaces (IRS) to enhance communication. Dynamic voltage and frequency scaling (DVFS) is applied to user devices to reduce energy consumption. To mitigate the risk of user privacy leakage during task offloading, where sensitive user location data may be exposed, we propose a perturbed sliding task queue (PSTQ) algorithm based on differential privacy (DP) and demonstrate its effectiveness. To optimize task processing time and energy efficiency, we decompose the complex problem using block coordinate descent and propose an intelligent scheduling for energy sustainability (ISES) algorithm based on the Karush-Kuhn-Tucker conditions and deep reinforcement learning (DRL). Experimental results demonstrate that the proposed architecture and algorithms achieve over 90% improvement in key optimization objectives, alleviating the computational pressure on existing devices while significantly enhancing task processing efficiency and energy sustainability. | 10.1109/TNSM.2025.3553259 |
Dahina Koulougli, Kim Khoa Nguyen, Mohamed Cheriet | Cost Optimization Of FlexEthernet Over Elastic Optical Network Fronthaul Design | 2025 | Early Access | 5G mobile communication; Uncertainty; Computer architecture; Bit rate; Optimization; Ethernet; Costs; Bridges; Elastic optical networks; Stochastic processes; O-RAN; Fronthaul; FlexEthernet; Elastic Optical Networks; Uncertain Traffic Demands; Deep Reinforcement Learning | Without network slicing support, traditional Fronthaul architectures struggle to meet the demanding requirements of 5G networks, such as the ultra-low latency and high bit rate specified by the enhanced common public radio interface (eCPRI). In this paper, we design a novel Fronthaul architecture that leverages FlexEthernet (FlexE) over elastic optical networks (EON) to enable Fronthaul slicing that meets 5G Fronthaul requirements. Our Fronthaul design is optimized by an integer linear programming (ILP) model, named eFFP, that minimizes the total cost of ownership (TCO). While eFFP meets the strict Fronthaul requirements by provisioning network resources for the worst-case traffic load, it tends to overestimate the required bit rate because of the inherent uncertainty and variability of real-world traffic. To tackle this challenge, we introduce uFFP, a stochastic Fronthaul provisioning strategy tailored to accommodate uncertain traffic demands and mitigate expenditure wastage. Relying on historical data, uFFP assesses the statistical characteristics of traffic patterns to better estimate the Fronthaul bit rate. Subsequently, we employ chance-constrained optimization to reformulate the uFFP problem, which is approximately solved using a convex relaxation approach known as uFFPA, and optimally solved using a deep reinforcement learning (DRL) approach called uFFPL. Simulation results demonstrate that our proposed solutions achieve significant cost savings, reducing TCO by 39.79% compared to the baseline. | 10.1109/TNSM.2025.3553417 |
Giulio Sidoretti, Lorenzo Bracciale, Stefano Salsano, Hesham Elbakoury, Pierpaolo Loreti | DIDA: Distributed In-Network Intelligent Data Plane for Machine Learning Applications | 2025 | Early Access | Switches; Feature extraction; Machine learning; Data mining; Distributed databases; Data centers; Telemetry; Scalability; Training; Network interfaces; Intelligent Data Plane; Computer network management; Distributed computing; Machine learning; Network security; Telemetry | Recent advances in network switch designs have enabled machine learning inference directly within the switch at line speed. However, hardware constraints limit switches' ability to track the stateful features essential for accurate inference, as the demand for these features grows rapidly with line rates. To address this, we propose DIDA, a distributed in-network machine learning approach. In DIDA, feature extraction occurs at the host, features are transmitted via in-band telemetry, and inference is performed on the switches. In this paper, we evaluate the effectiveness and efficiency of this architecture. We examine its impact on network bandwidth, CPU and memory usage at the host, and its robustness across different feature sets and deep neural network classifiers. | 10.1109/TNSM.2025.3548477 |
Jinghui Chen, Qingqing Cai, Gang Sun, Hongfang Yu, Dusit Niyato | CRP: A Cluster-Based Routing Protocol for Lightweight Nodes in Payment Channel Networks | 2025 | Early Access | Routing protocols; Blockchains; Routing; Topology; Throughput; Scalability; Internet of Things; Network topology; Lightning; Sun; Internet of Things; blockchain; payment channel networks; clustering; routing | Although blockchain empowers the IoT trading market and presents new development opportunities for IoT, the scalability issues of blockchain limit its application in this area. Payment Channel Networks (PCNs) have emerged as a promising solution to these scalability issues. With the help of routing protocols, two users can utilize payment channels to conduct off-chain transactions. However, most Payment Channel Network (PCN) routing protocols overlook the scalability of PCNs, resulting in substantial storage, communication, and computational overhead for lightweight nodes such as IoT devices. Additionally, frequent utilization of a payment channel can quickly exhaust the channel's balance, leading to congestion and causing subsequent payments to fail. Channel congestion restricts the throughput of PCNs, yet most PCN routing protocols lack designs for channel congestion control. In this paper, we propose a Cluster-based scalable and high-throughput Routing Protocol (CRP) to enhance the scalability and throughput of PCNs. CRP organizes PCNs into clusters to reduce the average routing table size, thereby alleviating users' storage, communication, and computational overhead. Furthermore, CRP aims to minimize the maximum channel congestion when selecting payment routes, thereby improving throughput. Extensive simulations demonstrate that CRP achieves high scalability and throughput compared to state-of-the-art PCN routing protocols. (An illustrative min-max route-selection sketch appears after the table.) | 10.1109/TNSM.2025.3555174 |
Milan Groshev, Lanfranco Zanzi, Carmen Delgado, Xi Li, Antonio de la Oliva, Xavier Costa-Pérez | Energy-Aware Joint Orchestration of 5G and Robots: Experimental Testbed and Field Validation | 2025 | Early Access | Robots; Robot kinematics; 5G mobile communication; Robot sensing systems; Sensors; Resource management; Real-time systems; Energy consumption; Testing; Peer-to-peer computing; 5G; Orchestration; Robotics; Optimization; Offloading; Energy Efficient | 5G mobile networks introduce a new dimension for connecting and operating mobile robots in outdoor environments, leveraging the cloud-native and offloading features of 5G networks to enable fully flexible and collaborative cloud robot operations. However, the limited battery life of robots remains a significant obstacle to their effective adoption in real-world exploration scenarios. This paper explores, via field experiments, the potential energy-saving gains of a joint orchestration of 5G and the Robot Operating System (ROS) that coordinates multiple 5G-connected robots both in terms of navigation and sensing, and optimizes their cloud-native service resource utilization while minimizing total resource and energy consumption on the robots based on real-time feedback. We designed, implemented, and evaluated the proposed solution in an experimental testbed composed of commercial off-the-shelf robots and a local 5G infrastructure deployed on a campus. The experimental results demonstrated that it significantly outperforms state-of-the-art approaches in terms of energy savings by offloading demanding computational tasks to the 5G edge infrastructure and by dynamically managing the energy of on-board sensors (e.g., switching them off when they are not needed). This strategy achieves approximately 15% energy savings on the robots, thereby extending battery life, which in turn allows for longer operating times and better resource utilization. | 10.1109/TNSM.2025.3555126 |
Ximin Li, Xiaodong Xu, Guo Wei, Xiaowei Qin | Unveiling Real-Time Stalling Detection for Video Streaming Traffic | 2025 | Early Access | Feature extraction; Streaming media; Real-time systems; Quality of experience; Monitoring; Training; Sensitivity; Accuracy; Machine learning; Predictive models; Stalling detection; real-time monitoring; HTTP adaptive streaming; machine learning; quality of experience | In the rapidly evolving field of video traffic, ensuring a smooth video streaming experience for users is critical for network operators. Accurately and promptly detecting stalling events, a significant indicator of poor quality of experience, remains challenging due to the varying detection time resolutions of existing techniques, which often detect stalls once per video chunk, or every five or ten seconds. This paper makes three key contributions. First, we introduce the concept of detection granularities to enable fair performance comparisons and reveal their impact on detection performance from the data sampling perspective. Second, we propose a novel feature extraction approach that captures both packet-level and chunk-level features in a unified sequential manner to effectively detect stalling events. Third, a novel sample reweighting method is proposed to address the detection timeliness problem by focusing more on difficult samples near the start or end of a stall. Experimental results on both video-on-demand and live streaming traces demonstrate that our feature extraction approach achieves an average improvement of 5.3% in F1-score and 4.7% in coverage rate, and reduces stalling response time by 0.4 seconds compared to existing techniques. Additionally, the sample reweighting method further improves detection sensitivity without compromising F1-scores for any of the detection techniques. | 10.1109/TNSM.2025.3554822 |
Sławomir Hanczewski, Maciej Stasiak, Joanna Weissenberg, Michał Weissenberg | Modelling of Heterogeneous 5G Network Slice for Smart Real-Time Railway Communications | 2025 | Early Access | Real-time systems; Rail transportation; Analytical models; 5G mobile communication; Quality of service; Streams; Monitoring; Data communication; Delays; Focusing; railway control system; 5G slice; real-time systems; critical flows; Markov chain | This paper presents an analytical model of a railway mobile communications system. In line with recent trends, the system's operation relies on 5G network resources (slices). It efficiently manages critical data streams (flows) that meet the stringent requirements of real-time systems (systems that handle hard and soft real-time services). Additionally, the proposed solution accommodates data with less stringent QoS parameters than real-time streams. The analytical model approximates the flow-servicing process in the system and was developed from the analysis of a Markov chain whose states correspond to the states of the examined system. Due to the approximate nature of the analytical model, its results were compared with those obtained from simulation experiments. | 10.1109/TNSM.2025.3547762 |
Abdullah Aljumah, Tariq Ahamed Ahanger, Imdad Ullah | Challenges in Securing UAV IoT Framework: Future Research Perspective | 2025 | Early Access | Internet of Things; Security; Autonomous aerial vehicles; Privacy; Wind; Protocols; Drag; Meteorology; Energy consumption; Wireless sensor networks; Unmanned Aerial Vehicle; Security; Internet of Things | Unmanned Aerial Vehicles (UAVs) offer immense potential for enabling novel applications in a variety of domains, including security, military, surveillance, medicine, and traffic monitoring. The prevalence of UAV systems stems from their ability to collaborate and accomplish tasks efficiently and effectively. UAVs embedded with camcorders, GPS receivers, and wireless sensors hold enormous promise for realizing Internet of Things (IoT) service delivery across vast domains, establishing an airborne field of the IoT when empowered with the communication protocols of LTE, 4G, and 5G/6G networks. However, numerous difficulties must be addressed before UAVs can be used effectively, namely privacy, security, and administration. In the current article, novel UAV-specific domains enabled by IoT and 5G/6G technology are explored. Moreover, the presented technique assesses sensor requirements and provides an overview of fleet management systems that address aerial networking, privacy, and security concerns. Furthermore, an IoT-5G/6G-based framework that can be deployed on UAVs is proposed. Finally, in a heterogeneous computational platform, the proposed framework provides a complete IoT architecture that enables secure UAVs. | 10.1109/TNSM.2025.3554354 |
Xuan Zheng, Xiuli Ma, Lifu Xu, Yanliang Jin, Chun Ke | Augmentation and Fusion: Multi-Feature Fusion Based Self-Supervised Learning Approach for Traffic Tables | 2025 | Early Access | Feature extraction; Classification algorithms; Cryptography; Data augmentation; Payloads; Natural language processing; Machine learning algorithms; Data mining; Contrastive learning; Encoding; encrypted traffic classification; self-supervised learning; contrastive learning; tabular data | As modern networks face increasing demands for superior service and management, Encrypted Traffic Classification (ETC) technology has become increasingly crucial. Because traffic data is easy to collect but hard to label, self-supervised ETC methods have attracted more and more attention. Compared to popular methods based on traffic images and text, traffic tables are simple to construct and better suited to the flow-packet structure. However, existing methods have two problems: (1) the lack of data augmentation methods for tables weakens the performance of self-supervised learning, and (2) most methods focus only on a single feature and cannot make full use of the distinct features of traffic tables, such as temporal features. To solve these problems, we propose a multi-feature-fusion-based self-supervised learning approach for traffic tables. A new data augmentation method called Random Subsets Selection (RSS) is introduced alongside an effective fusion approach. In this way, temporal features can be successfully extracted and concatenated with the latent representations of the input traffic tables. Experimental results on two open datasets and one self-collected dataset show that, on imbalanced datasets, our method can effectively solve ETC problems even with a small amount of labeled data. Empirically, both classification performance and processing speed are improved. Specifically, compared to the state-of-the-art tabular self-supervised learning method, our method achieves better classification results on all datasets while roughly doubling processing speed, from 1.83 to 3.76 tables per second. | 10.1109/TNSM.2025.3554824 |
Qiaolun Zhang, Omran Ayoub, Ruikun Wang, Emanuele Viadana, Francesco Musumeci, Massimo Tornatore | Capacity Sharing for Survivable Virtual Network Mapping Against Double-Link Failures | 2025 | Early Access | Quality of service; Sustainable development; Resilience; Numerical models; Network slicing; Heuristic algorithms; Costs; Training; Integer linear programming; Image color analysis; 6G Networks; Network Slicing; Survivable Virtual Network Mapping; Capacity Sharing; Integer Linear Programming | Network slicing, a key technology for 6G communications, allows diverse services to coexist on a shared physical infrastructure by allocating different resources to virtual networks (VNs, or equivalently, network slices) mapped over the shared infrastructure. However, it presents challenges in terms of failure survivability, as the failure of one physical element can lead to the failure of multiple VNs mapped to it, making the survivability of ultra-reliable services against multiple failures a crucial research topic. In this study, we investigate the Survivable Virtual Network Mapping (SVNM) problem, focusing on double-link failures. SVNM against double-link failures can be guaranteed by enforcing appropriate SVNM constraints (e.g., no double-link failure may disconnect a virtual node from the other virtual nodes in the same VN), but this approach requires excessive redundant capacity deployment. To address this issue, we propose a novel technique called SVNM with Inter-VN Capacity Sharing (SINC), which allows capacity sharing across different VNs to improve survivability against double-link failures with efficient spare capacity utilization. Since SINC may fail to reconnect some VNs due to insufficient spare capacity, we combine it with a spare slice (a VN fully dedicated to enhancing survivability) to create an advanced version, SINC+, which improves survivability by reconnecting VNs with additional spare capacity. We then formulate both SINC and SINC+ as Integer Linear Programming (ILP) models, which provide optimal solutions. Moreover, to address the computational limitations of the ILP formulation, we develop scalable heuristic algorithms applicable to both SINC and SINC+ with a small optimality gap. Our numerical results show that VN availability using SINC improves by up to 9.48% over SVNM, with the same total link resource consumption (TLRC). Furthermore, SINC+ ensures VN survivability against all potential double-link failures, and the additional TLRC can be reduced to less than 1% in the presence of a high number of VNs or large nodal connectivity, underscoring the sustainability of our proposed solutions. | 10.1109/TNSM.2025.3555678 |
Xiaochang Guo, Gang Liu, Haoyan Ling, Lei Meng, Tao Wang | BECHAIN: A Sharding Blockchain With Higher Security | 2025 | Early Access | Blockchains; Security; Sharding; Scalability; Throughput; Reliability; Adaptation models; Protocols; Consensus protocol; Resource management; Blockchain; Sharding; Byzantine Fault Tolerance; Consensus reliability | Sharding achieves parallel processing of transactions by dividing the network into multiple independent parts, namely shards, significantly increasing the throughput of the blockchain system and reducing transaction processing latency, thereby improving its scalability. Although sharding enhances blockchain performance, it also introduces new security challenges, as an individual shard is more vulnerable to attacks than the entire network, potentially compromising its consensus reliability. To address these challenges, we propose BECHAIN, a sharding blockchain system with strong Byzantine node tolerance. It incorporates a series of effective security measures, such as improved node allocation methods, enhanced inter-shard collaborative defense mechanisms, and refined malicious node monitoring strategies, to bolster the blockchain system's defense against malicious nodes. Key measures include random node allocation, a node reputation scoring model, a consensus supervision chain, and shard reconfiguration. Simulation results show that BECHAIN achieves linear scalability and enhances system security by increasing the consensus success rate. | 10.1109/TNSM.2025.3556456 |
Rui Tang, Ruizhi Zhang, Yongjun Xu, Chuan Liu, Chongwen Huang, Chau Yuen | Resource Allocation for Underwater Acoustic Sensor Networks With Partial Spectrum Sharing: When Optimization Meets Deep Reinforcement Learning | 2025 | Early Access | Resource management; Signal processing algorithms; Underwater acoustics; Approximation algorithms; Optimization; Inference algorithms; Bit rate; Training; NOMA; Interference cancellation; Underwater acoustic sensor network; partial spectrum sharing; power allocation; spectrum assignment; optimization theory; deep reinforcement learning | To utilize the limited acoustic spectrum while combating the harsh underwater propagation environment, we incorporate partial spectrum sharing into an underwater acoustic sensor network and aim to maximize the minimum data collection rate among all underwater sensor nodes through joint power allocation and spectrum assignment. To cope with the non-convex optimization problem, we propose a Hybrid Model-based and Data-based Resource Allocation (HMDRA) scheme: 1) Under any given spectrum assignment strategy, we analyze the impact of partial spectrum sharing and imperfect successive interference cancellation on baseband signal processing, and formulate a power allocation problem that is solved by the bisection method and Lagrange dual theory. 2) Based on the optimal power allocation strategy, a gradient-free genetic algorithm (GA) is first adopted to approach the optimal solution of the model-free spectrum assignment problem by nearly enumerating the solution space. To reduce complexity, we further propose a deep reinforcement learning (DRL)-based algorithm that obtains an efficient solution by traversing a deep neural network-based policy learned in the training stage. Simulation results show that, compared with the GA-based algorithm, the average execution time of the DRL-based algorithm is reduced by five orders of magnitude, to 0.7076 seconds, at the cost of approximately 6 percent performance loss. (A generic bisection sketch for such max-min power allocation appears after the table.) | 10.1109/TNSM.2025.3556498 |
Andre V. S. Xavier, Raul C. Almeida, Leonardo D. Coelho, Joaquim F. Martins-Filho | Classification-Model Applied to Routing Problem in Flexible-Grid Optical Networks | 2025 | Early Access | Classification algorithms; Routing; Heuristic algorithms; Optical fiber networks; Resource management; Proposals; Training; Artificial neural networks; Clustering algorithms; Network topology; Machine Learning; Classification; Routing; Optical Networks | In recent years, machine learning algorithms have been widely used in optical networks to solve complex problems such as routing and resource allocation. In routing, modulation, and spectrum allocation (RMSA) problems, machine learning algorithms can learn patterns in historical data and find good solutions without having to explore all existing solutions. In this paper, we propose an algorithm based on a classification model to solve the routing problem in elastic optical networks. This algorithm predicts the route from the call request information and the state of the network links. The dataset used to train the proposed model is obtained from a dynamic routing algorithm. With this dataset, two versions of the proposal are evaluated, using different sets of routes selected according to the routes' frequency distribution. Three network topologies are used to evaluate the routing algorithms: a six-node network, NSFNET, and the European optical network. The results are compared with two other routing algorithms: Yen's algorithm (k shortest routes) and the spectrum continuity based shortest path (SCSP) algorithm; the latter is also used to train our proposal. Our proposal outperformed Yen's algorithm in all three network topologies in terms of blocking probability. Compared to the SCSP algorithm, our proposal obtained average performance gains of 15% and 25% in the six-node and NSFNET topologies, respectively, and an average gain of 23.19% at the lowest network loads in the European topology. In all network topologies considered, our proposal reduced the time spent finding the RMSA solution compared to the SCSP algorithm. | 10.1109/TNSM.2025.3556770 |
Panagiotis Nikolaidis, John Baras | Robust Resource Sharing in Network Slicing via Hypothesis Testing | 2025 | Early Access | Resource management; Quality of service; Testing; Service level agreements; Network slicing; Bandwidth; Traffic control; Costs; Delays; Anomaly detection; network slicing; resource sharing; multiplexing; overbooking; isolation; anomaly detection; LTE; 5G | In network slicing, the network operator needs to satisfy the service level agreements of multiple slices at the same time and on the same physical infrastructure. To do so with reduced provisioned resources, the operator may consider resource sharing mechanisms. However, each slice then becomes susceptible to traffic surges in other slices, which degrades performance isolation. To maintain both high efficiency and high isolation, we propose introducing hypothesis testing into resource sharing. Our approach comprises two phases. In the trial phase, the operator obtains a stochastic model for each slice that describes its normal behavior, provisions resources, and then signs the service level agreements. In the regular phase, whenever there is resource contention, hypothesis testing is conducted to check which slices follow their normal behavior. Slices that fail the test are excluded from resource sharing to protect the well-behaved ones. We test our approach on a mobile traffic dataset. Results show that our approach fortifies the service level agreements against unexpected traffic patterns and achieves high efficiency via resource sharing. Overall, our approach provides an appealing tradeoff between efficiency and isolation. (A minimal hypothesis-testing sketch appears after the table.) | 10.1109/TNSM.2025.3556752 |
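
Below is a minimal Python sketch of the greedy aggregator-placement step referenced in the Ruggeri et al. entry: at each round, pick the edge node minimizing an estimated per-round time (the slowest client's local training plus model exchange, plus the node's aggregation time). The cost model, function names, and all numbers are assumptions for illustration, not the paper's actual heuristic.

```python
# Hypothetical greedy placement of an FL aggregation node (all names and
# numbers invented; the paper's cost model and heuristic may differ).

def per_round_time(node, train_time, exchange, agg_time):
    """Estimated per-round time if `node` hosts the aggregator:
    slowest client's (local training + model exchange) plus aggregation."""
    slowest = max(train_time[c] + exchange[(c, node)] for c in train_time)
    return slowest + agg_time[node]

def greedy_placement(nodes, train_time, exchange, agg_time):
    """Pick, for the current round, the edge node minimizing per-round time."""
    return min(nodes, key=lambda n: per_round_time(n, train_time, exchange, agg_time))

# Toy round: two candidate edge nodes, three selected clients.
nodes = ["edge1", "edge2"]
train_time = {"c1": 2.0, "c2": 3.5, "c3": 1.5}   # local training times (s)
exchange = {("c1", "edge1"): 0.2, ("c2", "edge1"): 0.8, ("c3", "edge1"): 0.3,
            ("c1", "edge2"): 0.6, ("c2", "edge2"): 0.1, ("c3", "edge2"): 0.4}
agg_time = {"edge1": 0.5, "edge2": 1.0}          # aggregation times (s)
print(greedy_placement(nodes, train_time, exchange, agg_time))  # -> edge2
```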
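
The next sketch illustrates the min-max congestion idea referenced in the CRP entry: among candidate payment routes, select the one whose most-congested channel after the payment is least congested. The congestion metric (used + amount) / capacity and all identifiers are assumptions, not necessarily the paper's definitions.

```python
# Hypothetical min-max congestion route selection for a payment of `amount`.

def route_congestion(route, amount, used, capacity):
    """Max post-payment congestion over the channels of a route."""
    return max((used[ch] + amount) / capacity[ch] for ch in route)

def pick_route(candidates, amount, used, capacity):
    """Among feasible candidate routes, minimize the maximum channel congestion."""
    feasible = [r for r in candidates
                if all(used[ch] + amount <= capacity[ch] for ch in r)]
    if not feasible:
        return None  # the payment would exhaust a channel on every route
    return min(feasible, key=lambda r: route_congestion(r, amount, used, capacity))

# Toy PCN: routes are lists of channel ids (values invented).
used = {"ab": 4, "bc": 7, "ad": 1, "dc": 2}
capacity = {"ab": 10, "bc": 10, "ad": 10, "dc": 10}
print(pick_route([["ab", "bc"], ["ad", "dc"]], amount=2, used=used, capacity=capacity))
# -> ['ad', 'dc'] (worst channel ends at 0.4 congestion vs 0.9 on the other route)
```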
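
For the power-allocation step referenced in the Tang et al. entry, here is a generic, textbook-style bisection for max-min rate power allocation under a total power budget. The rate model r_i = log2(1 + g_i p_i / n_i) and all parameter values are assumptions for illustration, not the paper's underwater channel model.

```python
# Generic bisection for max-min rate power allocation under a power budget.
# For a target min-rate t, node i needs p_i(t) = (2**t - 1) * noise_i / gain_i,
# which is increasing in t, so the largest feasible t is found by bisection.

def maxmin_rate(gains, noises, p_total, tol=1e-6):
    def power_needed(t):
        return sum((2.0 ** t - 1.0) * n / g for g, n in zip(gains, noises))
    lo, hi = 0.0, 1.0
    while power_needed(hi) < p_total:      # grow upper bound until infeasible
        hi *= 2.0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if power_needed(mid) <= p_total:
            lo = mid                       # target rate mid is achievable
        else:
            hi = mid
    powers = [(2.0 ** lo - 1.0) * n / g for g, n in zip(gains, noises)]
    return lo, powers

# Toy instance (gains, noises, and budget invented).
rate, powers = maxmin_rate(gains=[0.8, 0.3, 0.5], noises=[1.0, 1.0, 1.0], p_total=10.0)
print(round(rate, 3), [round(p, 3) for p in powers])
```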
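
Finally, a minimal sketch of the two-phase hypothesis-testing idea referenced in the Nikolaidis and Baras entry, assuming (purely for illustration) a Gaussian model of per-slice demand fitted in the trial phase and a one-sided test at contention time; the paper's actual stochastic models and test statistic may differ.

```python
import statistics
from math import erf, sqrt

# Trial phase: fit a Gaussian to each slice's demand samples.
# Regular phase: on contention, exclude any slice whose current demand is
# implausibly high under its fitted normal-behavior model.

def fit_normal_model(samples):
    return statistics.mean(samples), statistics.stdev(samples)

def p_value_high(demand, mean, std):
    """One-sided p-value of observing a demand at least this large."""
    z = (demand - mean) / std
    return 1.0 - 0.5 * (1.0 + erf(z / sqrt(2.0)))

def well_behaved_slices(models, current_demand, alpha=0.01):
    """Slices passing the test keep their resource-sharing privileges."""
    return [s for s, (mean, std) in models.items()
            if p_value_high(current_demand[s], mean, std) >= alpha]

# Toy trial-phase samples (invented) and one contention event.
models = {"slice_a": fit_normal_model([10, 12, 11, 13, 12]),
          "slice_b": fit_normal_model([20, 22, 21, 19, 20])}
print(well_behaved_slices(models, {"slice_a": 12, "slice_b": 45}))  # -> ['slice_a']
```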