Last updated: 2026-03-28 05:01 UTC
All documents
Number of pages: 160
| Author(s) | Title | Year | Publication | Keywords | Abstract | DOI |
|---|---|---|---|---|---|---|
| Raffaele Carillo, Francesco Cerasuolo, Giampaolo Bovenzi, Domenico Ciuonzo, Antonio Pescapé | A Federated and Incremental Network Intrusion Detection System for IoT Emerging Threats | 2026 | Early Access | Training Incremental learning Adaptation models Internet of Things Convolutional neural networks Reviews Payloads Network intrusion detection Long short term memory Federated learning Network Intrusion Detection Systems Internet of Things Federated Learning Class Incremental Learning 0-day attacks | Ensuring network security is increasingly challenging, especially in the Internet of Things (IoT) domain, where threats are diverse, rapidly evolving, and often device-specific. Hence, Network Intrusion Detection Systems (NIDSs) require (i) being trained on network traffic gathered at different collection points to cover the attack traffic heterogeneity, (ii) continuously learning emerging threats (viz., 0-day attacks), and (iii) taking attack countermeasures as soon as possible. In this work, we aim to improve Artificial Intelligence (AI)-based NIDS design & maintenance by integrating Federated Learning (FL) and Class Incremental Learning (CIL). Specifically, we devise a Federated Class Incremental Learning (FCIL) framework, suited for early-detection settings, that supports decentralized and continual model updates, investigating the non-trivial intersection of FL algorithms with state-of-the-art CIL techniques to enable scalable, privacy-preserving training in highly non-IID environments. We evaluate FCIL on three IoT datasets across different client scenarios to assess its ability to learn new threats and retain prior knowledge. The experiments assess potential key challenges in generalization and few-sample training, and compare NIDS performance to monolithic and centralized baselines. | 10.1109/TNSM.2026.3675031 |
| Ei Theingi, Lokman Sboui, Diala Naboulsi | Adaptive and Energy-Efficient Deployment of Robotic Airborne Base Stations: A Deep Reinforcement Learning Approach | 2026 | Early Access | | The increasing energy demands of future wireless networks drive the need for intelligent and adaptive deployment strategies. Traditional methods often lack the flexibility required to handle the spatio-temporal fluctuations inherent in modern communication environments. To address this challenge, we investigate the energy-efficient deployment of Robotic Airborne Base Stations (RABSs) in practical scenarios, such as managing sudden traffic surges during large-scale public events and providing emergency coverage in disaster-stricken areas where terrestrial infrastructure is compromised. We propose a novel Deep Reinforcement Learning (DRL)-based framework for an energy-efficient deployment of multiple RABSs. Unlike existing approaches, our framework features both centralized and decentralized Actor-Critic DRL, enabling scalable and adaptive decision-making. The centralized model leverages global network information to optimize the collective deployment of RABSs, while the multi-agent decentralized approach allows RABSs to make independent yet coordinated decisions based on local observations, ensuring scalability in large-scale networks. In addition, we introduce a state-action representation that captures spatio-temporal traffic variations and energy consumption dynamics. Our simulations validate the effectiveness of the proposed framework, demonstrating significant improvements in energy efficiency and adaptability compared to heuristic, Gauss-Markov, and Q-Learning models. Furthermore, comparison with an exhaustive search benchmark confirms that our approach achieves optimal energy efficiency with significantly lower computational complexity. | 10.1109/TNSM.2026.3678488 |
| Amin Mohajer, Abbas Mirzaei, Mostafa Darabi, Xavier Fernando | Joint SLA-Aware Task Offloading and Adaptive Service Orchestration with Graph-Attentive Multi-Agent Reinforcement Learning | 2026 | Early Access | Quality of service Resource management Observability Training Delays Job shop scheduling Dynamic scheduling Bandwidth Vehicle dynamics Thermal stability Edge intelligence network slicing QoS-aware scheduling graph attention networks adaptive resource allocation | Coordinated service offloading is essential to meet Quality-of-Service (QoS) targets under non-stationary edge traffic. Yet conventional schedulers lack dynamic prioritization, causing deadline violations for delay-sensitive, lower-priority flows. We present PRONTO, a multi-agent framework with centralized training and decentralized execution (CTDE) that jointly optimizes SLA-aware offloading and adaptive service orchestration. PRONTO builds on Twin Delayed Deep Deterministic Policy Gradient (TD3) and incorporates spatiotemporal, topology-aware graph attention with top-K masking and temperature scaling to encode neighborhood influence at linear coordination cost. Gated Recurrent Units (GRUs) filter temporal features, while a hybrid reward couples task urgency, SLA satisfaction, and utilization costs. A priority-aware slicing policy divides bandwidth and compute between latency-critical and throughput-oriented flows. To improve robustness, we employ stability regularizers (temporal smoothing and confidence-weighted neighbor alignment), mitigating action jitter under bursts. Extensive evaluations show superior QoS and channel utilization, with up to 27.4% lower service delay and over 18% higher SLA Satisfaction Rate (SSR) compared with strong baselines. | 10.1109/TNSM.2026.3673188 |
| Junyan Guo, Shuang Yao, Yue Song, Le Zhang, Xu Han, Liyuan Chang | EF-CPPA: Escrow-Free Conditional Privacy-Preserving Authentication Scheme for Real-Time Emergency Messages in Smart Grids | 2026 | Early Access | Authentication Smart grids Security Privacy Smart meters Logic gates Real-time systems Vehicle dynamics Time factors Power system reliability Smart grid emergency message authentication conditional privacy preservation escrow-free key generation unlinkability dynamic joining and revocation | Timely and secure emergency message delivery is critical to resilient smart-grid operation and rapid disturbance response. However, existing schemes remain inadequate, leaving smart grids vulnerable to security and privacy threats and causing verification bottlenecks, particularly when nonlinear emergency measurements cannot be homomorphically aggregated, which prevents bandwidth-efficient in-network aggregation and scalable batch verification. We propose EF-CPPA, an escrow-free, conditional privacy-preserving authentication scheme for real-time emergency messaging in smart grids. EF-CPPA enables smart meters to deliver authenticated emergency messages to the CC via power gateways verifiable as legitimate relays, while ensuring the confidentiality, integrity, and unlinkability of embedded nonlinear measurements. EF-CPPA further provides conditional anonymity with accountable tracing, as well as origin authentication, intra-domain verification, and scalable batch verification under bursty multi-meter messaging. An ECDLP-based escrow-free key-generation mechanism reduces reliance on the CC and enables efficient node joining and revocation. Security analysis shows that EF-CPPA achieves existential unforgeability under chosen-message attacks (EUF-CMA) and satisfies the stated security and privacy requirements. Performance evaluation demonstrates low computational, communication, energy, and node-management overhead, making EF-CPPA suitable for security-critical, time-sensitive smart-grid emergency messaging. | 10.1109/TNSM.2026.3672754 |
| Deemah H. Tashman, Soumaya Cherkaoui | Trustworthy AI-Driven Dynamic Hybrid RIS: Joint Optimization and Reward Poisoning-Resilient Control in Cognitive MISO Networks | 2026 | Early Access | Reconfigurable intelligent surfaces Reliability Optimization Security MISO Array signal processing Vectors Satellites Reflection Interference Beamforming cascaded channels cognitive radio networks deep reinforcement learning dynamic hybrid reconfigurable intelligent surfaces energy harvesting poisoning attacks | Cognitive radio networks (CRNs) are a key mechanism for alleviating spectrum scarcity by enabling secondary users (SUs) to opportunistically access licensed frequency bands without harmful interference to primary users (PUs). To address unreliable direct SU links and energy constraints common in next-generation wireless networks, this work introduces an adaptive, energy-aware hybrid reconfigurable intelligent surface (RIS) for underlay multiple-input single-output (MISO) CRNs. Distinct from prior approaches relying on static RIS architectures, our proposed RIS dynamically alternates between passive and active operation modes in real time according to harvested energy availability. We also model our scenario under practical hardware impairments and cascaded fading channels. We formulate and solve a joint transmit beamforming and RIS phase optimization problem via the soft actor-critic (SAC) deep reinforcement learning (DRL) method, leveraging its robustness in continuous and highly dynamic environments. Notably, we conduct the first systematic study of reward poisoning attacks on DRL agents in RIS-enhanced CRNs, and propose a lightweight, real-time defense based on reward clipping and statistical anomaly filtering. Numerical results demonstrate that the SAC-based approach consistently outperforms established DRL baselines, and that the dynamic hybrid RIS strikes a superior trade-off between throughput and energy consumption compared to fully passive and fully active alternatives. We further show the effectiveness of our defense in maintaining SU performance even under adversarial conditions. Our results advance the practical and secure deployment of RIS-assisted CRNs, and highlight crucial design insights for energy-constrained wireless systems. | 10.1109/TNSM.2026.3660728 |
| Jing Zhang, Chao Luo, Rui Shao | MTG-GAN: A Masked Temporal Graph Generative Adversarial Network for Cross-Domain System Log Anomaly Detection | 2026 | Early Access | Anomaly detection Adaptation models Generative adversarial networks Feature extraction Data models Load modeling Accuracy Robustness Contrastive learning Chaos Log Anomaly Detection Generative Adversarial Networks (GANs) Temporal Data Analysis | Anomaly detection of system logs is crucial for the service management of large-scale information systems. Nowadays, log anomaly detection faces two main challenges: 1) capturing evolving temporal dependencies between log events to adaptively tackle emerging anomaly patterns, and 2) maintaining high detection capabilities across varied data distributions. Existing methods rely heavily on domain-specific data features, making it challenging to handle the heterogeneity and temporal dynamics of log data. This limitation restricts the deployment of anomaly detection systems in practical environments. In this article, a novel framework, Masked Temporal Graph Generative Adversarial Network (MTG-GAN), is proposed for both conventional and cross-domain log anomaly detection. The model enhances the detection capability for emerging abnormal patterns in system log data by introducing an adaptive masking mechanism that combines generative adversarial networks with graph contrastive learning. Additionally, MTG-GAN reduces dependency on specific data distributions and improves model generalization by using diffused graph adjacency information derived from the temporal relevance of event sequences, which helps improve cross-domain detection performance. Experimental results demonstrate that MTG-GAN outperforms existing methods on multiple real-world datasets in both conventional and cross-domain log anomaly detection. | 10.1109/TNSM.2026.3654642 |
| Archana Ojha, Om Jee Pandey, Prasenjit Chanak | Energy-Efficient Network Cut Detection and Recovery Mechanism for Cluster-Based IoT Networks | 2026 | Early Access | Wireless sensor networks Data collection Energy consumption Relays Internet of Things Delays Data communication Detection algorithms Smart cities Routing Wireless sensor networks (WSNs) internet of things (IoT) data routing network cut detection and recovery reinforcement learning brain storm optimization (RLBSO) mobile data collector (MDC) | Recently, the Internet of Things (IoT) has found widespread applications in diverse fields, including environmental monitoring, Industry 4.0, smart cities, and smart agriculture. In these applications, sensor nodes form Wireless Sensor Networks (WSNs) and collect data from the monitoring environment. Sensor nodes are vulnerable to various faults, including battery depletion and hardware malfunctions. These faulty nodes cut/partition the network into several isolated segments. Therefore, several non-faulty nodes become disconnected from the Base Station (BS)/Sink and are unable to transmit their data to the BS. This leads to the early demise of the network. Network cuts also significantly degrade overall network performance. Once the network is divided into isolated segments, it is very difficult to detect and collect data from them. Therefore, this paper proposes a Mobile Data Collector (MDC)-based data-gathering approach for WSNs to collect data from isolated segments. This paper proposes a novel MDC-based network cut detection algorithm that identifies the formation of network cuts in WSNs. A network recovery algorithm is also proposed to enable data collection from the isolated segments. Furthermore, this paper proposes a Reinforcement learning Brain Storm Optimization (RLBSO) algorithm for optimal selection of Rendezvous Points (RPs) and optimal MDC path design. It significantly reduces data-gathering time across isolated network segments. The simulation and testbed results show that the proposed approach outperforms existing state-of-the-art approaches in terms of network lifetime, data collection ratio, energy consumption, and latency. | 10.1109/TNSM.2026.3677868 |
| Jianwei Zhang, Bowen Cui | Bandwidth-Delay Optimal Segment Routing: Upper-Bound and Lower-Bound Algorithms | 2026 | Early Access | Routing Optimization Quality of service Delays Complexity theory Bandwidth Topology Network topology Measurement Approximation algorithms Segment routing quality-of-service routing multicriteria optimization labeling algorithm | Segment routing (SR) is a novel source routing paradigm that enables network programmability. However, existing research rarely considers multicriteria optimization problems in SR networks. Given the critical role of bandwidth and delay in quality-of-service (QoS) routing, we formally define the bandwidth-delay optimal SR (BDoSR) problem for the first time and prove its NP-hardness. By leveraging the label correcting algorithm schema, we design a suite of polynomial-time algorithms, including an upper-bound algorithm (BDoSR-UB) and a lower-bound algorithm (BDoSR-LB). BDoSR-UB enables rapid estimation of the optimal solution while BDoSR-LB is accuracy-adjustable and delivers (near-)optimal feasible solutions. We rigorously analyze their performance gap through carefully constructed network examples, providing deep insights into the adjustable parameters of BDoSR-LB. Finally, we validate our algorithms on realistic network topologies, demonstrating that both BDoSR-UB and BDoSR-LB frequently converge to the optimal solution in practice while offering superior computational efficiency compared to existing approaches. | 10.1109/TNSM.2026.3678190 |
| Henghua Zhang, Jue Chen, Haidong Peng, Junru Chen | MAT4PM: Machine Learning-Guided Adaptive Threshold Control for P4-based Monitoring in SDNs | 2026 | Early Access | Monitoring Switches Accuracy Control systems Real-time systems Scalability Data collection Adaptation models Telemetry Process control Software-Defined Networking Programmable Data Plane Machine Learning Network Monitoring P4 | This paper presents MAT4PM, a P4-based proactive monitoring framework designed for Software-Defined Networking (SDN). This is the first monitoring framework that combines Programmable Data Plane (PDP) capabilities for event-driven data collection with control plane intelligence for real-time threshold optimization. The architecture consists of a lightweight P4-based monitoring module deployed at the switch, a Machine Learning (ML) inference engine running at the controller, and a P4Runtime feedback channel for real-time threshold updates. Traffic features are leveraged to predict optimal monitoring thresholds, which are then synchronized with the data plane. A composite cost function is introduced to jointly consider monitoring error and communication overhead, guiding the model toward a balanced trade-off between accuracy and efficiency. Experimental evaluation on BMv2 software switches demonstrates that, compared to static threshold strategies, MAT4PM reduces monitoring error to 7.0% and achieves a 5.6% reduction in overall cost, while maintaining sub-millisecond inference latency and minimal resource consumption. These results demonstrate the practical viability and scalability of MAT4PM in SDN environments. | 10.1109/TNSM.2026.3677416 |
| Yanli Liu, Yue Pang, Yidi Wang, Shengnan Li, Jin Li, Min Zhang, Danshi Wang | Developing A Domain-Specific LLM for Optical Networks: A Reinforcement Learning-Based Fine-Tuning Framework | 2026 | Early Access | Optical fiber networks Cognition Accuracy Location awareness Reinforcement learning Adaptation models Semantics Optimization Maintenance Training Large language model reinforcement learning from human feedback reinforced fine-tuning optical networks | Optical networks serve as the backbone of modern communication infrastructure, where efficient operation and maintenance (O&M) are essential for ensuring reliable and high-speed data services. However, traditional network O&M faces persistent challenges, including high labor costs, delayed response times, and difficulties in processing massive and complex network data. Although large language models (LLMs) have demonstrated strong capabilities in text understanding, generation, and reasoning, their direct application in optical network O&M is limited by domain-specific knowledge barriers, inherent reasoning biases, and insufficient performance in complex multi-step tasks. To address this issue, this study develops a domain-adaptation and system-implementation framework that applies two established reinforcement learning-based fine-tuning methods (RLHF and ReFT) to construct domain-specialized LLMs for optical network O&M tasks. In the context of log analysis, RLHF achieves improvements of 1.64 points in accuracy, 1.02 points in content richness, and a notable 10-point increase in interactivity over supervised fine-tuning. In alarm localization, ReFT achieves accuracy improvements of 2%–13% across four reasoning tasks. The extensive tests not only demonstrate the practical value of RL-based fine-tuning in enhancing alignment and reasoning for domain-specific applications, but also provide a practical methodology and implementation reference for applying reinforcement learning-based LLM adaptation in optical network O&M environments. | 10.1109/TNSM.2026.3676522 |
| Basharat Ali, Guihai Chen | MIRAGE-DoH: Metamorphic Intelligence and Resilient AI Grid for Autonomous Governance of Encrypted DNS | 2026 | Early Access | Cryptography Domain Name System Fingerprint recognition Accuracy Metadata Artificial intelligence Software Perturbation methods Network security Monitoring Network Security Network Protocol Enhancing Encrypted Network Security Cyber Threats Detection Anomaly Detection Attack Detection Traffic Classification Quantum ML in Encrypted DNS | Existing DNS over HTTPS defenses have demonstrated limited resilience against polymorphic traffic shaping, staged tunneling, and adaptive mimicry, largely because they rely on static learning pipelines and rigid cryptographic configurations. MIRAGE-DoH was designed to examine whether adaptive inference, persistent structural encoding, and calibrated cryptographic agility could be integrated into a deployable and measurable encrypted DNS control architecture. The framework combined flow-level Cognitive MetaAgents capable of internal reconfiguration, Topological Memory Networks that preserved stable geometric irregularities across temporal windows, and Metamorphic Cryptographic Shards that adjusted key encapsulation policies according to empirically calibrated threat severity. A Causal Counterfactual Environment modeled constrained attacker decision pathways, while Spectral Game Intelligence analyzed flow interaction graphs to anticipate structural attack transitions. Evaluation on extended CIC-DoH2023 and Gen-C-DDD-2022 datasets was conducted under fixed flow-level decision intervals with explicit accounting for synchronization overhead, spectral graph construction cost, and cryptographic rotation latency. Cross-dataset experiments yielded a mean detection accuracy of 97.8% with a 0.41% false positive rate, sustaining median inference latency of 62μs and cryptographic morph latency of 3.7 ms under load. Quantum-assisted inference was assessed through bounded simulations, indicating constrained information gain within the adopted lattice-based configuration, without asserting unconditional post-quantum immunity. These results demonstrated that adaptive encrypted DNS governance can be empirically grounded, operationally bounded, and stress-evaluated without reliance on unqualified claims of perfect security. | 10.1109/TNSM.2026.3677474 |
| Wenxuan Li, Yu Yao, Ni Zhang, Chuan Sheng, Ziyong Ran, Wei Yang | IMADP: Imputation-Based Anomaly Detection in SCADA Systems via Adversarial Diffusion Process | 2026 | Vol. 23, Issue | Anomaly detection Adaptation models Data models Training SCADA systems Transformers Diffusion models Monitoring Robustness Roads SCADA multi-sensor anomaly detection imputation-based conditional diffusion | As the confrontation in industrial cybersecurity escalates, multi-dimensional variables measured by SCADA multi-sensors are critical for assessing security risks in industrial field devices. While Deep Learning (DL) methods based on generative models have demonstrated effectiveness, the impact of missing features in samples and temporal window size on modeling and detection processes has been consistently overlooked. To address these challenges, this work proposes an IMADP framework that integratively solves the two tasks of missing-data imputation and anomaly detection. Firstly, the Window-based Adaptive Selection Strategy (WASS) is designed to intelligently window samples, reducing reliance on prior settings. Secondly, an imputer is constructed under WASS to restore sample integrity, which is implemented by a fully-connected network centered on Neural Controlled Differential Equations (NCDEs). Thirdly, an adversarial diffusion detection model with a variant Transformer as the inverse solver is proposed. Additionally, the Adaptive Dynamic Mask Mechanism (ADMM) is introduced to bolster the model’s comprehension of inter-dependencies between time and sensor nodes. Simultaneously, adversarial training is introduced to reduce the training and detection latency caused by the excessive diffusion step size of the native Conditional Diffusion process. The experimental results validate that the proposed framework has the capability to build detectors using missing training samples, and its overall detection performance, tested across six datasets, is superior to existing methods. | 10.1109/TNSM.2026.3670062 |
| Zewei Han, Go Hasegawa | BBR-ES: An Extended-State Optimization for BBR Congestion Control | 2026 | Vol. 23, Issue | Delays Bandwidth Internet Heuristic algorithms Videos Throughput Taxonomy Reviews Market research Proposals Congestion control algorithm bottleneck bandwidth and round-trip propagation time (BBR) throughput fairness round trip time (RTT) | In recent years, many optimization proposals for TCP BBR have been introduced, but most rely mainly on delay variations and do not fully resolve BBR’s limitations in RTT fairness, link utilization, and delay control in networks. This paper proposes BBR with Extended State (BBR-ES), which extends BBR’s state machine with a short stabilization state and a trend-based transition mechanism that react to per-flow bandwidth and RTT evolution instead of global delay alone. BBR-ES uses lightweight bandwidth and RTT trend tracking to adjust its sending rate while preserving BBR’s model-based design. Experiments on both emulated (Mininet) and real-world Internet paths (Amazon EC2) show that BBR-ES consistently improves RTT fairness and link utilization over BBRv1, BBRv3, and CUBIC while keeping queuing delay moderate and bounded; in most settings, it achieves Jain’s fairness index above 0.9 and link utilization above 98%. These results indicate that BBR-ES is a practical candidate for deployment in large-scale content delivery and a useful design reference for future model-based congestion control schemes. | 10.1109/TNSM.2026.3668966 |
| Chengyuan Ma, Peng Hu, Tianjiao Ni, Ying Liu, Liangchen Hu, Kaizhong Zuo, Fulong Chen, Yonglong Luo | Privacy-Preserving and Collusion-Resistant Data Query Scheme for Vehicular Platoons | 2026 | Vol. 23, Issue | Security Privacy Data privacy Data aggregation Encryption Servers Roads Aggregates Weather Resistance Vehicular platoons privacy preservation collusion attack data query | Data queries play a crucial role in the vehicular platoon, enabling vehicles to obtain traffic information about surrounding road conditions and personalized entertainment information services. However, data query requests from vehicles may expose the vehicle owner’s personal attributes and habits. Although several schemes can address these issues, they are incapable of countering collusion attacks between the coordinating vehicle and roadside units (RSUs). To solve this problem, in this paper we propose a privacy-preserving and collusion-resistant data query scheme, named PCDQ. Specifically, PCDQ uses the Paillier encryption and Chinese Remainder Theorem to protect the query privacy of vehicle owners, allowing the RSU to recover individual data query requests without associating them with the original vehicles. Next, the parameter update mechanism in PCDQ prevents the coordinating vehicle from obtaining the corresponding mapping information between vehicles and query parameters, thereby resisting collusion attacks between the coordinating vehicle and RSUs. In addition, identity-based signcryption is used to ensure secure parameter distribution among vehicles, and batch verification enables efficient authentication of query requests. Detailed security proofs and analysis demonstrate that PCDQ satisfies multiple security properties, including resistance to collusion attacks and replay attacks, unlinkability, confidentiality, and authentication and data integrity. Experimental results show that, compared to existing solutions, PCDQ performs better in terms of computation overhead, communication overhead, and network performance. | 10.1109/TNSM.2026.3669013 |
| Zhaoping Li, Mingshu He, Xiaojuan Wang | HKD-Net: Hierarchical Knowledge Distillation Based on Multi-Domain Feature Fusion for Efficient Network Intrusion Detection | 2026 | Vol. 23, Issue | Feature extraction Telecommunication traffic Knowledge engineering Accuracy Deep learning Anomaly detection Adaptation models Network intrusion detection Knowledge transfer Convolutional neural networks Network traffic anomaly detection knowledge distillation multi-domain feature deep learning network intrusion detection | We propose HKD-Net, a hierarchical knowledge distillation network based on multi-domain feature fusion, for efficient network intrusion detection on resource-constrained edge devices. The framework incorporates dedicated feature extraction modules across temporal, frequency, and spatial domains, and introduces a dynamic gating mechanism for adaptive feature fusion, resulting in a more discriminative and comprehensive feature representation. Moreover, a hierarchical distillation mechanism is designed that not only preserves soft labels from the output layer but also aligns intermediate features from spatial, temporal, frequency, and fused domains, enabling efficient knowledge transfer from a large teacher model to a compact student model. Through knowledge distillation, the final lightweight model requires only 278,580 parameters, reducing the number of parameters by approximately 74.68% compared to the teacher, while maintaining high detection accuracy. Extensive experiments on three public datasets (Kitsune, CIRA-CIC-DoHBrw2020, and CICIoT2023) demonstrate that HKD-Net outperforms five state-of-the-art methods, achieving accuracies of 96.72%, 97.19%, and 87.19%, respectively, while reducing parameters by 74.68% and maintaining low computational cost. | 10.1109/TNSM.2026.3668812 |
| Vaishnavi Kasuluru, Luis Blanco, Cristian J. Vaca-Rubio, Engin Zeydan, Albert Bel | AI-Empowered Multivariate Probabilistic Forecasting: A Key Enabler for Sustainability in Open RAN | 2026 | Vol. 23, Issue | Open RAN Forecasting Probabilistic logic Switches Resource management Telecommunication traffic Sustainable development Predictive models Power demand Energy consumption Sustainability open RAN 6G probabilistic forecasting network analytics artificial intelligence | This paper explores the role of multivariate probabilistic forecasting in improving O-RAN operations, focusing on network sustainability aspects. A comprehensive analysis of its potential benefits and challenges, as well as its integration into the O-RAN architecture, is described. The paper first presents an overview of the O-RAN architecture and components, followed by an examination of power consumption models relevant to O-RAN deployments and the challenges associated with traditional deterministic models in resource allocation. We then examine the performance of several state-of-the-art probabilistic multivariate forecasting techniques, namely Gaussian Process Vector Autoregression (GPVAR) and Temporal Fusion Transformer (TFT), and a non-probabilistic multivariate technique, namely Multivariate Long Short-Term Memory (LSTM), and explain their implementation details and provide their evaluations. The simulation results show the effectiveness of these techniques in predicting Physical Resource Block (PRB) utilization and optimizing resource allocation. In particular, significant energy savings (around 20-30%) are achieved, depending on the percentile of the used probabilistic forecasting techniques. The benefits of considering probabilistic forecasting techniques compared to multivariate LSTM are also analyzed. Our results emphasize the potential of probabilistic forecasting to improve energy efficiency and sustainability in O-RAN operations. | 10.1109/TNSM.2026.3669847 |
| Ziyi Teng, Juan Fang, Neal N. Xiong | DOJS: A Distributed Online Joint Scheme to Optimize Cost in Mobile Edge Networks | 2026 | Vol. 23, Issue | Optimization Costs Resource management Heuristic algorithms Base stations Long short term memory Switches Reinforcement learning Quality of service Handover Edge computing resource allocation cache placement game theory reinforcement learning | Edge computing deploys computing and storage resources at the network edge, thereby providing services closer to terminal users. However, in edge networks, the mobility of terminals, the diversity of requests, and the dynamic nature of wireless channels pose significant challenges for efficiently allocating limited wireless and caching resources among multiple terminal devices. To address the issues of unbalanced network load and high caching costs caused by resource allocation in edge networks, we propose a Distributed Online Joint Optimization Scheme (DOJS), which combines centralized user association at the cloud with distributed cache placement at the base stations. This scheme analyzes the impact of terminal device association policies on caching costs and develops a caching cost model that integrates the activity level and content request probability of terminal devices. Based on this model, the relationship between user association selection and caching costs is analyzed, and a Game Theory-based User Association (GTUA) selection algorithm is proposed. In order to adapt to the dynamic characteristics of terminal-user requests in mobile edge networks, we develop a dynamic cache update method, LS-TD3, which combines Long Short-Term Memory (LSTM) and Twin Delayed Deep Deterministic policy gradient (TD3). Specifically, we integrate the LSTM layer into the policy model framework of reinforcement learning to better predict content popularity from dynamic time data, thus improving the accuracy of cache decision making. To further reduce computational complexity and enhance overall system performance, we employ a distributed optimization strategy to improve the dynamic caching decision process. Extensive experimental results demonstrate the superiority of the proposed algorithm in achieving inter-node load balancing and minimizing caching costs. | 10.1109/TNSM.2026.3665360 |
| Mengmeng Sun, Zeyu Tan, Dianlong You, Zhen Chen | PCNet: A Personalized Complementary Network via Tensor Decomposition for Service Recommendation | 2026 | Vol. 23, Issue | Mashups Tensors Collaborative filtering Web sites Video on demand Artificial intelligence Semantics Reviews Cloud computing Software development management Web service complementarity tensor decomposition personalized recommendation mashup | Web services are widely utilized across domains such as cloud computing, mobile networks, and Web applications. Due to their single-function nature, these services are often composed into Mashups to achieve more comprehensive functionality. However, the rapid growth in the number and variety of Web services has made it increasingly difficult to identify suitable services for Mashup development. Web service recommendation systems have emerged as a solution to this service overload, supporting innovative practices within the service-oriented development paradigm. While existing methods emphasize recommendation accuracy and relevance, few approaches simultaneously consider the personalized requirements of the Mashup side and the complementary relationships on the service side, both of which are essential for reconstructing the Web service ecosystem’s value chain. To address this gap, we propose PCNet, a Personalized Complementary Network for service recommendation based on tensor decomposition. We conceptualize the interaction dynamics between Mashups and services, as well as co-invocation patterns among services, using a three-dimensional tensor. The RESCAL tensor decomposition technique is then applied to jointly learn these relationships and uncover personalized complementary relationships among services. In addition, we develop a complementary perception module that uses an attention mechanism to dynamically model a Mashup’s focus on different complementary relationships, extending them to higher orders. Experimental results on real-world Web service datasets demonstrate that PCNet significantly outperforms state-of-the-art baselines. The implementation of PCNet is publicly available at: https://github.com/MengMeng3399/PCNet | 10.1109/TNSM.2026.3669613 |
| Masaki Oda, Akio Kawabata, Eiji Oki | Consistency-Aware Multi-Server Network Design for Delay-Sensitive Applications Under Server Failures | 2026 | Vol. 23, Issue | Servers Delays Resource management Approximation algorithms Optimization Numerical models Computational modeling Performance analysis Data models Software algorithms Server allocation data consistency preventive start-time optimization server failure approximation algorithm | Real-time applications require low latency and event order guarantees. Distributed server processing is effective for this purpose, and data consistency between servers is crucial. Although existing models in previous work handle data consistency, they do not address server failures. This paper proposes a server allocation model for a consistency-aware multi-server network for delay-sensitive applications with preventive start-time optimization (PSO) under single-server failures. The proposed model considers data consistency between servers and handles single-server failures with PSO. PSO determines the assignment to minimize the worst-case delay over all possible failure scenarios while avoiding service disruption for users connected to non-failed servers. We formulate the proposed model as an integer linear programming (ILP) problem. The decision version of the server allocation problem is proven to be NP-complete, and it becomes difficult to solve in a practical time when the problem size is large. We develop two polynomial-time approximation algorithms with theoretical performance analysis. Numerical results show that the proposed model outperforms start-time optimization in terms of the largest total delay, and run-time optimization in terms of avoiding instability. The results also show that the faster of our two developed algorithms achieves a speedup ranging from $2.26 \times 10^{3}$ to $4.37 \times 10^{6}$ times compared to the ILP approach, while the maximum delay is, on average, only 1.029 times the optimal value. The results indicate that the speedup effect becomes more significant as the number of users and servers increases. | 10.1109/TNSM.2026.3669840 |
| Woojin Jeon, Donghyun Yu, Ruei-Hau Hsu, Jemin Lee | Secure Data Sharing Framework With Fine-Grained Access Control and Privacy Protection for IoT Data Marketplace | 2026 | Vol. 23, Issue | Internet of Things Encryption Access control Data privacy Protocols Authentication Protection Vectors Scalability Privacy IoT data marketplace fine-grained access control attributes privacy outsourcing encryption match test | The proliferation of IoT devices has led to an exponential increase in data generation, creating new opportunities for data marketplaces. However, due to the security and privacy issues arising from the sensitive nature of IoT data, as well as the need for efficient management of vast amounts of IoT data, a robust solution is necessary. Therefore, this paper proposes a secure data sharing framework with fine-grained access control and privacy protection for the Internet of Things (IoT) data marketplace. For fine-grained access control of the data in the proposed protocol, we develop hidden-attribute and encryption-outsourced key-policy attribute-based encryption (HAEO-KP-ABE), which outsources computationally intensive operations to high-capability peripheral devices to reduce the computation burden on IoT devices. It achieves data privacy by hiding attributes in the ciphertext and by preventing entities that do not hold the data consumer’s secret key material (including SA/CS) from running the match test on stored ciphertexts before decryption. It also provides an efficient match test algorithm that can verify that the hidden attributes of a ciphertext match the access policy of the data consumer’s private key without revealing those attributes. We demonstrate that the proposed protocol satisfies the security features required for the data sharing process in an IoT data marketplace environment. Furthermore, we evaluate the execution time of the proposed protocol as a function of the number of attributes and show its practicality and efficiency compared to related works. | 10.1109/TNSM.2026.3670207 |