Last updated: 2026-02-13 05:01 UTC
| Author(s) | Title | Year | Publication | Keywords | Abstract | DOI |
|---|---|---|---|---|---|---|
| Jordan F. Masakuna, Djeff K. Nkashama, Arian Soltani, Marc Frappier, Pierre M. Tardif, Froduald Kabanza | Enhancing Anomaly Alert Prioritization through Calibrated Standard Deviation Uncertainty Estimation with an Ensemble of Auto-Encoders | 2026 | Early Access | Uncertainty; Standards; Measurement; Anomaly detection; Calibration; Bayes methods; Predictive models; Computer security; Reliability; Monitoring; Auto-Encoders; Security; Anomaly Detection; Alert Prioritization; Uncertainty Estimation | Deep auto-encoders (AEs) are widely employed deep learning methods in the field of anomaly detection across diverse domains (e.g., cybersecurity analysts managing large volumes of alerts, or medical practitioners monitoring irregular patient signals). In such contexts, practitioners often face challenges of scale and limited processing resources. To cope, strategies such as false positive reduction, human-in-the-loop review, and alert prioritization are commonly adopted. This paper explores the integration of uncertainty quantification (UQ) methods into alert prioritization for anomaly detection using ensembles of AEs. UQ models highlight doubtful classification decisions, enabling analysts to address the most certain alerts first, since higher certainty typically correlates with greater accuracy. Our study reveals a nuanced issue: applying UQ to ensembles of AEs can produce skewed distributions of large reconstruction errors (errors exceeding a pre-defined threshold), which may falsely suggest high uncertainty when standard deviation is used as the metric. Conventionally, a high standard deviation indicates high uncertainty. Contrary to intuition, however, large reconstruction errors often indicate that the AE is strongly confident an input is anomalous, not uncertain about it. Moreover, ensembles of AEs generate reconstruction errors with varying ranges, complicating interpretation. To address this, we propose an extension that calibrates the standard deviation distribution of uncertainties, mitigating erroneous prioritization. Evaluation on 10 benchmark datasets demonstrates that our calibration approach improves the effectiveness of UQ methods in prioritizing alerts, while maintaining favorable trade-offs across other key performance metrics. | 10.1109/TNSM.2026.3664298 |
| Xinshuo Wang, Lei Liu, Baihua Chen, Yifei Li | ENCC: Explicit Notification Congestion Control in RDMA | 2026 | Early Access | Bandwidth; Data centers; Heuristic algorithms; Accuracy; Throughput; Hardware; Switches; Internet; Convergence; Artificial intelligence; Congestion Control; RDMA; Programmable Switch; FPGA | Congestion control (CC) is essential for achieving ultra-low latency, high bandwidth, and network stability in high-speed networks. However, modern high-performance RDMA networks, crucial for distributed applications, face significant performance degradation due to limitations of existing CC schemes. Most conventional approaches rely on congestion notification signals that must traverse the queuing data path before they can be sent back to the sender, causing delayed responses and severe performance collapse. This study proposes Explicit Notification Congestion Control (ENCC), a novel high-speed CC mechanism that achieves low latency, high throughput, and strong network stability. ENCC employs switches to directly notify the sender of precise link load information and avoid notification signal queuing. This allows precise sender-side rate control and queue regulation. ENCC also ensures fairness and easy deployment in hardware. We implement ENCC based on FPGA network interface cards and programmable switches. Evaluation results show that ENCC achieves substantial throughput improvements over representative baseline algorithms, with gains of up to 16.6× in representative scenarios, while incurring minimal additional latency. | 10.1109/TNSM.2026.3656015 |
| Jing Zhang, Chao Luo, Rui Shao | MTG-GAN: A Masked Temporal Graph Generative Adversarial Network for Cross-Domain System Log Anomaly Detection | 2026 | Early Access | Anomaly detection; Adaptation models; Generative adversarial networks; Feature extraction; Data models; Load modeling; Accuracy; Robustness; Contrastive learning; Chaos; Log Anomaly Detection; Generative Adversarial Networks (GANs); Temporal Data Analysis | Anomaly detection of system logs is crucial for the service management of large-scale information systems. Nowadays, log anomaly detection faces two main challenges: 1) capturing evolving temporal dependencies between log events to adaptively tackle emerging anomaly patterns, and 2) maintaining high detection capability across varied data distributions. Existing methods rely heavily on domain-specific data features, making it challenging to handle the heterogeneity and temporal dynamics of log data. This limitation restricts the deployment of anomaly detection systems in practical environments. In this article, a novel framework, Masked Temporal Graph Generative Adversarial Network (MTG-GAN), is proposed for both conventional and cross-domain log anomaly detection. The model enhances the detection capability for emerging abnormal patterns in system log data by introducing an adaptive masking mechanism that combines generative adversarial networks with graph contrastive learning. Additionally, MTG-GAN reduces dependency on specific data distributions and improves model generalization by using diffused graph adjacency information derived from the temporal relevance of event sequences, which is conducive to improving cross-domain detection performance. Experimental results demonstrate that MTG-GAN outperforms existing methods on multiple real-world datasets in both conventional and cross-domain log anomaly detection. | 10.1109/TNSM.2026.3654642 |
| Muhammad Fahimullah, Michel Kieffer, Sylvaine Kerboeuf, Shohreh Ahvar, Maria Trocan | Decentralized Coalition Formation of Infrastructure Providers for Resource Provisioning in Coverage Constrained Virtualized Mobile Networks | 2026 | Early Access | Indium phosphide; III-V semiconductor materials; Resource management; Games; Costs; Wireless communication; Quality of service; Collaboration; Protocols; Performance evaluation; Resource provisioning; wireless virtualized networks; coverage; integer linear programming; coalition formation; hedonic approach | The concept of wireless virtualized networks enables Mobile Virtual Network Operators (MVNOs) to utilize resources made available by multiple Infrastructure Providers (InPs) to set up a service. Nevertheless, existing centralized resource provisioning approaches fail to address such a scenario due to conflicting objectives among InPs and their reluctance to share private information. This paper addresses the problem of resource provisioning from several InPs for services with geographic coverage constraints. When complete information is available, an Integer Linear Program (ILP) formulation is provided, along with a greedy solution. An alternative coalition formation approach is then proposed to build coalitions of InPs that satisfy the constraints imposed by an MVNO, while requiring only limited information sharing. The proposed solution adopts a hedonic game-theoretic approach to coalition formation. For each InP, the decision to join or leave a coalition is made in a decentralized manner, relying on the satisfaction of service requirements and on individual profit. Simulation results demonstrate the applicability and performance of the proposed solution. | 10.1109/TNSM.2026.3663437 |
| Fengqi Li, Yudong Li, Lingshuang Ma, Kaiyang Zhang, Yan Zhang, Chi Lin, Ning Tong | Integrated Cloud-Edge-SAGIN Framework for Multi-UAV Assisted Traffic Offloading Based On Hierarchical Federated Learning | 2026 | Early Access | Resource management; Autonomous aerial vehicles; Heuristic algorithms; Federated learning; Internet of Things; Dynamic scheduling; Vehicle dynamics; Atmospheric modeling; Accuracy; Training; SAGIN; Hierarchical Federated Learning; traffic offloading; cloud-edge-end; Unmanned Aerial Vehicle | The growing number of mobile devices used by terrestrial users has significantly amplified the traffic load on cellular networks. Especially in urban environments, the high traffic demand brought about by dense user populations has made network resources a bottleneck. The Space-Air-Ground-Integrated Network (SAGIN) provides a new solution to cope with this demand, enhancing data transmission efficiency through a multi-layered network structure. However, the heterogeneous and dynamic nature of SAGIN also poses significant management and resource allocation challenges. In this paper, we propose a cloud-edge-SAGIN framework for multi-UAV assisted traffic offloading based on Hierarchical Federated Learning (HFL), aiming to improve the traffic offloading ratio while optimizing the offloading resource allocation. HFL is used instead of traditional Federated Learning (FL) to address problems such as suboptimal resource allocation caused by heterogeneity in SAGIN. Specifically, the framework applies a hierarchical federated average algorithm and sets a reward function at the ground level, aiming to obtain better model parameters, improve model accuracy at aggregation, enhance the UAV traffic offloading ratio, and optimize its scheduling and resource allocation. In addition, an improved Reinforcement Learning (RL) algorithm, TD3-A4C, is designed in this paper to assist UAVs in realizing intelligent decision-making, reducing communication latency, and further improving resource utilization efficiency. Simulation results demonstrate that the proposed framework and algorithms display superior performance across all dimensions and offer robust support for the comprehensive investigation of intelligent traffic offloading networks. | 10.1109/TNSM.2026.3658833 |
| Zhiwei Yu, Chengze Du, Heng Xu, Ying Zhou, Bo Liu, Jialong Li | REACH: Reinforcement Learning for Efficient Allocation in Community and Heterogeneous Networks | 2026 | Early Access | Graphics processing units; Computational modeling; Reliability; Processor scheduling; Costs; Biological system modeling; Artificial intelligence; Reinforcement learning; Transformers; Robustness; Community GPU platforms; Reinforcement learning; Task scheduling; Distributed AI infrastructure | Community GPU (Graphics Processing Unit) platforms are emerging as a cost-effective and democratized alternative to centralized GPU clusters for AI (Artificial Intelligence) workloads, aggregating idle consumer GPUs from globally distributed and heterogeneous environments. However, their extreme hardware/software diversity, volatile availability, and variable network conditions render traditional schedulers ineffective, leading to suboptimal task completion. In this work, we present REACH (Reinforcement Learning for Efficient Allocation in Community and Heterogeneous Networks), a Transformer-based reinforcement learning framework that redefines task scheduling as a sequence scoring problem to balance performance, reliability, cost, and network efficiency. By modeling both global GPU states and task requirements, REACH learns to adaptively co-locate computation with data, prioritize critical jobs, and mitigate the impact of unreliable resources. Extensive simulation results show that REACH improves task completion rates by up to 17%, more than doubles the success rate for high-priority tasks, and reduces bandwidth penalties by over 80% compared to state-of-the-art baselines. Stress tests further demonstrate its robustness to GPU churn and network congestion, while scalability experiments confirm its effectiveness in large-scale, high-contention scenarios. | 10.1109/TNSM.2026.3663316 |
| Abdinasir Hirsi, Mohammed A. Alhartomi, Lukman Audah, Mustafa Maad Hamdi, Adeb Salah, Godwin Okon Ansa, Salman Ahmed, Abdullahi Farah | Hybrid CNN-LSTM Model for DDoS Detection and Mitigation in Software-Defined Networks | 2026 | Early Access | Prevention and mitigation; Denial-of-service attack; Feature extraction; Electronic mail; Computer crime; Accuracy; Security; Deep learning; Convolutional neural networks; Real-time systems; CNN-LSTM; Deep Learning; DDoS attack; Machine Learning; Network Security; SDN security; SDN Vulnerabilities | Software-Defined Networking (SDN) enhances programmability and control but remains highly vulnerable to distributed denial-of-service (DDoS) attacks. Existing solutions often adapt conventional methods without leveraging SDN’s native features or addressing real-time mitigation. This study introduces a novel hybrid deep learning framework for DDoS detection and mitigation in SDN, significantly advancing the state of the art. We develop a custom dataset in a Mininet–Ryu testbed that reflects realistic SDN traffic conditions, and employ a multistage feature selection pipeline to reduce redundancy and highlight the most discriminative flow attributes. A hybrid Convolutional Neural Network–Long Short-Term Memory (CNN-LSTM) model is then applied, capturing both spatial and temporal traffic patterns. The proposed system achieves 99.5% accuracy and a 97.7% F1-score, demonstrating a significant improvement over baseline ML and DL approaches. In addition, a lightweight and scalable mitigation module embedded in the SDN controller dynamically drops or reroutes malicious flows, enabling real-time, low-latency responsiveness. Experimental results across diverse topologies confirm the framework’s scalability and applicability in real-world SDN environments. | 10.1109/TNSM.2026.3662819 |
| Deemah H. Tashman, Soumaya Cherkaoui | Trustworthy AI-Driven Dynamic Hybrid RIS: Joint Optimization and Reward Poisoning-Resilient Control in Cognitive MISO Networks | 2026 | Early Access | Reconfigurable intelligent surfaces; Reliability; Optimization; Security; MISO; Array signal processing; Vectors; Satellites; Reflection; Interference; Beamforming; cascaded channels; cognitive radio networks; deep reinforcement learning; dynamic hybrid reconfigurable intelligent surfaces; energy harvesting; poisoning attacks | Cognitive radio networks (CRNs) are a key mechanism for alleviating spectrum scarcity by enabling secondary users (SUs) to opportunistically access licensed frequency bands without harmful interference to primary users (PUs). To address unreliable direct SU links and energy constraints common in next-generation wireless networks, this work introduces an adaptive, energy-aware hybrid reconfigurable intelligent surface (RIS) for underlay multiple-input single-output (MISO) CRNs. Distinct from prior approaches relying on static RIS architectures, our proposed RIS dynamically alternates between passive and active operation modes in real time according to harvested energy availability. We also model our scenario under practical hardware impairments and cascaded fading channels. We formulate and solve a joint transmit beamforming and RIS phase optimization problem via the soft actor-critic (SAC) deep reinforcement learning (DRL) method, leveraging its robustness in continuous and highly dynamic environments. Notably, we conduct the first systematic study of reward poisoning attacks on DRL agents in RIS-enhanced CRNs, and propose a lightweight, real-time defense based on reward clipping and statistical anomaly filtering. Numerical results demonstrate that the SAC-based approach consistently outperforms established DRL baselines, and that the dynamic hybrid RIS strikes a superior trade-off between throughput and energy consumption compared to fully passive and fully active alternatives. We further show the effectiveness of our defense in maintaining SU performance even under adversarial conditions. Our results advance the practical and secure deployment of RIS-assisted CRNs, and highlight crucial design insights for energy-constrained wireless systems. | 10.1109/TNSM.2026.3660728 |
| Liang Kou, Xiaochen Pan, Guozhong Dong, Meiyu Wang, Chunyu Miao, Jilin Zhang, Pingxia Duan | Dynamic Adaptive Aggregation and Feature Pyramid Network Enhanced GraphSAGE for Advanced Persistent Threat Detection in Next-Generation Communication Networks | 2026 | Early Access | Feature extraction; Adaptation models; Computational modeling; Artificial intelligence; Semantics; Topology; Next generation networking; Adaptive systems; Dynamic scheduling; Data models; GraphSAGE; Dynamic Graph Attention Mechanism; Multi-Scale Feature Pyramid; Advanced Persistent Threat; Next-Generation Communication Networks | Advanced Persistent Threats (APTs) pose severe challenges to Next-Generation Communication Networks (NGCNs) due to their stealthiness and NGCNs’ dynamic topology, while conventional GNN-based intrusion detection systems suffer from static aggregation and poor adaptability to unseen nodes. To address these issues, this paper proposes DAA-FPN-SAGE, a lightweight graph-based detection framework integrating Dynamic Adaptive Aggregation (DAA) and a Multi-Scale Feature Pyramid Network (MSFPM). Leveraging GraphSAGE’s inductive learning capability, the framework effectively models unseen nodes or subgraphs and adapts to NGCNs’ dynamic changes (e.g., elastic network slicing, online AI model updates), a key advantage for handling real-time topological variations. The DAA module employs multi-hop attention to dynamically assign weights to neighbors at different hop distances, enhancing capture of hierarchical dependencies in multi-stage APT attack chains. The MSFPM module fuses local-global structural information via a gated feature selection mechanism, resolving dimensional inconsistency and enriching attack behavior representation. Extensive experiments on StreamSpot, Unicorn, and DARPA TC#3 datasets demonstrate superior performance, meeting detection requirements of large-scale NGCNs. | 10.1109/TNSM.2026.3660650 |
| Mohammad Amir Dastgheib, Hamzeh Beyranvand, Jawad A. Salehi | Shannon Entropy for Load-Balanced Cellular Network Planning: Data-Driven Voronoi Optimization of Base-Station Locations | 2026 | Early Access | Shape; Entropy; Costs; Cost function; Planning; Measurement; Load management; Cellular networks; Uncertainty; Telecommunications; Network planning; Base-station placement; Shannon entropy; Machine learning; Stochastic shape optimization; Nearest neighbor methods; Facility location | In this paper, we introduce a stochastic shape optimization technique for base-station placement in cellular wireless communication networks. We formulate the data-driven facility location problem in a gradient-based framework and propose an algorithm that computes stochastic gradients efficiently via nearest-neighbor evaluations on Voronoi diagrams. This enables the use of Shannon-entropy objectives that promote balanced coverage and yield more than two orders of magnitude reduction in per-iteration runtime compared to a conventional integral-based optimization that assumes full knowledge of the underlying density, making the proposed approach practical for real deployments. We highlight the requirements of facility location balancing problems with the introduction of the Adjusted Entropy Ratio and show a significant improvement in load balancing compared to the baseline algorithms, particularly in scenarios where baseline algorithms fall short in subdividing crowded areas for more equitable coverage. A downlink telecom evaluation with realistic propagation and interference models further shows that the proposed method substantially improves user-rate fairness and load balance. Our results also show that Self-Organizing Maps (SOMs) provide an effective initialization by capturing the structure of the users’ location data. *(An illustrative sketch appears after this table.)* | 10.1109/TNSM.2026.3663045 |
| Rajasekhar Dasari, Sanjeet Kumar Nayak | PR-Fog: An Efficient Task Priority-based Reliable Provisioning of Resources in Fog-Enabled IoT Networks | 2026 | Early Access | Reliability; Internet of Things; Costs; Energy consumption; Cloud computing; Edge computing; Quality of service; Energy efficiency; Analytical models; Resource management; Internet of Things (IoT); Fog Computing; Energy; Latency; Task Priority; Reliability; Analytical Modeling | As the demand for real-time data processing grows, fog computing, which brings computation and storage closer to IoT devices, emerges as an alternative to cloud computing. In fog-enabled IoT networks, provisioning of fog nodes for task processing must consider factors such as latency, energy consumption, cost, and reliability. This paper presents PR-Fog, a scheme for optimizing the provisioning of heterogeneous fog nodes in fog-enabled IoT networks, considering parameters such as task priority, energy efficiency, cost efficiency, and reliability. At first, we create an analytical framework using an M/M/1/C priority queuing system to assess the reliability of these heterogeneous fog nodes. Building on this analysis, we propose an algorithm that determines the optimal number of reliable fog nodes while satisfying latency, energy, and cost constraints. Extensive simulations show significant enhancements in key performance metrics when comparing PR-Fog to existing schemes, including a 36% decrease in response time, an 8% improvement in satisfaction ratio, and a 23% reduction in fog node provisioning costs. Additionally, PR-Fog’s effectiveness is validated through real testbed experiments. | 10.1109/TNSM.2026.3661745 |
| Hojjat Navidan, Cristian Martín, Vasilis Maglogiannis, Dries Naudts, Manuel Díaz, Ingrid Moerman, Adnan Shahid | An End-to-End Digital Twin Framework for Dynamic Traffic Analytics in O-RAN | 2026 | Vol. 23, Issue | Open RAN; Adaptation models; Real-time systems; Biological system modeling; 5G mobile communication; Predictive models; Traffic control; Incremental learning; Anomaly detection; Data models; Digital twin; generative AI; open radio access networks; incremental learning; traffic analytics; traffic prediction; anomaly detection | Dynamic traffic patterns and shifts in traffic distribution in Open Radio Access Networks (O-RAN) pose a significant challenge for real-time network optimization in 5G and beyond. Traditional traffic analytics methods struggle to remain accurate under such non-stationary conditions, where models trained on historical data quickly degrade as traffic evolves. This paper introduces AIDITA, an AI-driven Digital Twin for Traffic Analytics framework designed to solve this problem through autonomous model adaptation. AIDITA creates a digital replica of the live analytics models running in the RAN Intelligent Controller (RIC) and continuously updates them within the digital twin using incremental learning. These updates use real-time Key Performance Metrics (KPMs) from the live network, augmented with synthetic data from a Generative AI (GenAI) component to simulate diverse network scenarios. Combining GenAI-driven augmentation with incremental learning enables traffic analytics models, such as prediction or anomaly detection, to adapt continuously without the need for full retraining, preserving accuracy and efficiency in dynamic environments. Implemented and validated on a real-world 5G testbed, our AIDITA framework demonstrates significant improvements in traffic prediction and anomaly detection use cases under distribution shifts, showcasing its practical effectiveness and adaptability for real-time network optimization in O-RAN deployments. | 10.1109/TNSM.2025.3628756 |
| Sheng-Wei Wang, Show-Shiow Tzeng | Accurate Estimation of Selfish Mining Rate by Stale Block Ratio in a Proof-of-Work Blockchain | 2026 | Vol. 23, Issue | Blockchains; Estimation; Data mining; Analytical models; Accuracy; Simulation; Mathematical models; Computational modeling; Proof of Work; Steady-state; Blockchain; proof-of-work; selfish mining; stale block ratio; Markov chain | Selfish mining presents a significant threat to Proof-of-Work blockchains by compromising fairness among miners and wasting computational resources through the generation of stale blocks. Detecting and mitigating selfish mining attacks is crucial to maintain blockchain security and efficiency. While the presence of stale blocks is commonly used as an indicator of selfish mining, merely identifying the attacker’s existence is insufficient. A more valuable goal is to estimate the extent of the attack, specifically the selfish mining rate. In this paper, we propose two analytical models that can precisely calculate stale block ratios in blockchains with one or two selfish miners. These models provide closed-form functions that compute stale block ratios given selfish mining rates. We then derive inverse functions to estimate the selfish mining rate based on observed stale block ratios. Simulation results demonstrate that our analytical models can effectively calculate stale block ratios, with an average discrepancy of only 3.12% compared to simulation results. Furthermore, our estimation approach accurately predicts selfish mining rates, with a mean error of 2.82%. Estimating the selfish mining rate with high accuracy enables better identification of malicious attacks, enhances fairness, optimizes resource allocation, and supports the development of more robust security mechanisms in blockchain networks. *(An illustrative sketch appears after this table.)* | 10.1109/TNSM.2025.3631463 |
| Kaiyi Zhang, Changgang Zheng, Nancy Samaan, Ahmed Karmouch, Noa Zilberman | Design, Implementation, and Deployment of Multi-Task Neural Networks in Programmable Data-Planes | 2026 | Vol. 23, Issue | Multitasking; Artificial neural networks; Pipelines; Hardware; Accuracy; Trees (botanical); Software; Computational modeling; Scalability; Machine learning; In-network computing; P4; multi-task learning; neural networks; programmable data-planes | The increasing demand for real-time inference on high-volume network traffic has led to the rise of in-network machine learning, where programmable switches execute various models directly in the data-plane at line rate. Effective network management often involves multiple prediction tasks, such as predicting bit rate, flow size, or traffic class; however, existing solutions deploy separate models for each task, placing a significant burden on the data-plane and leading to substantial resource consumption when deploying multiple tasks. To address this limitation, we introduce MUTA, a novel in-network multi-task learning framework that enables concurrent inference of multiple tasks in the data-plane, without exhausting available resources. MUTA builds a multi-task neural network to share feature representations across tasks and introduces a data-plane mapping methodology to fit it within network switches. Additionally, MUTA enhances scalability by supporting distributed deployment, where different layers of a multi-task model can be offloaded across multiple switches. An orchestrator employs multi-objective optimization to determine optimal model placement in multi-path networks. MUTA is deployed on P4 hardware switches, and is shown to reduce memory requirements by 10.5×, while at the same time improving accuracy by up to 9.14% using limited training data, compared with state-of-the-art single-task learning solutions. | 10.1109/TNSM.2025.3629642 |
| Mohammed Dhiya Eddine Gouaouri, Sihem Ouahouah, Miloud Bagaa, Messaoud Ahmed Ouameur, Adlen Ksentini | A Multi-Objective Framework for Power-Aware Scheduling in Kubernetes | 2026 | Vol. 23, Issue | Containers; Processor scheduling; Optimization; Power demand; Load management; Resource management; Load modeling; Job shop scheduling; Cloud computing; Dynamic scheduling; Scheduling; power-aware scheduling; multi-objective optimization; Kubernetes; NSGA-II; TOPSIS | Efficient workload scheduling in Kubernetes is crucial for optimizing energy consumption and resource utilization in large-scale and heterogeneous clusters. However, existing Kubernetes schedulers either ignore power-awareness or rely on simplified, static power models, which limit their effectiveness in managing energy efficiency under dynamic workloads. To address these shortcomings, we present a multi-objective scheduling framework for online Kubernetes pod placement that jointly considers power consumption, resource utilization, and load balancing. The framework follows a two-stage design: (i) a node power-profiling component trains a machine-learning model from real power measurements to predict per-node consumption under varying utilizations; and (ii) an online scheduler uses these predictions within a multi-objective optimization formulation. We implement scheduling optimization using two algorithms, TOPSIS and NSGA-II, adapting them to the Kubernetes context, and also propose a distributed variant of the NSGA-II algorithm that parallelizes fitness evaluation with controlled migration between workers. Experimental results show that the proposed framework outperforms baseline schedulers, achieving a 40% reduction in power consumption and improvements of 74% and 68% in CPU and memory utilization, respectively, while sustaining scalability under high workloads. To the best of our knowledge, this is the first work to integrate learned power models and distributed multi-objective optimization into Kubernetes for power-aware pod scheduling. *(An illustrative sketch appears after this table.)* | 10.1109/TNSM.2025.3630045 |
| Fabian Poignée, Anika Seufert, Frank Loh, Michael Seufert, Tobias Hoßfeld | Modeling Network Load of Mobile Instant Messaging: A Modular Source Traffic Generator | 2026 | Vol. 23, Issue | Media; Load modeling; Telecommunication traffic; Communication networks; Videos; Internet telephony; Freeware; Image coding; Instant messaging; Social networking (online); Mobile instant messaging; traffic modeling; message generation; contact network; traffic measurement | Mobile Instant Messaging (MIM) applications such as WhatsApp transformed human communication by enabling global exchange of various message types, such as text, image, video, or voice, at any time. Network providers are facing a substantial user base and network load, which is especially high in group chats, where each message needs to be distributed to each member. Due to end-to-end encryption, network operators must obtain knowledge about the communication and the resulting load on the network by other means, which makes it necessary to model the network traffic of MIM. In this work, we therefore present an approach to source traffic modeling for MIM. After identifying the building blocks of a Source Traffic Model (STM) for MIM, we address existing gaps through studies on MIM communication networks, contact proximity, media compression and payload size, as well as media file size distribution. Combining existing literature and our work, we present and implement a modular STM approach which can be used for developing STMs for MIM. Using an exemplary STM, we evaluate the daily network traffic per user. With this, we provide a comprehensive description of MIM in the network research context and enable consideration of MIM in future network design. | 10.1109/TNSM.2025.3630052 |
| Muhammad Ashar Tariq, Malik Muhammad Saad, Dongkyun Kim | DDPG-Based Resource Management in Network Slicing for 5G-Advanced V2X Services | 2026 | Vol. 23, Issue | Resource management; Quality of service; Network slicing; Real-time systems; 3GPP; 5G mobile communication; Vehicle dynamics; Vehicle-to-everything; Ultra reliable low latency communication; Standards; Network slicing; resource allocation; real-time resource management; 5G; 5G-advanced; V2X; DDPG; DRL | The evolution of 5G technology towards 5G-Advanced has introduced advanced vehicular applications with stringent Quality-of-Service (QoS) requirements. Addressing these demands necessitates intelligent resource management within the standard 3GPP network slicing framework. This paper proposes a novel resource management scheme leveraging a Deep Deterministic Policy Gradient (DDPG) algorithm implemented in the Network Slice Subnet Management Function (NSSMF). The scheme dynamically allocates resources to network slices based on real-time traffic demands while maintaining compatibility with existing infrastructure, ensuring cost-effectiveness. The proposed framework features a two-level architecture: the gNodeB optimizes slice-level resource allocation at the upper level, and vehicles reserve resources dynamically at the lower level using the 3GPP Semi-Persistent Scheduling (SPS) mechanism. Evaluation in a realistic, trace-based vehicular environment demonstrates the scheme’s superiority over traditional approaches, achieving higher Packet Delivery Ratio (PDR), improved Spectral Efficiency (SE), and adaptability under varying vehicular densities. These results underscore the potential of the proposed solution in meeting the QoS demands of critical 5G-Advanced vehicular applications. | 10.1109/TNSM.2025.3629529 |
| Monolina Dutta, Anoop Thomas, B. Sundar Rajan | Novel Delivery Algorithms for Decentralized Multi-Access Coded Caching Systems | 2026 | Vol. 23, Issue | Prefetching; Servers; Indexes; Encoding; Topology; Content distribution networks; Cache memory; Vectors; Numerical models; Network topology; Coded caching; content delivery networks; decentralized caching; index coding; multi-access coded caching | In this paper, we propose a multi-access coded caching system in a decentralized setting tailored for Content Delivery Networks (CDNs). In this system, a central server hosts N files, each of size F bits, and serves $K\leq N$ users through a shared link. The network is equipped with c caches, each with a capacity of MF bits, distributed across the network, where each of the K users is connected to a random set of $r\leq c$ caches. Initially, we consider a model where each cache subset is accessed by an equal number of users. We introduce a novel content delivery algorithm for the central server, which allows us to derive a closed-form expression for the per user transmission rate. Using techniques from index coding, we prove the optimality of the proposed delivery scheme. Additionally, we extend the model to propose a more general and novel framework by allowing each subset of caches to serve an arbitrary number of users, thereby greatly enhancing the system’s flexibility and applicability. We also propose a new delivery algorithm tailored to this generalized setting and demonstrate its optimality under specific user-to-cache association scenarios. Numerical results demonstrate that, in a specific scenario where the user-to-cache associations do not satisfy the optimality conditions, the proposed generalized scheme shows improvement over the order-optimal state-of-the-art decentralized multi-access coded caching scheme for small cache sizes. Specifically, when approximately 25% of the content is stored at every cache, the proposed scheme achieves up to a 20% reduction in the per user transmission rate. Considering that both schemes serve an equal number of users, the observed improvements indicate a potential reduction in server bandwidth requirements, lower latency, and enhanced energy efficiency during content delivery. | 10.1109/TNSM.2025.3629715 |
| Haftay Gebreslasie Abreha, Ilora Maity, Youssouf Drif, Christos Politis, Symeon Chatzinotas | Revenue-Aware Seamless Content Distribution in Satellite-Terrestrial Integrated Networks | 2026 | Vol. 23, Issue | Satellites; Topology; User experience; Network topology; Delays; Real-time systems; Optimization; Low earth orbit satellites; Collaboration; Servers; Satellite edge computing (SEC); content caching; content distribution; dynamic ad insertion | With the surging demand for data-intensive applications, ensuring seamless content delivery in Satellite-Terrestrial Integrated Networks (STINs) is crucial, especially for remote users. Dynamic Ad Insertion (DAI) enhances monetization and user experience, while Mobile Edge Computing (MEC) in STINs enables distributed content caching and ad insertion. However, satellite mobility and time-varying topologies cause service disruptions, while excessive or poorly placed ads risk user disengagement, impacting revenue. This paper proposes a novel framework that jointly addresses three challenges: (i) service continuity- and topology-aware content caching to adapt to STIN dynamics, (ii) Distributed DAI (D-DAI) that minimizes feeder link load and storage overhead by avoiding redundant ad-variant content storage through distributed ad stitching, and (iii) revenue-aware content distribution that explicitly models user disengagement due to ad overload to balance monetization and user satisfaction. We formulate the problem as two hierarchical Integer Linear Programming (ILP) optimizations: one optimizing content caching to maximize the cache hit rate, and another optimizing content distribution with DAI to maximize revenue, minimize end-user costs, and enhance user experience. We develop greedy algorithms for fast initialization and a Binary Particle Swarm Optimization (BPSO)-based strategy for enhanced performance. Simulation results demonstrate that the proposed approach achieves over a 4.5% increase in revenue and reduces cache retrieval delay by more than 39% compared to the benchmark algorithms. | 10.1109/TNSM.2025.3629810 |
| Neco Villegas, Ana Larrañaga, Luis Diez, Katerina Koutlia, Sandra Lagén, Ramón Agüero | Optimizing QoS MAC Scheduling in 5G NR: A Lyapunov Approach Evaluated With XR Traffic | 2026 | Vol. 23, Issue | Quality of service; 5G mobile communication; Bit rate; Resource management; OFDM; Long Term Evolution; Radio spectrum management; Performance evaluation; Frequency conversion; Downlink; 5G; MAC scheduler; QoS; Lyapunov; XR; ns-3 | 5th Generation (5G) and beyond technologies are designed to support multiple Quality of Service (QoS) requirements coming from highly heterogeneous services. In this regard, 5G extends the QoS definition of previous technologies. In order to fulfill the requirements of new services, it is necessary to develop resource management solutions that effectively and efficiently translate them into usage of resources. Therefore, MAC schedulers must be designed to accommodate the demands of emerging applications with stringent QoS requirements, such as eXtended Reality (XR). In this work, we introduce a Lyapunov-based Medium Access Control (MAC) scheduler designed to appropriately tackle coexisting heterogeneous QoS flows. The proposed approach allows the implementation of policies that explicitly consider the QoS requirements, as well as an efficient use of allocated resources. The results obtained through extensive experiments over the ns-3 5G-LENA simulator demonstrate that our scheduler guarantees the required time-average bit rate for every QoS flow at every time window, while ensuring the stability of traffic queues for all users. *(An illustrative sketch appears after this table.)* | 10.1109/TNSM.2025.3631520 |
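
For the Shannon-entropy planning entry (Dastgheib et al.), the core quantity being optimized can be illustrated compactly. The paper's stochastic gradient algorithm and Adjusted Entropy Ratio are not reproduced here; this is a minimal sketch, assuming nearest-station (Voronoi) user assignment, of the load entropy such a placement method promotes. The function name `load_entropy` and the toy data are our own.

```python
import numpy as np
from scipy.spatial import cKDTree

def load_entropy(stations, users):
    """Shannon entropy of per-station load under nearest-station (Voronoi)
    assignment. stations: (n, 2) array, users: (m, 2) array.
    The entropy peaks at log(n) when every station serves an equal share
    of users, so maximizing it promotes balanced coverage."""
    _, idx = cKDTree(stations).query(users)            # nearest station per user
    counts = np.bincount(idx, minlength=len(stations))
    p = counts / counts.sum()
    p = p[p > 0]                                       # convention: 0*log(0) = 0
    return float(-(p * np.log(p)).sum())

# Toy usage: a placement that splits a crowded cluster yields higher
# normalized entropy (closer to 1.0) than one that ignores it.
rng = np.random.default_rng(0)
users = np.vstack([rng.normal(0, 1, (800, 2)), rng.normal(5, 1, (200, 2))])
spread = np.array([[0.0, 0.0], [5.0, 5.0]])
lopsided = np.array([[5.0, 4.0], [5.0, 6.0]])
print(load_entropy(spread, users) / np.log(2),
      load_entropy(lopsided, users) / np.log(2))
```

Each evaluation costs one k-d tree build plus m nearest-neighbor queries, which reflects the abstract's point that nearest-neighbor evaluation on Voronoi diagrams is far cheaper per iteration than integrating an assumed density over candidate cells.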
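For the selfish-mining estimation entry (Wang and Tzeng), the inverse mapping from stale block ratio to selfish mining rate can be illustrated without the paper's closed forms. This is a minimal sketch assuming a single Eyal-Sirer-style selfish miner and simplified fork rules; the simulation, the bisection inversion, and the names `stale_ratio`/`estimate_alpha` are our stand-ins, not the authors' analytical model.

```python
import random

def stale_ratio(alpha, n_blocks=500_000, seed=1):
    """Fraction of mined blocks that end up stale when a selfish miner
    controls share `alpha` of the hash rate (simplified Eyal-Sirer rules).
    In this model every 1-vs-1 fork race orphans exactly one block, so the
    honest tie-break split does not change the stale ratio."""
    rng = random.Random(seed)
    lead, race = 0, False          # private lead; fork race in progress
    total = stale = 0
    for _ in range(n_blocks):
        total += 1
        if race:                   # the next block resolves the race, and
            stale += 1             # exactly one of the racing blocks dies
            race = False
        elif rng.random() < alpha:
            lead += 1              # selfish block, withheld
        elif lead == 0:
            pass                   # honest block joins the main chain
        elif lead == 1:
            race, lead = True, 0   # selfish publishes: 1-vs-1 race
        else:
            stale += 1             # selfish chain stays ahead; honest block dies
            lead = 0 if lead == 2 else lead - 1
    return stale / total

def estimate_alpha(observed_ratio, tol=1e-3):
    """Invert stale_ratio by bisection; it is increasing in alpha on [0, 0.5)."""
    lo, hi = 0.0, 0.49
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if stale_ratio(mid) < observed_ratio else (lo, mid)
    return (lo + hi) / 2
```

Under these assumptions, an observed stale ratio maps back to a unique mining share; the paper's closed-form Markov-chain models make that inversion exact rather than Monte Carlo, and also cover the two-miner case.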
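For the power-aware Kubernetes scheduling entry (Gouaouri et al.), the TOPSIS stage lends itself to a compact illustration. The criteria, weights, and sample numbers below are hypothetical placeholders; the sketch shows only the standard TOPSIS ranking that an online scheduler could apply to candidate nodes scored by a learned power model and current utilization.

```python
import numpy as np

def topsis_rank(scores, weights, benefit):
    """Standard TOPSIS over alternatives (rows) and criteria (columns).
    scores: (n_nodes, n_criteria); weights: non-negative, sums to 1;
    benefit[j]: True if higher is better for criterion j."""
    v = (scores / np.linalg.norm(scores, axis=0)) * weights  # weighted, normalized
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))  # best per criterion
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))   # worst per criterion
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    return d_neg / (d_pos + d_neg)     # relative closeness: higher = better node

# Hypothetical pod placement: three nodes scored on predicted power draw (W),
# CPU load (%), and memory load (%); lower is better for all three criteria.
scores = np.array([[180.0, 60.0, 55.0],
                   [140.0, 85.0, 70.0],
                   [200.0, 30.0, 40.0]])
closeness = topsis_rank(scores,
                        weights=np.array([0.5, 0.3, 0.2]),
                        benefit=np.array([False, False, False]))
print(int(closeness.argmax()))   # index of the node to bind the pod to
```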
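For the Lyapunov MAC scheduling entry (Villegas et al.), the drift-based intuition can be sketched generically. This is not the paper's scheduler: the single-resource slot model, the virtual-queue construction for time-average bit-rate targets, and the names `pick_flow`/`lyapunov_schedule` are our assumptions.

```python
import numpy as np

def pick_flow(Z, rates):
    """Max-weight rule: serve the flow with the largest product of
    virtual-queue backlog Z_i and currently achievable rate r_i."""
    return int(np.argmax(Z * rates))

def lyapunov_schedule(targets, rate_fn, n_slots=20_000):
    """targets: per-flow time-average bit-rate requirements (bits/slot);
    rate_fn(t) -> per-flow achievable rates in slot t.
    Z_i grows whenever flow i falls short of its target, so a starved
    flow's weight rises until the max-weight rule serves it."""
    Z = np.zeros_like(targets, dtype=float)
    served = np.zeros_like(Z)
    for t in range(n_slots):
        rates = rate_fn(t)
        i = pick_flow(Z, rates)
        delivered = np.zeros_like(Z)
        delivered[i] = rates[i]
        served += delivered
        Z = np.maximum(Z + targets - delivered, 0.0)   # virtual-queue update
    return served / n_slots                            # achieved average rates

# Toy usage with i.i.d. fading: if the targets are feasible, the achieved
# time-average rates approach or exceed them.
rng = np.random.default_rng(0)
print(lyapunov_schedule(np.array([2.0, 1.0]),
                        lambda t: rng.uniform(0.0, 6.0, size=2)))
```

Keeping such virtual queues stable is the standard Lyapunov argument for meeting time-average rate constraints; the cited work additionally addresses per-window guarantees and resource-block allocation in the 5G NR MAC.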