Last updated: 2026-01-26 05:01 UTC
| Author(s) | Title | Year | Publication | Keywords | Abstract | DOI |
|---|---|---|---|---|---|---|
| Zhenyang Guo, Jin Cao, XiongPeng Ren, Yuchen Zhou, Lifu Cheng, Peijie Yin, Hui Li | LDST-UAVS: A Lightweight Data Secure Transmission Protocol for Unmanned Aerial Vehicle Swarms in Emergency Rescue Scenarios | 2026 | Early Access | Autonomous aerial vehicles Security Protocols Authentication Spread spectrum communication Data communication Disasters Base stations Real-time systems Floods UAV Data Secure Transmission Traceability | Unmanned Aerial Vehicle (UAV) groups can quickly build multi-hop transmission networks and have been widely utilized in emergency communication scenarios to perform search and rescue, environmental monitoring, personnel positioning, rapid networking, and similar tasks. In such emergency rescue situations, strict demands on real-time communication, security, and minimal resource consumption become paramount. Higher requirements for security, bandwidth, and real-time performance necessitate a secure and lightweight data transmission protocol. Additionally, due to the lack of personnel supervision in these scenarios, the probability of malicious nodes increases. Therefore, it is essential to block malicious nodes’ data quickly and close to its source, preventing it from affecting subsequent network propagation, and to accurately identify the malicious nodes. To address these issues, in this paper, we propose a traceable, lightweight, and secure data transmission protocol for UAV multi-hop networks in emergency rescue scenarios. The proposed protocol can verify the integrity of data transmitted by a large number of nodes in real time, detect erroneous transmissions, and trace malicious users. Experimental results show that our protocol consistently outperforms the comparison schemes in terms of computational overhead. Moreover, in scenarios involving smaller groups (m=5) and fewer hops (n=4), it exhibits significantly lower communication bandwidth overhead than the reference methods. Security analysis using BAN logic and the formal verification tool Scyther indicates that the proposed scheme meets security requirements. Additionally, comparative analysis results demonstrate that the proposed scheme is highly effective and outperforms other related schemes under the unique constraints of emergency rescue scenarios, where rapid, secure decision-making and data transmission are critical. | 10.1109/TNSM.2026.3656973 |
| Yilu Chen, Ye Wang, Ruonan Li, Yujia Xiao, Lichen Liu, Jinlong Li, Yan Jia, Zhaoquan Gu | TrafficAudio: Audio Representation for Lightweight Encrypted Traffic Classification in IoT | 2026 | Early Access | Feature extraction Cryptography Telecommunication traffic Accuracy Malware Vectors Spatiotemporal phenomena Security Intrusion detection Computational efficiency Encrypted traffic classification Malicious traffic detection Mel-frequency cepstral coefficients Traffic representation | Encrypted traffic classification has become a crucial task for network management and security with the widespread adoption of encrypted protocols across the Internet and the Internet of Things. However, existing methods often rely on discrete representations and complex models, which leads to incomplete feature extraction, limited fine-grained classification accuracy, and high computational costs. To this end, we propose TrafficAudio, a novel encrypted traffic classification method based on audio representation. TrafficAudio comprises three modules: audio representation generation (ARG), audio feature extraction (AFE), and spatiotemporal traffic classification (STC). Specifically, the ARG module first represents raw network traffic as audio to preserve the temporal continuity of the traffic. Then, the audio is processed by the AFE module to compute low-dimensional Mel-frequency cepstral coefficients (MFCC), encoding both temporal and spectral characteristics. Finally, spatiotemporal features are extracted from the MFCC through a parallel architecture of one-dimensional convolutional neural network and bidirectional gated recurrent unit layers, enabling fine-grained traffic classification. Experiments on five public datasets across six classification tasks demonstrate that TrafficAudio consistently outperforms ten state-of-the-art baselines, achieving accuracies of 99.74%, 98.40%, 99.76%, 99.25%, 99.77%, and 99.74%. Furthermore, TrafficAudio significantly reduces computational complexity, achieving reductions of 86.88% in floating-point operations and 43.15% in model parameters over the best-performing baseline. | 10.1109/TNSM.2026.3651599 |
| Jack Wilkie, Hanan Hindy, Craig Michie, Christos Tachtatzis, James Irvine, Robert Atkinson | A Novel Contrastive Loss for Zero-Day Network Intrusion Detection | 2026 | Early Access | Contrastive learning Anomaly detection Training Autoencoders Training data Detectors Data models Vectors Telecommunication traffic Network intrusion detection Internet of Things Network Intrusion Detection Machine Learning Contrastive Learning | Machine learning has achieved state-of-the-art results in network intrusion detection; however, its performance significantly degrades when confronted by a new attack class, i.e., a zero-day attack. In simple terms, classical machine learning-based approaches are adept at identifying attack classes on which they have been previously trained, but struggle with those not included in their training data. One approach to addressing this shortcoming is to utilise anomaly detectors which train exclusively on benign data with the goal of generalising to all attack classes, both known and zero-day. However, this comes at the expense of a prohibitively high false positive rate. This work proposes a novel contrastive loss function which is able to maintain the advantages of other contrastive learning-based approaches (robustness to imbalanced data) but can also generalise to zero-day attacks. Unlike anomaly detectors, this model learns the distributions of benign traffic using both benign and known malign samples, i.e. other well-known attack classes (not including the zero-day class), and consequently, achieves significant performance improvements. The proposed approach is experimentally verified on the Lycos2017 dataset where it achieves an AUROC improvement of 0.000065 and 0.060883 over previous models in known and zero-day attack detection, respectively. Finally, the proposed method is extended to open-set recognition, achieving OpenAUC improvements of 0.170883 over existing approaches. The implementation and experiments are open-sourced and available at: https://github.com/jackwilkie/CLOSR | 10.1109/TNSM.2026.3652529 |
| Fekri Saleh, Abraham O. Fapojuwo, Diwakar Krishnamurthy | vEdge: Flow-based Network Slicing for Smart Cities in Edge Cloud Environments | 2026 | Early Access | | Smart city applications require diverse fifth generation network services with stringent performance and isolation requirements, necessitating scalable and efficient network slicing mechanisms. This paper proposes a novel framework for flow-based network slicing in edge cloud environments, termed virtual edge (vEdge). The framework leverages virtual medium access control addresses to identify flows at the data link layer (Layer 2), achieving robust flow-based slice isolation and efficient resource management. The proposed solution integrates a vEdge software module within the software defined networking controller to create, manage, and isolate network slices for both Third Generation Partnership Project (3GPP) and non-3GPP devices. By isolating traffic at Layer 2, the framework simplifies address matching and eliminates the computational overhead associated with deep packet inspection at upper layers (e.g., Layer 3/4 or Layer 7). The proposed vEdge further provides customizable flow-based network slices, each managed by a dedicated controller, offering self-contained virtual networks tailored to diverse applications within the smart city sector. Experimental evaluations demonstrate the efficacy of vEdge in enhancing network performance, achieving a 30% reduction in latency compared to flow-based network slicing that uses non-Layer 2 parameters to identify flows. | 10.1109/TNSM.2026.3656925 |
| Suyong Eum, Shin’ichi Arakawa, Masayuki Murata | Deterministic and Probabilistic Scheduling for Latency Guarantees in B5G/6G Network Management | 2026 | Early Access | | In the era of Beyond 5G (B5G) and 6G networks, ensuring efficient resource management and meeting stringent quality of service (QoS) requirements are crucial. This paper proposes the Deterministic and Probabilistic Scheduling for Latency Guarantees (DPSLG) algorithm, which provides Worst-Case Delay (WCD) guarantees, both deterministically and probabilistically, to support Ultra-Reliable Low-Latency Communication (URLLC) applications. Deterministic guarantees ensure strict delay bounds for mission-critical scenarios, while probabilistic guarantees offer flexibility by accommodating dynamic traffic conditions with controlled threshold violations. The proposed algorithm leverages the Lyapunov optimization framework for deterministic delay bounds in dynamic environments and integrates Extended Conformal Quantile Regression (ECQR) to enable probabilistic guarantees. This combination enhances reliability and adaptability under diverse traffic conditions. Furthermore, constraint mechanisms are incorporated to mitigate the impact of misbehaving users and improve overall system performance. This work significantly advances the management of radio resources in B5G and 6G networks by addressing key challenges related to latency and efficiency. It establishes a robust framework for optimizing scheduling mechanisms, paving the way for future innovations in managing next-generation networks to meet stringent performance and reliability demands. | 10.1109/TNSM.2026.3657735 |
| Xiujun Xu, Qi Wang, Qingshan Wang, Yinlong Xu | Contract-Based Incentive Mechanism for Long-term Participation in Federated Learning | 2026 | Early Access | | Federated learning (FL), as an emerging technique, offers the advantage of organizing multiple participants to learn together while avoiding leakage of their private information. Contract theory provides an effective incentive mechanism to encourage participants to participate in FL. Existing contract-based incentive mechanisms consider participants’ types but ignore the different contributions of participants within the same type during training. This paper first introduces a metric, reputation, to evaluate the contribution of participants in each iteration, and then proposes a hybrid contract mechanism consisting of a short-term contract and a long-term contract. Only participants with reputations higher than a pre-defined threshold can sign the long-term contract. We formulate the solution of the long-term contract mechanism as an optimization problem with constraints. We further simplify the constraints of the long-term contract optimization problem, and theoretically analyze the correctness of the simplification to greatly reduce its computational complexity. We prove that the model owner achieves more profit with the hybrid contract mechanism. Simulations with the MNIST dataset show that the long-term contract improves the model accuracy by at least 5% compared with the existing contracts. Furthermore, compared with the short-term contract, participants signing the long-term contract are granted more rewards. | 10.1109/TNSM.2026.3657419 |
| Shagufta Henna, Upaka Rathnayake | Hypergraph Representation Learning-Based xApp for Traffic Steering in 6G O-RAN Closed-Loop Control | 2026 | Early Access | Open RAN Resource management Ultra reliable low latency communication Throughput Heuristic algorithms Computer architecture Accuracy 6G mobile communication Seals Real-time systems Open Radio Access Network (O-RAN) Intelligent Traffic Steering Link Prediction for Traffic Management | This paper addresses the challenges in resource allocation within disaggregated Radio Access Networks (RAN), particularly when dealing with Ultra-Reliable Low-Latency Communications (uRLLC), enhanced Mobile Broadband (eMBB), and Massive Machine-Type Communications (mMTC). Traditional traffic steering methods often overlook individual user demands and dynamic network conditions, while multi-connectivity further complicates resource management. To improve traffic steering, we introduce Tri-GNN-Sketch, a novel graph-based deep learning approach employing Tri-subgraph sampling to enhance link prediction in Open RAN (O-RAN) environments. Link prediction refers to accurately forecasting optimal connections between users and network resources using current and historical measurements. Tri-GNN-Sketch is trained on real-world 4G/5G RAN monitoring data. The model demonstrates robust performance across multiple metrics, including precision, recall, F1 score, and ROC-AUC, effectively modeling interfering nodes for accurate traffic steering. We further propose Tri-HyperGNN-Sketch, which extends the approach to hypergraph modeling, capturing higher-order multi-node relationships. Using link-level simulations based on Channel Quality Indicator (CQI)-to-modulation mappings and LTE transport block size specifications, we evaluate throughput and packet delay for Tri-HyperGNN-Sketch. Tri-HyperGNN-Sketch achieves an exceptional link prediction accuracy of 99.99% and improved network-level performance, including higher effective throughput and lower packet delay compared to Tri-GNN-Sketch (95.1%) and other hypergraph-based models such as HyperSAGE (91.6%) and HyperGCN (92.31%) for traffic steering in complex O-RAN deployments. | 10.1109/TNSM.2026.3654534 |
| Apurba Adhikary, Avi Deb Raha, Yu Qiao, Md. Shirajum Munir, Mrityunjoy Gain, Zhu Han, Choong Seon Hong | Age of Sensing Empowered Holographic ISAC Framework for NextG Wireless Networks: A VAE and DRL Approach | 2026 | Early Access | Array signal processing Resource management Integrated sensing and communication Wireless networks Phased arrays Hardware Arrays Real-time systems Metamaterials 6G mobile communication Integrated sensing and communication age of sensing holographic MIMO deep reinforcement learning artificial intelligence framework | This paper proposes an AI framework that leverages integrated sensing and communication (ISAC), aided by the age of sensing (AoS), to ensure timely location updates of the users for a holographic MIMO (HMIMO)-assisted base station (BS)-enabled wireless network. The AI-driven framework aims to achieve optimized power allocation for efficient beamforming by activating the minimal number of grids from the HMIMO BS for serving the users. An optimization problem is formulated to maximize the sensing utility function, aiming to maximize the communication signal-to-interference-plus-noise ratio (SINRc) of the received signals and the beam-pattern gains to improve the sensing SINR of reflected echo signals, which in turn maximizes the achievable rate of users. A novel AI-driven framework is presented to tackle the formulated NP-hard problem by dividing it into two subproblems: a sensing problem and a power allocation problem. The sensing problem is solved by employing a variational autoencoder (VAE)-based mechanism that obtains the sensing information by leveraging AoS, which is used for the location update. Subsequently, a deep deterministic policy gradient-based deep reinforcement learning scheme is devised to allocate the desired power by activating the required grids based on the sensing information achieved with the VAE-based mechanism. Simulation results demonstrate the superior performance of the proposed AI framework compared to advantage actor-critic and deep Q-network-based methods, achieving a cumulative average SINRc improvement of 8.5 dB and 10.27 dB, and a cumulative average achievable rate improvement of 21.59 bps/Hz and 4.22 bps/Hz, respectively. Therefore, our proposed AI-driven framework guarantees efficient power allocation for holographic beamforming through ISAC schemes leveraging AoS. | 10.1109/TNSM.2026.3654889 |
| Jing Zhang, Chao Luo, Rui Shao | MTG-GAN: A Masked Temporal Graph Generative Adversarial Network for Cross-Domain System Log Anomaly Detection | 2026 | Early Access | Anomaly detection Adaptation models Generative adversarial networks Feature extraction Data models Load modeling Accuracy Robustness Contrastive learning Chaos Log Anomaly Detection Generative Adversarial Networks (GANs) Temporal Data Analysis | Anomaly detection of system logs is crucial for the service management of large-scale information systems. Nowadays, log anomaly detection faces two main challenges: 1) capturing evolving temporal dependencies between log events to adaptively handle emerging anomaly patterns, and 2) maintaining high detection capabilities across varying data distributions. Existing methods rely heavily on domain-specific data features, making it challenging to handle the heterogeneity and temporal dynamics of log data. This limitation restricts the deployment of anomaly detection systems in practical environments. In this article, a novel framework, Masked Temporal Graph Generative Adversarial Network (MTG-GAN), is proposed for both conventional and cross-domain log anomaly detection. The model enhances the detection capability for emerging abnormal patterns in system log data by introducing an adaptive masking mechanism that combines generative adversarial networks with graph contrastive learning. Additionally, MTG-GAN reduces dependency on specific data distributions and improves model generalization by using diffused graph adjacency information derived from the temporal relevance of event sequences, which helps improve cross-domain detection performance. Experimental results demonstrate that MTG-GAN outperforms existing methods on multiple real-world datasets in both conventional and cross-domain log anomaly detection. | 10.1109/TNSM.2026.3654642 |
| Ze Wei, Rongxi He, Chengzhi Song, Xiaojing Chen | Differentiated Offloading and Resource Allocation with Energy Anxiety Level Consideration in Heterogeneous Maritime Internet of Things | 2026 | Early Access | Internet of Things Resource management Carbon footprint Servers Reviews Packet loss Heterogeneous networks Green energy Delays Anxiety disorders Mobile Edge Computing Task Offloading Resource Allocation Carbon Footprint Minimization | The popularity of maritime activities not only exacerbates the carbon footprint (CF) but also places higher demands on Maritime Internet of Things (MIoTs) to support heterogeneous MIoT devices (MIoTDs) with different prioritized tasks. High-priority tasks can be processed cooperatively via local computation, offloading to nearby MIoTDs (helpers), or offloading to edge servers to ensure their timely and successful completion. Due to the differences in energy availability and rechargeability, MIoTDs exhibit distinct energy states, impacting their operational behaviors. We propose the Energy Anxiety Level (EAL) to quantify these states: a higher EAL tends to lead to increased packet dropping and earlier shutdown. Although low-EAL MIoTDs seem preferable as helpers, their scarce residual computational resources after local task completion may cause offloaded high-priority tasks to drop or time out. Therefore, helper selection should jointly consider candidate MIoTDs’ EALs and loads to evaluate their unsuitability. This paper addresses the problem of differentiated task offloading and resource allocation in MIoTs by formulating it as a mixed integer nonlinear programming model. The objective is to minimize the system-wide CF, packet loss, helper unsuitability risk, and high-priority task latency. To solve this complex problem, we decompose it into two subproblems. We then design algorithms to determine optimal offloading patterns, task partitioning factors, MIoTD transmission powers, and computation resource allocation for MIoTDs and edge servers. Simulation results demonstrate that our proposal outperforms benchmarks in reducing CF and EAL, lowering high-priority task latency, and improving task completion ratio. | 10.1109/TNSM.2026.3655385 |
| Xiaofeng Liu, Naigong Zheng, Fuliang Li | Don’t Let SDN Obsolete: Interpreting Software-Defined Networks with Network Calculus | 2026 | Early Access | Delays Calculus Analytical models Optimization Kernel Queueing analysis Table lookup Quality of service Mathematical models Data centers Software-Defined Networking network calculus delay analysis performance optimization | Although Software-Defined Network (SDN) has gained popularity in real-world deployments for its flexible management paradigm, its centralized control principle leads to various known performance issues. In this paper, we propose SDN-Mirror, a novel generalized delay analytical model based on network calculus, to interpret how the performance is affected and to illustrate how to accelerate the performance as well. We first elaborate on the impact of parameters on packet forwarding delay in SDN, including device capacity, flow features and cache size. Then, building upon the analysis, we establish SDN-Mirror, which acts like a mirror, capable of not only precisely representing the relation between packet forwarding delay and each parameter but also verifying the effectiveness of optimization policies. Finally, we evaluate SDN-Mirror by quantifying how each parameter affects the forwarding delay under different table matching states. We also verify a performance improvement policy with the optimized SDN-Mirror, and experimental results show that packet forwarding delays of kernel space matching flow, userspace matching flow and unmatched flow can be reduced by 39.8%, 20.7% and 13.2%, respectively. | 10.1109/TNSM.2026.3655704 |
| Xinshuo Wang, Lei Liu, Baihua Chen, Yifei Li | ENCC: Explicit Notification Congestion Control in RDMA | 2026 | Early Access | Bandwidth Data centers Heuristic algorithms Accuracy Throughput Hardware Switches Internet Convergence Artificial intelligence Congestion Control RDMA Programmable Switch FPGA | Congestion control (CC) is essential for achieving ultra-low latency, high bandwidth, and network stability in high-speed networks. However, modern high-performance RDMA networks, crucial for distributed applications, face significant performance degradation due to limitations of existing CC schemes. Most conventional approaches rely on congestion notification signals that must traverse the queuing data path before congestion signals can be sent back to the sender, causing delayed responses and severe performance collapse. This study proposes Explicit Notification Congestion Control (ENCC), a novel high-speed CC mechanism that achieves low latency, high throughput, and strong network stability. ENCC employs switches to directly notify the sender of precise link load information and avoid notification signal queuing. This allows precise sender-side rate control and queue regulation. ENCC also ensures fairness and easy deployment in hardware. We implement ENCC based on FPGA network interface cards and programmable switches. Evaluation results show that ENCC achieves substantial throughput improvements over representative baseline algorithms, with gains of up to 16.6× in representative scenarios, while incurring minimal additional latency. | 10.1109/TNSM.2026.3656015 |
| Awaneesh Kumar Yadav, An Braeken, Madhusanka Liyanage | A Provably Secure Lightweight Three-factor 5G-AKA Authentication Protocol relying on an Extendable Output Function | 2026 | Early Access | Authentication Protocols Security 5G mobile communication Internet of Things Protection Logic Formal verification Encryption Cryptography Authentication 5G-AKA Internet of Things (IoT) GNY logic ROR logic network security scyther tool | Compared to 4G, the designed authentication and key agreement protocol for 5G communication (5G-AKA) offers better security. The state of the art shows that various protocols identify flaws in 5G-AKA and suggest solutions, primarily for the desynchronization attack, the traceability attack, and perfect forward secrecy. However, most authentication protocols fail to withstand the stolen-device attack and are expensive; they also do not consider prominent security issues such as post-compromise security and non-repudiation. Considering the above demerits of these protocols and the necessity to offer additional security, a provably secure lightweight 5G-AKA multi-factor authentication protocol relying on an extendable output function is proposed. The security of the proposed work has been confirmed informally and formally (ROR logic, GNY logic, and the Scyther tool) to ensure that the proposed work handles all types of attacks and offers additional security features, such as post-compromise security and non-repudiation. Furthermore, we compute the performance of the proposed work and compare it with its counterparts to show that our work is less costly and more suitable for lightweight devices than others in terms of computational, communication, storage, and energy consumption costs. | 10.1109/TNSM.2026.3656167 |
| Qian Yang, Suoping Li, Jaafar Gaber, Sa Yang | An Optimal Matching Channel Selection Strategy Based on (K+1)-layer 3-D CTMC for Suppressing Spectrum Fragmentation in 5G/B5G Cognitive Radio Ad Hoc Networks | 2026 | Early Access | Copper Three-dimensional displays Cognitive radio Quality of service Games Analytical models Ad hoc networks Complexity theory System performance Solid modeling 5G/B5G cognitive radio ad hoc networks channel selection spectrum utilization 3-D CTMC | Dynamic spectrum access (DSA) is one of the pivotal technologies widely recognized as able to cope with the massive demand placed on limited spectrum resources by massive data in 5G/B5G networks. To address spectrum fragmentation and sharing in 5G/B5G cognitive radio ad hoc networks (CRAHNs), based on the DSA technique, this paper proposes an optimal matching channel selection strategy with a finite buffer (OMCS-FB). In the OMCS-FB, a cognitive user (CU) with a transmission request selects the channel whose idle time optimally matches its transmission time rather than selecting the channel with the longest idle time; if the CU fails to access the channel, the CU enters the buffer and waits for the next transmission opportunity. A (K+1)-layer continuous-time Markov chain (CTMC) with the number of primary users (PUs) and CUs in primary channels and the number of CUs in the buffer as 3-D metrics is established, which can effectively portray the activity behavior of users and the occupancy states of primary channels under the OMCS-FB. The CTMC steady-state rate equations are then solved using the successive over-relaxation (SOR) iterative algorithm to obtain the system steady-state probability distributions and performance metrics. The results show that the OMCS-FB effectively suppresses spectrum fragmentation of the MAC layer in the time dimension and enables efficient spectrum sharing among CUs and PUs, as verified by Monte Carlo simulation. | 10.1109/TNSM.2026.3656378 |
| Divya D Kulkarni, Manit Baser, Mohan Gurusamy | ARCANE: Adversarial Resilience and Adaptive Network Slicing for UAV-based MEC | 2026 | Early Access | Autonomous aerial vehicles Servers Power demand 5G mobile communication Resilience Network slicing Delays Resource management Artificial intelligence Trajectory 5G MEC provisioning UAV network ET-DQN SPLiT adversarial attacks | Network slicing and Multi-access Edge Computing (MEC) are pivotal elements of 5G communication technology, enabling diverse, low-latency services to distributed users. Unmanned Aerial Vehicles (UAVs) are increasingly being explored for temporarily delivering these services to remote locations, supporting surveillance in regions with restricted ground connectivity, urban traffic monitoring, and disaster relief. However, the resource constraints of UAVs demand efficient optimization strategies. While Artificial Intelligence (AI)-driven methods like Deep Reinforcement Learning (DRL) offer promising potential in optimizing service delays and minimizing power consumption with fewer UAVs, they remain vulnerable to adversarial attacks. This study evaluates two adversarial attacks against DRL baselines: a targeted service disruption attack that impacts the DRL environment to degrade decision-making and service quality, and an action bit-flipping attack that alters UAV selection, resulting in suboptimal provisioning. To address these vulnerabilities, we propose ARCANE, a resilient DRL-based multi-slice MEC framework for UAVs. ARCANE introduces the Exploratory-Thompson Deep-Q Network (ET-DQN), which leverages Thompson Sampling to effectively balance exploration and exploitation under adversarial conditions, optimizing UAV selection for MEC provisioning. Extensive experiments demonstrate that ARCANE outperforms baseline approaches, achieving ~4× faster mitigation of the environmental attack and ~2× quicker recovery from the attack on the actions. Moreover, we illustrate that ARCANE demonstrates strong resilience by effectively limiting the degradation in hovering time caused by the attacks. | 10.1109/TNSM.2026.3656271 |
| Marija Gajić, Marcin Bosk, Stanislav Lange, Thomas Zinner | QoE-Aware Transport Slicing Configuration: Improving Application Performance in Beyond-5G Networks | 2026 | Early Access | Quality of service Quality of experience Resource management 5G mobile communication Network slicing Throughput Bit rate Guidelines Optimization Mathematical models Beyond 5G networks QoE resource utilization buffer size QoS Flows network slicing | 5G and beyond networks provide connectivity for a variety of heterogeneous, often mission-critical services, placing stringent performance requirements on these systems. Providing satisfactory Quality of Experience (QoE) for diverse, coexisting applications prompts network operators to enforce application-aware, efficient resource allocation schemes that can improve user satisfaction, efficiency, and system utilization. For these purposes, QoS Flows and network slicing have been identified as key enablers. These concepts move away from economies of scale towards fine-grained slice and flow handling with customized resource control for each application, application type, or slice. This work is particularly focused on transport slicing, where the shift towards fine-grained resource control has important implications for how network resources are scaled and optimally allocated. These aspects have been largely ignored in the existing literature. Furthermore, while capacity has been recognized as a key resource, selecting the appropriate queue size, the granularity of the resource allocation scheme, and their relations with the number of clients are often neglected in the process of resource dimensioning. To address these shortcomings, we perform an in-depth evaluation of the effects that these factors have on the overall QoE and system utilization using the OMNeT++ simulator. We show the optimization potential for QoE and resource utilization, and further formulate guidelines for efficient and QoE-aware resource allocation. | 10.1109/TNSM.2026.3656605 |
| Samayveer Singh, Aruna Malik, Vikas Tyagi, Rajeev Kumar, Neeraj Kumar, Shakir Khan, Mohd Fazil | Dynamic Energy Management in Heterogeneous Sensor Networks Using Hippopotamus-Inspired Clustering | 2026 | Vol. 23, Issue | Wireless sensor networks Clustering algorithms Optimization Heuristic algorithms Routing Energy efficiency Protocols Scalability Genetic algorithms Batteries Internet of Things energy efficiency cluster head network-lifetime | The rapid expansion of smart technologies and IoT has made Wireless Sensor Networks (WSNs) essential for real-time applications such as industrial automation, environmental monitoring, and healthcare. Despite advances in sensor node technology, energy efficiency remains a key challenge due to the limited battery life of nodes, which often operate in remote environments. Effective clustering, where Cluster Heads (CHs) manage data aggregation and transmission, is crucial for optimizing energy use. Motivated by the above, in this paper we introduce a novel metaheuristic approach called Hippopotamus Optimization-Based Cluster Head Selection (HO-CHS), designed to enhance CH selection by dynamically considering factors such as residual energy, node location, and network topology. Inspired by natural behaviors, HO-CHS effectively balances energy loads, reduces communication distances, and boosts network scalability and reliability. The proposed scheme achieves a 35% increase in network lifetime and a 40% improvement in stability period in comparison to other existing schemes in the literature. Simulation results demonstrate that HO-CHS significantly reduces energy consumption and enhances data transmission efficiency, making it ideal for IoT-enabled consumer electronics networks requiring consistent performance and energy conservation. | 10.1109/TNSM.2025.3618766 |
| Dezhang Kong, Xiang Chen, Hang Lin, Zhengyan Zhou, Yi Shen, Hongyan Liu, Qiumei Cheng, Xuan Liu, Dong Zhang, Chunming Wu, Muhammad Khurram Khan | Toward Security-Enhanced In-Band Network Telemetry in Programmable Networks | 2026 | Vol. 23, Issue | Security Metadata Telemetry Control systems Monitoring Encryption Protection Pipelines Faces Complexity theory In-band network telemetry programmable network security attack | In-band Network Telemetry (INT) is a widely used monitoring framework in modern large-scale networks. It provides packet-level visibility into network conditions by inserting telemetry data into packets, enabling unprecedented fine-grained network management. However, this mechanism also introduces new vulnerabilities that malicious attackers can exploit. In this paper, we present eight In-band Network Telemetry Manipulation Attacks that take advantage of INT’s weaknesses, demonstrating that attackers can cause severe damage with little effort by manipulating INT packets. To address this issue, we designed SecureINT, a security-enhanced INT prototype that provides encryption and integrity verification for INT packets. Specifically, SecureINT deploys Even-Mansour and SipHash for confidentiality and integrity, respectively. It also uses a zero-delay rotation mechanism, which enables administrators to dynamically change the version of the deployed Even-Mansour/SipHash running on programmable switches without the need to re-install new programs. In this way, SecureINT can provide lasting security for INT packets using the limited resources of programmable switches. According to the experiments, SecureINT can be deployed on programmable switches using a single pipeline. Moreover, the overhead of the rotation mechanism running on the control plane remains minimal. | 10.1109/TNSM.2024.3504563 |
| Monolina Dutta, Anoop Thomas, B. Sundar Rajan | Novel Delivery Algorithms for Decentralized Multi-Access Coded Caching Systems | 2026 | Vol. 23, Issue | Prefetching Servers Indexes Encoding Topology Content distribution networks Cache memory Vectors Numerical models Network topology Coded caching content delivery networks decentralized caching index coding multi-access coded caching | In this paper, we propose a multi-access coded caching system under a decentralized setting tailored for Content Delivery Networks (CDNs). In this system, a central server hosts $N$ files, each of size $F$ bits, and serves $K\leq N$ users through a shared link. The network is equipped with $c$ caches, each with a capacity of $MF$ bits, distributed across the network, where each of the $K$ users is connected to a random set of $r\leq c$ caches. Initially, we consider a model where each cache subset is accessed by an equal number of users. We introduce a novel content delivery algorithm for the central server, which allows us to derive a closed-form expression for the per user transmission rate. Using techniques from index coding, we prove the optimality of the proposed delivery scheme. Additionally, we extend the model to propose a more general and novel framework by allowing each subset of caches to serve an arbitrary number of users, thereby greatly enhancing the system’s flexibility and applicability. We also propose a new delivery algorithm tailored to this generalized setting and demonstrate its optimality under specific user-to-cache association scenarios. Numerical results demonstrate that, in a specific scenario where the user-to-cache associations do not satisfy the optimality conditions, the proposed generalized scheme shows improvement over the order-optimal state-of-the-art decentralized multi-access coded caching scheme for small cache sizes. Specifically, when approximately 25% of the content is stored at every cache, the proposed scheme achieves up to a 20% reduction in the per user transmission rate. Considering that both schemes serve an equal number of users, the observed improvements indicate a potential reduction in server bandwidth requirements, lower latency, and enhanced energy efficiency during content delivery. | 10.1109/TNSM.2025.3629715 |
| Haftay Gebreslasie Abreha, Ilora Maity, Youssouf Drif, Christos Politis, Symeon Chatzinotas | Revenue-Aware Seamless Content Distribution in Satellite-Terrestrial Integrated Networks | 2026 | Vol. 23, Issue | Satellites Topology User experience Network topology Delays Real-time systems Optimization Low earth orbit satellites Collaboration Servers Satellite edge computing (SEC) content caching content distribution dynamic ad insertion | With the surging demand for data-intensive applications, ensuring seamless content delivery in Satellite-Terrestrial Integrated Networks (STINs) is crucial, especially for remote users. Dynamic Ad Insertion (DAI) enhances monetization and user experience, while Mobile Edge Computing (MEC) in STINs enables distributed content caching and ad insertion. However, satellite mobility and time-varying topologies cause service disruptions, while excessive or poorly placed ads risk user disengagement, impacting revenue. This paper proposes a novel framework that jointly addresses three challenges: (i) service continuity- and topology-aware content caching to adapt to STIN dynamics, (ii) Distributed DAI (D-DAI) that minimizes feeder link load and storage overhead by avoiding redundant ad-variant content storage through distributed ad stitching, and (iii) revenue-aware content distribution that explicitly models user disengagement due to ad overload to balance monetization and user satisfaction. We formulate the problem as two hierarchical Integer Linear Programming (ILP) optimizations: one optimizing content caching to maximize the cache hit rate and another optimizing content distribution with DAI to maximize revenue, minimize end-user costs, and enhance user experience. We develop greedy algorithms for fast initialization and a Binary Particle Swarm Optimization (BPSO)-based strategy for enhanced performance. Simulation results demonstrate that the proposed approach achieves over a 4.5% increase in revenue and reduces cache retrieval delay by more than 39% compared to the benchmark algorithms. | 10.1109/TNSM.2025.3629810 |
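
As a rough illustration of the audio-representation idea behind the TrafficAudio entry above (DOI 10.1109/TNSM.2026.3651599), the following Python sketch treats a raw packet byte sequence as a one-dimensional waveform and extracts MFCC features with librosa. This is not the paper's implementation; the sample rate, frame sizes, and number of coefficients are illustrative assumptions only.

```python
# Hedged sketch: map packet bytes to a waveform and compute MFCCs.
# All parameter choices (sr, n_mfcc, n_fft, hop_length, n_mels) are
# assumptions for illustration, not values from the TrafficAudio paper.
import numpy as np
import librosa

def bytes_to_mfcc(payload: bytes, sr: int = 8000, n_mfcc: int = 13) -> np.ndarray:
    """Normalize byte values to [-1, 1] and return an (n_mfcc, n_frames) MFCC matrix."""
    signal = np.frombuffer(payload, dtype=np.uint8).astype(np.float32)
    signal = (signal - 127.5) / 127.5          # zero-centred "audio" samples
    return librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc,
                                n_fft=256, hop_length=64, n_mels=40)

if __name__ == "__main__":
    fake_packet = bytes(np.random.randint(0, 256, size=1500, dtype=np.uint8))
    print(bytes_to_mfcc(fake_packet).shape)    # e.g. (13, 24) for a 1500-byte packet
```

The resulting matrix plays the role of the low-dimensional time-frequency representation that a downstream classifier (e.g., a 1-D CNN plus BiGRU, as described in the abstract) could consume.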
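Similarly, for the OMCS-FB entry above (DOI 10.1109/TNSM.2026.3656378), which solves the steady-state equations of a continuous-time Markov chain with successive over-relaxation (SOR), the minimal Python sketch below applies an SOR sweep to a toy 3-state generator matrix. The generator matrix and relaxation factor are made up for illustration and are unrelated to the paper's (K+1)-layer 3-D CTMC.

```python
# Hedged sketch: solve pi * Q = 0 with sum(pi) = 1 for a small CTMC using SOR.
# The toy generator Q and the relaxation factor omega are illustrative assumptions only.
import numpy as np

def ctmc_steady_state_sor(Q, omega=1.2, tol=1e-10, max_iter=10000):
    """SOR sweep on the balance equations pi_j = sum_{i != j} pi_i Q[i, j] / (-Q[j, j])."""
    n = Q.shape[0]
    pi = np.full(n, 1.0 / n)                          # uniform initial guess
    for _ in range(max_iter):
        pi_old = pi.copy()
        for j in range(n):
            # Inflow into state j, using the newest estimates (Gauss-Seidel style).
            inflow = pi @ Q[:, j] - pi[j] * Q[j, j]
            pi[j] = (1.0 - omega) * pi[j] + omega * inflow / (-Q[j, j])
        pi /= pi.sum()                                # enforce the normalization constraint
        if np.max(np.abs(pi - pi_old)) < tol:
            break
    return pi

if __name__ == "__main__":
    # Toy 3-state generator: rows sum to zero, off-diagonal entries are transition rates.
    Q = np.array([[-3.0,  2.0,  1.0],
                  [ 1.0, -3.0,  2.0],
                  [ 1.0,  1.0, -2.0]])
    pi = ctmc_steady_state_sor(Q)
    print(pi, np.allclose(pi @ Q, 0.0, atol=1e-6))    # expected pi ≈ [0.25, 0.3125, 0.4375]
```

In practice, the same sweep scales to the large, sparse generators that arise from multi-dimensional state spaces, which is what makes SOR a common choice for this kind of queueing-model analysis.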