Last updated: 2024-10-22 03:01 UTC
All documents
Number of pages: 128
Author(s) | Title | Year | Publication | Keywords | Abstract | DOI
---|---|---|---|---|---|---
Ru Huo, Xiangfeng Cheng, Chuang Sun, Tao Huang | A Cluster-Based Data Transmission Strategy for Blockchain Network in the Industrial Internet of Things | 2024 | Early Access | Blockchains Industrial Internet of Things Edge computing Data communication Computer architecture Topology Cloud computing Industrial Internet of Things (IIoT) blockchain edge computing clustering data transmission strategy | The proliferation of devices and data in the Industrial Internet of Things (IIoT) has rendered the traditional centralized cloud model unable to meet the stringent wide-scale and low-latency requirements of these IIoT scenarios. As an emerging technology, edge computing enables real-time processing and analysis on devices situated closer to the data source while reducing bandwidth requirements. Blockchain, being decentralized, could enhance data security. Therefore, edge computing and blockchain are integrated in the IIoT to reduce latency and improve security. However, the inefficient data transmission of blockchain leads to increased transmission latency in the IIoT. To address this issue, we propose a cluster-based data transmission strategy (CDTS) for the blockchain network. Initially, an improved weighted label propagation algorithm (WLPA) is proposed for clustering blockchain nodes. Subsequently, a spanning tree topology construction (STTC) is designed to simplify the blockchain network topology based on the node clustering results. Additionally, leveraging the clustered nodes and tree topology, we propose a data transmission strategy to speed up data transmission. Simulation experiments show that CDTS effectively reduces data transmission time and better supports large-scale IIoT scenarios. | 10.1109/TNSM.2024.3387120 |
Pratyush Dikshit, Mike Kosek, Nils Faulhaber, Jayasree Sengupta, Vaibhav Bajpai | Evaluating DNS Resiliency and Responsiveness With Truncation, Fragmentation & DoTCP Fallback | 2024 | Early Access | Domain Name System Resilience Probes Internet Time factors Servers IP networks DNS DNS-over-TCP DNS-over-UDP Response Time Failure Rate EDNS(0) | Since its introduction in 1987, the DNS has become one of the core components of the Internet. While it was designed to work with both TCP and UDP, DNS-over-UDP (DoUDP) has become the default option due to its low overhead. As new Resource Records were introduced, the sizes of DNS responses increased considerably. This expansion of the message body has led to more frequent truncation and IP fragmentation in recent years; large UDP responses also make DNS an easy vector for amplifying denial-of-service attacks, which can reduce the resiliency of DNS services. This paper investigates the resiliency, responsiveness, and usage of DoTCP and DoUDP over IPv4 and IPv6 for 10 widely used public DNS resolvers. The paper specifically measures the resiliency of the DNS infrastructure in the age of increasing DNS response sizes that lead to truncation and fragmentation. Our results offer key insights into the management of robust and reliable DNS network services. While DNS Flag Day 2020 recommends a buffer size of 1232 bytes, we find that 3/10 resolvers mainly announce very large EDNS(0) buffer sizes both from the edge as well as from the core, which potentially causes fragmentation. In reaction to large response sizes from authoritative name servers, we find that resolvers do not fall back to DoTCP in many cases, bearing the risk of fragmented responses. As message sizes in the DNS are expected to grow further, this problem will become more urgent in the future. This paper presents the key results (particularly as a consequence of DNS Flag Day 2020), which may help network service providers make informed choices to better manage their critical DNS services. | 10.1109/TNSM.2024.3365303 |
Bing Shi, Zhifeng Chen, Zhuohan Xu | A Deep Reinforcement Learning Based Approach for Optimizing Trajectory and Frequency in Energy Constrained Multi-UAV Assisted MEC System | 2024 | Early Access | Autonomous aerial vehicles Task analysis Trajectory Optimization Servers Computer architecture Computational modeling Mobile Edge Computing Unmanned Aerial Vehicle Multi-Agent Deep Reinforcement Learning | Mobile Edge Computing (MEC) is a technology that shows great promise in enhancing the computational power of smart devices (SDs) in the Internet of Things (IoT). However, the fixed location and limited coverage of MEC servers constrain their performance. To overcome this issue, this paper explores a multiple unmanned aerial vehicle (UAV) assisted MEC system. The proposed system considers a scenario where multiple UAVs work together to provide computing services while dynamically adjusting their frequency based on the task size, under the constraint of limited energy. This paper aims to maximize computation bits, SDs’ fairness, and UAVs’ load balancing in multi-UAV assisted MEC system by jointly optimizing the trajectory and frequency. To address this challenge, we model it as a Partially Observable Markov Decision Process and propose a joint optimization strategy based on multi-agent deep reinforcement learning. The effectiveness of the proposed strategy is evaluated on both synthetic and realistic datasets. The results demonstrate that our strategy outperforms other benchmark strategies. | 10.1109/TNSM.2024.3362949 |
Hao Xu, Harry Chang, Kun Qiu, Yang Hong, Wenjun Zhu, Xiang Wang, Baoqian Li, Jin Zhao | Accelerating Deep Packet Inspection With SIMD-Based Multi-Literal Matching Engine | 2024 | Early Access | Engines Software algorithms Inspection Telecommunication traffic Software Payloads Network security network security DPI SIMD parallel computing | Deep Packet Inspection (DPI) has been one of the most significant network security techniques. It is widely used to identify and classify network traffic in various applications such as web application firewalls and intrusion detection. Different from traditional packet filtering that only examines packet headers, DPI also inspects payloads by comparing them with an existing signature database. The literal matching engine, which plays a key role in DPI, is the primary determinant of system performance. FDR, an engine that uses 3 SIMD operations to match 1 character against multiple literals, is currently one of the fastest literal matching engines. However, FDR suffers a significant performance drop-off on small-scale literal rule sets, which account for more than 90% of the rule sets in modern databases. In this paper, we design Teddy, an engine that is highly optimized for small-scale literal rule sets. Compared with FDR, Teddy significantly improves matching efficiency through a novel shift-or matching algorithm that can simultaneously match up to 64 characters with only 15 SIMD operations. We evaluate Teddy with real-world traffic and rule sets. Experimental results show that its performance is up to 43.07x that of Aho-Corasick (AC) and 2.17x that of FDR. Teddy has been integrated into Hyperscan, with which it is widely deployed in popular DPI applications such as Snort and Suricata. | 10.1109/TNSM.2024.3354985 |
Chang-Lin Chen, Hanhan Zhou, Jiayu Chen, Mohammad Pedramfar, Tian Lan, Zheqing Zhu, Chi Zhou, Pol Mauri Ruiz, Neeraj Kumar, Hongbo Dong, Vaneet Aggarwal | Learning-Based Two-Tiered Online Optimization of Region-Wide Datacenter Resource Allocation | 2024 | Early Access | Cloud Computing Capacity Reservation Deep Reinforcement Learning Explainable Reinforcement Learning | Online optimization of resource management for large-scale data centers and infrastructures to meet dynamic capacity reservation demands and various practical constraints (e.g., feasibility and robustness) is a very challenging problem. Mixed Integer Programming (MIP) approaches suffer from recognized limitations in such a dynamic environment, while learning-based approaches may face prohibitively large state/action spaces. To this end, this paper presents a novel two-tiered online optimization to enable a learning-based Resource Allowance System (RAS). To solve optimal server-to-reservation assignment in RAS in an online fashion, the proposed solution leverages a reinforcement learning (RL) agent to make high-level decisions, e.g., how much resource to select from the Main Switch Boards (MSBs), and then a low-level Mixed Integer Linear Programming (MILP) solver to generate the local server-to-reservation mapping, conditioned on the RL decisions. We take into account fault tolerance, server movement minimization, and network affinity requirements and apply the proposed solution to large-scale RAS problems. To provide interpretability, we further train a decision tree model to explain the learned policies and to prune unreasonable corner cases at the low-level MILP solver, resulting in further performance improvement. Extensive evaluations show that our two-tiered solution outperforms baselines such as a pure MIP solver by over 15% while delivering 100× speedup in computation. | 10.1109/TNSM.2024.3484213 |
Bing Tang, Zhikang Wu, Wei Xu, Buqing Cao, Mingdong Tang, Qing Yang | TP-MDU: A Two-Phase Microservice Deployment Based on Minimal Deployment Unit in Edge Computing Environment | 2024 | Early Access | Microservice architectures Optimization Quality of service Dynamic scheduling Servers Reinforcement learning Resource management Cloud computing Time factors Load modeling mobile edge computing microservices minimal deployment unit two-phase deployment reinforcement learning | In the mobile edge computing (MEC) environment, effective microservice deployment significantly reduces vendor costs and minimizes application latency. However, the existing literature overlooks the impact of dynamic characteristics such as the frequency of user requests and geographical location, and lacks in-depth consideration of the types of microservices and their interaction frequencies. To address these issues, we propose TP-MDU, a novel two-phase deployment framework for microservices. This framework is designed to learn users’ dynamic behaviors and introduces, for the first time, a minimal deployment unit. Initially, TP-MDU generates minimal deployment units online, tailored to the types of microservices and their interaction frequencies. In the initial deployment phase, aiming for load balancing, it employs a simulated annealing algorithm to achieve a superior deployment plan. During the optimization scheduling phase, it utilizes reinforcement learning algorithms and introduces dynamic information and new optimization objectives. Previous deployment plans serve as the initial state for policy learning, thus facilitating more optimal deployment decisions. This paper evaluates the performance of TP-MDU using a real dataset from Australia’s EUA and some related synthetic data. The experimental results indicate that TP-MDU outperforms other representative algorithms in performance. | 10.1109/TNSM.2024.3483634 |
Lifan Mei, Jinrui Gou, Jingrui Yang, Yujin Cai, Yong Liu | On Routing Optimization in Networks With Embedded Computational Services | 2024 | Early Access | Routing Computational modeling Delays Optimization Heuristic algorithms Servers Load modeling Resilience Performance evaluation Resource management Routing Edge Computing In-Network Computation Network Function Virtualization | Modern communication networks are increasingly equipped with in-network computational capabilities and services. Routing in such networks is significantly more complicated than the traditional routing. A legitimate route for a flow not only needs to have enough communication and computation resources, but also has to conform to various application-specific routing constraints. This paper presents a comprehensive study on routing optimization problems in networks with embedded computational services. We develop a set of routing optimization models and derive low-complexity heuristic routing algorithms for diverse computation scenarios. For dynamic demands, we also develop an online routing algorithm with performance guarantees. Through evaluations over emerging applications on real topologies, we demonstrate that our models can be flexibly customized to meet the diverse routing requirements of different computation applications. Our proposed heuristic algorithms significantly outperform baseline algorithms and can achieve close-to-optimal performance in various scenarios. | 10.1109/TNSM.2024.3483088 |
Jun Tang, Bing Guo, Yan Shen, Sahil Garg, Georges Kaddoum, M. Shamim Hossain | A Data Completion Algorithm Based on Low-Rank Prior Knowledge for Data-Driven Applications | 2024 | Early Access | Tensors Accuracy Matrix decomposition Noise reduction Consumer electronics Deep learning Computational modeling Proposals Data mining Training Tensor Ring Completion Internet of Things Data Recovery Low-rank Prior Knowledge | Low-rank tensor-ring-based data recovery algorithms have been widely used in data-driven consumer electronics to recover missing data entries in the data-collection pre-processing stage, providing stable and reliable service. However, traditional recovery methods often fail to utilize the abundant prior knowledge of the data and its non-local self-similarity, and thus fail to effectively capture the spatial relationships within high-dimensional data needed to recover it accurately. To address these problems, we present a novel Non-local Self-similarity and Low-rank Prior Knowledge based tensor ring completion method. Firstly, we incorporate the BM3D denoising operator within a Plug-and-Play framework to exploit the self-similarity in the data. Then a logarithmic determinant function is integrated to distinguish singular values in the cyclic unfolding matrix of the tensor, and a tensor ring completion approach based on weighted nuclear norms is adopted. Finally, to evaluate the effectiveness of our proposed method, we conducted a series of experiments on an image dataset and a traffic dataset with missing entries; the experimental results show that our method achieves the highest data recovery accuracy. | 10.1109/TNSM.2024.3483013 |
Yujie Zhao, Tao Peng, Yichen Guo, Yijing Niu, Wenbo Wang | An Intelligent Scheme for Energy-Efficient Uplink Resource Allocation With QoS Constraints in 6G Networks | 2024 | Early Access | Interference Quality of service Optimization Resource management Femtocells Energy efficiency 6G mobile communication Training Accuracy Complexity theory Energy efficiency resource allocation quality-of-service reinforcement learning fractional programming | In sixth-generation (6G) networks, the dense deployment of femtocells will result in significant co-channel interference. However, current studies encounter difficulties in obtaining precise interference information, which poses a challenge in improving the performance of the resource allocation (RA) strategy. This paper proposes an intelligent scheme aimed at achieving energy-efficient RA in uplink scenarios with unknown interference. Firstly, a novel interference-inference-based RA (IIBRA) framework is proposed to support this scheme. In the framework, the interference relationship between users is precisely modeled by processing the historical operation data of the network. Based on the modeled interference relationship, accurate performance feedback to the RA algorithm is provided. Secondly, a joint double deep Q-network and optimization RA (DORA) algorithm is developed, which decomposes the joint allocation problem into two parts: resource block assignment and power allocation. The two parts continuously interact throughout the allocation process, leading to improved solutions. Thirdly, a new metric called effective energy efficiency (EEE) is provided, which is defined as the product of energy efficiency and average user satisfaction with quality of service (QoS). EEE is used to help train the neural networks, resulting in a superior level of user QoS satisfaction. Numerical results demonstrate that the DORA algorithm achieves a clear enhancement in interference efficiency, surpassing well-known existing algorithms with a maximum improvement of over 50%. Additionally, it achieves a maximum EEE improvement exceeding 25%. | 10.1109/TNSM.2024.3482549 |
Yahuza Bello, Ahmed Refaey Hussein | Dynamic Policy Decision/Enforcement Security Zoning Through Stochastic Games and Meta Learning | 2024 | Early Access | Security Games Stochastic processes Next generation networking Zero Trust Metalearning NIST Reinforcement learning Heuristic algorithms Cyberattack Reinforcement learning dynamic policy stochastic games security zero trust core network entities zoning strategy zero trust architecture | Securing Next Generation Networks (NGNs) remains a prominent topic of discussion in academia and industry alike, driven by the rapid evolution of cyber attacks. As these attacks become increasingly complex and dynamic, it is crucial to develop sophisticated security strategies with automated dynamic policy enforcement. In this paper, we propose a security strategy based on the zero-trust model, incorporating dynamic policy decisions through the utilization of stochastic games and Reinforcement Learning (RL). Our approach involves the development of an attack and defense strategy evolution model, specifically tailored to combat cyber attacks in NGNs. To achieve this, we employ RL techniques to update and adapt dynamic policies. To train the agents, we utilize the Generalized Proximal Policy Optimization with sample reuse (GePPO) algorithm, including its modified version, GePPO-ML, which incorporates meta-learning to initialize the agent’s policy and parameters. Additionally, we employ the Sample Dropout PPO with meta-learning (SDPPO-ML), a modified version of the SD-PPO algorithm, to train the agents. To evaluate the performance of these algorithms, we conduct a comparative analysis against the REINFORCE and PPO algorithms. The results illustrate the superior performance of both GePPO-ML and SDPPO-ML when compared to these baseline algorithms, with GePPO-ML exhibiting the best performance. | 10.1109/TNSM.2024.3481662 |
Endri Goshi, Fidan Mehmeti, Thomas F. La Porta, Wolfgang Kellerer | Modeling and Analysis of mMTC Traffic in 5G Core Networks | 2024 | Early Access | Traffic control 5G mobile communication Planning Predictive models Ultra reliable low latency communication Radio access networks Computer architecture Communication networks Time-frequency analysis Temperature measurement Traffic characteristics 5G mMTC RAN Core Network | Massive Machine-Type Communications (mMTC) are one of the three main use cases powered by 5G and beyond networks. These are distinguished by the need to serve a large number of devices which are characterized by non-intensive traffic and low energy consumption. While the sporadic nature of mMTC traffic does not by itself burden the efficient operation of the network, multiplexing the traffic from a large number of these devices within the cell certainly does. This traffic from the Base Station (BS) is then transported further towards the Core Network (CN), where it is combined with the traffic from other BSs. Therefore, carefully planning the network resources, both in the Radio Access Network (RAN) and the CN, for this type of traffic is of paramount importance. To do this, the statistics of the traffic pattern that arrives at the BS and the CN should be known. To this end, in this paper, we first derive the distribution of the inter-arrival times of the traffic at the BS from a general number of mMTC users within the cell, assuming a generic distribution of the traffic pattern of individual users. Then, using this result, we derive the distribution of the traffic pattern at the CN. Further, we validate our results on traces for channel conditions and by performing measurements in our testbed. Results show that adding more mMTC users in the cell and more BSs in the network in the long term does not increase the variability of the traffic pattern at the BS and at the CN. Furthermore, this arrival process at all points of interest in the network is shown to be Poisson, both for homogeneous and heterogeneous traffic. However, the empirical observations show that a very large number of packets is needed for this process to converge, and this number of packets increases with the number of users and/or BSs. | 10.1109/TNSM.2024.3481240 |
Niloy Saha, Nashid Shahriar, Muhammad Sulaiman, Noura Limam, Raouf Boutaba, Aladdin Saleh | Monarch: Monitoring Architecture for 5G and Beyond Network Slices | 2024 | Early Access | Monitoring 5G mobile communication Computer architecture Accuracy Network slicing Scalability Data mining Containers Adaptive systems 3GPP 5G Network Slicing KPI Monitoring Open5GS | Data-driven algorithms play a pivotal role in the automated orchestration and management of network slices in 5G and beyond networks; however, their efficacy hinges on the timely and accurate monitoring of the network and its components. To support 5G slicing, monitoring must be comprehensive and encompass network slices end-to-end (E2E). Yet, several challenges arise with E2E network slice monitoring. Firstly, existing solutions are piecemeal and cannot correlate network-wide data from multiple sources (e.g., different network segments). Secondly, different slices can have different requirements regarding Key Performance Indicators (KPIs) and monitoring granularity, which necessitates dynamic adjustments in both KPI monitoring and data collection rates in real-time to minimize network resource overhead. To address these challenges, in this paper, we present Monarch, a scalable monitoring architecture for 5G. Monarch is designed for cloud-native 5G deployments and focuses on network slice monitoring and per-slice KPI computation. We validate the proposed architecture by implementing Monarch on a 5G network slice testbed with up to 50 network slices. We exemplify Monarch’s role in 5G network monitoring by showcasing two scenarios: monitoring KPIs at both slice and network function levels. Our evaluations demonstrate Monarch’s scalability, with the architecture adeptly handling varying numbers of slices while maintaining consistent ingestion times between 2.25 and 2.75 ms. Furthermore, we showcase the effectiveness of Monarch’s adaptive monitoring mechanism, exemplified by a simple heuristic, on a real-world 5G dataset. The adaptive monitoring mechanism significantly reduces the overhead of network slice monitoring by up to 76% while ensuring acceptable accuracy. | 10.1109/TNSM.2024.3479246 |
Krishna Pal Thakur, Basabdatta Palit | A QoS-Aware Uplink Spectrum and Power Allocation With Link Adaptation for Vehicular Communications in 5G Networks | 2024 | Early Access | Resource management 5G mobile communication Quality of service Interference Delays Bandwidth Uplink Vehicular ad hoc networks Power control Modulation Resource Allocation Vehicle-to-Vehicle C-V2X 5G Link Adaptation 28GHz Hungarian Multi-Numerology | In this work, we have proposed link adaptation-based spectrum and power allocation algorithms for the uplink communication in 5G Cellular Vehicle-to-Everything (C-V2X) systems. In C-V2X, vehicle-to-vehicle (V2V) users share radio resources with vehicle-to-infrastructure (V2I) users. Existing works primarily focus on the optimal pairing of V2V and V2I users, assuming that each V2I user needs a single resource block (RB) while minimizing interference through power allocation. In contrast, in this work, we have considered that the number of RBs needed by the users is a function of their channel condition and Quality of Service (QoS), a method called link adaptation. It effectively compensates for the frequent channel quality fluctuations at the high frequencies of 5G communication. 5G uses a multi-numerology frame structure to support diverse QoS requirements, which has also been considered in this work. The first algorithm proposed in this article greedily allocates RBs to V2I users using link adaptation. It then uses the Hungarian algorithm to pair V2V with V2I users while minimizing interference through power allocation. The second proposed method groups RBs into resource chunks (RCs) and uses the Hungarian algorithm twice: first to allocate RCs to V2I users, and then to pair V2I users with V2V users. Extensive simulations reveal that link adaptation increases the number of satisfied V2I users and their sum rate while also improving the QoS of V2I and V2V users, making it indispensable for 5G C-V2X systems. | 10.1109/TNSM.2024.3479870 |
Yu-Zhen Janice Chen, Daniel S. Menasché, Don Towsley | On Collaboration in Distributed Parameter Estimation With Resource Constraints | 2024 | Early Access | Collaboration Estimation Data collection Correlation Distributed databases Parameter estimation Optimization Vectors Wireless sensor networks Resource management Distributed Parameter Estimation Sequential Estimation Sensor Selection Vertically Partitioned Data Fisher Information Multi-Armed Bandit (MAB) Kalman Filter | Effective resource allocation in sensor networks, IoT systems, and distributed computing is essential for applications such as environmental monitoring, surveillance, and smart infrastructure. Sensors or agents must optimize their resource allocation to maximize the accuracy of parameter estimation. In this work, we consider a group of sensors or agents, each sampling from a different variable of a multivariate Gaussian distribution and having a different estimation objective. We formulate a sensor or agent’s data collection and collaboration policy design problem as a Fisher information maximization (or Cramer-Rao bound minimization) problem. This formulation captures a novel trade-off in energy use, between locally collecting univariate samples and collaborating to produce multivariate samples. When knowledge of the correlation between variables is available, we analytically identify two cases: (1) where the optimal data collection policy entails investing resources to transfer information for collaborative sampling, and (2) where knowledge of the correlation between samples cannot enhance estimation efficiency. When knowledge of certain correlations is unavailable, but collaboration remains potentially beneficial, we propose novel approaches that apply multi-armed bandit algorithms to learn the optimal data collection and collaboration policy in our sequential distributed parameter estimation problem. We illustrate the effectiveness of the proposed algorithms, DOUBLE-F, DOUBLE-Z, UCB-F, UCB-Z, through simulation. | 10.1109/TNSM.2024.3468997 |
Roberto G. Pacheco, Divya J. Bajpai, Mark Shifrin, Rodrigo S. Couto, Daniel S. Menasché, Manjesh K. Hanawal, Miguel Elias M. Campista | UCBEE: A Multi Armed Bandit Approach for Early-Exit in Neural Networks | 2024 | Early Access | Image classification Image edge detection Distortion Accuracy Performance evaluation Classification algorithms Delays Proposals Neural networks Natural language processing Multi Armed Bandits Early-Exit Natural Language Processing Image Classification | Deep Neural Networks (DNNs) have demonstrated exceptional performance in diverse tasks. However, deploying DNNs on resource-constrained devices presents challenges due to energy consumption and delay overheads. To mitigate these issues, early-exit DNNs (EE-DNNs) incorporate exit branches within intermediate layers to enable early inferences. These branches estimate prediction confidence and employ a fixed threshold to determine early termination. Nonetheless, fixed thresholds yield suboptimal performance in dynamic contexts, where context refers to distortions caused by environmental conditions, in image classification, or variations in input distribution due to concept drift, in NLP. In this article, we introduce Upper Confidence Bound in EE-DNNs (UCBEE), an online algorithm that dynamically adjusts early exit thresholds based on context. UCBEE leverages confidence levels at intermediate layers and learns without the need for true labels. Through extensive experiments in image classification and NLP, we demonstrate that UCBEE achieves logarithmic regret, converging after just a few thousand observations across multiple contexts. We evaluate UCBEE for image classification and text mining. In the latter, we show that UCBEE can reduce cumulative regret and lower latency by approximately 10%–20% without compromising accuracy when compared to fixed threshold alternatives. Our findings highlight UCBEE as an effective method for enhancing EE-DNN efficiency. | 10.1109/TNSM.2024.3479076 |
Qianwei Meng, Qingjun Yuan, Weina Niu, Yongjuan Wang, Siqi Lu, Guangsong Li, Xiangbin Wang, Wenqi He | IIT: Accurate Decentralized Application Identification Through Mining Intra- and Inter-Flow Relationships | 2024 | Early Access | Decentralized applications Cryptography Feature extraction Fingerprint recognition Mobile applications Convolutional neural networks Radio frequency Accuracy Transformers Adaptation models Decentralized applications encrypted traffic blockchain transformer deep learning | Identifying Decentralized Applications (DApps) from encrypted network traffic plays an important role in areas such as network management and threat detection. However, DApps deployed on the same platform use the same encryption settings, resulting in DApps generating encrypted traffic with great similarity. In addition, existing flow-based methods only consider each flow as an isolated individual and feed it sequentially into the neural network for feature extraction, ignoring other rich information introduced between flows, and therefore the relationship between different flows is not effectively utilized. In this study, we propose a novel encrypted traffic classification model IIT to heterogeneously mine the potential features of intra- and inter-flows, which contains two types of encoders based on the multi-head self-attention mechanism. By combining the complementary intra- and inter-flow perspectives, the entire process of information flow can be more completely understood and described. IIT provides a more complete perspective on network flows, with the intra-flow perspective focusing on information transfer between different packets within a flow, and the inter-flow perspective placing more emphasis on information interaction between different flows. We captured 44 classes of DApps in the real world and evaluated the IIT model on two datasets, including DApps and malicious traffic classification tasks. The results demonstrate that the IIT model achieves a classification accuracy of greater than 97% on the real-world dataset of 44 DApps, outperforming other state-of-the-art methods. In addition, the IIT model exhibits good generalization in the malicious traffic classification task. | 10.1109/TNSM.2024.3479150 |
Hiba Hojeij, Mahdi Sharara, Sahar Hoteit, Véronique Vèque | On Flexible Placement of O-CU and O-DU Functionalities in Open-RAN Architecture | 2024 | Early Access | Open RAN Cloud computing Computer architecture Costs Solid modeling Servers Resource management Admittance Delays Biological system modeling Open RAN Resource Allocation Operations Research Simulation Deep Learning RNN | Open Radio Access Network (O-RAN) has recently emerged as a new trend for mobile network architecture. It is based on four founding principles: disaggregation, intelligence, virtualization, and open interfaces. In particular, RAN disaggregation involves dividing base station virtualized networking functions (VNFs) into three distinct components -the Open-Central Unit (O-CU), the Open-Distributed Unit (O-DU), and the Open-Radio Unit (O-RU) -enabling each component to be implemented independently. Such disaggregation improves system performance and allows rapid and open innovation in many components while ensuring multi-vendor operability. As the disaggregation of network architecture becomes a key enabler of O-RAN, the deployment scenarios of VNFs on O-RAN clouds become critical. In this context, we propose an optimal and dynamic placement scheme of the O-CU and O-DU functionalities on the edge or in regional O-clouds. The objective is to maximize users’ admittance ratio by considering mid-haul delay and server capacity requirements. We develop an Integer Linear Programming (ILP) model for O-CU and O-DU placement in O-RAN architecture. Additionally, we introduce a Recurrent Neural Network (RNN) heuristic model that can effectively emulate the behavior of the ILP model. The results are promising in terms of improving users’ admittance ratio by up to 10% when compared to baselines from state-of-the-art. Moreover, our proposed model minimizes the deployment costs and increases the overall throughput. 
Furthermore, we assess the optimal model’s performance across diverse network conditions, including variable functional split options, link capacity bottlenecks, and channel bandwidth limitations. Our analysis examines the placement decisions, evaluates the admittance ratio and radio and link resource utilization, and quantifies the impact on different service types. | 10.1109/TNSM.2024.3476939 |
Daniel Ayepah Mensah, Guolin Sun, Gordon Owusu Boateng, Guisong Liu | Federated Policy Distillation for Digital Twin-Enabled Intelligent Resource Trading in 5G Network Slicing | 2024 | Early Access | Indium phosphide III-V semiconductor materials Resource management Collaboration Adaptation models Games Dynamic scheduling Pricing Heuristic algorithms Data models Deep reinforcement learning digital twin federated policy distillation Radio Access Network (RAN) slicing Resource trading | Resource sharing in radio access networks (RANs) can be conceptualized as a resource trading process between infrastructure providers (InPs) and multiple mobile virtual network operators (MVNOs), where InPs lease essential network resources, such as spectrum and infrastructure, to MVNOs. Given the dynamic nature of RANs, deep reinforcement learning (DRL) is well suited to the decision-making and resource optimization needed for adaptive and efficient resource allocation. In RAN slicing, however, DRL struggles with imbalanced data distributions and its reliance on high-quality training data. In addition, the trade-off between the global solution and individual agents’ goals can lead to oscillatory behavior that prevents convergence to an optimal solution. We therefore propose a collaborative intelligent resource trading framework with a graph-based digital twin (DT) for multiple InPs and MVNOs based on federated DRL. First, we present a customized mutual policy distillation scheme for resource trading, in which complex MVNO teacher policies are distilled into InP student models and vice versa. This mutual distillation encourages collaboration toward personalized resource trading decisions that reach optimal local and global solutions. Second, the DT uses a graph-based model to capture the dynamic interactions between InPs and MVNOs and improve resource trading decisions. The DT can accurately predict resource prices and MVNO demand, providing high-quality training data.
In addition, the DT identifies underlying patterns and trends through advanced analytics, enabling proactive resource allocation and pricing strategies. Simulation results and analysis confirm the effectiveness and robustness of the proposed framework under unbalanced data distributions. | 10.1109/TNSM.2024.3476480 |
Yonghan Wu, Jin Li, Min Zhang, Bing Ye, Xiongyan Tang | A Comprehensive and Efficient Topology Representation in Routing Computation for Large-Scale Transmission Networks | 2024 | Early Access | Routing Network topology Topology Quality of service Heuristic algorithms Delays Computational modeling Computational efficiency Bandwidth Satellites Large-scale transmission networks quality of service network topology multi-factor assessment routing computation | Large-scale transmission networks (LSTNs) place high quality-of-service (QoS) requirements on 6G. In LSTNs, bounded low delay, low packet loss rates, and controllable bandwidth are required to provide guaranteed QoS, involving techniques from both the network and physical layers. Among these techniques, routing computation is one of the fundamental problems in ensuring high QoS, especially bounded low delay. Research on routing computation in LSTNs includes routing recovery based on searching and pruning strategies, individual-component routing with fiber connections, and multi-point relaying (MPR)-based topology and routing selection. However, these schemes reduce routing time only through simple topological pruning or linear constraints, which is unsuitable for efficient routing in LSTNs of increasing scale and dynamics. In this paper, an efficient and comprehensive routing computation algorithm, multi-factor assessment and compression for network topologies (MC), is proposed. Multiple parameters of network nodes and links are jointly assessed, and topology compression is performed based on MC to accelerate routing computation. Simulation results show that MC incurs additional space complexity but substantially reduces the time cost of routing computation.
In larger network topologies, compared with classic and advanced routing algorithms, MC-based routing algorithms achieve greater improvements in routing computation time, number of transmitted services, average per-route throughput, and packet loss rate, showing their potential to meet the high QoS requirements of LSTNs. | 10.1109/TNSM.2024.3476138 |
Feng Zhou, Kefeng Guo, Gaojian Huang, Xingwang Li, Evangelos K. Markakis, Ilias Politis, Muhammad Asif | Performance Evaluations for RIS-Aided Satellite Aerial Terrestrial Integrated Networks With Link Selection Scheme and Practical Limitations | 2024 | Early Access | Relays Interference System performance Satellites Satellite broadcasting Wireless communication Reviews Rayleigh channels Autonomous aerial vehicles Physics Satellite aerial terrestrial integrated networks reconfigurable intelligent surface (RIS) practical limitations system performance | This paper evaluates the performance of reconfigurable intelligent surface (RIS)-assisted satellite aerial terrestrial integrated networks. To ensure the stability of the considered network, a link selection scheme is presented to strike a balance between system performance and system efficiency. In addition, to model a practical transmission environment, hardware impairments, channel estimation errors, and co-channel interference are all considered. Based on these considerations, a detailed analysis of the outage behavior is presented, along with the asymptotic outage probability in high signal-to-noise-ratio scenarios. Moreover, the diversity order and coding gain are derived to provide fast methods for assessing system performance. Finally, representative simulations confirm the accuracy of the analytical results and the advantage of the proposed link selection scheme. | 10.1109/TNSM.2024.3476146 |