Last updated: 2024-11-21 04:01 UTC
All documents
Number of pages: 130
Author(s) | Title | Year | Publication | Keywords | Abstract | DOI
---|---|---|---|---|---|---
Yu-Zhen Janice Chen, Daniel S. Menasché, Don Towsley | On Collaboration in Distributed Parameter Estimation With Resource Constraints | 2024 | Early Access | Collaboration Estimation Data collection Correlation Distributed databases Parameter estimation Optimization Vectors Wireless sensor networks Resource management Distributed Parameter Estimation Sequential Estimation Sensor Selection Vertically Partitioned Data Fisher Information Multi-Armed Bandit (MAB) Kalman Filter | Effective resource allocation in sensor networks, IoT systems, and distributed computing is essential for applications such as environmental monitoring, surveillance, and smart infrastructure. Sensors or agents must optimize their resource allocation to maximize the accuracy of parameter estimation. In this work, we consider a group of sensors or agents, each sampling from a different variable of a multivariate Gaussian distribution and having a different estimation objective. We formulate a sensor or agent’s data collection and collaboration policy design problem as a Fisher information maximization (or Cramér-Rao bound minimization) problem. This formulation captures a novel trade-off in energy use, between locally collecting univariate samples and collaborating to produce multivariate samples. When knowledge of the correlation between variables is available, we analytically identify two cases: (1) where the optimal data collection policy entails investing resources to transfer information for collaborative sampling, and (2) where knowledge of the correlation between samples cannot enhance estimation efficiency. When knowledge of certain correlations is unavailable, but collaboration remains potentially beneficial, we propose novel approaches that apply multi-armed bandit algorithms to learn the optimal data collection and collaboration policy in our sequential distributed parameter estimation problem. We illustrate the effectiveness of the proposed algorithms, DOUBLE-F, DOUBLE-Z, UCB-F, UCB-Z, through simulation. | 10.1109/TNSM.2024.3468997 |
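As a rough illustration of the multi-armed bandit machinery this entry builds on, a minimal generic UCB1 sketch (an assumption for illustration only, not the paper's DOUBLE-F, DOUBLE-Z, UCB-F, or UCB-Z algorithms; the two "arms" and their reward means are toy values): an agent repeatedly chooses between actions, e.g., cheap local sampling versus costly collaboration, balancing exploitation of the best estimate with an exploration bonus.

```python
import math
import random

def ucb1(arm_means, horizon, seed=0):
    """Generic UCB1: play each arm once, then repeatedly pick the arm with
    the highest empirical mean plus an exploration bonus."""
    rng = random.Random(seed)
    n = len(arm_means)
    counts = [0] * n
    sums = [0.0] * n
    for t in range(1, horizon + 1):
        if t <= n:                       # initialization: play each arm once
            a = t - 1
        else:
            a = max(range(n), key=lambda i: sums[i] / counts[i]
                    + math.sqrt(2 * math.log(t) / counts[i]))
        reward = rng.gauss(arm_means[a], 0.1)   # noisy reward for the chosen arm
        counts[a] += 1
        sums[a] += reward
    return counts

# Two hypothetical actions: sample locally (mean 0.4) vs. collaborate (mean 0.7).
counts = ucb1([0.4, 0.7], horizon=2000)
```

After a brief exploration phase the learner concentrates play on the better action, which is the behavior bandit-based data collection policies rely on.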
Malte Tashiro, Emile Aben, Romain Fontugne | Metis: Selecting Diverse Atlas Vantage Points | 2024 | Early Access | Probes Windows IP networks Topology Internet Particle measurements Data mining Atmospheric measurements Buildings Writing Bias Internet measurements Open Access RIPE Atlas vantage point selection | The popularity of the RIPE Atlas measurement platform comes primarily from its openness and unprecedented scale. The platform provides users with over ten thousand vantage points, called probes, and is usually considered as giving a reasonably faithful view of the Internet. A good use of Atlas, however, requires a clear understanding of its limitations and bias. In this work we highlight the influence of probe locations on Atlas measurements and advocate the importance of selecting a diverse set of probes for fair measurements. We propose Metis, a data-driven probe selection method, that picks a diverse set of probes based on topological properties (e.g., round-trip time or AS-path length). Using real experiments we show that, compared to Atlas’ default probe selection, Metis’ probe selections collect more comprehensive measurement results in terms of geographical, topological, RIR, and industry-type coverage. Metis triples the number of probes from the underrepresented AFRINIC and LACNIC regions, and improves geographical diversity by increasing the number of unique countries included in the probe set by up to 59%. In addition, we extend Metis to identify locations on the Internet where new probes would be the most beneficial for improving Atlas’ footprint. Finally, we present a website where we publish periodically updated results and provide easy integration of Metis’ selections with Atlas. | 10.1109/TNSM.2024.3470989 |
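The idea of selecting a topologically diverse probe set can be sketched with a generic farthest-point greedy heuristic (purely illustrative; Metis' actual selection uses topological properties such as RTT or AS-path length, and the coordinates and distance below are toy assumptions):

```python
def greedy_diverse(points, k, dist):
    """Farthest-point heuristic: greedily grow a set of k points,
    each time adding the point farthest from everything chosen so far."""
    chosen = [0]                          # arbitrary seed point
    while len(chosen) < k:
        best = max((i for i in range(len(points)) if i not in chosen),
                   key=lambda i: min(dist(points[i], points[j]) for j in chosen))
        chosen.append(best)
    return chosen

# Toy "probes" described by 2-D coordinates standing in for topological features.
probes = [(0, 0), (0, 1), (10, 10), (10, 11), (50, 50)]
euclid = lambda a, b: ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
picked = greedy_diverse(probes, 3, euclid)
```

The heuristic skips near-duplicate probes (those clustered together) in favor of outliers, which is the intuition behind preferring underrepresented regions over dense ones.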
Hai Zhu, Xingsi Xue, Mengmeng Xu, Byung-Gyu Kim, Xiaohong Lyu, Shalli Rani | Zero-Trust Blockchain-Enabled Secure Next-Generation Healthcare Communication Network | 2024 | Early Access | Security Medical services Next generation networking Access control Codes Authentication Blockchains Trajectory Logic gates Telemedicine Zero-trust security healthcare communication network next-generation networking blockchain | Conventional security architectures and models are considered single-network architecture solutions, which assume that devices authenticated within the network are implicitly trusted. However, such an approach is unsuitable for next-generation networks (NGNs). Zero-trust security was introduced to overcome these challenges using context-aware, dynamic, and intelligent authentication schemes. This paper proposes a novel zero-trust blockchain-enabled framework for a secure next-generation healthcare communication network (HCN). The proposed framework integrates zero-trust and blockchain to provide a decentralized, secure, and intelligent solution for healthcare communication in NGNs. The system model comprises three components: HCN user identity modeling, blockchain and risk assessment-based access control, and dynamic trust gateway. The user identity modeling component utilizes attribute-based user behavior trajectory features, while the access control component leverages smart contracts-based risk assessment. The dynamic trust gateway component employs a consensus mechanism to achieve dynamic gateway switching and enhance network resilience. Simulation results demonstrate that the proposed framework achieves 31% lower calculation delays, 3% higher trust values, and 3% better attack detection accuracy compared to the best baseline methods. It also exhibits a 2% improvement in access control granularity and maintains 95% network throughput under various failure scenarios. | 10.1109/TNSM.2024.3473016 |
Xin Tang, Luchao Jin, Jing Bai, Linjie Shi, Yudan Zhu, Ting Cui | Key Transferring-Based Secure Deduplication for Cloud Storage With Resistance Against Brute-Force Attacks | 2024 | Early Access | Cryptography Encryption Indexes Brute force attacks Cloud computing Servers Data privacy Resists Side-channel attacks Privacy Brute-force attacks cross-user deduplication cloud storage privacy key transferring | Convergent encryption is an effective technique to achieve cross-user deduplication of encrypted data in cloud storage. However, it is vulnerable to brute-force attacks for data with low min-entropy. Moreover, once the content of the target data is successfully constructed through the aforementioned attacks, the corresponding index can also be obtained, leading to the risk of violating privacy during the process of data downloading. To address these challenges, we propose a key transferring-based secure deduplication (KTSD) scheme for cloud storage with support for ownership verification, which significantly improves security against brute-force attacks during ciphertext deduplication and downloading. Specifically, we introduce a randomly generated key in data encryption and downloading index generation to prevent the results from being inferred. We also define a deduplication request index and a key request index using a Bloom filter to achieve brute-force-attack-resistant key transferring. An RSA-based ownership verification scheme is designed for the downloading process to effectively prevent privacy leakage. Finally, we prove the security of our schemes through security analysis and conduct performance evaluation experiments, the results of which show that, compared to the state-of-the-art, cloud storage overhead can be reduced by 6.01% to 20.49% under KTSD. | 10.1109/TNSM.2024.3474852 |
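A minimal Bloom filter sketch, of the kind the deduplication and key request indexes in this entry are built from (the sizing parameters m and k and the item names below are illustrative assumptions, not KTSD's actual construction): membership queries may yield false positives but never false negatives, which is why such indexes can be published without revealing their exact contents.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: each item sets k bit positions derived from
    salted SHA-256 hashes; lookup checks that all k positions are set."""
    def __init__(self, m=1024, k=3):
        self.m, self.k = m, k
        self.bits = bytearray(m)          # bit array, one byte per "bit" for simplicity
    def _positions(self, item):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(h, 16) % self.m
    def add(self, item):
        for p in self._positions(item):
            self.bits[p] = 1
    def __contains__(self, item):
        return all(self.bits[p] for p in self._positions(item))

index = BloomFilter()
index.add("chunk-fingerprint-42")         # hypothetical data fingerprint
```

A querying party learns only probabilistic membership, so an attacker cannot enumerate the index contents directly, though a real scheme (as in KTSD) must still bind queries to keys to resist brute-force probing.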
Kai Zhao, Xiaowei Chuo, Fangchao Yu, Bo Zeng, Zhi Pang, Lina Wang | SplitAUM: Auxiliary Model-Based Label Inference Attack Against Split Learning | 2024 | Early Access | Servers Data models Computational modeling Data privacy Training Protocols Privacy Protection Information security Image reconstruction Deep learning label inference attack split learning federated learning clustering | Split learning has emerged as a practical and efficient privacy-preserving distributed machine learning paradigm. Understanding the privacy risks of split learning is critical for its application in privacy-sensitive scenarios. However, previous attacks against split learning generally depended on unduly strong assumptions or non-standard settings advantageous to the attacker. This paper proposes a novel auxiliary model-based label inference attack framework against split learning, named SplitAUM. SplitAUM first builds an auxiliary model on the client side using intermediate representations of the cut layer and a small number of dummy labels. Then, the learning regularization objective is carefully designed to train the auxiliary model and transfer the knowledge of the server model to the client. Finally, SplitAUM uses the auxiliary model output on local data to infer the server’s private labels. In addition, to further improve the attack effect, we use semi-supervised clustering to initialize the dummy labels of the auxiliary model. Since SplitAUM relies only on auxiliary models, it is highly scalable. We conduct extensive experiments on three different categories of datasets, comparing four typical attacks. Experimental results demonstrate that SplitAUM can effectively infer private labels and outperform existing attack frameworks in challenging yet practical scenarios. We hope our work paves the way for future analyses of the security of split learning. | 10.1109/TNSM.2024.3474717 |
Feng Zhou, Kefeng Guo, Gaojian Huang, Xingwang Li, Evangelos K. Markakis, Ilias Politis, Muhammad Asif | Performance Evaluations for RIS-Aided Satellite Aerial Terrestrial Integrated Networks With Link Selection Scheme and Practical Limitations | 2024 | Early Access | Relays Interference System performance Satellites Satellite broadcasting Wireless communication Reviews Rayleigh channels Autonomous aerial vehicles Physics Satellite aerial terrestrial integrated networks reconfigurable intelligent surface (RIS) practical limitations system performance | This paper evaluates the performance of reconfigurable intelligent surface (RIS)-assisted satellite aerial terrestrial integrated networks. To ensure the stability of the considered network, a link selection scheme is presented to strike a balance between system performance and system efficiency. Besides, in order to model a practical transmission environment, imperfect hardware, channel estimation errors, and co-channel interference are all considered in the networks. Based on the above considerations, a detailed analysis of the outage behavior is presented, along with the asymptotic outage probability in high signal-to-noise-ratio scenarios. Moreover, the diversity order and coding gain are also provided to give fast methods to confirm the system evaluation. Finally, some representative simulations are provided to confirm the efficiency and advantage of the analytical results and the proposed link selection scheme. | 10.1109/TNSM.2024.3476146 |
Yonghan Wu, Jin Li, Min Zhang, Bing Ye, Xiongyan Tang | A Comprehensive and Efficient Topology Representation in Routing Computation for Large-Scale Transmission Networks | 2024 | Early Access | Routing Network topology Topology Quality of service Heuristic algorithms Delays Computational modeling Computational efficiency Bandwidth Satellites Large-scale transmission networks quality of service network topology multi-factor assessment routing computation | Large-scale transmission networks (LSTNs) place high quality of service (QoS) requirements on 6G. In an LSTN, bounded and low delay, low packet loss rates, and controllable bandwidth are required to provide guaranteed QoS, involving techniques from both the network layer and the physical layer. Among those techniques, routing computation is one of the fundamental problems in ensuring high QoS, especially bounded and low delay. Research on routing computation in LSTN includes routing recovery based on searching and pruning strategies, individual-component routing and fiber connections, and multi-point relaying (MPR)-based topology and routing selection. However, these schemes reduce routing time only through simple topological pruning or linear constraints, which is unsuitable for efficient routing in LSTNs with increasing scale and dynamics. In this paper, an efficient and comprehensive routing computation algorithm, namely multi-factor assessment and compression for network topologies (MC), is proposed. Multiple parameters of nodes and links are jointly assessed, and topology compression is executed based on MC to accelerate routing computation. Simulation results show that MC increases space complexity but markedly reduces the time cost of routing computation. In larger network topologies, MC-based routing algorithms achieve greater improvements in routing computation time, number of transmitted services, average throughput per route, and packet loss rates than classic and advanced routing algorithms, showing potential to meet the high QoS requirements of LSTNs. | 10.1109/TNSM.2024.3476138 |
Daniel Ayepah Mensah, Guolin Sun, Gordon Owusu Boateng, Guisong Liu | Federated Policy Distillation for Digital Twin-Enabled Intelligent Resource Trading in 5G Network Slicing | 2024 | Early Access | Indium phosphide III-V semiconductor materials Resource management Collaboration Adaptation models Games Dynamic scheduling Pricing Heuristic algorithms Data models Deep reinforcement learning digital twin federated policy distillation Radio Access Network (RAN) slicing Resource trading | Resource sharing in radio access networks (RAN) can be conceptualized as a resource trading process between infrastructure providers (InPs) and multiple mobile virtual network operators (MVNO), where InPs lease essential network resources, such as spectrum and infrastructure, to MVNOs. Given the dynamic nature of RANs, deep reinforcement learning (DRL) is a more suitable approach to decision-making and resource optimization that ensures adaptive and efficient resource allocation strategies. In RAN slicing, DRL struggles due to imbalanced data distribution and reliance on high-quality training data. In addition, the trade-off between the global solution and individual agent goals can lead to oscillatory behavior, preventing convergence to an optimal solution. Therefore, we propose a collaborative intelligent resource trading framework with a graph-based digital twin (DT) for multiple InPs and MVNOs based on Federated DRL. First, we present a customized mutual policy distillation scheme for resource trading, where complex MVNO teacher policies are distilled into InP student models and vice versa. This mutual distillation encourages collaboration to achieve personalized resource trading decisions that reach the optimal local and global solution. Second, the DT uses a graph-based model to capture the dynamic interactions between InPs and MVNOs to improve resource-trading decisions. The DT can accurately predict resource prices and demand from MVNOs to provide high-quality training data. In addition, the DT identifies underlying patterns and trends through advanced analytics, enabling proactive resource allocation and pricing strategies. The simulation results and analysis confirm the effectiveness and robustness of the proposed framework under an unbalanced data distribution. | 10.1109/TNSM.2024.3476480 |
Hiba Hojeij, Mahdi Sharara, Sahar Hoteit, Véronique Vèque | On Flexible Placement of O-CU and O-DU Functionalities in Open-RAN Architecture | 2024 | Early Access | Open RAN Cloud computing Computer architecture Costs Solid modeling Servers Resource management Admittance Delays Biological system modeling Open RAN Resource Allocation Operations Research Simulation Deep Learning RNN | Open Radio Access Network (O-RAN) has recently emerged as a new trend for mobile network architecture. It is based on four founding principles: disaggregation, intelligence, virtualization, and open interfaces. In particular, RAN disaggregation involves dividing base station virtualized networking functions (VNFs) into three distinct components - the Open-Central Unit (O-CU), the Open-Distributed Unit (O-DU), and the Open-Radio Unit (O-RU) - enabling each component to be implemented independently. Such disaggregation improves system performance and allows rapid and open innovation in many components while ensuring multi-vendor operability. As the disaggregation of network architecture becomes a key enabler of O-RAN, the deployment scenarios of VNFs on O-RAN clouds become critical. In this context, we propose an optimal and dynamic placement scheme for the O-CU and O-DU functionalities on the edge or in regional O-clouds. The objective is to maximize users’ admittance ratio by considering mid-haul delay and server capacity requirements. We develop an Integer Linear Programming (ILP) model for O-CU and O-DU placement in O-RAN architecture. Additionally, we introduce a Recurrent Neural Network (RNN) heuristic model that can effectively emulate the behavior of the ILP model. The results are promising, improving users’ admittance ratio by up to 10% when compared to state-of-the-art baselines. Moreover, our proposed model minimizes deployment costs and increases overall throughput. Furthermore, we assess the optimal model’s performance across diverse network conditions, including variable functional split options, link capacity bottlenecks, and channel bandwidth limitations. Our analysis delves into placement decisions, evaluating admittance ratio, radio and link resource utilization, and quantifying the impact on different service types. | 10.1109/TNSM.2024.3476939 |
Qianwei Meng, Qingjun Yuan, Weina Niu, Yongjuan Wang, Siqi Lu, Guangsong Li, Xiangbin Wang, Wenqi He | IIT: Accurate Decentralized Application Identification Through Mining Intra- and Inter-Flow Relationships | 2024 | Early Access | Decentralized applications Cryptography Feature extraction Fingerprint recognition Mobile applications Convolutional neural networks Radio frequency Accuracy Transformers Adaptation models Decentralized applications encrypted traffic blockchain transformer deep learning | Identifying Decentralized Applications (DApps) from encrypted network traffic plays an important role in areas such as network management and threat detection. However, DApps deployed on the same platform use the same encryption settings, resulting in DApps generating encrypted traffic with great similarity. In addition, existing flow-based methods only consider each flow as an isolated individual and feed it sequentially into the neural network for feature extraction, ignoring the rich information shared between flows, and therefore the relationship between different flows is not effectively utilized. In this study, we propose a novel encrypted traffic classification model, IIT, to heterogeneously mine the potential features of intra- and inter-flows, which contains two types of encoders based on the multi-head self-attention mechanism. By combining the complementary intra- and inter-flow perspectives, the entire process of information flow can be more completely understood and described. IIT provides a more complete perspective on network flows, with the intra-flow perspective focusing on information transfer between different packets within a flow, and the inter-flow perspective placing more emphasis on information interaction between different flows. We captured 44 classes of DApps in the real world and evaluated the IIT model on two datasets, covering DApp and malicious traffic classification tasks. The results demonstrate that the IIT model achieves a classification accuracy of greater than 97% on the real-world dataset of 44 DApps, outperforming other state-of-the-art methods. In addition, the IIT model exhibits good generalization in the malicious traffic classification task. | 10.1109/TNSM.2024.3479150 |
Roberto G. Pacheco, Divya J. Bajpai, Mark Shifrin, Rodrigo S. Couto, Daniel S. Menasché, Manjesh K. Hanawal, Miguel Elias M. Campista | UCBEE: A Multi Armed Bandit Approach for Early-Exit in Neural Networks | 2024 | Early Access | Image classification Image edge detection Distortion Accuracy Performance evaluation Classification algorithms Delays Proposals Neural networks Natural language processing Multi Armed Bandits Early-Exit Natural Language Processing Image Classification | Deep Neural Networks (DNNs) have demonstrated exceptional performance in diverse tasks. However, deploying DNNs on resource-constrained devices presents challenges due to energy consumption and delay overheads. To mitigate these issues, early-exit DNNs (EE-DNNs) incorporate exit branches within intermediate layers to enable early inferences. These branches estimate prediction confidence and employ a fixed threshold to determine early termination. Nonetheless, fixed thresholds yield suboptimal performance in dynamic contexts, where context refers to distortions caused by environmental conditions, in image classification, or variations in input distribution due to concept drift, in NLP. In this article, we introduce Upper Confidence Bound in EE-DNNs (UCBEE), an online algorithm that dynamically adjusts early-exit thresholds based on context. UCBEE leverages confidence levels at intermediate layers and learns without the need for true labels. Through extensive experiments in image classification and NLP, we demonstrate that UCBEE achieves logarithmic regret, converging after just a few thousand observations across multiple contexts. We evaluate UCBEE for image classification and text mining. In the latter, we show that UCBEE can reduce cumulative regret and lower latency by approximately 10%–20% without compromising accuracy when compared to fixed-threshold alternatives. Our findings highlight UCBEE as an effective method for enhancing EE-DNN efficiency. | 10.1109/TNSM.2024.3479076 |
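The fixed-threshold early-exit rule that this entry improves upon can be sketched as follows (the branch logits and threshold values are toy assumptions; UCBEE's contribution is to tune such thresholds online rather than fix them):

```python
import math

def softmax(logits):
    e = [math.exp(x - max(logits)) for x in logits]
    s = sum(e)
    return [x / s for x in e]

def early_exit(branch_logits, threshold):
    """Stop at the first exit branch whose top softmax confidence clears the
    threshold; otherwise fall through to the final layer. Returns
    (exit_index, predicted_class)."""
    for i, logits in enumerate(branch_logits):
        probs = softmax(logits)
        if max(probs) >= threshold:
            return i, probs.index(max(probs))
    probs = softmax(branch_logits[-1])
    return len(branch_logits) - 1, probs.index(max(probs))

# Three exit branches with increasing confidence in class 2.
branches = [[0.2, 0.1, 0.3], [0.1, 0.2, 1.5], [0.0, 0.1, 4.0]]
exit_at, pred = early_exit(branches, threshold=0.7)
```

Lowering the threshold (e.g., to 0.6 here) causes earlier exits and lower latency at some accuracy risk, which is exactly the trade-off a context-adaptive threshold navigates.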
Yanfei Wu, Liang Liang, Yunjian Jia, Wanli Wen | HFL-TranWGAN: Knowledge-Driven Cross-Domain Collaborative Anomaly Detection for End-to-End Network Slicing | 2024 | Early Access | Anomaly detection Network slicing Collaboration Hidden Markov models Knowledge engineering Distributed databases Training Federated learning 3GPP Support vector machines End-to-end network slicing collaborative anomaly detection knowledge-driven generating adversarial networks hierarchical federated learning | Network slicing is a key technology that can provide service assurance for the heterogeneous application scenarios emerging in next-generation networks. However, the heterogeneity and complexity of virtualized end-to-end network slicing environments pose challenges for network security operations and management. In this paper, we propose a knowledge-driven cross-domain collaborative anomaly detection scheme for end-to-end network slicing, namely HFL-TranWGAN. Specifically, we first design a hierarchical management framework that performs three-tier hierarchical intelligent management of end-to-end network slices, while introducing a knowledge plane to assist the management plane in making intelligent decisions. Then, we develop a knowledge-driven sub-slice anomaly detection model, the conditional TranWGAN model, in which an encoder, a generator, and multiple discriminators perform adversarial learning simultaneously. Finally, taking the sub-slice anomaly detection model as the basic training model, we utilize hierarchical federated learning to achieve inter-slice and intra-slice collaborative anomaly detection. We calculate anomaly scores from the discrimination error and reconstruction error to obtain the anomaly detection results. Simulation results on two real-world datasets show that the proposed HFL-TranWGAN scheme outperforms benchmark methods on anomaly detection metrics such as F1 score and precision. Specifically, HFL-TranWGAN improved precision by up to 8.53% and F1 score by up to 1.88% compared to benchmarks. | 10.1109/TNSM.2024.3471808 |
Krishna Pal Thakur, Basabdatta Palit | A QoS-Aware Uplink Spectrum and Power Allocation With Link Adaptation for Vehicular Communications in 5G Networks | 2024 | Early Access | Resource management 5G mobile communication Quality of service Interference Delays Bandwidth Uplink Vehicular ad hoc networks Power control Modulation Resource Allocation Vehicle-to-Vehicle C-V2X 5G Link Adaptation 28GHz Hungarian Multi-Numerology | In this work, we have proposed link adaptation-based spectrum and power allocation algorithms for the uplink communication in 5G Cellular Vehicle-to-Everything (C-V2X) systems. In C-V2X, vehicle-to-vehicle (V2V) users share radio resources with vehicle-to-infrastructure (V2I) users. Existing works primarily focus on the optimal pairing of V2V and V2I users, assuming that each V2I user needs a single resource block (RB) while minimizing interference through power allocation. In contrast, in this work, we have considered that the number of RBs needed by the users is a function of their channel condition and Quality of Service (QoS) - a method called link adaptation. It effectively compensates for the frequent channel quality fluctuations at the high frequencies of 5G communication. 5G uses a multi-numerology frame structure to support diverse QoS requirements, which has also been considered in this work. The first algorithm proposed in this article greedily allocates RBs to V2I users using link adaptation. It then uses the Hungarian algorithm to pair V2V with V2I users while minimizing interference through power allocation. The second proposed method groups RBs into resource chunks (RCs) and uses the Hungarian algorithm twice - first to allocate RCs to V2I users and then to pair V2I users with V2V users. Extensive simulations reveal that link adaptation increases the number of satisfied V2I users and their sum rate while also improving the QoS of V2I and V2V users, making it indispensable for 5G C-V2X systems. | 10.1109/TNSM.2024.3479870 |
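The pairing step this entry delegates to the Hungarian algorithm is a minimum-cost one-to-one assignment. A brute-force sketch over a toy interference matrix (the matrix values are hypothetical; the Hungarian algorithm computes the same optimum in O(n³) instead of O(n!)):

```python
from itertools import permutations

def min_cost_assignment(cost):
    """Exhaustive minimum-cost one-to-one assignment; fine for tiny n,
    and equivalent in result to the Hungarian algorithm."""
    n = len(cost)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        c = sum(cost[i][perm[i]] for i in range(n))
        if c < best_cost:
            best_perm, best_cost = perm, c
    return list(best_perm), best_cost

# cost[i][j]: hypothetical interference if V2V pair i shares resources with V2I user j.
interference = [[4, 1, 3],
                [2, 0, 5],
                [3, 2, 2]]
pairing, total = min_cost_assignment(interference)
```

Here `pairing[i]` gives the V2I user matched to V2V pair i; minimizing the summed interference is the objective both proposed algorithms optimize at the pairing stage.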
Niloy Saha, Nashid Shahriar, Muhammad Sulaiman, Noura Limam, Raouf Boutaba, Aladdin Saleh | Monarch: Monitoring Architecture for 5G and Beyond Network Slices | 2024 | Early Access | Monitoring 5G mobile communication Computer architecture Accuracy Network slicing Scalability Data mining Containers Adaptive systems 3GPP 5G Network Slicing KPI Monitoring Open5GS | Data-driven algorithms play a pivotal role in the automated orchestration and management of network slices in 5G and beyond networks; however, their efficacy hinges on the timely and accurate monitoring of the network and its components. To support 5G slicing, monitoring must be comprehensive and encompass network slices end-to-end (E2E). Yet, several challenges arise with E2E network slice monitoring. Firstly, existing solutions are piecemeal and cannot correlate network-wide data from multiple sources (e.g., different network segments). Secondly, different slices can have different requirements regarding Key Performance Indicators (KPIs) and monitoring granularity, which necessitates dynamic adjustments in both KPI monitoring and data collection rates in real-time to minimize network resource overhead. To address these challenges, in this paper, we present Monarch, a scalable monitoring architecture for 5G. Monarch is designed for cloud-native 5G deployments and focuses on network slice monitoring and per-slice KPI computation. We validate the proposed architecture by implementing Monarch on a 5G network slice testbed with up to 50 network slices. We exemplify Monarch’s role in 5G network monitoring by showcasing two scenarios: monitoring KPIs at both slice and network function levels. Our evaluations demonstrate Monarch’s scalability, with the architecture adeptly handling varying numbers of slices while maintaining consistent ingestion times between 2.25 and 2.75 ms. Furthermore, we showcase the effectiveness of Monarch’s adaptive monitoring mechanism, exemplified by a simple heuristic, on a real-world 5G dataset. The adaptive monitoring mechanism significantly reduces the overhead of network slice monitoring by up to 76% while ensuring acceptable accuracy. | 10.1109/TNSM.2024.3479246 |
Endri Goshi, Fidan Mehmeti, Thomas F. La Porta, Wolfgang Kellerer | Modeling and Analysis of mMTC Traffic in 5G Core Networks | 2024 | Early Access | Traffic control 5G mobile communication Planning Predictive models Ultra reliable low latency communication Radio access networks Computer architecture Communication networks Time-frequency analysis Temperature measurement Traffic characteristics 5G mMTC RAN Core Network | Massive Machine-Type Communications (mMTC) are one of the three main use cases powered by 5G and beyond networks. These are distinguished by the need to serve a large number of devices characterized by non-intensive traffic and low energy consumption. While the sporadic nature of mMTC traffic does not strain the efficient operation of the network, multiplexing the traffic from a large number of these devices within the cell certainly does. This traffic from the Base Station (BS) is then transported further towards the Core Network (CN), where it is combined with the traffic from other BSs. Therefore, carefully planning network resources, both in the Radio Access Network (RAN) and the CN, for this type of traffic is of paramount importance. To do this, the statistics of the traffic pattern that arrives at the BS and the CN should be known. To this end, in this paper, we first derive the distribution of the inter-arrival times of the traffic at the BS from a general number of mMTC users within the cell, assuming a generic distribution of the traffic pattern of individual users. Then, using this result, we derive the distribution of the traffic pattern at the CN. Further, we validate our results on traces for channel conditions and by performing measurements in our testbed. Results show that adding more mMTC users in the cell and more BSs in the network in the long term does not increase the variability of the traffic pattern at the BS and at the CN. Furthermore, this arrival process at all points of our interest in the network is shown to be Poisson both for homogeneous and heterogeneous traffic. However, the empirical observations show that a huge number of packets is needed for this process to converge, and this number of packets increases with the number of users and/or BSs. | 10.1109/TNSM.2024.3481240 |
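The convergence-to-Poisson behavior this entry reports can be illustrated with a toy simulation (all parameters below are assumptions for illustration): superposing many independent sparse streams yields an aggregate whose count over a window matches the summed rate. Here the per-source inter-arrivals are exponential, so the merge is exactly Poisson with rate N·λ; for general sparse renewal streams, convergence to Poisson is the Palm-Khintchine limit the paper's empirical observations echo.

```python
import random

def merged_arrivals(num_sources, rate, horizon, seed=0):
    """Superpose num_sources independent streams with exponential
    inter-arrival times and return the merged, time-ordered arrivals."""
    rng = random.Random(seed)
    events = []
    for _ in range(num_sources):
        t = 0.0
        while True:
            t += rng.expovariate(rate)   # next arrival of this source
            if t > horizon:
                break
            events.append(t)
    return sorted(events)

# 200 sporadic "devices", each with mean inter-arrival 10 time units,
# observed for 100 time units: expected aggregate count = 200 * 0.1 * 100 = 2000.
arrivals = merged_arrivals(num_sources=200, rate=0.1, horizon=100)
```

Checking that the realized count sits near the expected 2000 (within a few standard deviations, sqrt(2000) ≈ 45) is the kind of sanity check the testbed measurements in this entry perform at larger scale.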
Yahuza Bello, Ahmed Refaey Hussein | Dynamic Policy Decision/Enforcement Security Zoning Through Stochastic Games and Meta Learning | 2024 | Early Access | Security Games Stochastic processes Next generation networking Zero Trust Metalearning NIST Reinforcement learning Heuristic algorithms Cyberattack Reinforcement learning dynamic policy stochastic games security zero trust core network entities zoning strategy zero trust architecture | Securing Next Generation Networks (NGNs) remains a prominent topic of discussion in academia and industries alike, driven by the rapid evolution of cyber attacks. As these attacks become increasingly complex and dynamic, it is crucial to develop sophisticated security strategies with automated dynamic policy enforcement. In this paper, we propose a security strategy based on the zero-trust model, incorporating dynamic policy decisions through the utilization of stochastic games and Reinforcement Learning (RL). Our approach involves the development of an attack and defense strategy evolution model, specifically tailored to combat cyber attacks in NGNs. To achieve this, we employ RL techniques to update and adapt dynamic policies. To train the agents, we utilize the Generalized Proximal Policy Optimization with sample reuse (GePPO) algorithm, including its modified version, GePPO-ML, which incorporates meta-learning to initialize the agent’s policy and parameters. Additionally, we employ the Sample Dropout PPO with meta-learning (SDPPO-ML), a modified version of the SD-PPO algorithm, to train the agents. To evaluate the performance of these algorithms, we conduct a comparative analysis against the REINFORCE and PPO algorithms. The results illustrate the superior performance of both GePPO-ML and SDPPO-ML when compared to these baseline algorithms, with GePPO-ML exhibiting the best performance. | 10.1109/TNSM.2024.3481662 |
Yujie Zhao, Tao Peng, Yichen Guo, Yijing Niu, Wenbo Wang | An Intelligent Scheme for Energy-Efficient Uplink Resource Allocation With QoS Constraints in 6G Networks | 2024 | Early Access | Interference Quality of service Optimization Resource management Femtocells Energy efficiency 6G mobile communication Training Accuracy Complexity theory Energy efficiency resource allocation quality-of-service reinforcement learning fractional programming | In sixth-generation (6G) networks, the dense deployment of femtocells will result in significant co-channel interference. However, current studies encounter difficulties in obtaining precise interference information, which poses a challenge in improving the performance of the resource allocation (RA) strategy. This paper proposes an intelligent scheme aimed at achieving energy-efficient RA in uplink scenarios with unknown interference. Firstly, a novel interference-inference-based RA (IIBRA) framework is proposed to support this scheme. In the framework, the interference relationship between users is precisely modeled by processing the historical operation data of the network. Based on the modeled interference relationship, accurate performance feedback to the RA algorithm is provided. Secondly, a joint double deep Q-network and optimization RA (DORA) algorithm is developed, which decomposes the joint allocation problem into two parts: resource block assignment and power allocation. The two parts continuously interact throughout the allocation process, leading to improved solutions. Thirdly, a new metric called effective energy efficiency (EEE) is provided, which is defined as the product of energy efficiency and average user satisfaction with quality of service (QoS). EEE is used to help train the neural networks, resulting in a superior level of user QoS satisfaction. Numerical results demonstrate that the DORA algorithm achieves a clear enhancement in interference efficiency, surpassing well-known existing algorithms with a maximum improvement of over 50%. Additionally, it achieves a maximum EEE improvement exceeding 25%. | 10.1109/TNSM.2024.3482549 |
Jun Tang, Bing Guo, Yan Shen, Sahil Garg, Georges Kaddoum, M. Shamim Hossain | A Data Completion Algorithm Based on Low-Rank Prior Knowledge for Data-Driven Applications | 2024 | Early Access | Tensors Accuracy Matrix decomposition Noise reduction Consumer electronics Deep learning Computational modeling Proposals Data mining Training Tensor Ring Completion Internet of Things Data Recovery Low-rank Prior Knowledge | Low-rank tensor-ring-based data recovery algorithms have been widely used in data-driven consumer electronics to recover missing entries during the data-collection pre-processing stage, providing stable and reliable service. However, traditional recovery methods often fail to exploit the abundant prior knowledge and non-local self-similarity of the data, and thus fail to effectively capture the spatial relationships within high-dimensional data and recover it accurately. To address these problems, we present a novel Non-local Self-similarity and Low-rank Prior Knowledge based tensor ring completion method. Firstly, we incorporate the BM3D denoising operator within a Plug-and-Play framework to exploit the self-similarity in the data. Then we integrate a logarithmic determinant function to distinguish singular values in the cyclic unfolding matrix of the tensor and adopt a tensor ring completion approach based on weighted nuclear norms. Finally, to evaluate the effectiveness of our proposed method, we conducted a series of experiments on an image dataset and a traffic dataset with missing entries; the experimental results show that our method achieves the highest data recovery accuracy. | 10.1109/TNSM.2024.3483013 |
Lifan Mei, Jinrui Gou, Jingrui Yang, Yujin Cai, Yong Liu | On Routing Optimization in Networks With Embedded Computational Services | 2024 | Early Access | Routing Computational modeling Delays Optimization Heuristic algorithms Servers Load modeling Resilience Performance evaluation Resource management Routing Edge Computing In-Network Computation Network Function Virtualization | Modern communication networks are increasingly equipped with in-network computational capabilities and services. Routing in such networks is significantly more complicated than traditional routing. A legitimate route for a flow not only needs to have enough communication and computation resources, but also has to conform to various application-specific routing constraints. This paper presents a comprehensive study of routing optimization problems in networks with embedded computational services. We develop a set of routing optimization models and derive low-complexity heuristic routing algorithms for diverse computation scenarios. For dynamic demands, we also develop an online routing algorithm with performance guarantees. Through evaluations of emerging applications on real topologies, we demonstrate that our models can be flexibly customized to meet the diverse routing requirements of different computation applications. Our proposed heuristic algorithms significantly outperform baseline algorithms and can achieve close-to-optimal performance in various scenarios. | 10.1109/TNSM.2024.3483088 |
Bing Tang, Zhikang Wu, Wei Xu, Buqing Cao, Mingdong Tang, Qing Yang | TP-MDU: A Two-Phase Microservice Deployment Based on Minimal Deployment Unit in Edge Computing Environment | 2024 | Early Access | Microservice architectures Optimization Quality of service Dynamic scheduling Servers Reinforcement learning Resource management Cloud computing Time factors Load modeling mobile edge computing microservices minimal deployment unit two-phase deployment reinforcement learning | In the mobile edge computing (MEC) environment, effective microservice deployment significantly reduces vendor costs and minimizes application latency. However, the existing literature overlooks the impact of dynamic characteristics such as the frequency of user requests and geographical location, and lacks in-depth consideration of the types of microservices and their interaction frequencies. To address these issues, we propose TP-MDU, a novel two-phase deployment framework for microservices. This framework is designed to learn users’ dynamic behaviors and introduces, for the first time, a minimal deployment unit. Initially, TP-MDU generates minimal deployment units online, tailored to the types of microservices and their interaction frequencies. In the initial deployment phase, aiming for load balancing, it employs a simulated annealing algorithm to achieve a superior deployment plan. During the optimization scheduling phase, it utilizes reinforcement learning algorithms and introduces dynamic information and new optimization objectives. Previous deployment plans serve as the initial state for policy learning, thus facilitating more optimal deployment decisions. This paper evaluates the performance of TP-MDU using a real dataset from Australia’s EUA and related synthetic data. The experimental results indicate that TP-MDU outperforms other representative algorithms. | 10.1109/TNSM.2024.3483634 |