Last updated: 2024-10-13 03:01 UTC
All documents
Number of pages: 128
Author(s) | Title | Year | Publication | Keywords | Abstract | DOI |
---|---|---|---|---|---|---|
Yu-Zhen Janice Chen, Daniel S. Menasché, Don Towsley | On Collaboration in Distributed Parameter Estimation With Resource Constraints | 2024 | Early Access | Collaboration; Estimation; Data collection; Correlation; Distributed databases; Parameter estimation; Optimization; Vectors; Wireless sensor networks; Resource management; Distributed Parameter Estimation; Sequential Estimation; Sensor Selection; Vertically Partitioned Data; Fisher Information; Multi-Armed Bandit (MAB); Kalman Filter | Effective resource allocation in sensor networks, IoT systems, and distributed computing is essential for applications such as environmental monitoring, surveillance, and smart infrastructure. Sensors or agents must optimize their resource allocation to maximize the accuracy of parameter estimation. In this work, we consider a group of sensors or agents, each sampling from a different variable of a multivariate Gaussian distribution and having a different estimation objective. We formulate a sensor or agent’s data collection and collaboration policy design problem as a Fisher information maximization (or Cramér-Rao bound minimization) problem. This formulation captures a novel trade-off in energy use between locally collecting univariate samples and collaborating to produce multivariate samples. When knowledge of the correlation between variables is available, we analytically identify two cases: (1) where the optimal data collection policy entails investing resources to transfer information for collaborative sampling, and (2) where knowledge of the correlation between samples cannot enhance estimation efficiency. When knowledge of certain correlations is unavailable but collaboration remains potentially beneficial, we propose novel approaches that apply multi-armed bandit algorithms to learn the optimal data collection and collaboration policy in our sequential distributed parameter estimation problem. We illustrate the effectiveness of the proposed algorithms, DOUBLE-F, DOUBLE-Z, UCB-F, and UCB-Z, through simulation. | 10.1109/TNSM.2024.3468997 |
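As background for the Fisher-information formulation in the abstract above, the textbook single-parameter Gaussian case (an illustrative special case, not the paper's multivariate, resource-constrained setting) shows why maximizing Fisher information and minimizing the Cramér-Rao bound are the same objective:

```latex
% For N i.i.d. samples x_1,\dots,x_N \sim \mathcal{N}(\theta,\sigma^2)
% with known variance \sigma^2, the Fisher information and the
% Cramér-Rao bound on any unbiased estimator \hat{\theta} are
\[
  \mathcal{I}(\theta) = \frac{N}{\sigma^{2}},
  \qquad
  \operatorname{Var}\bigl(\hat{\theta}\bigr) \;\ge\; \mathcal{I}(\theta)^{-1} = \frac{\sigma^{2}}{N},
\]
% so a policy that maximizes \mathcal{I}(\theta) per unit of energy
% simultaneously minimizes the achievable estimation variance.
```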
Roberto G. Pacheco, Divya J. Bajpai, Mark Shifrin, Rodrigo S. Couto, Daniel S. Menasché, Manjesh K. Hanawal, Miguel Elias M. Campista | UCBEE: A Multi Armed Bandit Approach for Early-Exit in Neural Networks | 2024 | Early Access | Image classification; Image edge detection; Distortion; Accuracy; Performance evaluation; Classification algorithms; Delays; Proposals; Neural networks; Natural language processing; Multi Armed Bandits; Early-Exit; Natural Language Processing; Image Classification | Deep Neural Networks (DNNs) have demonstrated exceptional performance in diverse tasks. However, deploying DNNs on resource-constrained devices presents challenges due to energy consumption and delay overheads. To mitigate these issues, early-exit DNNs (EE-DNNs) incorporate exit branches within intermediate layers to enable early inferences. These branches estimate prediction confidence and employ a fixed threshold to determine early termination. Nonetheless, fixed thresholds yield suboptimal performance in dynamic contexts, where context refers to distortions caused by environmental conditions in image classification, or variations in input distribution due to concept drift in NLP. In this article, we introduce Upper Confidence Bound in EE-DNNs (UCBEE), an online algorithm that dynamically adjusts early-exit thresholds based on context. UCBEE leverages confidence levels at intermediate layers and learns without the need for true labels. Through extensive experiments in image classification and NLP, we demonstrate that UCBEE achieves logarithmic regret, converging after just a few thousand observations across multiple contexts. We evaluate UCBEE for image classification and text mining. In the latter, we show that UCBEE can reduce cumulative regret and lower latency by approximately 10%–20% without compromising accuracy when compared to fixed-threshold alternatives. Our findings highlight UCBEE as an effective method for enhancing EE-DNN efficiency. | 10.1109/TNSM.2024.3479076 |
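The abstract above does not disclose UCBEE's internals, but its core idea — treating candidate exit thresholds as bandit arms and selecting among them with an upper confidence bound — can be sketched with plain UCB1. The threshold grid and reward function below are illustrative assumptions, not the paper's design (UCBEE in particular learns from intermediate-layer confidence without true labels):

```python
import math

def ucb1_threshold_selection(thresholds, reward_fn, rounds, c=2.0):
    """Pick an early-exit confidence threshold with UCB1.

    Each candidate threshold is a bandit arm; reward_fn(t) returns a
    reward in [0, 1] (e.g., a label-free proxy trading off confidence
    against latency). Returns the empirically best threshold and the
    per-arm play counts.
    """
    counts = [0] * len(thresholds)
    sums = [0.0] * len(thresholds)
    for n in range(1, rounds + 1):
        if n <= len(thresholds):
            arm = n - 1  # play each arm once to initialize
        else:
            # exploit empirical mean plus an exploration bonus
            arm = max(
                range(len(thresholds)),
                key=lambda i: sums[i] / counts[i]
                + math.sqrt(c * math.log(n) / counts[i]),
            )
        sums[arm] += reward_fn(thresholds[arm])
        counts[arm] += 1
    best = max(range(len(thresholds)), key=lambda i: sums[i] / counts[i])
    return thresholds[best], counts
```

With a reward function peaked at some threshold, the selection concentrates its plays there; logarithmic regret is the standard UCB1 guarantee the abstract alludes to.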
Qianwei Meng, Qingjun Yuan, Weina Niu, Yongjuan Wang, Siqi Lu, Guangsong Li, Xiangbin Wang, Wenqi He | IIT: Accurate Decentralized Application Identification Through Mining Intra- and Inter-Flow Relationships | 2024 | Early Access | Decentralized applications; Cryptography; Feature extraction; Fingerprint recognition; Mobile applications; Convolutional neural networks; Radio frequency; Accuracy; Transformers; Adaptation models; Decentralized applications; encrypted traffic; blockchain; transformer; deep learning | Identifying Decentralized Applications (DApps) from encrypted network traffic plays an important role in areas such as network management and threat detection. However, DApps deployed on the same platform use the same encryption settings, resulting in DApps generating encrypted traffic with great similarity. In addition, existing flow-based methods consider each flow as an isolated individual and feed it sequentially into the neural network for feature extraction, ignoring the rich information shared between flows, so the relationships between different flows are not effectively utilized. In this study, we propose a novel encrypted traffic classification model, IIT, which heterogeneously mines the potential features of intra- and inter-flows using two types of encoders based on the multi-head self-attention mechanism. By combining the complementary intra- and inter-flow perspectives, the entire process of information flow can be more completely understood and described. IIT provides a more complete perspective on network flows, with the intra-flow perspective focusing on information transfer between different packets within a flow, and the inter-flow perspective placing more emphasis on information interaction between different flows. We captured 44 classes of DApps in the real world and evaluated the IIT model on two datasets, covering DApp and malicious traffic classification tasks. The results demonstrate that the IIT model achieves a classification accuracy greater than 97% on the real-world dataset of 44 DApps, outperforming other state-of-the-art methods. In addition, the IIT model exhibits good generalization in the malicious traffic classification task. | 10.1109/TNSM.2024.3479150 |
Hiba Hojeij, Mahdi Sharara, Sahar Hoteit, Véronique Vèque | On Flexible Placement of O-CU and O-DU Functionalities in Open-RAN Architecture | 2024 | Early Access | Open RAN; Cloud computing; Computer architecture; Costs; Solid modeling; Servers; Resource management; Admittance; Delays; Biological system modeling; Open RAN; Resource Allocation; Operations Research; Simulation; Deep Learning; RNN | Open Radio Access Network (O-RAN) has recently emerged as a new trend for mobile network architecture. It is based on four founding principles: disaggregation, intelligence, virtualization, and open interfaces. In particular, RAN disaggregation involves dividing base station virtualized network functions (VNFs) into three distinct components: the Open Central Unit (O-CU), the Open Distributed Unit (O-DU), and the Open Radio Unit (O-RU), enabling each component to be implemented independently. Such disaggregation improves system performance and allows rapid and open innovation in many components while ensuring multi-vendor operability. As the disaggregation of network architecture becomes a key enabler of O-RAN, the deployment scenarios of VNFs on O-RAN clouds become critical. In this context, we propose an optimal and dynamic placement scheme for the O-CU and O-DU functionalities on the edge or in regional O-Clouds. The objective is to maximize the users’ admittance ratio by considering mid-haul delay and server capacity requirements. We develop an Integer Linear Programming (ILP) model for O-CU and O-DU placement in O-RAN architecture. Additionally, we introduce a Recurrent Neural Network (RNN) heuristic model that can effectively emulate the behavior of the ILP model. The results are promising, improving the users’ admittance ratio by up to 10% compared to state-of-the-art baselines. Moreover, our proposed model minimizes deployment costs and increases overall throughput. Furthermore, we assess the optimal model’s performance across diverse network conditions, including variable functional split options, link capacity bottlenecks, and channel bandwidth limitations. Our analysis delves into placement decisions, evaluating admittance ratio and radio and link resource utilization, and quantifying the impact on different service types. | 10.1109/TNSM.2024.3476939 |
Daniel Ayepah Mensah, Guolin Sun, Gordon Owusu Boateng, Guisong Liu | Federated Policy Distillation for Digital Twin-Enabled Intelligent Resource Trading in 5G Network Slicing | 2024 | Early Access | Indium phosphide; III-V semiconductor materials; Resource management; Collaboration; Adaptation models; Games; Dynamic scheduling; Pricing; Heuristic algorithms; Data models; Deep reinforcement learning; digital twin; federated policy distillation; Radio Access Network (RAN) slicing; Resource trading | Resource sharing in radio access networks (RAN) can be conceptualized as a resource trading process between infrastructure providers (InPs) and multiple mobile virtual network operators (MVNOs), where InPs lease essential network resources, such as spectrum and infrastructure, to MVNOs. Given the dynamic nature of RANs, deep reinforcement learning (DRL) is a suitable approach to decision-making and resource optimization, ensuring adaptive and efficient resource allocation strategies. In RAN slicing, however, DRL struggles due to imbalanced data distribution and reliance on high-quality training data. In addition, the trade-off between the global solution and individual agent goals can lead to oscillatory behavior, preventing convergence to an optimal solution. Therefore, we propose a collaborative intelligent resource trading framework with a graph-based digital twin (DT) for multiple InPs and MVNOs based on federated DRL. First, we present a customized mutual policy distillation scheme for resource trading, where complex MVNO teacher policies are distilled into InP student models and vice versa. This mutual distillation encourages collaboration to achieve personalized resource trading decisions that reach the optimal local and global solution. Second, the DT uses a graph-based model to capture the dynamic interactions between InPs and MVNOs to improve resource-trading decisions. The DT can accurately predict resource prices and demand from MVNOs to provide high-quality training data. In addition, the DT identifies underlying patterns and trends through advanced analytics, enabling proactive resource allocation and pricing strategies. The simulation results and analysis confirm the effectiveness and robustness of the proposed framework under an unbalanced data distribution. | 10.1109/TNSM.2024.3476480 |
Yonghan Wu, Jin Li, Min Zhang, Bing Ye, Xiongyan Tang | A Comprehensive and Efficient Topology Representation in Routing Computation for Large-Scale Transmission Networks | 2024 | Early Access | Routing; Network topology; Topology; Quality of service; Heuristic algorithms; Delays; Computational modeling; Computational efficiency; Bandwidth; Satellites; Large-scale transmission networks; quality of service; network topology; multi-factor assessment; routing computation | Large-scale transmission networks (LSTNs) place high quality-of-service (QoS) requirements on 6G. In an LSTN, bounded and low delay, low packet loss rates, and controllable bandwidth are required to provide guaranteed QoS, involving techniques from the network layer and the physical layer. Among these techniques, routing computation is one of the fundamental problems in ensuring high QoS, especially bounded and low delay. Research on routing computation in LSTNs includes routing recovery based on searching and pruning strategies, individual-component routing and fiber connections, and multi-point relaying (MPR)-based topology and routing selection. However, these schemes reduce routing time only through simple topological pruning or linear constraints, which is unsuitable for efficient routing in LSTNs of increasing scale and dynamics. In this paper, an efficient and comprehensive routing computation algorithm, namely multi-factor assessment and compression for network topologies (MC), is proposed. Multiple parameters of network nodes and links are jointly assessed, and topology compression is executed based on MC to accelerate routing computation. Simulation results show that MC adds some space complexity but markedly reduces the time cost of routing computation. In larger network topologies, compared with classic and advanced routing algorithms, MC-based routing algorithms achieve greater improvements in routing computation time, the number of transmitted services, the average throughput of a single route, and packet loss rates, showing potential to meet the high QoS requirements of LSTNs. | 10.1109/TNSM.2024.3476138 |
Feng Zhou, Kefeng Guo, Gaojian Huang, Xingwang Li, Evangelos K. Markakis, Ilias Politis, Muhammad Asif | Performance Evaluations for RIS-Aided Satellite Aerial Terrestrial Integrated Networks With Link Selection Scheme and Practical Limitations | 2024 | Early Access | Relays; Interference; System performance; Satellites; Satellite broadcasting; Wireless communication; Reviews; Rayleigh channels; Autonomous aerial vehicles; Physics; Satellite aerial terrestrial integrated networks; reconfigurable intelligent surface (RIS); practical limitations; system performance | This paper evaluates the performance of reconfigurable intelligent surface (RIS)-assisted satellite aerial terrestrial integrated networks. To ensure the stability of the considered network, a link selection scheme is presented to balance system performance against system efficiency. Besides, to model a practical transmission environment, imperfect hardware, channel estimation errors, and co-channel interference are all considered in the networks. Based on these considerations, a detailed analysis of the outage behavior is presented, along with the asymptotic outage probability in high signal-to-noise-ratio scenarios. Moreover, the diversity order and coding gain are also derived to provide fast methods for assessing system performance. Finally, representative simulations are provided to confirm the accuracy of the analytical results and the advantage of the proposed link selection scheme. | 10.1109/TNSM.2024.3476146 |
Kai Zhao, Xiaowei Chuo, Fangchao Yu, Bo Zeng, Zhi Pang, Lina Wang | SplitAUM: Auxiliary Model-Based Label Inference Attack Against Split Learning | 2024 | Early Access | Servers; Data models; Computational modeling; Data privacy; Training; Protocols; Privacy; Protection; Information security; Image reconstruction; Deep learning; label inference attack; split learning; federated learning; clustering | Split learning has emerged as a practical and efficient privacy-preserving distributed machine learning paradigm. Understanding the privacy risks of split learning is critical for its application in privacy-sensitive scenarios. However, previous attacks against split learning generally depended on unduly strong assumptions or non-standard settings advantageous to the attacker. This paper proposes a novel auxiliary model-based label inference attack framework against split learning, named SplitAUM. SplitAUM first builds an auxiliary model on the client side using intermediate representations of the cut layer and a small number of dummy labels. Then, the learning regularization objective is carefully designed to train the auxiliary model and transfer the knowledge of the server model to the client. Finally, SplitAUM uses the auxiliary model’s output on local data to infer the server’s private labels. In addition, to further improve the attack, we use semi-supervised clustering to initialize the dummy labels of the auxiliary model. Since SplitAUM relies only on auxiliary models, it is highly scalable. We conduct extensive experiments on three different categories of datasets, comparing against four typical attacks. Experimental results demonstrate that SplitAUM can effectively infer private labels and outperform existing attack frameworks in challenging yet practical scenarios. We hope our work paves the way for future analyses of the security of split learning. | 10.1109/TNSM.2024.3474717 |
Xin Tang, Luchao Jin, Jing Bai, Linjie Shi, Yudan Zhu, Ting Cui | Key Transferring-Based Secure Deduplication for Cloud Storage With Resistance Against Brute-Force Attacks | 2024 | Early Access | Cryptography; Encryption; Indexes; Brute force attacks; Cloud computing; Servers; Data privacy; Resists; Side-channel attacks; Privacy; Brute-force attacks; cross-user deduplication; cloud storage; privacy; key transferring | Convergent encryption is an effective technique to achieve cross-user deduplication of encrypted data in cloud storage. However, it is vulnerable to brute-force attacks on data with low min-entropy. Moreover, once the content of the target data is successfully reconstructed through such attacks, the corresponding index can also be obtained, risking a privacy violation during data downloading. To address these challenges, we propose a key transferring-based secure deduplication (KTSD) scheme for cloud storage with support for ownership verification, which significantly improves security against brute-force attacks during ciphertext deduplication and downloading. Specifically, we introduce a randomly generated key into data encryption and downloading index generation to prevent the results from being inferred, and we define a deduplication request index and a key request index using a Bloom filter to achieve brute-force-attack-resistant key transferring. An RSA-based ownership verification scheme is designed for the downloading process to effectively prevent privacy leakage. Finally, we prove the security of our schemes by security analysis and perform performance evaluation experiments, the results of which show that, compared to the state of the art, the cloud storage overhead can be reduced by 6.01% to 20.49% under KTSD. | 10.1109/TNSM.2024.3474852 |
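The listing does not specify how KTSD's request indexes are built, so the sketch below only illustrates the generic Bloom-filter mechanism such a deduplication request index could rely on: membership tests with no false negatives and a small, tunable false-positive rate. The class, its parameters, and the `is_duplicate` helper are hypothetical, not the paper's construction:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k salted SHA-256 hashes over an m-bit array."""

    def __init__(self, m=1024, k=4):
        self.m, self.k = m, k
        self.bits = bytearray(m // 8)

    def _positions(self, item):
        # derive k independent bit positions from salted hashes
        for salt in range(self.k):
            digest = hashlib.sha256(f"{salt}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def might_contain(self, item):
        # False means definitely absent; True means present or false positive
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

def is_duplicate(bf, request_index):
    """Hypothetical server-side check: the server learns only whether a
    request index (e.g., derived from a ciphertext tag) was seen before."""
    if bf.might_contain(request_index):
        return True
    bf.add(request_index)
    return False
```

The privacy-relevant property is that the filter stores only hashed bit positions, never the index values themselves, which is why this style of structure suits request indexes that must resist offline guessing.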
Hai Zhu, Xingsi Xue, Mengmeng Xu, Byung-Gyu Kim, Xiaohong Lyu, Shalli Rani | Zero-Trust Blockchain-Enabled Secure Next-Generation Healthcare Communication Network | 2024 | Early Access | Security; Medical services; Next generation networking; Access control; Codes; Authentication; Blockchains; Trajectory; Logic gates; Telemedicine; Zero-trust security; healthcare communication network; next-generation networking; blockchain | Conventional security architectures and models are single-network architecture solutions, which assume that devices authenticated within the network are implicitly trusted. However, such an approach is unsuitable for next-generation networks (NGNs). Zero-trust security was introduced to overcome these challenges using context-aware, dynamic, and intelligent authentication schemes. This paper proposes a novel zero-trust blockchain-enabled framework for a secure next-generation healthcare communication network (HCN). The proposed framework integrates zero trust and blockchain to provide a decentralized, secure, and intelligent solution for healthcare communication in NGNs. The system model comprises three components: HCN user identity modeling, blockchain and risk assessment-based access control, and a dynamic trust gateway. The user identity modeling component utilizes attribute-based user behavior trajectory features, while the access control component leverages smart contract-based risk assessment. The dynamic trust gateway component employs a consensus mechanism to achieve dynamic gateway switching and enhance network resilience. Simulation results demonstrate that the proposed framework achieves 31% lower calculation delays, 3% higher trust values, and 3% better attack detection accuracy compared to the best baseline methods. It also exhibits a 2% improvement in access control granularity and maintains 95% network throughput under various failure scenarios. | 10.1109/TNSM.2024.3473016 |
Malte Tashiro, Emile Aben, Romain Fontugne | Metis: Selecting Diverse Atlas Vantage Points | 2024 | Early Access | Probes; Windows; IP networks; Topology; Internet; Particle measurements; Data mining; Atmospheric measurements; Buildings; Writing; Bias; Internet measurements; Open Access; RIPE Atlas; vantage point selection | The popularity of the RIPE Atlas measurement platform comes primarily from its openness and unprecedented scale. The platform provides users with over ten thousand vantage points, called probes, and is usually considered as giving a reasonably faithful view of the Internet. A good use of Atlas, however, requires a clear understanding of its limitations and bias. In this work, we highlight the influence of probe locations on Atlas measurements and advocate the importance of selecting a diverse set of probes for fair measurements. We propose Metis, a data-driven probe selection method that picks a diverse set of probes based on topological properties (e.g., round-trip time or AS-path length). Using real experiments, we show that, compared to Atlas’ default probe selection, Metis’ probe selections collect more comprehensive measurement results in terms of geographical, topological, RIR, and industry-type coverage. Metis triples the number of probes from the underrepresented AFRINIC and LACNIC regions, and improves geographical diversity by increasing the number of unique countries included in the probe set by up to 59%. In addition, we extend Metis to identify locations on the Internet where new probes would be the most beneficial for improving Atlas’ footprint. Finally, we present a website where we publish periodically updated results and provide easy integration of Metis’ selections with Atlas. | 10.1109/TNSM.2024.3470989 |
Yanfei Wu, Liang Liang, Yunjian Jia, Wanli Wen | HFL-TranWGAN: Knowledge-Driven Cross-Domain Collaborative Anomaly Detection for End-to-End Network Slicing | 2024 | Early Access | Anomaly detection; Network slicing; Collaboration; Hidden Markov models; Knowledge engineering; Distributed databases; Training; Federated learning; 3GPP; Support vector machines; End-to-end network slicing; collaborative anomaly detection; knowledge-driven; generative adversarial networks; hierarchical federated learning | Network slicing is a key technology that can provide service assurance for the heterogeneous application scenarios emerging in next-generation networks. However, the heterogeneity and complexity of virtualized end-to-end network slicing environments pose challenges for network security operations and management. In this paper, we propose a knowledge-driven cross-domain collaborative anomaly detection scheme for end-to-end network slicing, namely HFL-TranWGAN. Specifically, we first design a hierarchical management framework that performs three-tier hierarchical intelligent management of end-to-end network slices, while introducing a knowledge plane to assist the management plane in making intelligent decisions. Then, we develop a knowledge-driven sub-slice anomaly detection model, the conditional TranWGAN model, in which an encoder, a generator, and multiple discriminators perform adversarial learning simultaneously. Finally, taking the sub-slice anomaly detection model as the basic training model, we utilize hierarchical federated learning to achieve inter-slice and intra-slice collaborative anomaly detection. We calculate anomaly scores from the discrimination error and reconstruction error to obtain the anomaly detection results. Simulation results on two real-world datasets show that the proposed HFL-TranWGAN scheme achieves better anomaly detection performance, such as F1 score and precision, than the benchmark methods. Specifically, HFL-TranWGAN improved precision by up to 8.53% and F1 score by up to 1.88% compared to the benchmarks. | 10.1109/TNSM.2024.3471808 |
Ailing Xiao, Sheng Wu, Yongkang Ou, Ning Chen, Chunxiao Jiang, Wei Zhang | QoE-Fairness-Aware Bandwidth Allocation Design for MEC-Assisted ABR Video Transmission | 2024 | Early Access | Quality of experience; Bit rate; Streaming media; Wireless communication; Resource management; Channel allocation; Optimization; Adaptation models; Switches; Servers; Adaptive Bitrate Video Streaming; Quality of Experience (QoE); QoE Fairness; Bandwidth Allocation | Adaptive bitrate (ABR) streaming provides an effective way to improve the Quality of Experience (QoE) of video users and is now the de facto standard for video delivery. Meanwhile, mobile edge computing (MEC) has been applied to assist ABR streaming, improving the performance of mobile networks and enabling efficient video delivery. However, smooth ABR streaming relies on the bidirectional adaptation between bitrate selection and bandwidth allocation, as they operate on distinct timescales and have different optimization goals. Moreover, since the constrained wireless resources available within a cell are shared by multiple users, their QoE should be optimized not only jointly but fairly. To this end, we propose a QoE-fairness-aware bandwidth allocation (QFA-BA) method for MEC-assisted ABR video transmission. With a novel perspective on buffer occupancy modeling, the relationship between bitrate selection and bandwidth allocation is studied. An enhanced QoE evaluation model is then proposed to correlate bitrate selection with bandwidth allocation and facilitate QFA-BA. Finally, a soft actor-critic (SAC) framework improving both QoE and QoE fairness is presented for QFA-BA. Compared with state-of-the-art methods, QFA-BA can perceive fine-grained buffer occupancy and stabilize it near a preset value with relatively more and larger bitrate switchings, exhibiting smoother convergence, better QoE (50.29%), and better QoE fairness (54.81%). | 10.1109/TNSM.2024.3471632 |
Jatinder Kumar, Deepika Saxena, Jitendra Kumar, Ashutosh Kumar Singh, Athanasios V. Vasilakos | An Adaptive Evolutionary Neural Network Model for Load Management in Smart Grid Environment | 2024 | Early Access | Neural networks; Smart meters; Load modeling; Forecasting; Biological neural networks; Accuracy; Load forecasting; Predictive models; Smart grids; Power demand; Power consumption; Load forecast; Feed-forward neural network; Differential evolutionary optimization; Demand response | To empower the management of smart meters’ demand load within a smart grid environment, this paper presents a Feed-forward Neural Network with ADaptive Evolutionary Learning Approach (ADELA). In this model, the load forecasting information is propagated via neurons of the input and multiple hidden layers, and the final estimated output is produced by the sigmoid activation function. An improved evolutionary algorithm is proposed for training and adjusting the interconnecting weights among the layers of the intended neural network. This model is capable of addressing the critical challenges of high volatility, uncertainty, missing smart meter data, and sudden upsurges and plunges in electricity demand. The proposed algorithm is able to learn the most suitable evolutionary operators from a given pool of operators and the probabilities associated with them. The proposed load forecasting approach is simulated over three real-world smart meter datasets: the Australian Smart Grid Smart City project, the Irish Commission for Energy Regulation, and UMass Smart. The performance evaluation and comparison of the proposed approach with existing state-of-the-art approaches revealed a relative improvement in forecast accuracy of up to 46.93%, 5.05%, and 2.20% over the Smart Grid Smart City, UMass Smart, and Irish Commission for Energy Regulation datasets, respectively. | 10.1109/TNSM.2024.3470853 |
Xiao He, Shanchen Pang, Haiyuan Gui, Kuijie Zhang, Nuanlai Wang, Shihang Yu | Online Offloading and Mobility Awareness of DAG Tasks for Vehicle Edge Computing | 2024 | Early Access | Vehicle dynamics; Heuristic algorithms; Servers; Resource management; Real-time systems; Dynamic scheduling; Job shop scheduling; Energy consumption; Industries; Delays; Internet of Vehicles; edge computing; directed acyclic graph; Lyapunov optimization; deep reinforcement learning | Achieving real-time processing of tasks has become a crucial objective in the Internet of Vehicles (IoV) field. During the online generation of tasks in IoV systems, many dependent tasks arrive randomly within continuous time frames, and it is impossible to predict the number of arriving tasks and the dependencies between sub-tasks. Offloading dependent tasks, which are quantity-intensive and have complex dependencies, to appropriate vehicle edge servers (VESs) for online processing of large-scale tasks remains a challenge. Firstly, we propose a novel VES task parallel processing framework incorporating a multi-level feedback queue to enhance the cross-slot parallel processing capabilities of the IoV system. Secondly, to reduce the complexity of problem-solving, we employ the Lyapunov optimization method to decouple the online task offloading control problem into a single-stage mixed-integer nonlinear programming problem. Finally, we design an online task decision-making algorithm based on multi-agent reinforcement learning to achieve real-time task offloading decisions in complex, dynamic IoV environments. To validate our algorithm’s superiority in dynamic IoV systems, we compare it with other online task offloading decision-making algorithms. Simulation results show that our algorithm significantly reduces the all-task processing latency of the IoV system by 15% compared to the comparison algorithms, and the average task latency is reduced by 14%. | 10.1109/TNSM.2024.3470777 |
Saksham Katwal, Nidhi Sharma, Krishan Kumar | A Deep Learning Approach for Throughput Enhanced Clustering and Spectrally Efficient Resource Allocation in Ultra-Dense Networks | 2024 | Early Access | Resource management; Interference; Throughput; Clustering algorithms; Quality of service; Generative adversarial networks; Elbow; Ultra-dense networks; Computational complexity; Systems architecture; Ultra-dense networks; throughput; clustering; distributed deep neural network; resource allocation | The primary obstacle for the wireless industry is meeting the growing demand for cellular services, which necessitates the deployment of numerous femto base stations (FBSs) in ultra-dense networks. Effective resource distribution among densely and randomly distributed FBSs in ultra-dense networks is difficult, mainly because of intensified interference problems. K-means clustering is improved by employing the Davies-Bouldin index, which separates the clusters to prevent overlapping and mitigate interference, and the elbow method is utilized to determine the optimal number of clusters. Afterward, attention is directed toward efficient resource allocation through a distributed methodology. The proposed approach makes use of a replay buffer-based multi-agent framework and uses the generative adversarial network deep distributional Q-network (GAN-DDQN) to efficiently model and learn state-action value distributions for intelligent resource allocation. To further improve control over the training error, the distributions are estimated by approximating a whole quantile function. The numerical results validate the effectiveness of both the proposed clustering method and the GAN-DDQN-based resource allocation scheme in optimizing throughput, fairness, energy efficiency, and spectrum efficiency, all while maintaining QoS for all users. | 10.1109/TNSM.2024.3470235 |
Yingjun Ye, Ke Ruan, Weihao Yu | Evaluation and Optimization of Backbone Network Reliability Problems Using Decision Diagram Methods | 2024 | Early Access | Reliability; IP networks; Optical fiber networks; Reliability engineering; Neural networks; Business; Optimization; Computer architecture; Computational modeling; Evaluation models; Multilayer network; backbone network reliability; multi-state network; decision diagram | The structure of the backbone network is complex, and its multi-layer architecture and non-independent IP layer links leave a lack of suitable reliability assessment models and methods for evaluating backbone network reliability. To this end, this paper uses decision diagram methods to model the dependency relationship between IP layer links and optical layer components, relaxing the assumption of independent network link failures. The decision diagram can logically combine features and, while retaining the original connectivity reliability and capacity reliability solution methods, supplements the network's dependency and inter-layer relationships with subgraph merging operations. In addition, no suitable solution yet exists for multi-terminal and all-terminal capacity reliability or business reliability. This paper exploits the directed acyclic graph structure of the decision diagram to design a state expansion algorithm, which can be used to solve the multi-terminal capacity availability of multi-state networks. Finally, exploiting the decision diagram's amenability to parallelization, parallel methods are designed for the entire network reliability evaluation process, which can alleviate the problem of state-space explosion. | 10.1109/TNSM.2024.3470076 |
Na Lin, Xiao Han, Ammar Hawbani, Yunhe Sun, Yunchong Guan, Liang Zhao | Deep Reinforcement Learning Based Dual-Timescale Service Caching and Computation Offloading for Multi-UAV Assisted MEC Systems | 2024 | Early Access | Autonomous aerial vehicles Servers Resource management Optimization Energy consumption Trajectory Delays Sun Internet of Things Time-frequency analysis Mobile edge computing (MEC) unmanned aerial vehicles (UAVs) service caching task offloading resource allocation Deep Reinforcement Learning | The emergence of unmanned aerial vehicles (UAVs) ushers in a new era for mobile edge computing (MEC), significantly expanding its service range and potential applications. Because of the limited storage capacity and energy budget of UAVs, determining a sound service caching and task offloading strategy is crucial. Service caching means that task-related programs and their associated databases are cached on edge servers. In this paper, we account for the latency and energy consumption caused by frequent changes to the service caching, aiming to jointly optimize computation offloading, resource allocation, and service caching in multi-UAV assisted MEC systems at different time scales. The objective of this optimization is to reduce overall system delay while staying within the energy limitations of both the UAVs and the ground devices. An improved service caching policy (SCP) is proposed, which is based on task popularity and utilizes the greedy dual size frequency (GDSF) algorithm. The SCP is combined with the twin delayed deep deterministic policy gradient (TD3) algorithm to yield a novel dual-timescale TD3 (DTTD3) algorithm. Extensive simulation results demonstrate that DTTD3 outperforms existing benchmark methods in terms of convergence and parameter optimization. | 10.1109/TNSM.2024.3468312 |
Xinping Rao, Le Qin, Yugen Yi, Jin Liu, Gang Lei, Yuanlong Cao | A Novel Adaptive Device-Free Passive Indoor Fingerprinting Localization Under Dynamic Environment | 2024 | Early Access | Location awareness Fingerprint recognition Accuracy Feature extraction Training Adaptation models Databases Wireless fidelity Bayes methods Support vector machines Channel State Information (CSI) Convolutional Neural Network (CNN) Domain Adaptation Indoor Device-free Passive Localization | In recent years, indoor localization has attracted considerable interest and has become a key topic of Internet of Things (IoT) research, with a wide range of application scenarios. Owing to the ubiquity of Wi-Fi platforms and the "unconscious collaborative sensing" of the monitored target, Channel State Information (CSI)-based device-free passive indoor fingerprinting localization has become a popular research topic. However, most existing studies face high deployment labor costs and degraded localization accuracy caused by fingerprint variations in real-world dynamic environments. In this paper, we propose BSWCLoc, a device-free passive fingerprint localization scheme based on the beyond-sharing-weights approach. BSWCLoc uses the calibrated CSI phases, which are more sensitive to the target location, as localization features and processes them from a two-dimensional perspective to obtain rich fingerprint information. This allows BSWCLoc to achieve satisfactory accuracy with only one communication link, significantly reducing deployment cost. In addition, a beyond-sharing-weights (BSW) method for domain adaptation is developed in BSWCLoc to address the problem of CSI changes in dynamic environments, which degrade localization performance. The BSW method adopts a dual-flow structure, where one flow runs in the source domain and the other in the target domain, with correlated but not shared weights in the adaptation layers. BSWCLoc greatly exceeds the state of the art in positioning accuracy and robustness, according to an extensive study in a dynamic indoor environment over 6 days. | 10.1109/TNSM.2024.3469374 |
Renato S. Silva, Luís Felipe M. de Moraes | GonoGo - Assessing the Confidence Level of Distributed Intrusion Detection Systems Alarms Based on BGP | 2024 | Early Access | Border Gateway Protocol Internet Routing Security Intrusion detection Machine learning Data models Routing protocols IP networks Grippers DIDS Machine Learning BGP Distributed Intrusion Detection System | Although the Border Gateway Protocol (BGP) is increasingly becoming a multi-purpose protocol, it suffers from security issues involving bogus announcements made for malicious goals. Some of these security breaches are particularly critical for distributed intrusion detection systems (DIDSs) that use BGP as their underlay network for interchanging alarms. In this case, assessing the confidence level of these BGP messages helps to prevent internal attacks. Most proposals addressing the confidence level of BGP messages rely on complex and time-consuming mechanisms that can themselves become targets for intelligent attacks. In this paper, we propose GonoGo, an out-of-band system based on machine learning that infers the confidence level of intrusion alarms using just the mandatory header of each BGP message that transports them. Tests on a synthetic data set reflecting the indirect effects of a widespread worm attack over the BGP network show promising results under well-known performance metrics such as recall, accuracy, receiver operating characteristic (ROC), and F1-score. | 10.1109/TNSM.2024.3468907 |
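The header-only confidence-scoring idea in this abstract can be illustrated with a toy supervised classifier. This is a hedged sketch, not the paper's actual pipeline: the feature set (message length, type, inter-arrival time), the synthetic "suspicious burst" labels, and the use of a random forest are all illustrative assumptions; only the length and type fields come from the mandatory BGP header of RFC 4271.

```python
# Sketch: infer a per-alarm confidence level from BGP header-derived
# features with a supervised classifier (synthetic data throughout).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(1)
n = 1000
X = np.column_stack([
    rng.integers(19, 4097, n),   # Length field (19..4096 octets, RFC 4271)
    rng.integers(1, 5, n),       # Type field (1=OPEN .. 4=KEEPALIVE)
    rng.exponential(1.0, n),     # inter-arrival time of updates (synthetic)
])
y = (X[:, 2] < 0.2).astype(int)  # synthetic "suspicious burst" label

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(Xtr, ytr)
conf_level = clf.predict_proba(Xte)[:, 1]  # per-alarm confidence in [0, 1]
f1 = f1_score(yte, clf.predict(Xte))
```

Because scoring uses only header fields already present in every BGP message, such a classifier stays out-of-band and adds no protocol overhead, which is the design point the abstract emphasizes.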