Last updated: 2024-04-19 03:01 UTC
Number of pages: 112
Author(s) | Title | Year | Issue | Keywords | Abstract | DOI |
---|---|---|---|---|---|---|
Chen Chen, Jiabao Si, Huan Li, Wei Han, Neeraj Kumar, Stefano Berretti, Shaohua Wan | A High Stability Clustering Scheme for the Internet of Vehicles | 2024 | Early Access | Magnetic heads; Clustering algorithms; Heuristic algorithms; Measurement; Delays; Stability criteria; Optimization; the Internet of Vehicles; Cluster Head Selection Scheme; Machine Learning | In existing research on cluster head selection schemes in the Internet of Vehicles (IoV), designing a stable cluster structure poses a significant challenge. Choosing a centrally located cluster head that can respond rapidly is crucial for meeting various requirements. To address these challenges, this paper introduces a machine learning-based IoV cluster head selection scheme (HSCS). We introduce a new metric termed N-cycle Average Virtual Cluster Delay (XTn) for appropriate cluster head selection. To accommodate the high dynamism of vehicles, a machine learning model is integrated to predict cluster head selection metrics across different periods, and a set of cluster head selection guidelines is formulated. Experimental results demonstrate that our proposed HSCS ensures a relatively low average intra-cluster delay while maintaining a longer cluster head retention time, and it exhibits commendable robustness. | 10.1109/TNSM.2024.3390117 |
Marco Martalò, Giovanni Pettorru, Luigi Atzori | A Cross-Layer Survey on Secure and Low-Latency Communications in Next-Generation IoT | 2024 | Early Access | Internet of Things; Surveys; Security; Protocols; Industrial Internet of Things; 6G mobile communication; Next generation networking; 6G; Internet of Things (IoT); Industrial IoT (IIoT); privacy; security; low latency; QUIC | Recent years have been characterized by strong market exploitation of Internet of Things (IoT) technologies in different application domains, such as Industry 4.0, smart cities, and eHealth. All the relevant solutions should properly address the security issues to ensure that sensor data and actuators are not under the control of malicious entities. Additionally, many applications must at the same time provide low-latency communications, as, for instance, in the remote control of industrial robots. Low latency and security are two of the most important challenges to be addressed for the successful deployment of IoT applications. These issues have been analyzed by several scientific papers and surveys that appeared in the last decade. However, few of them consider the two challenges jointly. Moreover, the security aspects are primarily investigated only in specific application domains or at specific protocol layers, and the latency issues are typically investigated only at the lower layers (e.g., physical, access). This paper addresses this shortcoming and provides a systematic review of state-of-the-art solutions for providing fast and secure IoT communications. Although the two requirements may appear to be at odds, we investigate possible integrated solutions that minimize device connection and service provisioning times. We review the proposals by grouping them based on the reference architectural layer, i.e., access, network, and application. We also review works that propose promising solutions relying on the QUIC protocol at the higher layers of the protocol stack. | 10.1109/TNSM.2024.3390543 |
Xiaojuan Wang, Yiqing Luo, Mingshu He, Xinlei Wang | A Semantic Detection Method for Network Flows With Global and Generalized Nature | 2024 | Early Access | Feature extraction; Semantics; Correlation; Threat assessment; Analytical models; Transformers; Network topology; threat detection; Semantic analysis; topological relationship; heterogeneous graph; graph convolutional network; transformer encoder | Network threat detection and identification are essential tasks in the defense of cyberspace. However, current network threat detection methods have limitations such as narrow feature extraction, targeted feature effects, and limited generalization performance. Therefore, there is a need for a more comprehensive understanding and description of network behavior. As a result, we propose a global and generalized method for semantic detection of network flows to enhance the definition of flow data and the representation of network behavior. To address the narrow feature range of existing methods, this paper designs three feature embedding methods that represent global, temporal, and local semantic correlations across both temporal and spatial dimensions: global embedding, position embedding, and learning embedding. To overcome existing methods’ focus on specific behaviors, this article constructs global correlation features rather than relying on a fixed, inherent feature set. Using text analysis techniques, we extract global embedding features containing network flow relationship information by constructing a heterogeneous topology graph between flows and bytes. These are combined with position embedding and learning embedding and fed into a transformer encoder for detection and behavior classification. We validated the effectiveness of our method in three scenarios: the Internet, the Internet of Things, and encrypted traffic. The final experimental results demonstrated that our proposed method outperformed existing advanced models. Furthermore, after incorporating the global embedding representing global correlation relationships, the model’s classification accuracy improved further. | 10.1109/TNSM.2024.3390180 |
Zhihuang Ma, Tingyu Li, Zichen Xu, Nelson L. S. da Fonseca, Zuqing Zhu | SFCache: Hybrid NF Synthesization in Runtime With Rule-Caching in Programmable Switches | 2024 | Early Access | Servers; Noise measurement; Pipelines; Runtime; Switches; Pattern matching; Middleboxes; Network function virtualization; Service function chain; Programmable data plane switch; Rule caching | Programmable data plane (PDP) switches are becoming increasingly popular for network function virtualization (NFV), for their programmability and high packet processing performance. However, the inherent limitations of PDP switches, such as limited memory space, make it challenging to implement certain types of network functions (NFs) (i.e., the stateful ones) on them. This paper proposes SFCache, which combines PDP switches and commodity servers to achieve self-adaptive SFC deployment. SFCache aims to exploit the high packet processing performance of PDP switches while supporting the flexible deployment of a wide range of SFCs (including stateful ones) on servers. Specifically, SFCache can dynamically improve the packet processing performance of SFCs deployed on servers by selectively caching SFC-level packet processing rules on PDP switches. We design a few key components to facilitate SFCache, including an NF-destructed P4 pipeline that allows customizing packet processing rules in a match-rewrite pattern, a runtime NF synthesis method that can transform a set of NF-level match-rewrite rules into an equivalent SFC-level rule, and a count-min selection strategy to choose the best synthesized rule to cache in the PDP switch pipeline (a generic count-min sketch is illustrated after the table). We prototype SFCache with a PDP switch based on a Tofino ASIC and a server, and demonstrate the effectiveness of our proposal experimentally. | 10.1109/TNSM.2024.3390140 |
Mahzabeen Emu, Salimur Choudhury, Kai Salomaa | Stochastic Resource Optimization for Metaverse Data Marketplace By Leveraging Quantum Neural Networks | 2024 | Early Access | Metaverse; Resource management; Computational modeling; Uncertainty; Servers; Cloud computing; Solid modeling; Stochastic Optimization; Metaverse; Quantum Neural Networks; Uncertainty; Resource Management | The Metaverse can unleash the potential of Internet of Senses (IoS) communication by intertwining objects and environments between the physical world and a parallel virtual world. Digitally experiencing smell or taste and navigating effortlessly in virtual reality require optimal resource allocation to strengthen the sensing-data-based infrastructure, which is a critical research challenge. Metaverse Infrastructure Service Providers (MISPs) tap into the data marketplace and subscribe to resources in advance to fulfill the needs of data consumers and users. Because the demand for data-based services is uncertain, non-optimal subscription schemes may lead to unwanted resource wastage or shortage. Thus, we propose a Stochastic Integer Programming (SIP) model with two-phase reservation and on-demand plans for optimal resource allocation in the data marketplace (the underlying reservation/on-demand trade-off is sketched after the table). Along this line, we predict the demand by leveraging Quantum Neural Networks (QNNs), which can learn from less historical data than classical machine/deep learning paradigms. Extensive simulation results show that a QNN as a supporting model can significantly reduce the computational complexity of the SIP formulation. This research can help reduce Metaverse resource fabrication costs, improve the profit margin of MISPs by increasing data-based service sales revenue, provide real-time resource management decisions, and, overall, make a real impact in the virtual world. | 10.1109/TNSM.2024.3389048 |
Everson S. Borges, Magnos Martinello, Vitor B. Bonella, Abraão J. dos Santos, Roberta L. Gomes, Cristina K. Dominicini, Rafael S. Guimarães, Gabriel T. Menegueti, Marinho Barcellos, Marco Ruffini | PoT-PolKA: Let the Edge Control the Proof-of-Transit in Path-Aware Networks | 2024 | Early Access | Routing; Cryptography; Proposals; Polynomials; Protection; Metadata; Switches; Path-Aware; Path Verification; Proof-of-transit; IOAM; In-network Programming | This paper presents a scalable and efficient solution for secure network design that involves the selection and verification of network paths. The proposal addresses the challenges related to compliance policies by introducing a feasible Proof-of-Transit (PoT) implementation for path-aware programmable networks. Our approach relies on i) a source routing mechanism based on a fixed routeID, a unique identifier per path that serves as a key for PoT lookup tables; and ii) "in situ" telemetry, which collects telemetry information in the packet as it traverses a path. The former enables path selection with policy at the edge, while the latter enables path verification without extra probe traffic. A prototype in the P4 programming language demonstrates the effectiveness of this approach in protecting against deviation attacks with low overhead. The results show its scalability in terms of protocol overhead as the path length increases, and a significant reduction in the network’s forwarding state for fat-tree topologies, depending on the workload per path (flows/path). Finally, experimental results show an RTT comparison, the impact of PoT computation, protection against path deviation, and seamless path migration that preserves flow protection. | 10.1109/TNSM.2024.3389457 |
Jingjing Yang, Yuchun Guo, Yishuai Chen, Yongxiang Zhao | MicroNet: Operation Aware Root Cause Identification of Microservice System Anomalies | 2024 | Early Access | Microservice architectures; Data aggregation; Interference; Time factors; System performance; Receivers; Logic gates; Microservice architecture; operation analysis; root cause location | Microservice architecture has been widely adopted in large-scale applications. However, it also brings new challenges to ensuring reliable performance and maintenance due to the huge volume of data and the complex dependencies of microservices. Existing approaches still suffer from over-aggregation of data, interference from anomaly propagation, and neglect of component differences. To solve these issues, this paper builds a root cause diagnosis framework at the operation granularity, named MicroNet. Since operations are subfunctions of microservices, recorded as invocation purposes, we adopt an operation-centric perspective to realize fine-grained data aggregation and operation-level anomaly backtracking. We decompose the diagnosis task into four phases: dependency graph construction, anomaly detection, anomaly evaluation, and culprit location. To construct the invocation dependency accurately, we propose the concept of the meta call, defined as the triple (caller, operation, callee), the smallest unit that can be aggregated. Based on the dependency graph, we quantify each operation’s abnormality by analyzing its execution process to backtrack propagated anomalies. Then, we customize a personalized PageRank algorithm that simultaneously considers invocation latency and different invocation relationships to identify the root cause (baseline personalized PageRank is sketched after the table). Our experimental evaluation on an open dataset shows that MicroNet can effectively locate root causes with 90% mean average precision, outperforming state-of-the-art methods. | 10.1109/TNSM.2024.3387552 |
Penghao Sun, Julong Lan, Yuxiang Hu, Zehua Guo, Chong Wu, Jiangxing Wu | Realizing the Carbon-Aware Service Provision in ICT System | 2024 | Early Access | Carbon dioxide; Electricity; Data centers; Cooling; Servers; Scheduling; Processor scheduling; Carbon Neutralization; Cloud-edge Collaboration; Software-Defined Networking; Deep Reinforcement Learning; Traffic Scheduling | The ever-growing carbon emissions of information infrastructure account for a significant proportion of global carbon emissions. Existing studies reduce carbon consumption mainly by improving the power efficiency of specific facilities or energy source structures. However, these methods do not jointly consider the impact of computation and network resource distribution on carbon emissions. In this paper, we propose a data-driven scheme named EcoNet that uses reinforcement learning to reduce carbon emissions by jointly scheduling computation and network resources. We dynamically monitor the status of the computation and network facilities using cloud-edge collaboration and software-defined networking. Based on the collected status information, we formulate resource scheduling as an optimization problem that comprehensively considers carbon emissions, electricity price, and quality of service. The problem has high computational complexity, and we solve it with the proposed EcoNet to achieve efficient scheduling and near-optimal performance based on the collected network status information. The evaluation results show that EcoNet can maintain good Quality of Service and save at least 17% of the overall cost, considering electricity bills and carbon emissions. | 10.1109/TNSM.2024.3385484 |
Lamees M. Al Qassem, Thanos Stouraitis, Ernesto Damiani, Ibrahim M. Elfadel | Containerized Microservices: A Survey of Resource Management Frameworks | 2024 | Early Access | Microservice architectures; Resource management; Cloud computing; Containers; Service level agreements; Computer architecture; Surveys; Microservices; Containers; Resource Management; Container Orchestration; Machine Learning; Workload Forecasting; Reactive Allocation; Predictive Allocation | The growing adoption of microservice architectures (MSAs) has led to major research and development efforts to address their challenges and improve their performance, reliability, and robustness. Important aspects of MSA that are not sufficiently covered in the open literature include efficient cloud resource allocation and optimal power management. Other aspects of MSA remain widely scattered in the literature, including cost analysis, service level agreements (SLAs), and demand-driven scaling. In this article, we examine recent cloud frameworks for containerized microservices with a focus on efficient resource utilization using auto-scaling. We classify these frameworks on the basis of their resource allocation models and underlying hardware resources. We highlight current MSA trends and identify workload-driven resource sharing within microservice meshes and SLA streamlining as two key areas for future microservice research. | 10.1109/TNSM.2024.3388633 |
Song Zhang, Wenxin Li, Lide Suo, Yuan Liu, Yulong Li, Jien Kato, Keqiu Li | BRT: Buffer Management for RDMA/TCP Mix-Flows in Datacenter Networks | 2024 | Early Access | Production; Kernel; Throughput; Bandwidth; Absorption; Switches; Resource management; Datacenter Network; Buffer Management; Coexistence of RDMA and TCP | The coexistence of RDMA and TCP is prevalent in datacenters. Despite sound isolation at the end hosts, they share the same switches in the network. Their different networking behaviors (e.g., in hardware demands and transport protocols) lead to vastly different buffer demands at switches. However, existing buffer management schemes ignore these dissimilarities and simply treat such RDMA/TCP mix-flows as typical multi-class traffic, resulting in inferior isolation and degraded networking performance. This paper presents BRT, the first systematic solution for buffer management of RDMA/TCP mix-flows in the DCN. BRT’s key insight is to allocate buffer space with awareness of each traffic type’s networking characteristics while minimally impacting the other’s performance. Guided by this insight, it first employs a traffic-characteristics-based window to detect whether queues are persistently long. Then, it adjusts the total allocated buffer for each traffic type based on the number of persistently long queues and the normalized dequeue rates, to reduce the buffer occupancy of meaningless queuing. Last, it calculates the buffer threshold for RDMA/TCP queues separately and uses a simple yet effective approach to prioritize the absorption of small flows. Our large-scale packet-level evaluations show that BRT can effectively optimize networking performance for RDMA/TCP mix-flows. For example, compared to current practice, BRT achieves up to 53.5%, 46.7%, and 48.5% lower average FCT for incast flows, RDMA small flows, and TCP small flows, respectively, without sacrificing overall throughput. | 10.1109/TNSM.2024.3387984 |
Somayeh Kianpisheh, Tarik Taleb | Collaborative Federated Learning for 6G With a Deep Reinforcement Learning Based Controlling Mechanism: A DDoS Attack Detection Scenario | 2024 | Early Access | 6G mobile communication; Servers; Collaboration; Time factors; Blockchains; Performance evaluation; Computational modeling; 6G; federated learning; deep reinforcement learning; recurrent neural network; DDoS attacks | Offering intelligent services with ultra-low latency and high reliability is one of the main objectives of 6G networks. Federated Learning (FL) is a solution that enhances data security and accuracy compared with purely local training on devices. The transmission cost in conventional FL is high, and performing FL using edge infrastructure is one solution. However, edge servers might not be available at every location, or communication with edge resources may prolong the learning process. This paper proposes a collaborative federated learning approach that provides intelligent services through the collaboration of various learning levels, including the central cloud level, the edge cloud level, and the device level. The computational capabilities of neighboring devices are exploited to provide fast recognition via 6G D2D communication. Learning is modeled as an optimization that trades off recognition accuracy against recognition response time for devices. Considering the dynamicity of the communication and computation status of the network and devices, a deep reinforcement learning method is proposed to decide on the collaboration of learning levels and perform the appropriate trade-off. For a DDoS attack detection scenario, the evaluation results show improvements in the gained rewards, the attack detection accuracy, the response time of recognition, and the combination of accuracy and response time. | 10.1109/TNSM.2024.3387987 |
Zhuo Hu, Bozhi Liu, Ao Shen, Jie Luo | Blockchain-Based Resource Allocation Mechanism for the Internet of Vehicles: Balancing Efficiency and Security | 2024 | Early Access | Resource management; Task analysis; Security; Optimization; Consensus protocol; Real-time systems; Vehicular ad hoc networks; Internet of Vehicles; Blockchain; Roadside Units; Resource Allocation; Consensus Mechanism | In the Internet of Vehicles (IoV), data sharing between participating vehicles and roadside units (RSUs) provides intelligent ground transportation with convenience and connectivity. To date, the IoV has produced massive heterogeneous traffic data, consuming vast computing and spectrum resources. However, due to limited computing resources, vehicles have to offload some tasks to RSUs, raising the issue of resource allocation efficiency in the IoV. This paper focuses on two aspects: optimizing resource allocation efficiency for data processing and transmission, and securing cloud-based services. Although blockchain technology provides a feasible solution for the safe interaction of IoV information, the consensus process of blockchains still consumes substantial resources. To address this issue, this paper proposes a Blockchain-Based and Resource Allocation Balanced Information Interaction Mechanism (BRAB-IIM) to achieve a balance between efficiency and security. In BRAB-IIM, maximizing the overall interests of the resource trading market and minimizing resource consumption are the two conflicting goals. Meanwhile, a vehicle classification method is proposed that considers the real-time urgency of interactive information. Additionally, a Hierarchical Delegated Proof of Reputation (hDPoR) consensus mechanism is designed, which combines a vehicle’s reputation value with the priority level of its node vote by prioritizing vehicle demands. Finally, three different optimization algorithms are deployed for the dual-objective optimization of overall market resource allocation. The experimental results indicate that, compared to the baseline algorithm, the NSGA-II algorithm increases the overall market revenue by 24% under the same energy consumption. It is also numerically validated that BRAB-IIM improves the effectiveness and security of the IoV. | 10.1109/TNSM.2024.3387931 |
Jingjing Hu, Yu Li, Zhaozhao Li, Qinrang Liu, Jiangxing Wu | Unveiling the Strategic Defense Mechanisms in Dynamic Heterogeneous Redundancy Architecture | 2024 | Early Access | Security; Computer architecture; Runtime environment; Games; Adaptive control; Termination of employment; Mathematical models; Dynamic Heterogeneous Redundancy; Nash Equilibrium; game theory; offense-defense simulation | The Dynamic Heterogeneous Redundancy (DHR) architecture presents a novel approach to system design and organization, aiming to enhance system security by integrating dynamicity and heterogeneity into its structure. Despite its practical efficacy, the theoretical underpinnings elucidating the mechanisms through which DHR enhances security remain unestablished. This study endeavors to bridge this gap by conducting a theoretical analysis and modeling of the DHR architecture, focusing on its intrinsic characteristics and their implications for system security. Employing static game theory, our research uncovers the unique Nash equilibrium within the DHR architecture. Expanding upon this mathematical framework, we delve into how factors such as dynamicity, heterogeneity, and failure rates influence these equilibria, subsequently shaping system security. To validate our findings, we conduct a case study involving a triply redundant DHR system and simulate the offense-defense interplay using the Adam optimization algorithm within boundedly rational static games. Our results affirm the variations in system security under diverse initial conditions and model states, thereby establishing a robust theoretical foundation for DHR architectures and laying the groundwork for their broader comprehension and application across various domains. | 10.1109/TNSM.2024.3387725 |
Zhang Liu, Lianfen Huang, Zhibin Gao, Manman Luo, Seyyedali Hosseinalipour, Huaiyu Dai | GA-DRL: Graph Neural Network-Augmented Deep Reinforcement Learning for DAG Task Scheduling over Dynamic Vehicular Clouds | 2024 | Early Access | Task analysis; Dynamic scheduling; Topology; Vehicle dynamics; Processor scheduling; Heuristic algorithms; Feature extraction; Vehicular cloud; DAG scheduling; deep reinforcement learning; graph neural network | Vehicular Clouds (VCs) are modern platforms for processing computation-intensive tasks over vehicles. Such tasks are often represented as Directed Acyclic Graphs (DAGs) consisting of interdependent vertices/subtasks and directed edges. However, efficient scheduling of DAG tasks over VCs presents significant challenges, mainly due to the dynamic service provisioning of vehicles within VCs and the non-Euclidean representation of DAG tasks’ topologies. In this paper, we propose a Graph neural network-Augmented Deep Reinforcement Learning scheme (GA-DRL) for the timely scheduling of DAG tasks over dynamic VCs. In doing so, we first model VC-assisted DAG task scheduling as a Markov decision process. We then adopt a multi-head Graph ATtention network (GAT) to extract the features of DAG subtasks. Our GAT enables a two-way aggregation of the topological information in a DAG task by simultaneously considering the predecessors and successors of each subtask. We further introduce non-uniform DAG neighborhood sampling by codifying the scheduling priority of different subtasks, which makes our GAT generalizable to completely unseen DAG task topologies. Finally, we integrate the GAT into a double deep Q-network learning module to conduct subtask-to-vehicle assignment according to the extracted features of subtasks, while considering the dynamics and heterogeneity of the vehicles in VCs. By simulating various DAG tasks under real-world movement traces of vehicles, we demonstrate that GA-DRL outperforms existing benchmarks in terms of DAG task completion time. | 10.1109/TNSM.2024.3387707 |
Abdullatif Albaseer, Nima Abdi, Mohamed Abdallah, Marwa Qaraqe, Saif Al-Kuwari | FedPot: A Quality-Aware Collaborative and Incentivized Honeypot-Based Detector for Smart Grid Networks | 2024 | Early Access | Security; Data models; Costs; Training; Industrial Internet of Things; Data integrity; Data privacy; AMI; Honeypot-Based Detector; Security Model; Machine Learning; Incentive Mechanism; Collaborative Learning | Honeypot technologies provide an effective defense strategy for the Industrial Internet of Things (IIoT), particularly in enhancing the Advanced Metering Infrastructure’s (AMI) security by bolstering the network intrusion detection system. For this security paradigm to be fully realized, it requires the active participation of small-scale power suppliers (SPSs) in implementing honeypots and engaging in collaborative data sharing with traditional power retailers (TPRs). To motivate this interaction, TPRs incentivize data sharing with tangible rewards. However, without access to an SPS’s confidential data, it is daunting for TPRs to validate shared data, thereby risking SPSs’ privacy and increasing sharing costs due to voluminous honeypot logs. These challenges can be resolved by utilizing Federated Learning (FL), a distributed machine learning (ML) technique that allows model training without data relocation. However, the conventional FL algorithm lacks the requisite functionality for both the security defense model and the rewards system of the AMI network. This work presents two solutions: first, an enhanced and cost-efficient FedAvg algorithm incorporating a novel data quality measure (generic quality-weighted averaging is sketched after the table), and second, FedPot, an effective security model with a fair incentive mechanism under an FL architecture. Accordingly, SPSs share only the ML model they learn after efficiently measuring their local data quality, whereas TPRs can verify the participants’ uploaded models and fairly compensate each participant for their contributions through rewards. Moreover, the proposed scheme addresses, through a two-step verification approach, the problem of harmful participants who share subpar models while claiming high-quality data. Simulation results, drawn from realistic microgrid network log datasets, demonstrate that the proposed solutions outperform state-of-the-art techniques by enhancing the security model and guaranteeing fair reward distributions. | 10.1109/TNSM.2024.3387710 |
Taha Ben Salah, Marios Avgeris, Aris Leivadeas, Ioannis Lambadaris | VNF Placement and Dynamic NUMA Node Selection Through Core Consolidation at the Edge and Cloud | 2024 | Early Access | Servers; Delays; Resource management; Costs; Random access memory; Quality of service; Throughput; Network Function Virtualization; Service Function Chaining; Edge Computing; Cloud Computing; Resource Allocation; Non-Uniform Memory Access | The recent networking trends driven primarily by virtualization technologies, such as Network Function Virtualization (NFV) and Service Function Chaining (SFC), pave the way for next-generation network services. In the 5G and beyond era, such services usually have strict delay requirements, and distributing their computational needs across the Edge-to-Cloud continuum is certainly a step in the right direction. However, the majority of optimization solutions for placing virtualized services so far focus on server selection, leaving other areas, such as the impact of Non-Uniform Memory Access (NUMA) and CPU core selection, underexplored. In this work, we formulate the problem of placing services as SFCs on an Edge/Cloud infrastructure as a Mixed Integer Programming (MIP) problem. Then, we propose a heuristic algorithm called “Dynamic numa node Selection through Cores consolidation” (DySCo) to solve it, which optimizes the placement in terms of server, NUMA, and core selection. To the best of our knowledge, this is the first attempt to optimize network service placement at this granularity in an Edge-Cloud interplay. Extensive simulation evaluation shows that DySCo performs close to optimal while finding a solution in real time. Compared to a mix of baselines and solutions from the literature modified to treat this new problem, DySCo reduces the deployment cost by 17.53% and the delay by 28.88% on average for a given SFC. | 10.1109/TNSM.2024.3387275 |
Yuxing Tian, Lei Liu, Jie Feng, Qingqi Pei, Chen Chen, Jun Du, Celimuge Wu | Towards Robust and Generalizable Federated Graph Neural Networks for Decentralized Spatial-Temporal Data Modeling | 2024 | Early Access | Data models; Servers; Training; Graph neural networks; Message passing; Sensors; Predictive models; Federated learning; split learning; graph neural network; spatial-temporal forecasting | Federated learning has been combined with graph learning for modeling spatial-temporal data while maintaining data confidentiality and safety. However, several issues remain: 1) In practical usage, some clients may be unable to participate in model inference due to a poor network signal, malicious attacks, etc. 2) In the communication process, the uploaded information is easily disturbed by noise, and the performance of a graph model with low robustness will be seriously affected. Additionally, the assumption of identical distributions between the training and testing domains does not hold in practical scenarios, resulting in overfitting and poor generalization ability of the trained models. 3) The relations among clients may change dynamically over time, and manually constructing the graph structure of clients may not accurately represent these relations. In this paper, we address all the above limitations by proposing a robust hierarchical split-federated graph model named DCSFG. Specifically, DCSFG combines split-federated learning and a spatial-temporal graph model to better capture spatial-temporal dependencies. We propose a Dropclient method and introduce uncertainty estimation to enhance the robustness and generalization ability of the model. We also design a dual-sub-decoder structure for clients so that they can perform predictions locally and independently when they are unable to participate in the inference process. A novel hierarchical graph message passing structure is proposed to enable each client to perceive both global and local information. Extensive experimental results demonstrate the effectiveness of DCSFG. | 10.1109/TNSM.2024.3386740 |
Gustavo Z. Bruno, Gabriel M. Almeida, Aditya Sathish, Aloízio P. da Silva, Luiz A. DaSilva, Alexandre Huff, Kleber V. Cardoso, Cristiano B. Both | Evaluating the Deployment of a Disaggregated Open RAN Controller On a Distributed Cloud Infrastructure | 2024 | Early Access | Cloud computing; Computer architecture; Costs; Data centers; Task analysis; Resource management; Real-time systems; near-RT RIC; B5G; cloud-native; O-RAN; disaggregation; CNF placement | This article investigates the deployment of a Near-Real-Time Radio Access Network (RAN) Intelligent Controller (near-RT RIC) on a distributed cloud infrastructure composed of multiple physical sites with different amounts of resources and associated costs. The challenge is dynamically adapting the near-RT RIC deployment to the most cost-effective arrangement while meeting the latency requirements between the near-RT RIC and the controlled nodes. We introduce an optimization model to solve the disaggregated near-RT RIC placement problem, considering a cloud-native infrastructure to minimize the placement cost while satisfying the latency-sensitive control loop requirements across the cloud-edge continuum. Moreover, we describe an experimental environment we created using geographically disparate cloud sites. We present data detailing the latencies of the communication links among these sites and the costs incurred in using this real-world infrastructure. We conduct a performance evaluation of the near-RT RIC deployment, comparing the distributed approach with a traditional monolithic strategy and evaluating positioning costs; deployment, setup, and registration times; and control loop latency across three scenarios. Our results show that in a cloud-native environment, the disaggregated near-RT RIC allows cost savings of up to 60% compared to a monolithic near-RT RIC, while satisfying the control loop latency and achieving time efficiency in the deployment and registration of xApps and near-RT RIC components. | 10.1109/TNSM.2024.3386902 |
Ru Huo, Xiangfeng Cheng, Chuang Sun, Tao Huang | A Cluster-Based Data Transmission Strategy for Blockchain Network in the Industrial Internet of Things | 2024 | Early Access | Blockchains; Industrial Internet of Things; Edge computing; Data communication; Computer architecture; Topology; Cloud computing; Industrial Internet of Things (IIoT); blockchain; edge computing; clustering; data transmission strategy | The proliferation of devices and data in the Industrial Internet of Things (IIoT) has rendered the traditional centralized cloud model unable to meet the stringent wide-scale and low-latency requirements of IIoT scenarios. Edge computing, an emerging technology, enables real-time processing and analysis on devices situated closer to the data source while reducing bandwidth requirements. Blockchain, being decentralized, can enhance data security. Therefore, edge computing and blockchain are integrated in the IIoT to reduce latency and improve security. However, the inefficient data transmission of blockchain increases transmission latency in the IIoT. To address this issue, we propose a cluster-based data transmission strategy (CDTS) for the blockchain network. First, an improved weighted label propagation algorithm (WLPA) is proposed for clustering blockchain nodes (baseline weighted label propagation is sketched after the table). Next, a spanning tree topology construction (STTC) is designed to simplify the blockchain network topology based on the node clustering results. Finally, leveraging the clustered nodes and tree topology, we propose a data transmission strategy to accelerate data transmission. Simulation experiments show that CDTS effectively reduces data transmission time and better supports large-scale IIoT scenarios. | 10.1109/TNSM.2024.3387120 |
Mohammad Hossein Shokouhi, Mohammad Hadi, Mohammad Reza Pakravan | Mobility-Aware Computation Offloading for Hierarchical Mobile Edge Computing | 2024 | Early Access | Servers; Cloud computing; Task analysis; Computer architecture; Edge computing; Optimization; Costs; Computation offloading; Edge computing; Mobile edge computing; Mobility management; Resource allocation | Mobile edge computing (MEC) is a promising technology that aims to reduce the total latency of user equipment (UE) by deploying computation resources at the edge of mobile networks. UE mobility is a challenging factor that causes the traditional MEC architecture to suffer from several issues, such as decreased efficiency and frequent service interruptions. One popular method to manage UE mobility is virtual machine (VM) migration, which requires high bandwidth and causes undesirable latency, rendering it impractical for real-time tasks with stringent latency requirements. This paper proposes a hierarchical architecture for MEC networks that facilitates mobility management and mitigates the need for VM migration. To utilize this architecture efficiently, a Markov chain-based predictive strategy is introduced to forecast UE mobility (a first-order sketch appears after the table). Afterward, an optimization problem is formulated to make optimal long-term offloading decisions for UEs such that their expected cost is minimized subject to latency commitments and resource consumption constraints. Simulation results demonstrate that the proposed scheme reduces the cost of high-mobility UEs by up to 25% compared to traditional schemes. Furthermore, measures of movement-direction predictability and offloading-decision popularity are introduced to provide insight into the behavior of the proposed and counterpart schemes. | 10.1109/TNSM.2024.3386845 |
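
The abstracts above name several standard techniques without detailing them; the sketches below illustrate those generic building blocks in Python. They are editorial illustrations under stated assumptions, not the papers' implementations. For the SFCache entry (Ma et al.), the count-min selection strategy is named but not specified; a count-min sketch is a common way to estimate per-rule match frequency in bounded memory, and the toy below (rule identifiers and sketch dimensions are hypothetical) picks the most frequently matched synthesized rule as a caching candidate.

```python
import hashlib

class CountMinSketch:
    """Approximate per-key frequency counter in O(width * depth) memory."""

    def __init__(self, width: int = 1024, depth: int = 4):
        self.width = width
        self.depth = depth
        self.table = [[0] * width for _ in range(depth)]

    def _indexes(self, key: str):
        # One counter index per row, derived from independent salted hashes.
        for row in range(self.depth):
            digest = hashlib.md5(f"{row}:{key}".encode()).digest()
            yield row, int.from_bytes(digest[:8], "big") % self.width

    def add(self, key: str, count: int = 1):
        for row, col in self._indexes(key):
            self.table[row][col] += count

    def estimate(self, key: str) -> int:
        # Taking the minimum over rows bounds the overestimation error.
        return min(self.table[row][col] for row, col in self._indexes(key))

# Hypothetical trace of which synthesized SFC-level rule each packet matched.
cms = CountMinSketch()
for rule in ["rule_a", "rule_b", "rule_a", "rule_c", "rule_a", "rule_b"]:
    cms.add(rule)
candidates = {"rule_a", "rule_b", "rule_c"}
print(max(candidates, key=cms.estimate))  # rule_a, the hottest rule
```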
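For the Metaverse data marketplace entry (Emu et al.), the two-phase SIP formulation is not reproduced in the abstract. The fragment below illustrates only the reservation/on-demand cost trade-off that such a formulation optimizes; the prices and demand distribution are invented, and exhaustive search stands in for an integer-programming solver.

```python
# Hypothetical prices: reserving early is cheaper than buying on demand.
RESERVED_PRICE = 1.0   # cost per resource unit subscribed in advance
ON_DEMAND_PRICE = 3.0  # cost per extra unit bought after demand is revealed
demand_scenarios = [(80, 0.2), (100, 0.5), (140, 0.3)]  # (demand, probability)

def expected_cost(reserved: int) -> float:
    cost = RESERVED_PRICE * reserved
    for demand, prob in demand_scenarios:
        # Second stage: any shortfall is covered at the on-demand price.
        cost += prob * ON_DEMAND_PRICE * max(0, demand - reserved)
    return cost

# The optimal reservation balances wasted capacity against costly shortfall.
best = min(range(201), key=expected_cost)
print(best, round(expected_cost(best), 2))
```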
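For the MicroNet entry (Yang et al.), the customized personalized PageRank (with invocation latency and relationship types) is the paper's contribution and is not reproducible from the abstract; the sketch below is only the textbook personalized PageRank iteration on a toy invocation-dependency graph, with the graph, edge weights, and preference vector invented.

```python
# Toy invocation-dependency graph: caller -> {callee: edge weight}.
graph = {
    "frontend": {"cart": 0.7, "catalog": 0.3},
    "cart": {"db": 1.0},
    "catalog": {"db": 1.0},
    "db": {},
}
nodes = list(graph)
# Restart (preference) vector, e.g., concentrated where the anomaly surfaced.
preference = {"frontend": 1.0, "cart": 0.0, "catalog": 0.0, "db": 0.0}

def personalized_pagerank(damping: float = 0.85, iters: int = 100):
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        nxt = {n: (1 - damping) * preference[n] for n in nodes}
        for u, edges in graph.items():
            total = sum(edges.values())
            if total == 0:
                # Dangling node: redistribute its mass via the preference vector.
                for n in nodes:
                    nxt[n] += damping * rank[u] * preference[n]
            else:
                for v, w in edges.items():
                    nxt[v] += damping * rank[u] * (w / total)
        rank = nxt
    return rank

scores = personalized_pagerank()
print(max(scores, key=scores.get))  # highest-scoring candidate root cause
```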
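For the FedPot entry (Albaseer et al.), the novel data quality measure is not defined in the abstract; the snippet below shows only generic quality-weighted federated averaging, with the per-client quality scores treated as given rather than computed by the paper's method.

```python
import numpy as np

def quality_weighted_fedavg(client_models, quality_scores):
    """Average client parameter vectors, weighting each client by its
    (externally supplied) data-quality score instead of its sample count."""
    scores = np.asarray(quality_scores, dtype=float)
    weights = scores / scores.sum()    # normalize to a convex combination
    stacked = np.stack(client_models)  # shape: (n_clients, n_params)
    return np.average(stacked, axis=0, weights=weights)

# Hypothetical round: three SPS clients with differing honeypot-log quality.
models = [np.array([0.2, 1.0]), np.array([0.4, 0.8]), np.array([1.0, 0.0])]
qualities = [0.9, 0.7, 0.1]  # a low-quality (possibly subpar) model is discounted
print(quality_weighted_fedavg(models, qualities))
```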
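For the CDTS entry (Huo et al.), the improvements to WLPA are the paper's contribution; below is baseline weighted label propagation on a toy blockchain peer graph, with edge weights standing in for an assumed affinity metric such as inter-node bandwidth.

```python
import random

# Toy peer graph: node -> {neighbor: edge weight}; weights are hypothetical.
graph = {
    "a": {"b": 3.0, "c": 2.5, "d": 0.2},
    "b": {"a": 3.0, "c": 2.0, "d": 0.1},
    "c": {"a": 2.5, "b": 2.0, "d": 0.3},
    "d": {"a": 0.2, "b": 0.1, "c": 0.3, "e": 4.0},
    "e": {"d": 4.0},
}

def weighted_label_propagation(graph, rounds: int = 20, seed: int = 0):
    rng = random.Random(seed)
    labels = {n: n for n in graph}  # every node starts in its own cluster
    nodes = list(graph)
    for _ in range(rounds):
        rng.shuffle(nodes)  # update nodes in random order each round
        changed = False
        for n in nodes:
            # Adopt the neighboring label with the largest total edge weight.
            tally = {}
            for nbr, w in graph[n].items():
                tally[labels[nbr]] = tally.get(labels[nbr], 0.0) + w
            best = max(tally, key=tally.get)
            if best != labels[n]:
                labels[n], changed = best, True
        if not changed:
            break
    return labels

print(weighted_label_propagation(graph))  # e.g., {a, b, c} vs. {d, e}
```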
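For the mobility-aware offloading entry (Shokouhi et al.), the Markov chain-based predictive strategy is described only at a high level. A minimal first-order sketch, with the UE cell-association trace invented, estimates a transition matrix from history and predicts the most likely next cell.

```python
from collections import Counter, defaultdict

# Hypothetical UE cell-association history (first-order Markov assumption).
trace = ["c1", "c2", "c2", "c3", "c1", "c2", "c3", "c3", "c1", "c2"]

# Estimate transition probabilities from observed cell-to-cell moves.
counts = defaultdict(Counter)
for cur, nxt in zip(trace, trace[1:]):
    counts[cur][nxt] += 1
transition = {
    cur: {nxt: c / sum(nxts.values()) for nxt, c in nxts.items()}
    for cur, nxts in counts.items()
}

def predict_next(cell: str) -> str:
    """Most likely next cell; long-term offloading decisions can build on this."""
    return max(transition[cell], key=transition[cell].get)

print(transition["c1"])    # {'c2': 1.0}: every observed move from c1 was to c2
print(predict_next("c2"))  # 'c3' (2 of the 3 observed moves from c2)
```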