Last updated: 2024-11-20 04:01 UTC
Author(s) | Title | Year | Publication | Keywords | Abstract | DOI
---|---|---|---|---|---|---
Giannis Savva, Konstantinos Manousakis, Georgios Ellinas | A Network Coding Optimization Approach for Physical Layer Security in Elastic Optical Networks | 2024 | Early Access | Network Coding Security Routing and Spectrum Allocation Elastic Optical Networks | In this work, the network coding (NC) technique is used in combination with routing and spectrum allocation (RSA) to establish confidential connections in elastic optical networks and protect these connections against eavesdropping attacks. Utilizing NC, the signals of the confidential connections are encrypted using other signals at different nodes in their path while transmitted through the network, preventing an eavesdropper tapping a link from gaining access to confidential information. A novel mixed integer linear program (MILP) is proposed to enable encrypted transmission of confidential connections, in combination with a mapping function that maximizes the security level provided for the largest possible set of confidential connections. Further, a heuristic approach is presented for the combined routing, spectrum, and network coding assignment (RSNCA) problem, along with a metaheuristic for larger-sized networks. The proposed approaches are examined in terms of confidentiality, average number of encryption operations (EOs), spectrum utilization, and running times. Performance results demonstrate the applicability of the MILP formulations in providing optimal lightpath establishments that maximize physical layer security, with the proposed MILP-RSNCA approach achieving the best results in terms of the level of security and the average number of EOs provided, albeit utilizing more spectrum slots. Further, the metaheuristic exhibits results comparable to the MILP in terms of providing security, and can also be applied to large-size networks. In some scenarios, it can even provide the highest available level of security, i.e., a minimum of 7 EOs per link per demand for more than 50% of the confidential demands. | 10.1109/TNSM.2024.3498108
Renchao Xie, Wen Wen, Wenzheng Wang, Qinqin Tang, Xiaodong Duan, Lu Lu, Tao Sun, Tao Huang, Fei Richard Yu | Incentive Mechanism Design for Trust-Driven Resources Trading in Computing Force Networks: Contract Theory Approach | 2024 | Early Access | Computing Force Networks reputation evaluation contract theory resources trading information asymmetry | Recently, Computing Force Networks (CFNs) have emerged to deeply integrate and flexibly schedule multi-layer, multi-domain, distributed, and heterogeneous computing force resources. CFNs build a resources trading platform between consumers and providers, facilitating efficient resource sharing. Resources trading is thus an important issue, but it faces several challenges. Firstly, because large-scale and small-scale resource providers alike are distributed over a wide area and the number of consumers is larger than in edge/cloud computing scenarios, the credibility of consumers and providers is hard to guarantee. Secondly, due to market monopolies by large resource providers, fixed pricing strategies, and information asymmetry, both consumers and providers exhibit a low willingness to engage in resources trading. To address these challenges, this paper proposes an incentive mechanism for trust-driven resources trading to guarantee trusted and efficient resources trading. We first design a trust guarantee scheme based on reputation evaluation, blockchain, and trust threshold setting. Then, the proposed incentive scheme can dynamically adjust prices and enable the platform to provide appropriate rewards based on providers’ classified types and contributions. We formulate an optimization problem aiming at maximizing the trading platform’s utility and obtaining an optimal contract under individual rationality and incentive compatibility constraints. Simulation results verify the feasibility and effectiveness of our scheme, highlighting its potential to reshape the future of computing resource management, increase overall economic efficiency, and foster innovation and competitiveness in the digital economy. | 10.1109/TNSM.2024.3490734
Hua Wang, YanXian Bi, Fulong Yan, Long Wang, Jie Zhang, Lena Wosinska | Multi-Beam Satellite Optical Networks: A Joint Time-Slot Resource Scheduling for End-to-End Services From a Networking Perspective | 2024 | Early Access | Satellites Laser beams Optical fiber networks Bandwidth Resource management Satellite broadcasting Microwave communication Hybrid transmission resources optical wavelength multibeam resource allocation satellite optical networks (SONs) | Satellite optical networks combined with multi-beam technologies can be referred to as multi-beam satellite optical networks (MB-SONs). These networks are expected to play a crucial role in satellite internet, potentially achieving Gigabit/s inter-satellite (IS) communication in the future. However, the growing demand for satellite-to-mobile communications brings a challenge in making use of hybrid IS and satellite-to-ground (SG) transmission resources, which is worth studying. Given the different characteristics of IS and SG links in terms of time windows and transmission capacities, existing solutions can hardly provide a suitable option for the optimal utilization of transmission capacity in MB-SONs. In this paper, we focus on the joint scheduling of optical wavelengths on IS links and multiple beams on SG links for services sent from one ground station to another (referred to as end-to-end services). In response to the above-mentioned differences between IS and SG links, we find that at least two constraints need to be followed in the joint scheduling. The first is that the length of the common time window (CTW) cannot exceed the minimum time window of the IS and SG links on the path. The second is that the transmission capacity of a path depends on the length of the CTW and the minimum bandwidth of the IS and SG links, which should meet the service requirements. Considering these two constraints, the main contributions of this paper are: i) defining the joint time slot allocation (JTSA) problem for multiple beams and wavelengths with different transmission capacities, ii) proposing an integer linear programming (ILP) model with the objective of minimizing the number of time slots occupied by services, and iii) designing the common time window for time slot assignment (CTW-TSA) algorithm as an option for practical implementations. The proposed ILP model and CTW-TSA algorithm are evaluated through simulation, comparing the CTW-TSA algorithm against separate resource scheduling (SRS) without the store-and-forward function. The results show a reduction of almost 0.201 in service blocking probability and an increase in average bandwidth utilization of about 0.159 for IS links and 0.164 for SG uplinks/downlinks. | 10.1109/TNSM.2024.3468376
Saifullah Khalid, Ishfaq Ahmad, Hansheng Lei | A Consortium Blockchain-Based Approach for Energy Sharing in Distribution Systems | 2024 | Early Access | Blockchain Distribution system Energy sharing Microgrid Optimization Proof-of-Welfare (PoWel) Service restoration Smart grid | Power network disruptions triggered by weather events, malfunction, sabotage, or other phenomena can leave harrowing effects on communities. Microgrids with distributed energy resources have the potential to enable swift localized restoration following the interruption of utility power. However, because of the limited generation and storage capacity of microgrids, service restoration will require prioritizing critical loads and optimizing operations to render relief to those in dire need. Moreover, this essentially requires fair energy allocation, trust-free energy exchanges, and the integrity of transactions. This paper proposes a consortium blockchain-based energy sharing approach for service restoration using energy crowdsourced through donation or trade. The proposed approach provides a framework that utilizes microgrids’ supply situation and reputation as consortium admission criteria for optimizing the energy cost of blockchain operations. The proposed approach uses a measure called proof of welfare (PoWel), which solves the rationing problem to produce an energy allocation block. Accordingly, it proposes an algorithm that utilizes weighted rationing to prioritize critical load restoration and an evolutionary optimization algorithm to maximize social welfare and minimize power losses. The winner block selected through consensus intrinsically preserves the network stability while conforming to resource and stability constraints. An extensive performance and security analysis ascertains the effectiveness of the proposed approach. | 10.1109/TNSM.2024.3501397
Erison Ballasheni, Dritan Nace, Artur Tomaszewski, Alban Zyle | A Probabilistic Optimisation Approach to the Equitable Controller Location Problem | 2024 | Early Access | SDN controller placement network resilience node attacks solution fairness | The ability of the Software-Defined Network (SDN) to transport traffic flows depends, in particular, on the SDN switches being able to communicate with SDN controllers, which are responsible for the setup of network connections and the configuration of switches. Since in principle the number of SDN controllers is limited, they must be installed at a set of carefully selected node locations. Whereas the problem of controller placement is well defined and has been thoroughly studied and effectively solved for the nominal network state, it becomes difficult when the network is subject to attacks, especially as they can occur anywhere and anytime. In this paper, we tackle the problem of controller placement resilient to network attacks, considering a probabilistic characterisation of attacks and equitable access of switches to controllers. We treat the problem as a specific facility location problem with the objective of providing resilience to network attacks, and analyze a couple of major solution fairness criteria. We derive a number of Mixed-Integer Linear Programming (MILP) problem models that exploit robust optimization and fair optimization concepts and techniques, and examine their effectiveness by means of a numerical study that uses five transport networks. | 10.1109/TNSM.2024.3500095
Peiran Zhong, Fanqin Zhou, Lei Feng, Wenjing Li | FINT: Freshness-based In-band Network-Wide Telemetry in Resource-constrained Environments | 2024 | Early Access | Age of information data plane programmability In-band Network Telemetry (INT) network management P4 | A new network monitoring technology called in-band network telemetry (INT) offers users the ability to gather precise, real-time data about the entire network. While several studies have used in-band network telemetry for network-wide monitoring, it is insufficient for the growing number of massive networks that are resource-constrained. Finding the most valuable information with the least amount of resources is challenging. In this paper, we formalize the problem of in-band network-wide telemetry with low resource overheads. This formalized problem aims to achieve high freshness of network performance data while using fewer resources and reducing deployment and maintenance costs for O&M personnel. We propose a heuristic method based on path reorganization to address this issue. The heuristic algorithm for planning paths starts from greedy path planning results and finds a more appropriate planning scheme by merging and reorganizing paths. Additionally, probabilistic insertion is considered to reduce the intrusiveness of in-band network telemetry. Simulation results show that our approach is effective in improving resource utilization and reducing the cost of monitoring the network compared to similar studies. | 10.1109/TNSM.2024.3500586
Madhu Donipati, Ankur Jaiswal, Abhishek Hazra, Nabajyoti Mazumdar, Jagpreet Singh | Optimizing UAV-Based Data Collection in IoT Networks With Dynamic Service Time and Buffer-Aware Trajectory Planning | 2024 | Early Access | IoT Buffer-Awareness Unmanned Aerial Vehicles Data Gathering Time-windows | Unmanned Aerial Vehicles (UAVs) have become vital tools for data collection in Internet of Things (IoT) networks, enabling efficient monitoring and information acquisition across various domains. However, UAV-assisted IoT networks often face significant challenges such as high data loss, latency, and resource inefficiency due to inadequate buffer management and dynamic service time (DST) allocation for IoT nodes. Existing approaches frequently overlook critical factors such as IoT nodes’ energy levels, UAVs’ energy constraints, and the dynamic data generation rates of IoT devices. To address these challenges, this article introduces a novel strategy for dynamically optimizing UAV trajectories by integrating real-time data from ground nodes and UAV energy levels. Given the DST and buffer constraints (BC) of IoT devices, optimizing UAV trajectories for data collection is a complex problem. We propose an optimization framework that strategically plans UAV trajectories to minimize service time at designated Rendezvous Points (RPs) and bypasses RPs when necessary to reduce the overall trajectory path, taking into account buffer status and dynamic service time adjustments. To efficiently solve this optimization problem, we employ a meta-heuristic technique known as Path Cheapest Arc with Guided Local Search (PCA-GLS) for UAV route planning within predefined time windows. Extensive simulations demonstrate the effectiveness of our proposed solution in optimizing UAV trajectories and improving data collection performance compared to existing algorithms such as BA-UAV, ACO-MS, and NSGA-II. | 10.1109/TNSM.2024.3500778 |
Min An, Xuan Zhang, Jishu Wang, Qiyuan Fan, Chen Gao, Linyu Li, Cuizhen Lu, Nan Li, Yingchen Liu | RLChain: A DRL Approach for Blockchain Performance Optimization Towards IIoT | 2024 | Early Access | Blockchain deep reinforcement learning (DRL) Industrial Internet of Things (IIoT) performance optimization scalability latency | With the development of communication technology and the Internet of Things, the Industrial Internet of Things (IIoT) is proposed in the automation industry for complex scenarios. Blockchain is applied in IIoT to solve data security and privacy issues related to centralized data storage and processing. However, there are inevitably performance issues with throughput constraints when blockchain manages large amounts of device data. This paper proposes a blockchain-supported performance optimization framework for IIoT systems using deep reinforcement learning (DRL) methods. We model the blockchain performance optimization problem as a Markov decision process that optimizes the blockchain’s throughput by dynamically adjusting the block size and interval through DRL while satisfying security constraints. We use the double deep Q-network (DDQN) to deal with the dynamics and complexity of the optimization problem due to the heterogeneity of equipment and diversified requirements. We also alleviate the overestimation problem that affects DQN. Meanwhile, we study the impact of the number of network layers and different activation units on the performance optimization method in DDQN. Finally, we prove that our work is feasible and effective through a case study based on actual IIoT scenario datasets. Experimental results demonstrate that our proposed scheme enhances blockchain performance in IIoT systems. The detailed qualitative comparison with related work demonstrates the superiority and innovation of our work and shows that it remedies the shortcomings of existing work. | 10.1109/TNSM.2024.3499746
Canghai Wu, Lingdan Chen, Huanliang Xiong, Jianwen Hu | USMN-SCA: A Blockchain Sharding Consensus Algorithm With Tolerance for an Unlimited Scale of Malicious Nodes | 2024 | Early Access | Blockchain consensus mechanism malicious nodes sharding | The emergence of malicious nodes in blockchain networks poses a serious threat to the integrity and security of these systems. Malicious activities can disrupt the consensus process and compromise the overall security of the network. Although traditional consensus mechanisms have proven effective to some extent, they often struggle to tolerate many malicious nodes, which can lead to network instability or even failure. In response to these challenges, this paper presents a novel blockchain sharding consensus algorithm designed to withstand an unlimited scale of malicious nodes (USMN-SCA). Our approach leverages a dynamic credit mechanism to evaluate node behavior and categorize nodes into different credit levels. By dividing the blockchain network into credit-based shards and isolating low-credit nodes, we mitigate the impact of malicious activities and improve the network’s resilience. USMN-SCA employs a two-stage consensus process: first within each shard, and then between shards. This ensures that consensus is reached efficiently while maintaining a high level of security. We implement the USMN-SCA algorithm in a prototype via blockchain sharding and deploy it on Alibaba Cloud. The experimental results show that our protocol significantly outperforms existing methods in terms of security, with a success probability of 100% in a real-world web environment, even when more than 50% of the nodes are malicious. Additionally, the consensus delay and throughput performance are comparable to those of other state-of-the-art consensus algorithms. These findings establish USMN-SCA as a cutting-edge solution with high applicability, scalability, and compatibility across various blockchain platforms. This work represents a significant advancement in blockchain security and performance optimization, paving the way for more secure and efficient blockchain applications. | 10.1109/TNSM.2024.3498594
Merim Dzaferagic, Bruno Missi Xavier, Diarmuid Collins, Vince D’Onofrio, Magnos Martinello, Marco Ruffini | ML-Based Handover Prediction Over a Real O-RAN Deployment Using RAN Intelligent Controller | 2024 | Early Access | Handover Optimization Costs Open RAN Business Quality of service Machine learning O-RAN Machine Learning Testbed Handover User mobility | O-RAN introduces intelligent and flexible network control in all parts of the network. The use of controllers with open interfaces allows us to gather real-time network measurements and make intelligent, informed decisions. The work in this paper focuses on developing a use-case for open and reconfigurable networks to investigate the possibility of predicting handover events and to understand the value of such predictions for all stakeholders that rely on the communication network to conduct their business. We propose a Long Short-Term Memory (LSTM) Machine Learning approach that takes standard Radio Access Network measurements to predict handover events. The models were trained on real network data collected from a commercial O-RAN setup deployed in our OpenIreland testbed. Our results show that the proposed approach can be optimized for either recall or precision, depending on the defined application-level objective. We also link the performance of the Machine Learning (ML) algorithm to the network operation cost. Our results show that ML-based matching between the required and available resources can reduce operational cost by more than 80%, compared to long-term resource purchases. (A minimal illustrative LSTM sketch follows the table.) | 10.1109/TNSM.2024.3468910
Renato S. Silva, Luís Felipe M. de Moraes | GonoGo: Assessing the Confidence Level of Distributed Intrusion Detection Systems Alarms Based on BGP | 2024 | Early Access | Border Gateway Protocol Internet Routing Security Intrusion detection Machine learning Data models Routing protocols IP networks DIDS Machine Learning BGP Distributed Intrusion Detection System | Although the Border Gateway Protocol (BGP) is increasingly becoming a multi-purpose protocol, it suffers from security issues regarding bogus announcements made for malicious goals. Some of these security breaches are particularly critical for distributed intrusion detection systems that use BGP as their underlay network for interchanging alarms. In this case, assessing the confidence level of these BGP messages helps to prevent internal attacks. Most of the proposals addressing the confidence level of BGP messages rely on complex and time-consuming mechanisms that can also be a potential target for intelligent attacks. In this paper, we propose GonoGo, an out-of-band system based on machine learning, to infer the confidence level of the intrusion alarms using just the mandatory header of each BGP message that transports them. Tests using a synthetic data set reflecting the indirect effects of a widespread worm attack over the BGP network show promising results for well-known performance metrics such as recall, accuracy, receiver operating characteristics (ROC), and F1-score. | 10.1109/TNSM.2024.3468907
Xinping Rao, Le Qin, Yugen Yi, Jin Liu, Gang Lei, Yuanlong Cao | A Novel Adaptive Device-Free Passive Indoor Fingerprinting Localization Under Dynamic Environment | 2024 | Early Access | Location awareness Fingerprint recognition Accuracy Feature extraction Training Adaptation models Databases Wireless fidelity Bayes methods Support vector machines Channel State Information (CSI) Convolutional Neural Network (CNN) Domain Adaptation Indoor Device-free Passive Localization | In recent years, indoor localization has attracted a lot of interest and has become one of the key topics of Internet of Things (IoT) research, presenting a wide range of application scenarios. With the advantages of ubiquitous Wi-Fi platforms and the "unconscious collaborative sensing" of the monitored target, Channel State Information (CSI)-based device-free passive indoor fingerprinting localization has become a popular research topic. However, most existing studies have encountered the difficult issues of high deployment labor costs and degradation of localization accuracy due to fingerprint variations in real-world dynamic environments. In this paper, we propose BSWCLoc, a device-free passive fingerprint localization scheme based on the beyond-sharing-weights approach. BSWCLoc uses the calibrated CSI phases, which are more sensitive to the target location, as localization features and performs feature processing from a two-dimensional perspective to ultimately obtain rich fingerprint information. This allows BSWCLoc to achieve satisfactory accuracy with only one communication link, significantly reducing deployment costs. In addition, a beyond-sharing-weights (BSW) method for domain adaptation is developed in BSWCLoc to address the problem of changing CSI in dynamic environments, which reduces localization performance. The BSW method adopts a dual-flow structure, where one flow runs in the source domain and the other in the target domain, with correlated but not shared weights in the adaptation layer. BSWCLoc greatly exceeds the state-of-the-art in terms of positioning accuracy and robustness, according to an extensive study in a dynamic indoor environment over six days. | 10.1109/TNSM.2024.3469374
Nguyen Xuan Tung, Trinh Van Chien, Dinh Thai Hoang, Won Joo Hwang | Jointly Optimizing Power Allocation and Device Association for Robust IoT Networks under Infeasible Circumstances | 2024 | Early Access | Dual-objective optimization IoT service management power allocation device association | Jointly optimizing power allocation and device association is crucial in Internet-of-Things (IoT) networks to ensure devices achieve their data throughput requirements. Device association, which assigns IoT devices to specific access points (APs), critically impacts resource allocation. Many existing works assume all data throughput requirements are satisfied, which is impractical given resource limitations and diverse demands. When requirements cannot be met, the system becomes infeasible, causing congestion and degraded performance. To address this problem, we propose a novel framework to enhance IoT system robustness by solving two problems: maximizing the number of satisfied IoT devices, and jointly maximizing both the number of satisfied devices and the total network throughput. These objectives often conflict under infeasible circumstances, necessitating a careful balance. We thus propose a modified branch-and-bound (BB)-based method to solve the first problem. An iterative algorithm is proposed for the second problem that gradually increases the number of satisfied IoT devices and improves the total network throughput. We employ a logarithmic approximation for a lower bound on data throughput and design a fixed-point algorithm for power allocation, followed by a coalition game-based method for device association. Numerical results demonstrate the efficiency of the proposed algorithm, which serves fewer devices than the BB-based method but with faster running time and higher total throughput. (A minimal fixed-point power-control sketch follows the table.) | 10.1109/TNSM.2024.3501737
Sotirios T. Spantideas, Anastasios E. Giannopoulos, Panagiotis Trakadas | Smart Mission Critical Service Management: Architecture, Deployment Options, and Experimental Results | 2024 | Early Access | 5G mobile communication Emergency services Load forecasting LSTM Machine learning Mission critical systems Network slicing Resource management | Current and upcoming data-intensive Mission Critical (MC) applications rely on high Quality of Service (QoS) requirements related to connectivity, latency and network reliability. Beyond-5G networks shall accommodate MC services that enable voice, data and video transfer in extreme circumstances, for instance in the event of network overloads or infrastructure failures. In this work, we describe the specifications of the architectural framework that enables the roll-out of MC services over 5G networks and beyond, considering recent technological advancements in cloud-native functionalities, network slicing and edge deployments. The network architecture and the deployment process are described in three practical scenarios: a capacity increase in the service load that necessitates the scaling of the computational resources, the deployment of a dedicated network slice for accommodating the stringent requirements of an MC application, and a service migration scenario at the edge to cope with critical failures and QoS degradation. Furthermore, we illustrate the implementation of a Machine Learning (ML) algorithm that is used for overload prediction, validating its ability to predict the capacity increase and notify the components responsible for triggering the appropriate actions, based on a real dataset. To this end, we mathematically define the overload detection problem, as well as generalized prediction tasks in emergency situations, and examine the key parameters (proactiveness ability, lookback window, etc.) of the ML model, also comparing its prediction abilities (~93% accuracy in overload detection) against multiple baseline classifiers. Finally, we demonstrate the flexibility of the ML model to achieve reliable predictions in scenarios with diverse requirements. | 10.1109/TNSM.2024.3498348
Madhura Adeppady, Alberto Conte, Paolo Giaccone, Holger Karl, Carla Fabiana Chiasserini | Dynamic Management of Constrained Computing Resources for Serverless Services | 2024 | Early Access | Edge Computing Cloud Computing Services Energy-aware Management Orchestration | In resource-constrained cloud systems, e.g., at the network edge or in private clouds, serverless computing is increasingly adopted to deploy microservices-based applications, leveraging its promised high resource efficiency. Provisioning resources to serverless services, however, poses several challenges, due to the high cold-start latency of containers and stringent Service Level Agreement (SLA) requirements of the microservices. In response, we investigate the behavior of containers in different states (i.e., running, warm, or cold) and exploit our experimental observations to formulate an optimization problem that minimizes the energy consumption of the active servers while reducing SLA violations. In light of the problem complexity, we propose a low-complexity algorithm, named AiW, which utilizes a multi-queueing approach to balance energy consumption and system performance by reusing containers effectively and invoking cold-starts only when necessary. To further minimize the energy consumption of data centers, we introduce the two-timescale COmputing resource Management at the Edge (COME) framework, comprising an orchestrator running our proposed AiW algorithm for container provisioning and a Dynamic Server Provisioner (DSP) for dynamically activating/deactivating servers in response to AiW’s decisions on request scheduling. COME addresses the mismatch in timescales for resource provisioning decisions at the container and server levels. Extensive performance evaluation through simulation shows AiW’s close match to the optimum and COME’s significant reduction in power consumption, by 22–64% compared to state-of-the-art alternatives. | 10.1109/TNSM.2024.3497155
Amit Dilip Patil, Abdorasoul Ghasemi, Hermann de Meer | Optimal Redundant Sensor Placement for Protection Blinding in Active Distribution Grids | 2024 | Early Access | Genetic algorithm redundancy uncertainty adaptive protection | Integrating renewable energy sources into the power distribution grid challenges protection system operation, leading to protection blinding when circuit breakers fail to trip due to fault current contribution from these sources. Communication-based adaptive protection can address this issue, but communication system components such as sensors can fail. This research proposes a genetic algorithm-based approach to optimally place redundant sensors, minimizing protection blinding under communication uncertainty within a redundancy budget. The fault tolerance, measured by a new metric called the redundancy degree, reflects the number of redundant components deployed. Results demonstrate the algorithm’s effectiveness in optimizing redundant sensor locations, reducing system costs, and improving fault tolerance. For the system and scenarios investigated, an average of 60% of the redundant sensors are relocated, reducing the average protection trip time by 36.65% compared to a baseline approach that does not consider communication uncertainty. This encourages incorporating communication component failure considerations in power system planning. | 10.1109/TNSM.2024.3497296
Yingying Wu, Bomin Mao, Nei Kato | MSFL: Model-Safeguarded Federated Learning With Intelligent Reflecting Surface for Industrial Networks | 2024 | Early Access | Servers NOMA Training Optimization Data models Noise Federated learning Eavesdropping Vectors Wireless communication Industry 4.0 Federated Learning (FL) Intelligent Reflecting Surface (IRS) Non-Orthogonal Multiple Access (NOMA) confidentiality capacity | Industry 4.0 generates a huge volume of data, which Federated Learning (FL) can mine in a privacy-preserving manner. However, traditional FL is not sufficient for privacy preservation: the uploaded local model gradients can be intercepted by external Eavesdroppers (Eve), and with enough of them the users’ raw data can be inferred, leading to privacy leakage. At the same time, future 6G networks, accommodating more devices, make privacy concerns even sharper. To tackle privacy issues in FL, in this paper we propose a Model-Safeguarded FL framework based on an Intelligent Reflecting Surface (IRS), called MSFL, where Non-Orthogonal Multiple Access (NOMA) is introduced to enable multiple-device access. Specifically, the IRS is deployed between the Base Station (BS) and participants, improving the wireless environment and preventing Eve from eavesdropping. The Deep Deterministic Policy Gradient (DDPG)-Optimized Power and Phase (DOPP) algorithm is proposed to jointly optimize the transmission power at participants and the IRS phase shift to maximize the minimum confidentiality capacity. Extensive results demonstrate that the maximum confidentiality capacity of our MSFL scheme is up to 1.7 bps/Hz at a transmission power of 30 dBW, which is approximately 300% more than that of the Block Coordinate Ascent Method (BCAM) and Artificial Noise (AN). | 10.1109/TNSM.2024.3496502
Giovanni Rosa da Silva, Aldri Luiz dos Santos | Adaptive Access Control for Smart Homes Supported by Zero Trust for User Actions | 2024 | Early Access | Security Smart homes Servers Authorization Impersonation attacks Data privacy Smart devices Protocols Privacy Authentication Zero Trust Access Control Adaptive Smart Home Privacy User Actions | Although smart homes have recently become popular, people are still highly concerned about security, safety, and privacy issues. Particularly, the security system requirements for smart homes should include privacy perception, low latency in response, spatial and temporal locality, resource extensibility, protection against impersonation, resource isolation, access control enforcement, and refresh verification with a trustworthy system. In this paper, we propose the ZASH (Zero-Aware Smart Home) system to provide access control for the user’s actions on smart devices in smart homes. ZASH is based on continuous authentication leveraged by Zero Trust (ZT), context awareness, and user behavior. We implemented ZASH in the ns-3 network simulator and analyzed its robustness, efficiency, extensibility, and performance. According to our analysis, ZASH protects users’ privacy, responds quickly (around 4.16 ms), copes with adding and removing devices, blocks most impersonation attacks (up to 99% with a proper configuration), isolates smart devices, and enforces access control for all interactions. | 10.1109/TNSM.2024.3492379
Soyi Jung, Jae-Hyun Kim, Joongheon Kim | Intelligent Extra Resource Allocation for Cooperative Awareness Message Broadcasting in Cellular-V2X Networks | 2024 | Early Access | Resource management 3GPP Sidelink Sensors Fluctuations Wireless sensor networks Wireless communication Vehicle-to-everything Heuristic algorithms Cams Cellular-V2X LTE-V2X Mode 4 sidelink cooperative awareness message (CAM) broadcasting resource allocation resource collision | According to recent advances in cellular vehicle-to-everything (cellular-V2X) communication networks, vehicle-to-vehicle (V2V) communication can be performed without infrastructure support. The corresponding wireless standard has been actively studied by the 3rd generation partnership project (3GPP) and has been proposed to realize V2V communications via sidelink interfaces. This standard specifically introduces cellular-V2X Mode 4, where individual vehicles access wireless resources in a distributed manner by using sensing-based semi-persistent scheduling (SPS) to avoid collisions in cooperative awareness messages (CAMs). However, the legacy SPS scheme can be challenged by resource scheduling collisions, which degrade performance. To tackle this problem, this paper proposes an intelligent, adaptive, and additive extra resource allocation strategy that responds in real time to estimated traffic density fluctuations, aiming to decrease resource collisions as well as improve utilization performance. Moreover, our additional intelligent solution, fundamentally inspired by the Lyapunov optimization-based drift-plus-penalty framework, dynamically controls the number of resources to minimize the transmission outage probability subject to resource constraints. The corresponding data-intensive performance analysis results verify that the proposed algorithm improves performance compared to the other benchmarks. (A minimal drift-plus-penalty sketch follows the table.) | 10.1109/TNSM.2024.3496394
Aroosa Hameed, John Violos, Nina Santi, Aris Leivadeas, Nathalie Mitton | FeD-TST: Federated Temporal Sparse Transformers for QoS prediction in Dynamic IoT Networks | 2024 | Early Access | Quality of service Internet of Things Data models Throughput Predictive models Forecasting Transformers Delays Biological system modeling Accuracy Internet of Things QoS forecasting Edge Computing Federated Learning | Internet of Things (IoT) applications generate tremendous amounts of data streams which are characterized by varying Quality of Service (QoS) indicators. These indicators need to be accurately estimated in order to appropriately schedule the computational and communication resources of the access and Edge networks. Nonetheless, such IoT data may be produced at irregular time instances, while suffering from varying network conditions and from the mobility patterns of the edge devices. At the same time, the multipurpose nature of IoT networks may facilitate the co-existence of diverse applications, which however may need to be analyzed separately for confidentiality reasons. Hence, in this paper, we aim to forecast time series data of key QoS metrics, such as throughput, delay, and packet delivery and loss ratios, under different network configuration settings. Additionally, to secure data ownership while performing the QoS forecasting, we propose the FeDerated Temporal Sparse Transformer (FeD-TST) framework, which allows local clients to train their local models with their own QoS dataset for each network configuration; subsequently, an associated global model can be updated through the aggregation of the local models. In particular, three IoT applications are deployed in a real testbed under eight different network configurations with varying parameters, including the mobility of the gateways, the transmission power and the channel frequency. The results obtained indicate that our proposed approach is more accurate than the identified state-of-the-art solutions. (A minimal federated-averaging sketch follows the table.) | 10.1109/TNSM.2024.3493758
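
The handover-prediction entry (Dzaferagic et al.) lends itself to a small illustration. Below is a minimal sketch of an LSTM classifier over windows of RAN measurements; it is not the paper's implementation, and the feature choice (three generic per-slot measurements standing in for RAN metrics) and the `HandoverLSTM` class are hypothetical. The movable decision threshold at the end mirrors the recall-versus-precision tuning the abstract describes.

```python
# Minimal sketch of an LSTM handover predictor (not the paper's code).
# Each sample is a window of RAN measurements (hypothetical features),
# labeled 1 if a handover follows the window.
import torch
import torch.nn as nn

class HandoverLSTM(nn.Module):
    def __init__(self, n_features: int, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):            # x: (batch, window, n_features)
        _, (h, _) = self.lstm(x)     # h: (num_layers, batch, hidden)
        return self.head(h[-1])      # one raw logit per sample

model = HandoverLSTM(n_features=3)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# Toy training step on random stand-in data (batch=32, window=20).
x = torch.randn(32, 20, 3)
y = torch.randint(0, 2, (32, 1)).float()
opt.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
opt.step()

# Lowering the decision threshold below 0.5 favors recall over
# precision, matching the tunable objective in the abstract.
prob = torch.sigmoid(model(x))
predict_handover = prob > 0.3        # recall-oriented threshold (example)
```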
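For the joint power-allocation and device-association entry (Tung et al.), the abstract mentions a fixed-point algorithm for power allocation. The sketch below shows a generic interference-function fixed-point iteration toward target SINRs, a standard construction rather than the paper's specific update; the gain matrix, the SINR targets `gamma`, and the `p_max` cap are all illustrative assumptions.

```python
# Generic fixed-point power-control iteration (standard interference-
# function update, not necessarily the paper's algorithm). Each device i
# updates its power toward its SINR target:
#   p_i <- gamma_i * (sum_{j != i} G[i, j] * p_j + noise) / G[i, i]
import numpy as np

def fixed_point_power(G, gamma, noise, p_max, iters=100, tol=1e-8):
    """G[i, j]: channel gain from transmitter j to receiver i;
    gamma[i]: target SINR of device i; powers are clipped to p_max."""
    n = len(gamma)
    p = np.full(n, p_max / 2)
    for _ in range(iters):
        interference = G @ p - np.diag(G) * p + noise
        p_new = np.minimum(gamma * interference / np.diag(G), p_max)
        if np.max(np.abs(p_new - p)) < tol:   # reached a fixed point
            return p_new
        p = p_new
    return p

# Toy example: 4 devices with strong direct links and weak cross-links.
rng = np.random.default_rng(0)
G = rng.uniform(0.01, 0.1, (4, 4)) + np.diag(rng.uniform(0.5, 1.0, 4))
p = fixed_point_power(G, gamma=np.full(4, 2.0), noise=1e-3, p_max=1.0)
print("power vector:", p)
```

If some targets cannot be met within `p_max`, the clipped powers reveal exactly the infeasible devices, which is the regime the paper's framework is designed to handle.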
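The cellular-V2X entry (Jung et al.) cites a Lyapunov optimization-based drift-plus-penalty controller for extra resource allocation. The following is a minimal generic drift-plus-penalty loop, not the paper's controller: a virtual queue tracks a long-term resource budget, and each slot greedily minimizes V*penalty + Q*(usage − budget); the `outage_penalty` model and every constant here are stand-ins.

```python
# Generic drift-plus-penalty sketch. A virtual queue Q tracks violation
# of a time-average resource budget; each slot picks the number of
# extra resources r minimizing V*penalty(r) + Q*(r - budget).
import numpy as np

V = 10.0           # penalty weight: larger V favors lower outage
budget = 2.0       # allowed time-average number of extra resources
Q = 0.0            # virtual queue for the resource constraint
rng = np.random.default_rng(1)

def outage_penalty(r, density):
    # Hypothetical stand-in: outage shrinks with extra resources r
    # and grows with the estimated traffic density.
    return density * np.exp(-r)

for t in range(1000):
    density = rng.uniform(0.5, 2.0)      # estimated traffic density
    candidates = np.arange(0, 9)         # 0..8 extra resources per slot
    cost = V * outage_penalty(candidates, density) + Q * (candidates - budget)
    r = candidates[np.argmin(cost)]      # per-slot greedy decision
    Q = max(Q + r - budget, 0.0)         # virtual-queue update

print("final virtual queue:", Q)
```

A bounded virtual queue certifies that the time-average resource usage stays near the budget, while raising V shifts the trade-off toward lower outage at the cost of a slower-draining queue.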
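Finally, the FeD-TST entry (Hameed et al.) updates a global model by aggregating locally trained client models. The sketch below shows plain dataset-size-weighted federated averaging (FedAvg-style); the abstract does not specify FeD-TST's exact aggregation rule, so this is an assumption-laden illustration with synthetic parameter vectors.

```python
# FedAvg-style aggregation sketch (FeD-TST's actual rule may differ).
import numpy as np

def aggregate(client_weights, client_sizes):
    """Average client parameter vectors, weighted by local dataset size."""
    sizes = np.asarray(client_sizes, dtype=float)
    coeffs = sizes / sizes.sum()          # per-client mixing weights
    stacked = np.stack(client_weights)    # (n_clients, n_params)
    return coeffs @ stacked               # weighted average of parameters

# Three hypothetical clients, e.g., one per network configuration.
rng = np.random.default_rng(2)
locals_ = [rng.normal(size=8) for _ in range(3)]
global_model = aggregate(locals_, client_sizes=[120, 60, 20])
print(global_model)
```

Weighting by dataset size keeps the global model faithful to clients with more QoS samples while still letting every configuration contribute, which matches the abstract's description of per-configuration local training followed by aggregation.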