Last updated: 2024-09-09 03:01 UTC
All documents
Number of pages: 126
Author(s) | Title | Year | Publication | Keywords | Abstract | DOI |
---|---|---|---|---|---|---|
ZengRi Zeng, Xuhui Liu, Ming Dai, Jian Zheng, Xiaoheng Deng, Detian Zeng, Jie Chen | Causal Genetic Network Anomaly Detection Method for Imbalanced Data and Information Redundancy | 2024 | Early Access | Causal genetics Network anomaly detection Causal intervention Detection balance Optimal feature subset | The proliferation of internet-connected devices and the complexity of modern network environments have led to the collection of massive and high-dimensional datasets, resulting in substantial information redundancy and sample imbalance issues. These challenges not only hinder the computational efficiency and generalizability of anomaly detection systems but also compromise their ability to detect rare attack types, posing significant security threats. To address these pressing issues, we propose a novel causal genetic network-based anomaly detection method, the CNSGA, which integrates causal inference and the nondominated sorting genetic algorithm-III (NSGA-III). The CNSGA leverages causal reasoning to exclude irrelevant information, focusing solely on the features that are causally related to the outcome labels. Simultaneously, NSGA-III iteratively eliminates redundant information and prioritizes minority samples, thereby enhancing detection performance. To quantitatively assess the improvements achieved, we introduce two indices: a detection balance index and an optimal feature subset index. These indices, along with the causal effect weights, serve as fitness metrics for iterative optimization. The optimized individuals are then selected for subsequent population generation on the basis of nondominated reference point ordering. The experimental results obtained with four real-world network attack datasets demonstrate that the CNSGA significantly outperforms existing methods in terms of overall precision, the imbalance index, and the optimal feature subset index, with maximum increases exceeding 10%, 0.5, and 50%, respectively. Notably, for the CICDDoS2019 dataset, the CNSGA requires only 16-dimensional features to effectively detect more than 70% of all sample types, including 6 more network attack sample types than the other methods detect. The significance and impact of this work encompass the ability to eliminate redundant information, increase detection rates, balance attack detection systems, and ensure stability and generalizability. The proposed CNSGA framework represents a significant step forward in developing efficient and accurate anomaly detection systems capable of defending against a wide range of cyber threats in complex network environments. | 10.1109/TNSM.2024.3455768 |
Binbin Lu, Yuan Wu, Liping Qian, Sheng Zhou, Haixia Zhang, Rongxing Lu | Multi-Agent DRL-Based Two-Timescale Resource Allocation for Network Slicing in V2X Communications | 2024 | Early Access | Resource management Quality of service Network slicing Heuristic algorithms Approximation algorithms Bandwidth Protection Network slicing V2X communications multiagent deep reinforcement learning resource allocations | Network slicing has been envisioned to play a crucial role in supporting various vehicular applications with diverse performance requirements in dynamic Vehicle-to-Everything (V2X) communications systems. However, time-varying Service Level Agreements (SLAs) of slices and fast-changing network topologies in V2X scenarios may introduce new challenges for enabling efficient inter-slice resource provisioning to guarantee the Quality of Service (QoS) while avoiding both resource over-provisioning and under-provisioning. Moreover, the conventional centralized resource allocation schemes requiring global slice information may degrade the data privacy provided by dedicated resource provisioning. To address these challenges, in this paper, we propose a two-timescale resource management mechanism for providing diverse V2X slices with customized resources. In the long timescale, we propose a Proximal Policy Optimization-based multi-agent deep reinforcement learning algorithm for dynamically allocating bandwidth resources to different slices for guaranteeing their SLAs. Under the coordination of agents, each agent only observes its partial state space rather than the global information to adjust the resource requests, which can enhance the privacy protection. Moreover, an expert demonstration mechanism is proposed to guide the action policy for reducing the invalid action exploration and accelerating the convergence of agents. In the short-term time slot, with our proposed Cross Entropy and Successive Convex Approximation algorithm, each slice allocates its available physical resource blocks and optimizes its transmit power to meet the QoS. Simulation results show our proposed two-timescale resource allocation scheme for network slicing can achieve maximum 8.4% performance gains in terms of spectral efficiency while guaranteeing the QoS requirements of users compared to the baseline approaches. | 10.1109/TNSM.2024.3454758 |
Hansini Vijayaraghavan, Wolfgang Kellerer | MobiFi: Mobility-Aware Reactive and Proactive Wireless Resource Management in LiFi-WiFi Networks | 2024 | Early Access | Resource management Light fidelity Wireless fidelity Wireless communication Optimization Handover Predictive models light fidelity (LiFi) mobility proactive resource allocation wireless fidelity (WiFi) | This paper presents MobiFi, a framework addressing the challenges in managing LiFi-WiFi heterogeneous networks focusing on mobility-aware resource allocation. Our contributions include introducing a centralized framework incorporating reactive and proactive strategies for resource management in mobile LiFi-only and LiFi-WiFi networks. This framework reacts to current network conditions and proactively anticipates the future, considering user positions, line-of-sight blockages, and channel quality. Recognizing the importance of long-term network performance, particularly for use cases such as video streaming, we tackle the challenge of optimal proactive resource allocation by formulating an optimization problem that integrates access point assignment and wireless resource allocation using the alpha-fairness objective over time. Our proactive strategy significantly outperforms the reactive resource allocation, ensuring 7.7% higher average rate and 63.3% higher minimum user rate for a 10-user LiFi-WiFi network. We employ sophisticated techniques, including a Branch and Bound-based Mixed-Integer solver and a low-complexity, Evolutionary Game Theory-based algorithm to achieve this. Lastly, we introduce a novel approach to simulate errors in predictive user position modeling to assess the robustness of our proactive allocation strategy against real-world uncertainties. The contributions of MobiFi advance the field of resource management in mobile LiFi-WiFi networks, enabling efficiency and reliability. | 10.1109/TNSM.2024.3455105 |
Yufei An, F. Richard Yu, Ying He, Jianqiang Li, Jianyong Chen, Victor C.M. Leung | A Deep Learning System for Detecting IoT Web Attacks With a Joint Embedded Prediction Architecture (JEPA) | 2024 | Early Access | Internet of Things Feature extraction Security Uniform resource locators Data models Service-oriented architecture Robustness Internet of things (IoT) web attack joint embedded prediction architecture (JEPA) deep learning | The advancement of Internet of Things (IoT) technology has significantly transformed the dynamic between humans and devices, as well as device-to-device interactions. This paradigm shift has led to profound changes in human lifestyles and production processes. Through the interconnectedness of numerous sensors and controllers via networks, the IoT facilitates the seamless integration of humans with diverse devices, leading to substantial economic advantages. Nevertheless, the burgeoning IoT industry and the rapid proliferation of various IoT devices have also introduced a multitude of security vulnerabilities. Cyber attackers frequently launch attacks to compromise IoT devices, jeopardizing user privacy and property security, thereby posing a grave menace to the overall security of the IoT ecosystem. In this paper, we propose a novel IoT web attack detection system based on a joint embedded prediction architecture (JEPA), which effectively alleviates the security issues faced by the IoT. It can obtain high-level semantic features in IoT traffic data through non-generative self-supervised learning. These features can more effectively distinguish normal data from attack data and help improve the overall detection performance of the system. Moreover, we propose a feature interaction module based on a dual-branch network, which effectively fuses low-level features and high-level features, and comprehensively aggregates global features and local features. Simulation results on multiple datasets show that our proposed system has better detection performance and robustness. | 10.1109/TNSM.2024.3454777 |
Yingya Guo, Bin Lin, Qi Tang, Yulong Ma, Huan Luo, Han Tian, Kai Chen | Distributed Traffic Engineering in Hybrid Software Defined Networks: A Multi-Agent Reinforcement Learning Framework | 2024 | Early Access | Routing Reinforcement learning Virtual environments Heuristic algorithms Software Training Optimization Distributed Traffic Engineering Imitation Learning Reinforcement Learning Transformer Network-Wide Guidance | Traffic Engineering (TE) is an efficient technique to balance network flows and thus improve the performance of a hybrid Software Defined Network (SDN). Previous TE solutions mainly leverage heuristic algorithms to centrally optimize link weight setting or traffic splitting ratios under static traffic demand. As the network scale becomes larger and network management gains more complexity, centralized TE methods notably suffer from a high computation overhead and a long reaction time to optimize the routing of flows when the network traffic demand dynamically fluctuates or network failures happen. To enable adaptive and efficient routing in distributed TE, we propose a Multi-agent Reinforcement Learning method, CMRL, that divides the routing optimization of a large network into multiple small-scale routing decision-making problems. To coordinate the multiple agents for achieving a global optimization goal in a hybrid SDN scenario, we construct a reasonable virtual environment to meet different routing constraints brought by legacy routers and SDN switches for training the routing agents. To train the routing agents for determining the local routing policies according to local network observations, we introduce the difference reward assignment mechanism for encouraging agents to cooperatively take optimal routing actions. Extensive simulations conducted on real traffic traces demonstrate the superiority of CMRL in improving TE performance, especially when traffic demands change or network failures happen. | 10.1109/TNSM.2024.3454282 |
Mesfin Leranso Betalo, Supeng Leng, Hayla Nahom Abishu, Abegaz Mohammed Seid, Maged Fakirah, Aiman Erbad, Mohsen Guizani | Multi-Agent DRL-Based Energy Harvesting for Freshness of Data in UAV-Assisted Wireless Sensor Networks | 2024 | Early Access | Wireless sensor networks Autonomous aerial vehicles Energy efficiency Optimization Data collection Internet of Things Trajectory Deep reinforcement learning Energy harvesting Laser technology Mobile edge computing Unmanned aerial vehicles wireless sensor networks | In sixth-generation (6G) networks, unmanned aerial vehicles (UAVs) are expected to be widely used as aerial base stations (ABS) due to their adaptability, low deployment costs, and ultra-low latency responses. However, UAVs consume large amounts of power to collect data from multiple sensor nodes (SNs). This can limit their flight time and transmission efficiency, resulting in delays and low information freshness. In this paper, we present a multi-access edge computing (MEC)-integrated UAV-assisted wireless sensor network (WSN) with a laser technology-based energy harvesting (EH) system that makes the UAV act as a flying energy charger to address these issues. This work aims to minimize the age of information (AoI) and improve energy efficiency by jointly optimizing the UAV trajectories, EH, task scheduling, and data offloading. The joint optimization problem is formulated as a Markov decision process (MDP) and then transformed into a stochastic game model to handle the complexity and dynamics of the environment. We adopt a multi-agent deep Q-network (MADQN) algorithm to solve the formulated optimization problem. With the MADQN algorithm, UAVs can determine the best data collection and EH decisions to minimize their energy consumption and efficiently collect data from multiple SNs, leading to reduced AoI and improved energy efficiency. Compared to the benchmark algorithms such as deep deterministic policy gradient (DDPG), Dueling DQN, asynchronous advantage actor-critic (A3C) and Greedy, the MADQN algorithm has a lower average AoI and improves energy efficiency by 95.5%, 89.9%, 78.02% and 65.52% respectively. | 10.1109/TNSM.2024.3454217 |
YunLong Deng, Tao Peng, BangChao Wang, Gan Wu | ANDE: Detect the Anonymity Web Traffic With Comprehensive Model | 2024 | Early Access | Peer-to-peer computing Feature extraction Telecommunication traffic Deep learning Network security Data models Servers Anonymity web Traffic classification Deep learning Squeeze-and-Excitation networks | The escalating growth of network technology and users poses critical challenges to network security. This paper introduces ANDE, a novel framework designed to enhance the classification accuracy of anonymity networks. ANDE incorporates both raw data features and statistical features extracted from network traffic. Raw data features are transformed into images, enabling recognition and classification using robust image domain models. ANDE combines an enhanced Squeeze-and-Excitation (SE) ResNet with Multilayer Perceptrons (MLP), facilitating concurrent learning and classification of both feature types. Extensive experiments on two publicly available datasets demonstrate the superior performance of ANDE compared to traditional machine learning and deep learning methods. The comprehensive evaluation underscores ANDE’s effectiveness in accurately classifying network traffic within anonymity networks. Additionally, this study empirically validates the efficacy of the SE block in augmenting the classification capabilities of the proposed framework, establishing ANDE as a promising solution for network traffic classification in the realm of network security. | 10.1109/TNSM.2024.3453917 |
Yashar Farzaneh Yeznabad, Markus Helfert, Gabriel-Miro Muntean | QoE-Driven Cross-Layer Bitrate Allocation Approach for MEC-Supported Adaptive Video Streaming | 2024 | Early Access | Quality of experience Streaming media Bit rate Servers Throughput Resource management Quality of service Adaptive video streaming Multi-Access Edge Computing (MEC) Resource allocation Quality of Experience (QoE) | The Software-Defined Mobile Network (SDMN), Multi-Access Edge Computing (MEC), Cloud RAN (C-RAN), and Network Slicing are promising solutions that have been defined for the next generation of wireless mobile networks in order to fulfill the increasing Quality of Experience (QoE) demands of mobile users and the Quality of Service (QoS) concerns of high-performance, innovative services. In today’s complex telecommunications networks, with continuous traffic growth and users’ demand for higher speeds, it is vital for mobile operators to allocate their available resources efficiently. This paper focuses on the joint resource allocation problem of delivering adaptive video streams to users located in different slices of a wireless network enabled by MEC, SDMN, and C-RAN technologies. It proposes a novel Cross-Layer QoE-Driven Bitrate Allocation (CLQDBA) algorithm that aims to improve system utilization by using information from the higher layers regarding traffic patterns and the desired video quality of HTTP Adaptive Streaming (HAS) users. A mixed-integer nonlinear program is formulated, taking into account network slice requirements, radio resource limitations, the storage and transcoding capacity of MEC servers, and users’ quality of experience. CLQDBA is a low-complexity greedy-based algorithm that aims to maximize users’ quality of experience (QoE) and minimize the deviation between the achievable throughput at the MAC layer for users and the value of the allocated bit rates for video frames at the application layer. The simulation results show that, compared to the baseline scheme, our introduced algorithm, on average, achieves a 15% higher system utilization, 17% higher video quality, and a 13% improvement in Jain’s Fairness index for HAS users. | 10.1109/TNSM.2024.3453992 |
Bowen Bao, Hui Yang, Qiuyan Yao, Jie Zhang, Bijoy Chand Chatterjee, Eiji Oki | Node-Oriented Slice Reconfiguration Based on Spatial and Temporal Traffic Prediction in Metro Optical Networks | 2024 | Early Access | Optical fiber networks Predictive models Resource management Quality of service Prediction algorithms Computational modeling Solid modeling Metro optical network traffic prediction GCN-GRU slice reconfiguration gradient-based priority | Given the emergence of diverse new applications with different requirements in metro optical networks, network slicing provides a virtual end-to-end resource connection with customized service provision. To improve the quality-of-service (QoS) of slices with long-term operation in networks, it is beneficial to reconfigure the slice adaptively, referring to the future traffic state. Owing to busy-hour Internet traffic driven by daily human mobility, a tidal pattern of traffic flow occurs in metro optical networks, exhibiting both temporal and spatial features. To achieve high QoS of slices, this paper proposes a node-oriented slice reconfiguration (NoSR) scheme to reduce the penalty of slices, where a gradient-based priority strategy is designed to reduce the overall penalties of slices in reconfiguration. Besides, given that a precise traffic prediction model is essential for efficient slice reconfiguration with the future traffic state, this paper presents a model combining the graph convolutional network (GCN) and gated recurrent unit (GRU) to extract the traffic features in the space and time dimensions. Simulation results show that the presented GCN-GRU traffic prediction model achieves a high forecasting accuracy, and the proposed NoSR scheme efficiently reduces the penalty of slices to guarantee a high QoS in metro optical networks. | 10.1109/TNSM.2024.3453381 |
Yirui Wu, Hao Cao, Yong Lai, Liang Zhao, Xiaoheng Deng, Shaohua Wan | Edge Computing and Few-shot Learning Featured Intelligent Framework in Digital Twin empowered Mobile Networks | 2024 | Early Access | Digital twins Graph neural networks Edge computing Data privacy Cloud computing Training Iterative methods Digital Twin Mobile Network New Paradigm for DTMNs Edge Computing enabled DTMN Edge Intelligence Few-shot Learning Graph Neural Network | Digital twins (DT) and mobile networks have evolved into forms of intelligence in the Internet of Things (IoT). In this work, we consider a Digital Twin Mobile Network (DTMN) scenario with few multimedia samples. Facing the challenges of knowledge extraction with few samples, stable interaction with dynamically changing multimedia data, and time and privacy savings in low-resource mobile networks, we propose an edge computing and few-shot learning featured intelligent framework. Considering the time-sensitive nature of transmission and the privacy risks of direct uploads in mobile networks, we deploy edge computing to run networks locally for analysis, thus saving the time needed to offload computing requests and enhancing privacy by encrypting the original data. Inspired by the remarkable relationship representation of graphs, we build a Graph Neural Network (GNN) in the cloud to map physical mobile systems to virtual entities with DT, thus performing semantic inference in the cloud with the few samples uploaded by edges. Occasionally, node features in the GNN could converge to similar, non-discriminative embeddings, causing catastrophic instability. An iterative reweight and drop structure (IRDS) is thus constructed in the cloud, which contributes stability with respect to edge uncertainty. As part of the IRDS, a drop Edge&Node scheme is proposed to randomly remove certain nodes and edges, which not only enhances the ability to distinguish graph neighbor patterns but also offers data encryption with a random strategy. We show an implementation case of image classification in a social network, where experiments on public datasets show that our framework is effective, with user-friendly advantages and significant intelligence. | 10.1109/TNSM.2024.3450993 |
Daniele Bringhenti, Fulvio Valenza | GreenShield: Optimizing Firewall Configuration for Sustainable Networks | 2024 | Early Access | Firewalls (computing) Security Power demand Optimization Sustainable development Network security Green products firewall network sustainability power consumption | Sustainability is an increasingly critical design feature for modern computer networks. However, green objectives related to energy savings are affected by the application of approximate cybersecurity management techniques. In particular, their impact is evident in distributed firewall configuration, where traditional manual approaches create redundant architectures, leading to avoidable power consumption. This issue has not been addressed by the approaches proposed in the literature to automate firewall configuration so far, because their optimization is not focused on network sustainability. Therefore, this paper presents GreenShield as a possible solution that combines security and green-oriented optimization for firewall configuration. Specifically, GreenShield minimizes both the power consumption related to the firewalls activated in the network, while ensuring that the security requested by the network administrator is guaranteed, and the power consumption due to traffic processing, by making firewalls block undesired traffic as near as possible to the sources. The framework implementing GreenShield has undergone experimental tests to assess the provided optimization and its scalability performance. | 10.1109/TNSM.2024.3452150 |
Huanzhuo Wu, Jia He, Jiakang Weng, Giang T. Nguyen, Martin Reisslein, Frank H. P. Fitzek | OptCDU: Optimizing the Computing Data Unit Size for COIN | 2024 | Early Access | Artificial neural networks Neural networks Task analysis Servers Delays Symbols Payloads Computing in the network (COIN) Data Unit Delay In-network Computing Traffic amount | COmputing In the Network (COIN) has the potential to reduce the data traffic and thus the end-to-end latencies for data-rich services. Existing COIN studies have neglected the impact of the size of the data unit that the network nodes compute on. However, similar to the impact of the protocol data unit (packet) size in conventional store-and-forward packet-switching networks, the Computing Data Unit (CDU) size is an elementary parameter that strongly influences the COIN dynamics. We model the end-to-end service time consisting of the network transport delays (for data transmission and link propagation), the loading delays of the data into the computing units, and the computing delays in the network nodes. We derive the optimal CDU size that minimizes the end-to-end service time with gradient descent. We evaluate the impact of the CDU sizing on the amount of data transmitted over the network links and the end-to-end service time for computing the convolutional neural network (CNN) based Yoho and a Deep Neural Network (DNN) based Multi-Layer Perceptron (MLP). We distribute the Yoho and MLP neural modules over up to five network nodes. Our emulation evaluations indicate that COIN strongly reduces the amount of network traffic after the first few computing nodes. Also, the CDU size optimization has a strong impact on the end-to-end service time; whereby, CDU sizes that are too small or too large can double the service time. Our emulations validate that our gradient descent minimization correctly identifies the optimal CDU size. | 10.1109/TNSM.2024.3452485 |
Angela Sara Cacciapuoti, Jessica Illiano, Michele Viscardi, Marcello Caleffi | Multipartite Entanglement Distribution in the Quantum Internet: Knowing When to Stop! | 2024 | Early Access | Quantum entanglement Noise Qubit Quantum repeaters Noise measurement Markov decision processes Internet Entanglement Distribution Quantum Internet Quantum Communications Markov Decision Process | Multipartite entanglement distribution is a key functionality of the Quantum Internet. However, quantum entanglement is very fragile and easily degraded by decoherence, which strictly constrains the time horizon within which the distribution has to be completed. This, coupled with the quantum noise irremediably impinging on the channels utilized for entanglement distribution, may imply the need to attempt the distribution process multiple times before the targeted network nodes successfully share the desired entangled state. Moreover, there is no guarantee that this is accomplished within the time horizon dictated by the coherence times. As a consequence, in noisy scenarios requiring multiple distribution attempts, it may be convenient to stop the distribution process early. In this paper, we take steps in the direction of knowing when to stop the entanglement distribution by developing a theoretical framework able to capture the quantum noise effects. Specifically, we first prove that the entanglement distribution process can be modeled as a Markov decision process. Then, we prove that the optimal decision policy exhibits attractive features, which we exploit to reduce the computational complexity. The developed framework provides quantum network designers with flexible tools to optimally engineer the design parameters of the entanglement distribution process. | 10.1109/TNSM.2024.3452326 |
Sreenivasa Reddy Yeduri, Sindhusha Jeeru, Om Jee Pandey, Linga Reddy Cenkeramaddi | Energy-Efficient and Latency-Aware Data Routing in Small-World Internet of Drone Networks | 2024 | Early Access | Drones Delays Routing Energy efficiency Energy consumption Wireless sensor networks Wireless networks Internet of Drones small-world characteristics network latency energy consumption packet delivery ratio | Recently, drones have attracted considerable attention for sensing hostile areas. Multiple drones are deployed to communicate and coordinate sensing and data transfer in the Internet of Drones (IoD) network. Traditionally, multi-hop routing is employed for communication over long distances to increase the network’s lifetime. However, multi-hop routing over large-scale networks leads to energy imbalance and higher data latency. Motivated by this, in this paper, a novel framework of energy-efficient and latency-aware data routing is proposed for Small-World (SW)-IoD networks. We start with an optimization problem formulation in terms of network delay, energy consumption, and reliability. Then, the formulated mixed-integer problem is solved by introducing Small-World Characteristics (SWC) into the conventional IoD network to form the SW-IoD network. Here, the proposed framework introduces SWC by removing a few existing edges with the least edge weight from the traditional network and introducing the same number of long-range edges with the highest edge weight. We present simulation results corresponding to packet delivery ratio, network lifetime, and network delay for the performance comparison of the proposed framework with state-of-the-art approaches such as the conventional SWC method, LEACH, Modified LEACH, the Canonical Particle Multi-Swarm (PMS) method, and the conventional shortest-path routing algorithm. We also analyze the effect of the location of the ground control station, the velocity of the drones, and the different heights of layers on the performance of the proposed framework. Through experiments, the proposed method is shown to outperform the other methods. Finally, the performance of the proposed model is evaluated on a network simulator (NS3). | 10.1109/TNSM.2024.3452414 |
Yepeng Ding, Junwei Yu, Shaowen Li, Hiroyuki Sato, Maro G. Machizawa | Data Aggregation Management With Self-Sovereign Identity in Decentralized Networks | 2024 | Early Access | Data aggregation Blockchains Soft sensors Security Data models Data privacy Distributed ledger Data aggregation self-sovereign identity blockchain decentralized network Internet of Things data security | Data aggregation management is paramount in data-driven distributed systems. Conventional solutions premised on centralized networks grapple with security challenges concerning authenticity, confidentiality, integrity, and privacy. Recently, distributed ledger technology has gained popularity for its decentralized nature to facilitate overcoming these challenges. Nevertheless, insufficient identity management introduces risks like impersonation and unauthorized access. In this paper, we propose Degator, a data aggregation management framework that leverages self-sovereign identity and functions in decentralized networks to address security concerns and mitigate identity-related risks. We formulate fully decentralized aggregation protocols for data persistence and acquisition in Degator. Degator is compatible with existing data persistence methods, and supports cost-effective data acquisition minimizing dependency on distributed ledgers. We also conduct a formal analysis to elucidate the mechanism of Degator to tackle current security challenges in conventional data aggregation management. Furthermore, we showcase the applicability of Degator through its application in the management of decentralized neuroscience data aggregation and demonstrate its scalability via performance evaluation. | 10.1109/TNSM.2024.3451995 |
Gunjan Kumar Saini, Gaurav Somani | Is There a DDoS?: System+Application Variable Monitoring to Ascertain the Attack Presence | 2024 | Early Access | Denial-of-service attack Computer crime Prevention and mitigation Monitoring Artificial neural networks Servers Accuracy Distributed Denial of Service (DDoS) Cybersecurity Protection Artificial Intelligence(AI) Machine Learning(ML) | The state of the art has numerous contributions that focus on combating DDoS attacks. We argue that the mitigation methods are only useful if the victim service or the mitigation method can ascertain the presence of a DDoS attack. In many past solutions, the authors decide the presence of DDoS using quick-and-dirty checks. However, precise mechanisms are still needed so that accurate decisions about DDoS mitigation can be made. In this work, we propose a method for detecting the presence of DDoS attacks using system variables available at the server or victim server operating system. To achieve this, we propose a machine learning based detection model involving three steps. In the first step, we monitor 14 different system and application variables/characteristics with and without a variety of DDoS attacks. In the second step, we train the machine learning model with the monitored data of all the selected variables. In the final step, our approach uses artificial neural network (ANN) and random forest (RF) based models to detect the presence of DDoS attacks. Our presence identification approach gives a detection accuracy of 88%-95% for massive attacks, 65%-77% for mixed traffic having a mixture of low-rate attacks and benign requests, 58%-60% for flash crowds, 76%-81% for mixed traffic having a mixture of massive attacks and benign traffic, and 58%-64% for low-rate attacks, with a detection time of 4-5 seconds. | 10.1109/TNSM.2024.3451613 |
Haftay Gebreslasie Abreha, Houcine Chougrani, Ilora Maity, Youssouf DRIF, Christos Politis, Symeon Chatzinotas | Fairness-Aware VNF Mapping and Scheduling in Satellite Edge Networks for Mission-Critical Applications | 2024 | Early Access | Dynamic scheduling Satellites Delays Topology Heuristic algorithms Processor scheduling Quality of service Satellite Edge Computing Software Defined Networking (SDN) Network Function Virtualization (NFV) Virtual Network Function (VNF) VNF Scheduling Fairness | Satellite Edge Computing (SEC) is seen as a promising solution for deploying network functions in orbit to provide ubiquitous services with low latency and bandwidth. Software Defined Networks (SDN) and Network Function Virtualization (NFV) enable SEC to manage and deploy services more flexibly. In this paper, we study a dynamic and topology-aware VNF mapping and scheduling strategy within an SDN/NFV-enabled SEC infrastructure. Our focus is on meeting the stringent requirements of mission-critical (MC) applications, recognizing their significance in both satellite-to-satellite and edge-to-satellite communications, while ensuring service delay margin fairness across various time-sensitive service requests. We formulate the VNF mapping and scheduling problem as an Integer Nonlinear Programming problem, with the objective of minimax fairness among specified requests while considering dynamic satellite network topology, traffic, and resource constraints. We then propose two algorithms for solving the problem: a Fairness-Aware Greedy Algorithm for Dynamic VNF Mapping and Scheduling and a Fairness-Aware Simulated Annealing-Based Algorithm for Dynamic VNF Mapping and Scheduling, which are suitable for low and high service arrival rates, respectively. Our extensive simulations demonstrate that both approaches are very close to the optimization-based solution and outperform the benchmark solution in terms of service acceptance rates. | 10.1109/TNSM.2024.3452031 |
Qianqian Wu, Qiang Liu, Wenliang Zhu, Zefan Wu | Energy Efficient UAV-Assisted IoT Data Collection: A Graph-Based Deep Reinforcement Learning Approach | 2024 | Early Access | Autonomous aerial vehicles Data collection Task analysis Energy consumption Heuristic algorithms Energy efficiency Propulsion Data collection energy efficiency unmanned aerial vehicle (UAV) graph attention network deep reinforcement learning | With the advancements in technologies such as 5G, Unmanned Aerial Vehicles (UAVs) have exhibited their potential in various application scenarios, including wireless coverage, search operations, and disaster response. In this paper, we consider the utilization of a group of UAVs as aerial base stations (BS) to collect data from IoT sensor devices. The objective is to maximize the volume of collected data while simultaneously enhancing the geographical fairness among these points of interest, all within the constraints of limited energy resources. Therefore, we propose a deep reinforcement learning (DRL) method based on Graph Attention Networks (GAT), referred to as “GADRL”. GADRL utilizes graph convolutional neural networks to extract spatial correlations among multiple UAVs and makes decisions in a distributed manner under the guidance of DRL. Furthermore, we employ Long Short-Term Memory to establish memory units for storing and utilizing historical information. Numerical results demonstrate that GADRL consistently outperforms four baseline methods, validating its computational efficiency. | 10.1109/TNSM.2024.3450964 |
Madyan Alsenwi, Eva Lagunas, Symeon Chatzinotas | Distributed Learning Framework For eMBB-URLLC Multiplexing in Open Radio Access Networks | 2024 | Early Access | Ultra reliable low latency communication Open RAN Reliability Servers Real-time systems Resource management Quality of service Network slicing distributed learning 5G NR O-RAN URLLC eMBB DRL | Next-generation (NextG) cellular networks are expected to evolve towards virtualization and openness, incorporating reprogrammable components that facilitate intelligence and real-time analytics. This paper builds on these innovations to address the network slicing problem in multi-cell open radio access wireless networks, focusing on two key services: enhanced Mobile BroadBand (eMBB) and Ultra-Reliable Low Latency Communications (URLLC). A stochastic resource allocation problem is formulated with the goal of balancing the average eMBB data rate and its variance, while ensuring URLLC constraints. A distributed learning framework based on the Deep Reinforcement Learning (DRL) technique is developed following the Open Radio Access Networks (O-RAN) architecture to solve the formulated optimization problem. The proposed learning approach enables training a global machine learning model at a central cloud server and sharing it with edge servers for execution. Specifically, deep learning agents are distributed at network edge servers and embedded within the Near-Real-Time Radio access network Intelligent Controller (Near-RT RIC) to collect network information and perform online execution. A global deep learning model is trained by a central training engine embedded within the Non-Real-Time RIC (Non-RT RIC) at the central server using data received from edge servers. The simulation results validate the efficacy of the proposed algorithm in achieving URLLC constraints while maintaining the eMBB Quality of Service (QoS). | 10.1109/TNSM.2024.3451295 |
Marcos Carvalho, Daniel Soares, Daniel F. Macedo | QoE Estimation Across Different Cloud Gaming Services Using Transfer Learning | 2024 | Early Access | Quality of experience Cloud gaming Transfer learning Data models Quality of service Task analysis Context modeling Cloud Gaming Mobile Cloud Gaming Domain Adaptation Transfer Learning QoE Estimation Machine Learning | Cloud Gaming (CG) has become one of the most important cloud-based services in recent years by providing games to different end-network devices, such as personal computers (wired network) and smartphones/tablets (mobile network). CG services are challenging for network operators, since these services demand rigorous network Quality of Service (QoS). Nevertheless, ensuring proper Quality of Experience (QoE) keeps end-users engaged in CG services. However, several factors influence users’ experience, such as context (i.e., game type/players) and the end-network type (wired/mobile). In this case, Machine Learning (ML) models have achieved state-of-the-art results in end-users’ QoE estimation. Despite that, traditional ML models demand a large amount of data and assume that the training and test sets have the same distribution, which makes it hard for ML models to generalize to scenarios other than the one they were trained on. This work employs Transfer Learning (TL) techniques to create QoE estimation models across different cloud gaming services (wired/mobile) and contexts (game type/players). We improved our previous work by performing a subjective QoE assessment with real users playing new games on a mobile cloud gaming testbed. Results show that transfer learning can decrease the average MSE by at least 34.7% compared to the source model (wired) performance on mobile cloud gaming and by 81.5% compared with a model trained from scratch. | 10.1109/TNSM.2024.3451300 |
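Several of the methods catalogued above lend themselves to short, self-contained sketches. The CNSGA entry (Zeng et al.) ranks candidate feature subsets with NSGA-III-style nondominated sorting over multiple fitness metrics (causal-effect weight, detection balance, feature-subset size). Below is a minimal sketch of that sorting step only, assuming three objectives to be maximized and using hypothetical candidate scores rather than the paper's data:

```python
from typing import List, Tuple

def dominates(a: Tuple[float, ...], b: Tuple[float, ...]) -> bool:
    """True if solution `a` Pareto-dominates `b` (all objectives maximized)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def nondominated_sort(scores: List[Tuple[float, ...]]) -> List[List[int]]:
    """Group solution indices into Pareto fronts (front 0 = best), as in NSGA-II/III."""
    n = len(scores)
    dominated_by = [[] for _ in range(n)]   # solutions that i dominates
    dom_count = [0] * n                     # how many solutions dominate i
    fronts: List[List[int]] = [[]]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if dominates(scores[i], scores[j]):
                dominated_by[i].append(j)
            elif dominates(scores[j], scores[i]):
                dom_count[i] += 1
        if dom_count[i] == 0:
            fronts[0].append(i)
    k = 0
    while fronts[k]:
        nxt = []
        for i in fronts[k]:
            for j in dominated_by[i]:
                dom_count[j] -= 1
                if dom_count[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
        k += 1
    return fronts[:-1]  # drop the trailing empty front

if __name__ == "__main__":
    # Hypothetical (causal-effect weight, detection balance, 1/feature-count) scores.
    population = [(0.8, 0.6, 0.05), (0.7, 0.7, 0.04), (0.6, 0.5, 0.06),
                  (0.9, 0.4, 0.03), (0.5, 0.3, 0.02)]
    print(nondominated_sort(population))  # -> [[0, 1, 2, 3], [4]]
```

In NSGA-III, individuals in earlier fronts are kept for the next generation, with reference-point niching used to break ties inside the last admitted front.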
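The metro-optical-slicing entry (Bao et al.) combines a GCN for spatial traffic features with a GRU for temporal ones. The sketch below shows the general GCN-plus-GRU pattern in PyTorch, not the authors' exact model; the adjacency matrix, dimensions, and data are illustrative placeholders:

```python
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """One graph convolution: H' = ReLU(A_hat @ H @ W), with A_hat a normalized adjacency."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, a_hat: torch.Tensor, h: torch.Tensor) -> torch.Tensor:
        # h: (batch, nodes, in_dim); a_hat: (nodes, nodes)
        return torch.relu(a_hat @ self.linear(h))

class GCNGRUPredictor(nn.Module):
    """Spatial GCN applied per time step, then a GRU over the time dimension, per node."""
    def __init__(self, in_dim: int, hidden_dim: int):
        super().__init__()
        self.gcn = GCNLayer(in_dim, hidden_dim)
        self.gru = nn.GRU(hidden_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)  # next-step traffic per node

    def forward(self, a_hat: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, nodes, in_dim)
        b, t, n, _ = x.shape
        spatial = torch.stack([self.gcn(a_hat, x[:, i]) for i in range(t)], dim=1)  # (b, t, n, hid)
        seq = spatial.permute(0, 2, 1, 3).reshape(b * n, t, -1)                     # (b*n, t, hid)
        out, _ = self.gru(seq)
        pred = self.head(out[:, -1])                                                # (b*n, 1)
        return pred.view(b, n)

if __name__ == "__main__":
    nodes, steps, feat = 6, 12, 1
    adj = torch.eye(nodes)            # placeholder normalized adjacency (self-loops only)
    x = torch.rand(4, steps, nodes, feat)
    model = GCNGRUPredictor(feat, 16)
    print(model(adj, x).shape)        # torch.Size([4, 6])
```

The predicted per-node traffic would then feed whatever reconfiguration policy sits on top, such as the gradient-based priority strategy described in the abstract.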
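The OptCDU entry (Wu et al.) derives the optimal Computing Data Unit size by minimizing an end-to-end service-time model with gradient descent. The following sketch applies plain numerical gradient descent to an illustrative convex delay model (a per-CDU overhead term that favors large CDUs plus a loading/compute term that favors small ones); the cost function, units, and constants are assumptions, not the paper's model:

```python
import math

def service_time(s_kb: float, a: float = 400.0, b: float = 1.0) -> float:
    """Toy end-to-end service time (ms) for CDU size s_kb (KB):
    a / s_kb models per-CDU transport/header overhead (favors large CDUs),
    b * s_kb models loading and compute pipeline fill (favors small CDUs)."""
    return a / s_kb + b * s_kb

def optimize_cdu(s0: float = 5.0, lr: float = 0.1, iters: int = 2000, eps: float = 1e-4) -> float:
    """Minimize service_time via gradient descent with a central-difference gradient."""
    s = s0
    for _ in range(iters):
        grad = (service_time(s + eps) - service_time(s - eps)) / (2 * eps)
        s = max(0.1, s - lr * grad)   # keep the CDU size positive
    return s

if __name__ == "__main__":
    # For this toy model the optimum is sqrt(a/b) = 20 KB; gradient descent should agree.
    print(round(optimize_cdu(), 2), round(math.sqrt(400.0 / 1.0), 2))
```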
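The entanglement-distribution entry (Cacciapuoti et al.) frames "knowing when to stop" as a Markov decision process. A generic finite-horizon optimal-stopping sketch via backward induction is shown below, with an assumed per-attempt success probability and a reward that decays with elapsed attempts; the numbers are illustrative and are not the paper's noise model:

```python
def optimal_stopping_policy(horizon: int, p_success: float, attempt_cost: float,
                            reward0: float, decay: float):
    """Finite-horizon optimal stopping via backward induction.
    At attempt t (0-indexed), continuing costs `attempt_cost`; with probability
    `p_success` the entangled state is delivered and is worth reward0 * decay**t
    (decoherence penalty); otherwise the same choice recurs at t+1. Stopping yields 0."""
    value = [0.0] * (horizon + 1)        # value[t] = expected value with attempts t..horizon-1 left
    policy = ["stop"] * horizon
    for t in range(horizon - 1, -1, -1):
        continue_value = (-attempt_cost
                          + p_success * (reward0 * decay ** t)
                          + (1.0 - p_success) * value[t + 1])
        if continue_value > 0.0:
            value[t], policy[t] = continue_value, "continue"
        else:
            value[t], policy[t] = 0.0, "stop"
    return value, policy

if __name__ == "__main__":
    _, policy = optimal_stopping_policy(horizon=10, p_success=0.3,
                                        attempt_cost=0.1, reward0=1.0, decay=0.7)
    # With these illustrative numbers the policy keeps attempting for the first few
    # tries and stops once the decayed reward no longer justifies another attempt.
    print(policy)
```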
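The DDoS-presence entry (Saini and Somani) trains ANN and random-forest models on monitored system and application variables. A minimal scikit-learn sketch of the random-forest step follows; the four feature names and the synthetic data are placeholders, not the paper's 14 monitored variables:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Placeholder monitored variables (e.g., CPU %, active connections, request rate, memory %).
n = 2000
benign = np.column_stack([rng.normal(30, 8, n), rng.normal(200, 50, n),
                          rng.normal(50, 15, n), rng.normal(40, 10, n)])
attack = np.column_stack([rng.normal(85, 10, n), rng.normal(900, 150, n),
                          rng.normal(400, 80, n), rng.normal(75, 12, n)])
X = np.vstack([benign, attack])
y = np.concatenate([np.zeros(n), np.ones(n)])   # 0 = benign, 1 = DDoS present

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))

# At run time, a fresh snapshot of the same variables yields an attack/no-attack decision.
snapshot = np.array([[90.0, 1000.0, 450.0, 80.0]])
print("DDoS present" if clf.predict(snapshot)[0] == 1 else "no DDoS")
```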
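Finally, the QoE-estimation entry (Carvalho et al.) transfers a model trained on the wired domain to the mobile one. The PyTorch sketch below shows only the generic transfer-learning pattern (pretrain on the source domain, freeze the shared feature extractor, fine-tune the head on a few target samples); the architecture, feature set, and data are assumptions, not the authors' models or datasets:

```python
import torch
import torch.nn as nn

class QoERegressor(nn.Module):
    """Small MLP: QoS-style features (e.g., bitrate, RTT, loss, jitter) -> QoE score."""
    def __init__(self, in_dim: int = 4):
        super().__init__()
        self.features = nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU(),
                                      nn.Linear(32, 16), nn.ReLU())
        self.head = nn.Linear(16, 1)

    def forward(self, x):
        return self.head(self.features(x))

def fit(model, x, y, epochs=200, lr=1e-2, params=None):
    """Train either all parameters or only the subset passed via `params`."""
    opt = torch.optim.Adam(params if params is not None else model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    return loss.item()

if __name__ == "__main__":
    torch.manual_seed(0)
    # Synthetic stand-ins: a large source (wired) set and a small target (mobile) set.
    xs, ys = torch.rand(1000, 4), torch.rand(1000, 1) * 4 + 1
    xt, yt = torch.rand(50, 4), torch.rand(50, 1) * 4 + 1

    model = QoERegressor()
    fit(model, xs, ys)                                   # 1) pretrain on the source domain
    for p in model.features.parameters():
        p.requires_grad = False                          # 2) freeze shared features
    mse = fit(model, xt, yt, epochs=100,
              params=model.head.parameters())            # 3) fine-tune only the head on target
    print("target-domain MSE after transfer:", round(mse, 3))
```

Freezing the shared layers is what lets the small target set suffice, mirroring the domain-adaptation motivation given in the abstract.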