Last updated: 2023-11-28 18:09 UTC
Author(s) | Title | Year | Issue | Keywords | Abstract | DOI
---|---|---|---|---|---|---
Yuqing Ding, Zhongcheng Wu, Yongchun Miao, Liyang Xie, Manyu Ding | Genuine On-Chain and Off-Chain Collaboration: Achieving Secure and Non-Repudiable File Sharing in Blockchain Applications | 2023 | Early Access | Blockchains Collaboration Smart contracts Encryption Stakeholders Behavioral sciences Non-repudiation Blockchain File-sharing On-chain and Off-chain collaboration | Blockchain’s immutable and traceable records and independence from third-party involvement make it an irreplaceable tool in applications involving multiple stakeholders. However, securely sharing off-chain files among stakeholders while ensuring non-repudiation is a significant challenge. This is because blockchain cannot monitor off-chain behavior, and stakeholders may refuse to acknowledge records on the chain. In this study, we propose an efficient solution for secure file sharing among stakeholders through on-chain and off-chain collaboration for blockchain applications with additional off-chain storage modules. Specifically, we design an adapted blockchain structure and propose a consensus process integrated with the sharing process to manage off-chain behavior and prevent delivery repudiation. We also incorporate a ciphertext policy into the sharing process to ensure the integrity and confidentiality of the shared file. Additionally, we propose a watermarking protocol in conjunction with blockchain records to hold unauthorized disclosure behavior accountable. Our scheme extends the management scope of blockchain to off-chain and achieves 32x, 19x, and 1.48x higher throughput than Bitcoin, Ethereum, and Fabric, respectively. | 10.1109/TNSM.2023.3336062 |
Sayantini Majumdar, Susanna Schwarzmann, Riccardo Trivisonno, Georg Carle | Towards Massive Distribution of Intelligence for 6G Network Management using Double Deep Q-Networks | 2023 | Early Access | 6G mobile communication Convergence Training Scalability 5G mobile communication Benchmark testing Behavioral sciences 6G network management network automation Reinforcement Learning Machine Learning distributed intelligence model training stability scalability | In future 6G networks, the deployment of network elements is expected to be highly distributed, going beyond the level of distribution of existing 5G deployments. To fully exploit the benefits of such a distributed architecture, there needs to be a paradigm shift from centralized to distributed management. To enable distributed management, Reinforcement Learning (RL) is a promising choice, due to its ability to learn dynamic changes in environments and to deal with complex problems. However, the deployment of highly distributed RL – termed massive distribution of intelligence – still faces a few unsolved challenges. Existing RL solutions, based on Q-Learning (QL) and Deep Q-Networks (DQN), do not scale with the number of agents. Therefore, current limitations, i.e., convergence, system performance and training stability, need to be addressed to facilitate a practical deployment of massive distribution. To this end, we propose an improved Double Deep Q-Network (IDDQN), addressing the long-term stability of the agents’ training behavior. We evaluate the effectiveness of IDDQN for a beyond-5G/6G use case: auto-scaling virtual resources in a network slice. Simulation results show that IDDQN improves the training stability over DQN and converges at least 2 times faster than QL. In terms of the number of users served by a slice, IDDQN shows good performance and deviates on average by only 8% from the optimal solution. Further, IDDQN is robust and resource-efficient after convergence. We argue that IDDQN is a better alternative to QL and DQN, and holds immense potential for efficiently managing 6G networks. | 10.1109/TNSM.2023.3333875 |
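IDDQN builds on Double DQN's decoupling of action selection and action evaluation between two networks. As a minimal, library-free sketch (purely illustrative; not the authors' implementation, and the Q-value vectors are made up), the double-estimator target looks like:

```python
def double_dqn_target(q_online, q_target, reward, gamma=0.99, done=False):
    """Double DQN target: the online network selects the next action,
    the target network evaluates it, reducing Q-value overestimation."""
    if done:
        return reward
    best = max(range(len(q_online)), key=q_online.__getitem__)  # online selects
    return reward + gamma * q_target[best]                      # target evaluates

q_online = [1.0, 3.0, 2.0]   # online net prefers action 1
q_target = [0.5, 1.5, 4.0]   # target net's value for action 1 is 1.5
print(double_dqn_target(q_online, q_target, reward=1.0, gamma=0.9))  # → 2.35
```

Note that a plain DQN would have used `max(q_target)` (4.0 here), which is exactly the overestimation the double estimator avoids.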
Paulo Sena, Antonio Abelem, György Dán, Daniel Sadoc Menasché | Management of Caching Policies and Redundancy over Unreliable Channels | 2023 | Early Access | Aging Synchronization Robustness Servers Backhaul networks Time factors Resource management caching networking wireless aging | Caching plays a central role in networked systems, reducing the load on servers and the delay experienced by users. Despite their relevance, networked caching systems still pose a number of challenges pertaining to their long-term behavior. In this paper, we formally show, and experimentally demonstrate, conditions under which networked caches tend to synchronize over time. Such synchronization, in turn, leads to performance degradation and aging, motivating the monitoring of caching systems for eventual rejuvenation, as well as the deployment of diverse cache replacement policies across caches to promote diversity and preclude synchronization and its aging effects. Based on trace-driven simulations with real workloads, we show how hit probability is sensitive to varying channel reliability, cache sizes, and cache separation, indicating that a mix of simple policies, such as Least Recently Used (LRU) and Least Frequently Used (LFU), provides competitive performance against state-of-the-art policies. Indeed, our results suggest that diversity in cache replacement policies, rejuvenation and intentional dropping of requests are strategies that build diversity across caches, preventing or mitigating performance degradation due to cache aging. | 10.1109/TNSM.2023.3334559 |
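As a toy illustration of the policy-diversity argument above (a sketch under simplified assumptions with a made-up request stream, not the paper's trace-driven simulator), LRU and LFU caches of the same size can behave quite differently on the same requests:

```python
from collections import Counter, OrderedDict

class LRUCache:
    def __init__(self, size):
        self.size, self.store = size, OrderedDict()
    def access(self, key):
        hit = key in self.store
        if hit:
            self.store.move_to_end(key)         # refresh recency
        else:
            if len(self.store) >= self.size:
                self.store.popitem(last=False)  # evict least recently used
            self.store[key] = True
        return hit

class LFUCache:
    def __init__(self, size):
        self.size, self.store, self.freq = size, set(), Counter()
    def access(self, key):
        self.freq[key] += 1
        hit = key in self.store
        if not hit:
            if len(self.store) >= self.size:
                victim = min(self.store, key=lambda k: self.freq[k])
                self.store.discard(victim)      # evict least frequently used
            self.store.add(key)
        return hit

# Two caches with different policies see the same request stream:
stream = ["a", "b", "a", "c", "a", "b", "d", "a"]
lru, lfu = LRUCache(2), LFUCache(2)
hits_lru = sum(lru.access(k) for k in stream)
hits_lfu = sum(lfu.access(k) for k in stream)
print(hits_lru, hits_lfu)  # → 2 3
```

Here LFU keeps the popular object "a" pinned while LRU churns it out, which hints at why a mix of policies across neighboring caches breaks the lock-step behavior the paper warns about.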
Nan Wei, Lihua Yin, Jingyi Tan, Chuhong Ruan, Chuang Yin, Zhe Sun, Xi Luo | An Autoencoder-Based Hybrid Detection Model for Intrusion Detection With Small-Sample Problem | 2023 | Early Access | Internet of Things Feature extraction Intrusion detection Neural networks Data models Telecommunication traffic Encoding Malicious traffic detection feature enhancement IOT-23 network intrusion detection system small-sample dataset | Cyber-attacks have become more frequent, targeted, and complex with the exponential growth of computer networks and the development of the Internet of Things (IoT). A network intrusion detection system (NIDS) is an essential tool to protect network environments. However, the low performance of NIDSs against small-sample malicious traffic seriously threatens network security, directly leading to losses of personal property and national interests. Given this, we propose an autoencoder-based hybrid detection model, abbreviated as AHDM, for intrusion detection with the small-sample problem. AHDM has a dual-classifier framework. It trains the first neural network on the encoded features obtained from the autoencoder-based feature enhancement algorithm to detect small-sample malicious traffic, and the second neural network on the original features to detect normal traffic and large-sample malicious traffic. The final detection result for malicious traffic is obtained by combining the detection results of the two neural networks. In experiments, we use three classic datasets (KDD CUP 99, CIC-IDS-2017, and IOT-23) and simulate malicious traffic detection targeting extremely small-sample malicious traffic. The results show that AHDM has a higher detection rate for small-sample malicious traffic compared to advanced detection models (DNN and ACID). On the IOT-23 dataset, the AHDM model shows an absolute advantage in detecting the DDoS type of malicious traffic, with a detection rate of 0.71, far higher than the DNN (0.14) and ACID (0.14) models. | 10.1109/TNSM.2023.3334028 |
Chunlin Li, Yongzheng Gan, Yong Zhang, Youlong Luo | A Cooperative Computation Offloading Strategy With On-Demand Deployment of Multi-UAVs in UAV-Aided Mobile Edge Computing | 2023 | Early Access | Autonomous aerial vehicles Task analysis Energy consumption Trajectory Servers Computational efficiency Three-dimensional displays Mobile edge computing (MEC) unmanned aerial vehicles (UAVs) computation offloading on-demand deployment | In this paper, we use ground-based stations in mobile edge computing (MEC) and unmanned aerial vehicles (UAVs) to provide communication and computation offloading services in disaster areas. However, optimizing the initial number and three-dimensional positions of deployed UAVs is a prerequisite for providing computing services to users. Additionally, due to the limited battery and computing power of UAVs, it is a major challenge to rationally design the UAV trajectory during the computation offloading period so as to ensure communication quality for mobile users and reduce the energy consumed to complete tasks. Thus, we propose a cooperative computation offloading strategy with on-demand deployment of multiple UAVs in UAV-aided MEC. The strategy utilizes the predicted user trajectory for UAV deployment so as to minimize users’ path loss. Then, to minimize the total energy consumption for completing tasks, a joint optimization problem comprising the user association strategy, the computing resource allocation strategy, and the UAV trajectory is formulated, which is a mixed-integer nonlinear program (MINLP). To find a suboptimal solution, we use the block coordinate descent method to solve the problem. Numerical results show that the proposed algorithm can efficiently reduce the path loss by up to 18.55% and the total energy consumption by 18.28% compared to the benchmarks. | 10.1109/TNSM.2023.3332899 |
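The abstract above solves its MINLP via block coordinate descent. On a toy smooth objective (purely illustrative, unrelated to the actual UAV trajectory model), the alternating-minimization pattern is: fix one block of variables, minimize over the other in closed form, and repeat until the iterates stop moving.

```python
def bcd(iters=50):
    """Block coordinate descent on f(x, y) = x^2 + y^2 + x*y - x:
    fix one block, minimize the other in closed form, alternate."""
    x = y = 0.0
    for _ in range(iters):
        x = (1 - y) / 2.0   # argmin_x f(x, y) with y fixed (df/dx = 0)
        y = -x / 2.0        # argmin_y f(x, y) with x fixed (df/dy = 0)
    return x, y

x, y = bcd()
f = x * x + y * y + x * y - x
print(round(x, 4), round(y, 4), round(f, 4))  # → 0.6667 -0.3333 -0.3333
```

Each block update can only decrease f, which is why the method yields a (sub)optimal stationary point; here it converges to the global minimum (2/3, -1/3).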
Abhishek Hazra, Mainak Adhikari, Dipak Kumar, Tarachand Amgoth | Fair Scheduling and Computation Co-Offloading Strategy for Industrial Applications in Fog Networks | 2023 | Early Access | Task analysis Industrial Internet of Things Job shop scheduling Processor scheduling Delays Servers Optimization Industrial Applications Internet of Things Fog Computing End-to-End Delay Lyapunov Optimization | Nowadays, by integrating the Industrial Internet of Things (IIoT) with fog networks, companies can efficiently manage the increasing data traffic and enhance the capabilities of sensing devices. However, controlling critical IIoT applications has become difficult because of the increasing demand for technology during the Industry 4.0 revolution and the use of fog computing. To address this issue, we introduce an efficient resource provisioning strategy called Fair Scheduling and Computation Co-offloading (FSCC) for executing the maximum number of tasks within their corresponding deadlines while achieving network stability. Initially, we formulate the task scheduling problem as a stochastic problem and devise a novel optimization framework by exploiting the Lyapunov optimization technique. A two-phase task offloading strategy is also proposed to efficiently offload scheduled tasks to suitable computing devices in fog networks. The proposed FSCC strategy combines devices’ current state information and a collaborative fog-cloud infrastructure for controlling network parameters and utilizing available fog resources. Experimental results demonstrate that the proposed strategy improves end-to-end delay and deadline satisfaction by 15-20% over existing methods. | 10.1109/TNSM.2023.3332763 |
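Lyapunov optimization of the kind cited above typically reduces to a drift-plus-penalty rule: at each slot, trade the penalty (e.g., delay or energy cost) against queue backlog. A schematic sketch, where the cost and backlog-growth numbers are made-up placeholders rather than the paper's model:

```python
def queue_update(q, arrival, service):
    """Standard queueing recursion behind Lyapunov optimization:
    Q(t+1) = max(Q(t) + a(t) - b(t), 0)."""
    return max(q + arrival - service, 0.0)

def drift_plus_penalty_choice(q, options, V=2.0):
    """Pick the action minimizing V * cost + Q * backlog growth:
    large V favors low cost, large backlog Q favors draining the queue."""
    return min(options, key=lambda opt: V * opt["cost"] + q * opt["growth"])

options = [
    {"name": "local",   "cost": 1.0, "growth": +2.0},  # cheap but queue grows
    {"name": "offload", "cost": 3.0, "growth": -1.0},  # pricier but drains queue
]
print(drift_plus_penalty_choice(0.5, options)["name"])   # light backlog → local
print(drift_plus_penalty_choice(10.0, options)["name"])  # heavy backlog → offload
```

The same greedy slot-by-slot rule is what yields the stability-versus-cost trade-off (tunable via V) that the FSCC framework exploits.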
Chenjing Tian, Haotong Cao, Jun Xie, Sahil Garg, Mubarak Alrashoud, Prayag Tiwari | Community Detection-Empowered Self-adaptive Network Slicing in Multi-Tier Edge-Cloud System | 2023 | Early Access | Quality of service Task analysis Cloud computing Load modeling Network slicing Costs 5G mobile communication Self-adaptive network slicing service function chaining task offloading multi-tier network system community detection | Network slicing (NS) is a highly promising paradigm in 5G and forthcoming 6G communication networks. NS allows for the customization of multiple logically independent network slices to provide tailored service for vertical applications with diverse quality of service (QoS) requirements. However, current research on NS primarily relies on traditional modeling methods such as service function chaining (SFC) and task offloading, which have limitations in adapting to the evolving scenarios in 5G/6G networks. To address this, our study introduces a novel Self-adaptive Network Slicing (SNS) modeling method. In this approach, each service is abstracted as multiple SFC replicas originating from diverse access points. Based on the SNS modeling, we investigate a VNF configuration and flow routing (VCFR) problem for service provisioning in a multi-tier system. With the objective of achieving load balancing with minimal slice operational expenditure, we formulate the VCFR problem as a mixed-integer linear program (MILP). However, deriving an exact solution via MILP is computationally expensive due to its NP-hardness. To reduce computational complexity, we propose a Load Balancing-considered Community Detection-based Heuristic (LBCD-Heu), a divide-and-conquer approach, to solve the problem. In LBCD-Heu, we first design a load balancing-considered community detection method to divide the substrate multi-tier network into multiple independent communities. Following this, the MILP is employed in each community to obtain a near-optimal solution. Extensive evaluations confirm that LBCD-Heu can effectively reduce the service operational cost and algorithm run-time while ensuring the load balancing of the substrate network. Additionally, our results verify that the SNS modeling enables the provision of services at lower expenditures compared with traditional modeling methods. | 10.1109/TNSM.2023.3332509 |
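The divide step in heuristics like LBCD-Heu relies on community detection. A generic synchronous label-propagation sketch (not the authors' load-balancing-aware method; the toy graph is two triangles joined by one bridge edge) shows how a substrate graph can be split into independent communities:

```python
def label_propagation(adj, rounds=10):
    """Minimal synchronous label propagation: each node adopts the most
    common label among its neighbours (ties broken by smallest label)."""
    labels = {v: v for v in adj}
    for _ in range(rounds):
        new = {}
        for v, nbrs in adj.items():
            counts = {}
            for u in nbrs:
                counts[labels[u]] = counts.get(labels[u], 0) + 1
            new[v] = min(counts, key=lambda l: (-counts[l], l))
        labels = new
    return labels

# Two triangles {0,1,2} and {3,4,5} joined by the single bridge edge 2-3:
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
labels = label_propagation(adj)
communities = {l: sorted(v for v in labels if labels[v] == l)
               for l in set(labels.values())}
print(sorted(communities.items()))  # → [(0, [0, 1, 2]), (2, [3, 4, 5])]
```

Once the substrate is partitioned this way, a smaller MILP can be solved per community, which is the source of the run-time reduction reported above.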
Nidhi Sharma, Krishan Kumar | Evolutionary Multi-Objective Optimization Algorithm for Resource Allocation using Deep Neural Network in Ultra-Dense Networks | 2023 | Early Access | Resource management Optimization Evolutionary computation Statistics Sociology Convergence Sorting Ultra-dense network multi-objective optimization NSGA-II deep learning resource allocation imperfect CSI | The ultra-dense network (UDN) structure will certainly play a major role in the evolution of 5G and beyond wireless communication systems, particularly for blind wireless areas and hotspots. In a resource-constrained environment, obtaining higher energy efficiency (EE), higher spectrum efficiency (SE), and greater fairness during the resource allocation process are conflicting objectives. To balance them, a multi-objective optimization problem (MOOP) is designed and an enhanced version of the non-dominated sorting genetic algorithm II (NSGA-II), which integrates the advantages of an evolutionary method and a machine learning framework, is suggested. Firstly, a chromosome coding scheme suitable for spectrum allocation is designed. Afterward, a deep learning framework is designed to enable the self-tuning of crossover and mutation operators to improve the diversity of candidate solutions. Further, the elitist retention strategy is modified by designing a variable fraction scheme. This intelligent approach enables micro-cell users to improve their downlink SE, EE, and fairness performance through resource block assignment. Simulation results demonstrate the effectiveness of the proposed scheme in perfect and imperfect channel state information (CSI) environments, analysing the obtained performance gains in terms of EE, SE, and fairness against other existing allocation methods. | 10.1109/TNSM.2023.3332356 |
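NSGA-II is built around non-dominated sorting. A minimal sketch of its core step, extracting the first Pareto front from candidate solutions; the (EE, SE) pairs below are made-up values for illustration, with both objectives maximized:

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly
    better in at least one (maximization of both objectives assumed)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(points):
    """First non-dominated front: the set NSGA-II ranks highest."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# (energy efficiency, spectrum efficiency) pairs for candidate allocations:
candidates = [(1.0, 3.0), (2.0, 2.0), (3.0, 1.0), (1.5, 1.5), (2.5, 0.5)]
front = pareto_front(candidates)
print(front)  # → [(1.0, 3.0), (2.0, 2.0), (3.0, 1.0)]
```

The two dominated points, (1.5, 1.5) under (2.0, 2.0) and (2.5, 0.5) under (3.0, 1.0), are exactly the trade-off losers a MOOP solver discards; conflicting EE/SE/fairness objectives mean the front, not a single point, is the answer.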
Lei Du, Zhaoquan Gu, Ye Wang, Le Wang, Yan Jia | A Few-Shot Class-Incremental Learning Method for Network Intrusion Detection | 2023 | Early Access | Feature extraction Network intrusion detection Power capacitors Telecommunication traffic Training Task analysis Prototypes Cyber Security Network Intrusion Detection Few-Shot Class-Incremental Learning | With the rapid development of information technologies, cyberspace security has become an increasingly serious concern. Network intrusion detection is a practical scheme to protect network systems from cyber attacks. However, as new vulnerabilities and unknown attack types are constantly emerging, only a few samples of such attacks can be captured for analysis, which cannot be handled by the existing detection methods deployed in real systems. To handle this problem, we propose a few-shot class-incremental learning method called Branch Fusion Strategy based Network Intrusion Detection (BFS-NID for short), which can continuously learn new attack classes from only a few samples. BFS-NID includes a feature extractor module and a branch classifier learning module. The feature extractor module uses a vision transformer to learn better feature representations in a self-supervised manner, and the parameters of the feature extractor are fixed to avoid catastrophic forgetting when the model learns incrementally. The branch classifier learning module sets re-projection for different branch sessions to enhance the feature representation ability between classes and employs a branch fusion strategy to associate the context of learned attack classes with new classes in different sessions. We conducted extensive experiments on two popular network intrusion detection benchmark datasets (CIC-IDS2017 and CSE-CIC-IDS2018), and the results demonstrate that BFS-NID surpasses the baselines and achieves the best performance. | 10.1109/TNSM.2023.3332284 |
Engin Zeydan, Suayb S. Arslan, Yekta Turk | Exploring Blockchain Architectures for Network Sharing: Advantages, Limitations, and Suitability | 2023 | Early Access | Blockchains Costs Distributed ledger Quality of service Computer architecture Investment Interoperability blockchain network sharing mobile operators | The increasing demand for mobile data services has led to a need for efficient and cost-effective network sharing solutions. Blockchain technology has emerged as a promising solution for addressing the challenges associated with network sharing, such as interoperability, trust, and accountability. This paper presents a comprehensive classification and categorization of blockchain-based network sharing scenarios, highlighting their advantages and limitations. We have identified seven network sharing scenarios, ranging from centralized network sharing to fully decentralized spectrum sharing. For each scenario, the suitability of selected blockchain architectures (public, private, sidechain, and hybrid) is assessed through extensive evaluations. We also identify gaps and opportunities in blockchain-based network sharing solutions and present future research directions at the end of the paper. Our analysis and results reveal that no single blockchain architecture is suitable for all network sharing scenarios; rather, careful analysis should be performed when selecting a suitable blockchain network for network sharing. | 10.1109/TNSM.2023.3331307 |
Mahmoud Wafik Eltokhey, Mohammad Ali Khalighi, Zabih Ghassemlooy, Volker Jungnickel | Handover-Aware Scheduling for Small- and Large-Scale VLC Networks | 2023 | Early Access | Handover Resource management Radio frequency Lighting Uplink Terminology Light emitting diodes Visible-light communications multi-cell networks inter-cell interference soft handover scheduling | This paper proposes handover-aware scheduling solutions for multi-cell small- and large-scale visible-light communication networks, enabling soft handover. For this, we coordinate the transmissions at the access points (APs), serving the users in different time slots based on their locations with respect to the AP coverage areas, for an efficient utilization of the resources. For scenarios where coverage of large areas is needed, we additionally propose clustering solutions to decrease the handover rate. Compared with non-coordinated schemes, the proposed soft handover techniques offer improved performance in terms of user achievable throughput and link reliability. | 10.1109/TNSM.2023.3328927 |
Feng He, Jiarong Liang, Qingnian Li | A Novel Approximation for Minimum Fault-tolerant Virtual Backbones Problem in Heterogeneous Wireless Sensor Networks With Faulty Nodes | 2023 | Early Access | Wireless sensor networks Approximation algorithms Fault tolerant systems Sensors Routing Distributed algorithms Redundancy Connected dominating set distributed algorithm wireless sensor network approximation algorithm disk graph with bidirectional links | Frequently, unit disk graphs (UDGs) are used to model homogeneous wireless sensor networks (WSNs), in which each node has the same transmission radius. In some applications, however, different nodes in a WSN have different transmission radii, meaning that a UDG cannot accurately model the WSN. In this case, a disk graph with bidirectional links (DGB) can be used in place of a UDG. Nevertheless, most results reported to date concern the problem of finding minimum fault-tolerant connected dominating sets (CDSs) in UDGs. In this paper, we investigate the minimum fault-tolerant CDS problem for DGBs by reconstructing CDSs for DGBs with faulty nodes. We present a centralized approximation algorithm for CDS reconstruction to address the minimum fault-tolerant CDS problem in given DGBs. The performance ratio (PR) of the presented algorithm is the same as that of the algorithm used to generate the input CDS C. Furthermore, we present a distributed version, which not only can be easily implemented in real situations but also considers the CDS size to reduce the network cost. Theoretical analysis shows that the PR of our proposed algorithm is lower than those of other state-of-the-art algorithms for the minimum fault-tolerant CDS problem in given DGBs. In addition, numerical experiments demonstrate that the performance of our algorithm is superior on average to that of its competitors in terms of CDS size, run time and application rate. | 10.1109/TNSM.2023.3332144 |
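The DGB adjacency rule (a link exists only when both radii cover the distance) plus a plain greedy dominating-set pass can be sketched as follows. This is far simpler than the paper's fault-tolerant CDS reconstruction; node positions and radii are made-up values chosen to show the bidirectionality requirement:

```python
import math

def dgb_edges(nodes):
    """Disk graph with bidirectional links: u and v are adjacent only if
    each lies within the other's transmission radius."""
    adj = {u: set() for u in nodes}
    for u, (xu, yu, ru) in nodes.items():
        for v, (xv, yv, rv) in nodes.items():
            if u != v:
                d = math.hypot(xu - xv, yu - yv)
                if d <= ru and d <= rv:
                    adj[u].add(v)
    return adj

def greedy_dominating_set(adj):
    """Repeatedly pick the node covering the most undominated nodes."""
    undominated, ds = set(adj), set()
    while undominated:
        best = max(adj, key=lambda v: len(({v} | adj[v]) & undominated))
        ds.add(best)
        undominated -= {best} | adj[best]
    return ds

# Heterogeneous radii: d's large radius reaches b (distance 3 <= 4), but
# b's short radius (2) cannot reach back, so no bidirectional link exists.
nodes = {"a": (0, 0, 2.0), "b": (3, 0, 2.0), "c": (1.5, 0, 2.0), "d": (6, 0, 4.0)}
adj = dgb_edges(nodes)
ds = greedy_dominating_set(adj)
print(sorted(ds))  # → ['c', 'd']
```

In a UDG the b–d link would exist or not for both endpoints symmetrically by a single radius; the asymmetric radii above are exactly what forces the DGB model.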
Alisson Medeiros, Antonio Di Maio, Torsten Braun, Augusto Neto | TENET: Adaptive Service Chain Orchestrator for MEC-Enabled Low-Latency 6DoF Virtual Reality | 2023 | Early Access | Streaming media Resists Servers Quality of service Low latency communication Energy consumption Computational modeling Mobile Virtual Reality End-to-end Latency Six Degrees of Freedom Videos Multi-access Edge Computing Service Function Chaining Service Offloading Service Migration and Quality of Service | The next generation of Virtual Reality (VR) applications is expected to provide advanced experiences through Six Degrees of Freedom (6DoF) content, which requires higher data rates and ultra-low latency. In this article, we refactor 6DoF VR applications into atomic services to increase the computing capacity of VR systems, aiming to reduce the end-to-end (E2E) latency of 6DoF VR applications. These services are chained and deployed across Head-Mounted Displays (HMDs) and Multi-access Edge Computing (MEC) servers in high-mobility scenarios over real edge network topologies. We investigate the Distributed Service Chain Problem (DSCP) to find the optimal placement of the services of a service chain such that its E2E latency does not exceed 5 ms. The DSCP problem is NP-hard. We provide an integer linear program to model the system, along with a heuristic, namely the disTributed sErvice chaiN orchEstraTor (TENET), which is one order of magnitude faster than optimally solving the DSCP problem. We compare TENET to the optimal DSCP implementation and to well-known service migration algorithms in terms of E2E latency, power consumption, video resolution selection based on E2E latency, context migrations, and execution time. We observe a significant reduction of E2E latency and gains in more advanced video resolution selection and accepted context service migrations when using TENET’s deployment strategy on VR services. | 10.1109/TNSM.2023.3331755 |
Antonio Calagna, Yenchia Yu, Paolo Giaccone, Carla Fabiana Chiasserini | Design, Modeling, and Implementation of Robust Migration of Stateful Edge Microservices | 2023 | Early Access | Containers Microservice architectures Iterative methods Quality of experience Image restoration Sockets Network function virtualization Migration Network Function Virtualization Microservices Experimental analysis Modeling | Stateful migration has emerged as the key solution to support latency-sensitive microservices at the edge while ensuring a satisfying experience for mobile users. In this paper, we address two relevant issues affecting stateful migration, namely, the migration of containerized microservices and that of the associated data connection. We do so by first introducing a novel network solution, based on Open vSwitch (OvS), that makes it possible to preserve the established connection with mobile end users upon migrating a microservice. Then, using Podman and CRIU, we experimentally characterize the fundamental migration KPIs, i.e., migration duration and microservice downtime, and we devise an analytical model that, accounting for all the relevant real-world aspects of stateful migration, provides an accurate upper bound on such KPIs. We validate our model using real-world microservices, namely, MQTT Broker and Memcached, and show that it can predict KPI values with an error that is up to 99.7% smaller than that yielded by the state of the art. Finally, we consider a UAV controller as a relevant microservice use case and demonstrate how our model can be exploited to effectively configure the system parameters so that the required QoE level is met. | 10.1109/TNSM.2023.3331750 |
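The two KPIs named above, migration duration and downtime, are commonly reasoned about with the textbook iterative pre-copy model. The sketch below uses that generic model with made-up state size, dirty rate, and bandwidth; it is not the authors' CRIU-based analytical model:

```python
def precopy_rounds(state_mb, dirty_rate, bw, max_rounds=30, stop_mb=1.0):
    """Iterative pre-copy sketch: each round transfers the pages dirtied
    during the previous round; downtime covers the final residual copy.
    dirty_rate and bw are in MB/s, state and threshold in MB."""
    to_send, total_time = state_mb, 0.0
    for _ in range(max_rounds):
        t = to_send / bw           # time to transfer this round (s)
        total_time += t
        to_send = dirty_rate * t   # pages dirtied meanwhile (MB)
        if to_send <= stop_mb:
            break
    downtime = to_send / bw        # final stop-and-copy while paused
    return total_time + downtime, downtime

total, downtime = precopy_rounds(100.0, dirty_rate=10.0, bw=50.0)
print(round(total, 3), round(downtime, 3))  # → 2.496 0.016
```

The geometric shrinking of the residual (here by the factor dirty_rate/bw = 0.2 per round) is what keeps downtime far below total migration duration, provided the dirty rate stays below the available bandwidth.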
Hao Feng, Tianqin Zhou, Yuhui Deng, Laurence T. Yang | A Holistic Energy-aware and Probabilistic Determined VMP Strategy for Heterogeneous Data Centers | 2023 | Early Access | Servers Energy consumption Data centers Cooling Virtual machining Temperature distribution Heating systems heterogeneous datacenter virtual machine placement thermal awareness energy consumption | The expansion of data centers, driven by the continuous development of network services, has led to a significant issue of high energy consumption. Due to the real-time interaction between IT and non-IT equipment, it is difficult to account for the holistic energy consumption of heterogeneous data centers. Therefore, this paper proposes a holistic energy-aware virtual machine placement (VMP) strategy for data centers that use heterogeneous resources to provide services. Firstly, we propose an energy-aware VMP strategy that uses a probabilistic determination mechanism to effectively minimize the number of activated servers and improve server resource utilization. Secondly, within this strategy, we leverage dynamic voltage and frequency scaling (DVFS) technology, enabling nodes to operate at lower frequencies and voltages while meeting performance requirements, thus further reducing computing energy consumption. Thirdly, a probabilistically determined genetic algorithm (PDGA) is proposed to reasonably distribute the workloads based on the heat-recirculation effect and reduce the cooling energy consumption. The above mechanisms collectively optimize the global energy consumption of heterogeneous data centers. Experimental results demonstrate that the PDGA can significantly reduce the energy consumption of IT and non-IT equipment. The total energy consumption of the data center is significantly reduced: the PDGA is 20.83% lower than the simulated-annealing-based algorithm and 20.76% lower than the big-data task scheduling algorithm based on thermal-aware and DVFS-enabled techniques. | 10.1109/TNSM.2023.3330413 |
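Minimizing the number of activated servers, the first goal stated above, is a bin-packing problem at heart. A classical first-fit-decreasing baseline (a reference point for intuition only, not the proposed PDGA; the demand values are illustrative) can be sketched as:

```python
def first_fit_decreasing(vm_demands, server_capacity):
    """Pack VMs onto as few servers as possible: sort demands descending,
    place each VM on the first server with room, open a new one otherwise."""
    servers = []  # remaining capacity of each activated server
    for d in sorted(vm_demands, reverse=True):
        for i, free in enumerate(servers):
            if free >= d:
                servers[i] = free - d
                break
        else:
            servers.append(server_capacity - d)  # activate a new server
    return len(servers)

demands = [0.5, 0.7, 0.3, 0.2, 0.4, 0.6]   # normalized CPU demands
print(first_fit_decreasing(demands, 1.0))  # → 3 activated servers
```

A thermal-aware strategy such as PDGA differs precisely in that it also weighs where heat recirculates, so the cheapest packing by server count is not always the cheapest in total (IT plus cooling) energy.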
Pablo Salva-Garcia, Ruben Ricart-Sanchez, Jose M. Alcaraz-Calero, Qi Wang, Octavio Herrera-Ruiz | An eBPF-XDP Hardware-Based Network Slicing Architecture for Future 6G Front- to Back-Haul Networks | 2023 | Early Access | 6G mobile communication Network slicing Kernel Quality of service Hardware Telecommunication traffic Next generation networking 6G Programmable Data Plane eBPF XDP Network Slicing | The heterogeneous requirements imposed by different vertical businesses have motivated a networking paradigm shift in the next generation of mobile networks (beyond 5G and 6G), leading to critical operational competitiveness in terms of improved productivity, performance and efficiency. Furthermore, with the global digital revolution, such as Industry 4.0, and a connected world, network virtualisation together with highly reliable and high-performance communications have become crucial elements for mobile network operators. To minimise the negative effects that could affect critical services, network slicing is widely recognised as a key technology for meeting the Service-Level Agreements (SLAs) and Key Performance Indicators (KPIs) in future 6G networks. In this context, it is essential to introduce a programmable data plane able to enforce flexible Quality of Service (QoS) commitments, while providing high-performance packet processing and real-time monitoring capabilities. To this end, this paper focuses on designing, prototyping and evaluating a novel framework that leverages a set of hardware-based technologies including eXpress Data Path (XDP), extended Berkeley Packet Filter (eBPF) and Smart Network Interface Cards (SmartNICs) to offload network functionality, with the objective of providing high-performance pre-6G front-, mid-, and back-haul network communications and thus decreasing the overhead incurred by the Linux kernel. The proposed solution is implemented by bypassing the Linux kernel and accelerating the communication, while providing network slice control and real-time monitoring capabilities. The main aim of this framework is to ensure network communications in forthcoming 6G infrastructures by guaranteeing 6G KPIs and avoiding system overload. The empirical validation of this solution for Industry 4.0 services, as an example use case, demonstrates key performance improvements in packet processing: about 25 Gbps throughput, 20M packets per second, 0% packet loss, 0.1 ms latency, and less than 10% load on the CPUs. | 10.1109/TNSM.2023.3329942 |
Yilun Liu, Shimin Tao, Weibin Meng, Jingyu Wang, Hao Yang, Yanfei Jiang | Multi-Source Log Parsing With Pre-Trained Domain Classifier | 2023 | Early Access | Semantics Classification algorithms Training Manuals Task analysis Maintenance engineering Labeling multi-source log analysis log parsing domain classification transfer learning deep learning | Automated log analysis with AI technologies is commonly used in network, system, and service operation and maintenance to ensure reliability and quality assurance. Log parsing serves as an essential primary stage in log analysis, where unstructured logs are transformed into structured data to facilitate subsequent downstream analysis. However, traditional log parsing algorithms designed for single-domain processing struggle to handle the challenges posed by multi-source log inputs, leading to a decline in parsing accuracy. Adapting these algorithms to multi-source logs often requires extensive manual labeling efforts. To address this, we propose Domain-aware Parser (DA-Parser), a framework that includes a domain classifier to identify the source domains of multi-source logs. This enables the conversion of the multi-source log parsing problem into a series of single-source parsing problems. The classifier is pre-trained on a corpus of logs from 16 domains, eliminating the need for additional human labeling. The predicted source domain tags serve as constraints, limiting the template extraction process to logs from the same domain. Empirical evaluation on a multi-domain dataset demonstrates that DA-Parser outperforms the existing SOTA algorithm by 21.6% in terms of parsing accuracy. The proposed approach also shows potential efficiency improvements, requiring only 6.67% of the time consumed by existing parsers, while maintaining robustness against minor domain classification errors. | 10.1109/TNSM.2023.3329144 |
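Template extraction, the core of the log parsing stage described above, can be illustrated with a crude masking rule: variable fields collapse to a wildcard so lines from the same event map to one template. This is far simpler than DA-Parser; the regexes, the `<*>` wildcard token, and the sample log lines are illustrative assumptions:

```python
import re

def to_template(log_line):
    """Crude template extraction: replace IPs, hex ids and numbers with a
    wildcard so lines produced by the same event collapse to one template."""
    t = re.sub(r"\b\d{1,3}(?:\.\d{1,3}){3}\b", "<*>", log_line)  # IPv4 addresses
    t = re.sub(r"\b0x[0-9a-fA-F]+\b", "<*>", t)                  # hex identifiers
    t = re.sub(r"\b\d+\b", "<*>", t)                             # plain integers
    return t

logs = [
    "connection from 10.0.0.1 port 5022 closed",
    "connection from 192.168.1.9 port 4410 closed",
]
templates = {to_template(l) for l in logs}
print(templates)  # → {'connection from <*> port <*> closed'}
```

The multi-source difficulty the paper targets is visible even here: a masking rule tuned for one domain's log syntax mis-parses another's, which is why DA-Parser first routes each line to its source domain before extracting templates.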
Luca Gioacchini, Marco Mellia, Luca Vassio, Idilio Drago, Giulia Milan, Zied Ben Houidi, Dario Rossi | Cross-network Embeddings Transfer for Traffic Analysis | 2023 | Early Access | Task analysis Knowledge engineering Artificial intelligence Adaptation models Pipelines Telecommunication traffic Transfer learning Darknets network monitoring transfer learning representation learning domain adaptation | Artificial Intelligence (AI) approaches have emerged as powerful tools to improve traffic analysis for network monitoring and management. However, the lack of large labeled datasets and the ever-changing networking scenarios make a fundamental difference compared to other domains where AI is thriving. We believe the ability to transfer the specific knowledge acquired in one network (or dataset) to a different network (or dataset) would be fundamental to speed up the adoption of AI-based solutions for traffic analysis and other networking applications (e.g., cybersecurity). We here propose and evaluate different options to transfer the knowledge built from a provider network, owning data and labels, to a customer network that desires to label its traffic but lacks labels. We formulate this problem as a domain adaptation problem that we solve with embedding alignment techniques and canonical transfer learning approaches. We present a thorough experimental analysis to assess the performance considering both supervised (e.g., classification) and unsupervised (e.g., novelty detection) downstream tasks related to darknet and honeypot traffic. Our experiments identify the proper transfer techniques for reusing the models obtained from one network in a different network. We believe our contribution opens new opportunities and business models where network providers can successfully share their knowledge and AI models with customers. | 10.1109/TNSM.2023.3329442 |
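One standard embedding-alignment technique is orthogonal Procrustes: learn a rotation mapping the provider's embedding space onto the customer's. The numpy sketch below makes the simplifying assumption that the two spaces differ only by a rotation, and is not necessarily the specific alignment method used in the paper:

```python
import numpy as np

def procrustes_align(X, Y):
    """Orthogonal Procrustes: the rotation W minimizing ||X W - Y||_F,
    a standard way to align embedding spaces of two networks."""
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 4))   # "provider network" embeddings
theta = 0.7                        # synthetic rotation between the two spaces
R = np.eye(4)
R[:2, :2] = [[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]]
Y = X @ R                          # "customer" space = rotated provider space
W = procrustes_align(X, Y)
err = np.linalg.norm(X @ W - Y)
print(round(err, 6))  # → 0.0 (the learned map recovers the rotation)
```

Once W is learned from a few anchor points, provider-side classifiers can be applied to customer embeddings mapped through W, which is the essence of transferring labels without sharing raw traffic.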
Long Qu, Lingjie Yu, Peng Yu, Maurice Khabbaz | Latency-Sensitive Parallel Multi-Path Service Flow Routing with Segmented VNF Processing in NFV-Enabled Networks | 2023 | Early Access | Delays Routing Bandwidth Task analysis Parallel processing Virtualization Servers NFV SFC scheduling multi-path routing processing window MILP | In the context of Software Defined Networking (SDN) scenarios, the deployment of multi-path routing has been trending as one of the practical approaches. It serves the two-fold objective of improving the reliability of Service Function Chains (SFCs) and reducing end-to-end delays through parallel processing; this latter being this paper’s focal point given it is one of the fundamental objectives of 6G. The literature encloses numerous publications revolving around the exploitation of Virtual Network Function (VNF) duplication and optimal placement to enable parallel processing. However, very little attention has been allocated to segmented VNFs with parallel multi-path data traffic flow routing to catalyze service completion. In reality, segmented task processing is now widely used in everyday Internet applications (e.g., real-time video on YouTube). In order to realize the ultra-low end-to-end delay of SFC, we introduce the segmented VNF processing window and implement VNF processing tasks in batches/windows with multi-path routing. Herein, a novel Parallel Multi-Path service flow Routing with processing Windows (PMPRW) scheme is proposed. The PMPRW is formulated as a Mixed Integer Linear Program (MILP), owing to the complexity of which, a Column-Generation (CG) based framework is developed to generate accurate sub-optimal solutions that achieve the same performance as the optimal solution. In order to accelerate the process and enhance the performance, we propose an extended Column Fixing (CF) strategy to help generate new columns in CG. Extensive simulations are conducted to gauge the merit of PMPRW and demonstrate its superiority over single-path routing. PMPRW achieves desirable performance by concurrently reducing the overall end-to-end delay (e.g., by 22% through parallel dual-path routing). | 10.1109/TNSM.2023.3328644 |
Dimitrios J. Vergados, Angelos Michalas, Alexandros-Apostolos A. Boulogeorgos, Spyridon Nikolaou, Nikolaos Asimopoulos, Dimitrios D. Vergados | Adaptive Virtual Reality Streaming: A Case for TCP | 2023 | Early Access | Wireless communication Streaming media Delays Servers Throughput Protocols Fuzzy logic Virtual Reality Adaptive Streaming TCP Fuzzy logic | Virtual reality (VR) is one of the applications with the strictest performance requirements in next generation networks, since it requires high throughput, low delay, and low packet loss. As network performance and the level of congestion vary over time, a need emerges to adapt the stream’s data rate in order to maintain reasonable packet loss while using the available bandwidth. Motivated by this, in this contribution, we present an adaptation algorithm for VR applications that exploits fuzzy logic and transmission control protocol (TCP) transport in order to maintain the optimal data rate of the VR stream. In this direction, we perform a performance assessment of VR networks in network simulator 3 (ns-3), which reveals that adaptation of the data rate is indeed necessary to provide the best possible VR data at the client. Moreover, it becomes evident that TCP transport, in combination with a data rate adaptation algorithm, significantly reduces both the packet loss and the network delay while maintaining high throughput. Finally, the proposed fuzzy algorithm outperforms well-known adaptation algorithms in terms of throughput, delay, and fairness. | 10.1109/TNSM.2023.3328770 |