Last updated: 2024-02-22 04:01 UTC
Number of pages: 109
|Weiqi Liu, Mohammad Arif Hossain, Nirwan Ansari, Abbas Kiani, Tony Saboorian
|Reinforcement Learning-Based Network Slicing Scheme for Optimized UE-QoS in Future Networks
|Quality of service 5G mobile communication Resource management Artificial neural networks Ultra reliable low latency communication Network slicing Computational modeling core network network slicing communication HetNet
|An end-to-end (E2E) network slicing (NS) scheme for heterogeneous networks (HetNets) is proposed in which the number of slices and instances of various network functions (NFs) are optimized contingent on the number of users (UEs) and their quality of service (QoS) requirements. The objective of the scheme is to empower future generation networks by considering control signaling in the control plane as well as the UE traffic in the user plane of the core network (CN). We formulate a joinT UE assOciation, wiReless bandwidth allocation, sliCe formation, slice assignment, virtual network function (VNF) placement, computing resource allocation, link assignment, and link bandwidtH allocation (TORCH) problem to minimize the E2E task completion time of all UEs while considering both control signaling and UEs’ traffic. Since TORCH is a mixed-integer nonlinear problem, we decompose it into two sub-problems: the link assignment problem and the UE Association, resource allocation, Slice formation, Slice AssIgnment, and VNF pLacement (ASSAIL) problem. The ASSAIL problem spans both the CN and the radio access network (RAN); since the two do not compete for resources, we further decompose it into a RAN problem and a CN problem. We use Dijkstra’s algorithm and a deep Q-learning network (DQN) based reinforcement learning method to iteratively solve the two sub-problems. Simulation results confirm the effectiveness of our proposed scheme in tackling the TORCH problem.
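The abstract names Dijkstra's algorithm as one half of the iterative solver. As a hedged, self-contained sketch of the shortest-path step (the topology, node names, and link weights below are invented for illustration and do not come from the paper):

```python
import heapq

def dijkstra(graph, source):
    """Compute minimal path costs from `source` over a weighted digraph.

    graph: dict mapping node -> list of (neighbor, link_cost) pairs.
    Returns a dict of minimal costs; unreachable nodes are absent.
    """
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry, already relaxed via a cheaper path
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Toy CN topology: nodes are hypothetical VNF hosts, weights are
# link latencies in ms (illustrative values only).
topology = {
    "ran": [("cn1", 2), ("cn2", 5)],
    "cn1": [("cn2", 1), ("upf", 4)],
    "cn2": [("upf", 1)],
    "upf": [],
}
costs = dijkstra(topology, "ran")
```

In a link-assignment setting such costs would feed the placement of virtual links over the cheapest physical paths.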
|Anirban Lekharu, Annanya Pratap Singh Chauhan, Arijit Sur, Moumita Patra
|Reinforcement Learning Based Adaptive BitRate Caching at MEC Server
|Streaming media Servers Bit rate Quality of experience Backhaul networks Bandwidth Adaptive systems Mobile Edge Computing Adaptive Bitrate Streaming Reinforcement Learning Content Caching Content Popularity
|Mobile Edge Computing (MEC) has become an important concept in modern video communication and broadcasting scenarios to address varied user expectations in an ever-evolving network environment. Caching popular video content in MEC servers reduces network congestion by preventing redundant accesses to the origin server over backhaul links. Designing an efficient caching mechanism in the MEC server is challenging, however: to maintain a decent Quality of Experience (QoE) for end-users, diverse parameters such as content popularity and network conditions must be considered. In this work, we propose a QoE-aware Adaptive BitRate (ABR) caching mechanism at the MEC server using Reinforcement Learning (RL). The proposed model predicts the content popularity of each video and the video quality most preferred by the end-users of a base station, and an efficient caching mechanism is devised in the MEC server to provide a decent QoE to those users. The primary goal of our RL-based framework is to increase the cache hit rate and reduce the backhaul load while maintaining a satisfactory QoE. Experimental results demonstrate that the proposed model, which emphasizes video quality, video quality switching, and cache hit rate, outperforms state-of-the-art caching algorithms in terms of the overall QoE reward.
|Eyal Horowicz, Tal Shapira, Yuval Shavitt
|Self-Supervised Traffic Classification: Flow Embedding and Few-Shot Solutions
|Task analysis Transfer learning Data mining Testing Telecommunication traffic Representation learning Payloads Internet traffic classification application identification traffic security management Few-shot learning contrastive representation learning Self-supervised Learning
|Internet traffic classification has been intensively studied over the past decade due to its importance for traffic engineering and cyber security. A promising approach to several traffic classification problems is the FlowPic approach, where histograms of packet sizes in consecutive time slices are transformed into a picture that is fed into a Convolutional Neural Network (CNN) model for classification. However, CNNs (the FlowPic approach included) require a relatively large labeled flow dataset, which is not always easy to obtain. In this paper, we show that we can overcome this obstacle by using contrastive representation learning to learn, from an unlabeled flow dataset, a flow representation embedded in a latent space in which flows of the same class cluster together. We then show that by using just a few labeled flows (a few shots) from each class, we can achieve high accuracy in flow classification. Common picture augmentation techniques help, but accuracy improves further when we introduce augmentation techniques that mimic network behavior, such as changes in the round-trip time (RTT). Finally, we show that we can replace the large FlowPics suggested in the past with much smaller mini-FlowPics and gain two advantages: improved model performance and easier engineering. Interestingly, this even improves accuracy in some cases.
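As a rough illustration of the FlowPic idea described above (this is not the authors' code; the bin counts, packet sizes, and augmentation factor are invented), a flow can be turned into a 2D time-slice vs. packet-size histogram, and an RTT-style augmentation can stretch the timestamps:

```python
def flowpic(packets, duration, max_size, bins=32):
    """Build a FlowPic-style 2D histogram: packet sizes vs. time slices.

    packets: list of (timestamp, size) pairs for one flow.
    Returns a bins x bins grid of counts (rows: size bin, cols: time bin).
    """
    grid = [[0] * bins for _ in range(bins)]
    for t, s in packets:
        col = min(int(t / duration * bins), bins - 1)
        row = min(int(s / max_size * bins), bins - 1)
        grid[row][col] += 1
    return grid

def rtt_augment(packets, factor):
    """Hypothetical network-behavior augmentation: an RTT change stretches
    inter-arrival times while packet sizes stay the same."""
    return [(t * factor, s) for t, s in packets]

# A tiny invented flow: three packets (timestamp in s, size in bytes).
flow = [(0.1, 100), (0.2, 1500), (1.4, 1500)]
pic = flowpic(flow, duration=2.0, max_size=1500, bins=32)
```

Each grid is the "picture" a CNN would consume; an augmented view of the same flow keeps the same packet count but shifts its time columns.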
|Pratyush Dikshit, Mike Kosek, Nils Faulhaber, Jayasree Sengupta, Vaibhav Bajpai
|Evaluating DNS Resiliency and Responsiveness With Truncation, Fragmentation & DoTCP Fallback
|Domain Name System Resilience Probes Internet Time factors Servers IP networks DNS DNS-over-TCP DNS-over-UDP Response Time Failure Rate EDNS(0)
|Since its introduction in 1987, the DNS has become one of the core components of the Internet. While it was designed to work with both TCP and UDP, DNS-over-UDP (DoUDP) has become the default option due to its low overhead. As new Resource Records were introduced, the sizes of DNS responses increased considerably. This expansion of the message body has made truncation and IP fragmentation more frequent in recent years, and large UDP responses make DNS an easy vector for amplified denial-of-service attacks, which can reduce the resiliency of DNS services. This paper investigates the resiliency, responsiveness, and usage of DoTCP and DoUDP over IPv4 and IPv6 for 10 widely used public DNS resolvers. Specifically, it measures the resiliency of the DNS infrastructure in the age of increasing DNS response sizes that lead to truncation and fragmentation. Our results offer key insights into the management of robust and reliable DNS network services. While DNS Flag Day 2020 recommends a buffer size of 1232 bytes, we find that 3 of the 10 resolvers mainly announce very large EDNS(0) buffer sizes both at the edge and in the core, which potentially causes fragmentation. In reaction to large response sizes from authoritative name servers, resolvers in many cases do not fall back to DoTCP, bearing the risk of fragmented responses. As message sizes in the DNS are expected to grow further, this problem will become more urgent. This paper demonstrates key results (particularly consequences of DNS Flag Day 2020) that may help network service providers make informed choices to better manage their critical DNS services.
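The DoTCP fallback the paper measures hinges on the TC (truncation) bit in the DNS header. A minimal stdlib sketch of the client-side check (the headers below are hand-crafted for illustration, not real resolver traffic):

```python
import struct

def needs_tcp_fallback(response: bytes) -> bool:
    """Return True when a DNS response has the TC (truncation) bit set,
    i.e. a DoUDP client should retry the same query over DoTCP.

    The TC bit is mask 0x0200 of the 16-bit flags field that follows the
    message ID in the 12-byte DNS header (RFC 1035, section 4.1.1).
    """
    if len(response) < 12:
        raise ValueError("short DNS message")
    (flags,) = struct.unpack("!H", response[2:4])
    return bool(flags & 0x0200)

# Illustrative headers: message ID, flags, then the four section counts
# (QDCOUNT=1, rest zero). 0x8200 = response (QR) + truncated (TC).
truncated = struct.pack("!HHHHHH", 0x1234, 0x8200, 1, 0, 0, 0)
complete = struct.pack("!HHHHHH", 0x1234, 0x8000, 1, 0, 0, 0)
```

A resolver that never performs this fallback on large responses is exactly the risky behavior the measurement study flags.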
|Hussein M. Hariz, Saeed Sheikhzadeh, Nader Mokari, Mohammad R. Javan, B. Abbasi-Arand, Eduard A. Jorswieck
|AI-Based Radio Resource Management and Trajectory Design for IRS-UAV-Assisted PD-NOMA Communication
|Autonomous aerial vehicles Trajectory NOMA Resource management Industrial Internet of Things Buildings Array signal processing Unmanned aerial vehicles intelligent reflecting surface internet of things age of information trajectory design 6G non-orthogonal multiple access proximal policy optimization
|This paper proposes the use of unmanned aerial vehicles (UAVs) equipped with intelligent reflecting surfaces (IRSs) to reflect signals from the industrial internet of things (IIoT) to the destination, where power-domain non-orthogonal multiple access (PD-NOMA) is used in the uplink. The objective is to minimize the average age of information (AAoI) of users, subject to transmit power constraints and UAV movement restrictions. By optimizing transmit power, sub-carriers, the trajectory, and the phase shift matrix elements, UAV-IRS deployment in IIoT networks can improve the freshness of the data collected from IIoT devices. The resulting nonlinear integer optimization problem is NP-hard and practically difficult to solve, so we exploit a powerful reinforcement learning algorithm, proximal policy optimization (PPO). The numerical results illustrate the benefits of IRS-enabled UAV communication systems: using IRSs and the PPO algorithm, UAVs achieve better performance than baselines with a fixed IRS, random deployment, or another RL method (A2C), even under the impact of UAV jitter.
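To make the AAoI objective concrete, here is a hedged discrete-time sketch of how average age of information behaves (the delivery times, horizon, and zero-delay assumption are invented simplifications, not the paper's model):

```python
def average_aoi(delivery_times, horizon, dt=0.1):
    """Discrete-time approximation of the average age of information.

    delivery_times: times at which a fresh IIoT update reaches the
    destination; each delivery is assumed to carry zero-age data (a
    simplification ignoring generation and transmission delay).
    The age grows linearly with time and drops on every delivery.
    """
    deliveries = sorted(delivery_times)
    i, age, samples = 0, 0.0, []
    for k in range(int(horizon / dt)):
        t = k * dt
        if i < len(deliveries) and t >= deliveries[i]:
            age = t - deliveries[i]  # reset: a fresh update just landed
            i += 1
        samples.append(age)
        age += dt
    return sum(samples) / len(samples)

# With no deliveries the age grows unchecked; a better UAV trajectory
# that enables more frequent deliveries keeps the average age low.
baseline = average_aoi([], horizon=10.0)
with_updates = average_aoi([2.0, 4.0, 6.0, 8.0], horizon=10.0)
```

The optimizer in the paper effectively shapes `delivery_times` (via power, sub-carrier, trajectory, and phase-shift choices) to push this average down.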
|Markus Sosnowski, Johannes Zirngibl, Patrick Sattler, Georg Carle, Claas Grohnfeldt, Michele Russo, Daniele Sgandurra
|EFACTLS: Effective Active TLS Fingerprinting for Large-scale Server Deployment Characterization
|Servers Fingerprint recognition Protocols Behavioral sciences Probes Internet Feature extraction Active Scanning TLS Fingerprinting Server Classification Command and Control Servers
|Active measurements allow the collection of server characteristics on a large scale that can aid in discovering hidden relations and commonalities among server deployments. Finding these relations opens up new possibilities for clustering and classifying server deployments; for example, identifying a previously unknown cybercriminal infrastructure can be valuable cyber-threat intelligence. In this work, we propose a methodology based on active measurements to acquire Transport Layer Security (TLS) metadata from servers and leverage it for fingerprinting. Our fingerprints capture characteristic behavior of the TLS stack, primarily influenced by the server’s implementation, configuration, and hardware support. Using an empirical optimization strategy that maximizes the information gained from every handshake to minimize measurement costs, we generated 10 general-purpose Client Hellos that serve as scanning probes to create an extensive database of TLS configurations for classifying servers. We use Shannon entropy to quantify the collected information and compare different approaches. This study fingerprinted 8 million servers from the Tranco top list and two Command and Control (C2) blocklists over 60 weeks with weekly snapshots. The resulting data formed the foundation for two long-term case studies: classification of Content Delivery Network and C2 servers. Moreover, the detection was fine-grained enough to distinguish C2 server families. The proposed methodology demonstrated a precision of 99% and enabled a stable identification of new servers over time. This study shows how active measurements can provide valuable security-relevant insights and improve our understanding of the Internet.
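The Shannon-entropy criterion mentioned above can be sketched directly: a probe is more informative when the fingerprints it elicits split servers into more, and more evenly sized, classes. (The fingerprint labels below are invented for illustration.)

```python
from collections import Counter
from math import log2

def fingerprint_entropy(fingerprints):
    """Shannon entropy (in bits) of an observed fingerprint distribution.

    Higher entropy means the probe separates server deployments into
    more, and more evenly sized, classes, i.e. it gains more information
    per handshake.
    """
    counts = Counter(fingerprints)
    n = len(fingerprints)
    return -sum(c / n * log2(c / n) for c in counts.values())

# Hypothetical responses of 8 servers to one Client Hello probe:
# two equally sized fingerprint classes -> exactly 1 bit of information.
observed = ["stack-a"] * 4 + ["stack-b"] * 4
```

Selecting the 10 Client Hellos then amounts to greedily picking probes that maximize such an information measure while keeping scan cost fixed.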
|Xudong Tao, Xiaoyan Qian, Lei Han, Weibei Fan, Yuzhou Shi, Xinrui Zhu, Zhiyu Li, Shuwen Wei, Rui Xu
|Key Flow First Prioritized Flow Scheduling Strategy In Multi-Tenant Data Centers
|Computer networks Data centers Scheduling algorithms Heuristic algorithms Communication system traffic Scheduling Quality of service Data Center Networks Traffic Engineering Flow Scheduling Fair Queueing
|The mixed flows in multi-tenant data centers present a challenge for priority flow scheduling due to the coexistence of diverse requirements such as latency and throughput. To address this issue, we propose Key Flow First (KFF), a balanced scheduling algorithm suitable for mixed flows in multi-tenant data centers. First, KFF categorizes flows into Latency-Sensitive Flows (LS Flows) and Throughput-Demanding Flows (TD Flows) based on the Quality of Service (QoS) of their application sources. Second, it further differentiates flows into Mice Flows and Elephant Flows based on the number of bytes already sent. Third, KFF employs a Multi-Level Feedback Queue (MLFQ) threshold update algorithm and a priority-based strict forwarding mechanism. By avoiding reliance on complex flow priors, KFF consistently maintains reasonable scheduling of mixed flows under different load scenarios. Experimental results demonstrate that KFF effectively reduces the real-time load on the network and, under diverse load conditions, achieves performance close to the better of Shortest Job First (SJF) and Earliest Deadline First (EDF). Compared to PIAS, KFF reduces the FCT slowdown of deadline flows by nearly 60% under high TD loads; compared to Karuna and Time Deadline Aware pFabric (TDA-pFabric), KFF reduces the flow completion time (FCT) slowdown of non-deadline Mice Flows by over 90% under high LS loads while guaranteeing a near-zero deadline miss rate.
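The demotion-by-bytes-sent idea behind MLFQ scheduling can be sketched in a few lines (the threshold values are invented; the paper's threshold update algorithm is not reproduced here):

```python
def priority(bytes_sent, thresholds):
    """Map a flow's cumulative bytes sent to an MLFQ priority level.

    Level 0 is the highest priority (mice); a flow is demoted each time
    its cumulative bytes cross a threshold, so no prior knowledge of the
    flow's total size is required.
    thresholds: ascending byte thresholds, one per queue boundary.
    """
    for level, limit in enumerate(thresholds):
        if bytes_sent < limit:
            return level
    return len(thresholds)  # elephant: lowest-priority queue

# Hypothetical 4-queue configuration (boundaries in bytes).
th = [10_000, 100_000, 1_000_000]
```

A switch then strictly serves lower-numbered queues first, which is what lets short (mice) flows finish quickly without flow priors.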
|Chunjing Han, Bohai Guan, Tong Li, Di Kang, Jifeng Qin, Yulei Wu
|Few-Shot Log Anomaly Detection Based On Matching Networks
|Anomaly detection Feature extraction Adaptation models Computational modeling Bidirectional control Data models Behavioral sciences few-shot log anomaly detection bert post-training
|To address log anomaly detection in scenarios with limited labeled log data, this paper proposes Log-MatchNet, a novel few-shot log anomaly detection method. To tackle issues such as unstructured log data, diversity, and evolution over time, we employ structured processing and log parsing to convert log content and template IDs into vectors, with feature extraction performed by a BERT model. Additionally, by integrating multiple datasets and post-training the BERT model for domain adaptation, we obtain BERT_Post, a module with universal feature extraction capabilities in the log domain. Compared to BERT-base and CyBERT, our method demonstrates superior log anomaly detection performance, especially with limited labeled data: with only 2 annotated normal logs and 2 annotated abnormal logs, BERT_Post achieves a remarkable 16.14% increase in F1-score. To address the challenge of imbalanced data, we introduce a matching network that learns similarity scores between input and prototype vectors, showing strong generalization with an average accuracy of 99.6%. In few-shot scenarios, Log-MatchNet outperforms traditional methods and the Proto-Siamese network in terms of F1-score. In an unstable log evolution environment, our method is robust against noisy data, achieving an F1-score of 81.2% even with 20% injected noise; compared to LogAnMeta, it yields a 31.71% increase in F1-score. Experimental results demonstrate the effectiveness of Log-MatchNet in detecting anomalies with limited labeled log data and its robust performance in log evolution scenarios.
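The prototype-matching step common to such few-shot methods can be sketched as follows (a generic cosine-similarity matcher with invented 2-d embeddings; the paper's matching network learns its similarity function rather than fixing it):

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sqrt(sum(a * a for a in u))
    nv = sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def classify(embedding, prototypes):
    """Assign a log embedding to the class of its most similar prototype.

    prototypes: dict label -> prototype vector, e.g. the mean of the few
    labeled support embeddings available for that class.
    """
    return max(prototypes, key=lambda lbl: cosine(embedding, prototypes[lbl]))

# Toy prototypes built from a handful of labeled logs (invented values).
protos = {"normal": [1.0, 0.1], "anomaly": [0.1, 1.0]}
```

With only a few labeled logs per class, updating the prototypes is cheap, which is what makes the approach attractive in few-shot settings.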
|Pingping Dong, Xiaojuan Lu, Tairan Huang, Liying Chen, Yang Yang, Lianming Zhang
|Predictive Queue-Based Rate Control for Low Latency in Lossless Data Center Networks
|Throughput Data centers Switches Delays Topology Packet loss Low latency communication lossless data center network congestion control priority-based flow control (PFC) egress queue
|In lossless data center networks (DCNs), many existing congestion control schemes attempt to address problems caused by priority-based flow control (PFC), such as congestion spreading and victim flows; in some cases, however, these problems remain unsolved. By examining the interaction between flow control and congestion control, we find that the mismatch between hop-by-hop flow control and end-to-end congestion feedback, together with inaccurate rate regulation, is the root cause of frequent PFC triggering. We therefore propose Egress Queue Congestion Information Notification (EQCIN). EQCIN implements threshold-based flow identification to avoid mistaking packet buildup caused by congestion spreading for the root cause of congestion, while using direct feedback from the congestion point to reduce unnecessary link loss. For different flow identifiers, EQCIN adopts different algorithms to achieve targeted rate control. Experimental results show that EQCIN reduces the number of PFC PAUSE frames to nearly zero and, compared to TIMELY, DCQCN, and DCQCN+TCD, improves link utilization by 7%-77%, respectively.
|Meng Yue, Qingxin Yan, Zichao Lu, Zhijun Wu
|CCS: A Cross-Plane Collaboration Strategy to Defend Against LDoS Attacks in SDN
|Control systems Switches Feature extraction Collaboration Behavioral sciences Protocols Telecommunication traffic Software-Defined Networking low-rate denial of service attacks cross-plane collaboration
|Software-Defined Networking (SDN) actualizes the separation of control and forwarding, innovates network functionality with a logically centralized controller, and facilitates network-wide collaboration. Contemporary SDN infrastructure, however, exposes potential bottlenecks that are prone to low-rate denial of service (LDoS) attacks. Many current detection methods are deployed in the controller, which must then poll the switches frequently, placing a heavy load on the controller and the southbound link. Based on an analysis of existing research, we focus on reducing the controller’s frequent polling while improving the detection rate. In this paper, we adopt the idea of cross-plane collaboration and propose a two-phase detection framework that carries out lightweight detection in the data plane and in-depth detection, based on a Bayesian voting mechanism, in the control plane. Once LDoS attacks are detected, the controller recalculates routes for the bottleneck nodes using an optimized Dijkstra algorithm to complete mitigation. Theoretical analyses and extensive experiments validate the performance of our proposed method. Test results show that our method outperforms traditional methods, achieving a detection rate of 99.1%, a detection delay of 1.3 s, and a communication overhead of 1068 bytes/s, while the controller’s average CPU utilization remains at approximately 3.5%. The proposed method takes a step forward in enhancing the security of SDN.
|Bing Shi, Zhifeng Chen, Zhuohan Xu
|A Deep Reinforcement Learning Based Approach for Optimizing Trajectory and Frequency in Energy Constrained Multi-UAV Assisted MEC System
|Autonomous aerial vehicles Task analysis Trajectory Optimization Servers Computer architecture Computational modeling Mobile Edge Computing Unmanned Aerial Vehicle Multi-Agent Deep Reinforcement Learning
|Mobile Edge Computing (MEC) is a technology that shows great promise in enhancing the computational power of smart devices (SDs) in the Internet of Things (IoT). However, the fixed location and limited coverage of MEC servers constrain their performance. To overcome this issue, this paper explores a multiple unmanned aerial vehicle (UAV) assisted MEC system, considering a scenario where multiple UAVs work together to provide computing services while dynamically adjusting their frequency based on the task size, under the constraint of limited energy. This paper aims to maximize computation bits, SDs’ fairness, and UAVs’ load balancing in the multi-UAV assisted MEC system by jointly optimizing trajectory and frequency. To address this challenge, we model it as a Partially Observable Markov Decision Process and propose a joint optimization strategy based on multi-agent deep reinforcement learning. The effectiveness of the proposed strategy is evaluated on both synthetic and realistic datasets, and the results demonstrate that our strategy outperforms other benchmark strategies.
|Nilesh Kumar Jadav, Sudeep Tanwar
|Whale Optimization-Based Access Control Scheme in D2D Communication Underlaying Cellular Networks
|Device-to-device communication Resource management Copper Interference Access control Throughput Signal to noise ratio Whale Optimization Algorithm Evolutionary algorithm D2D communication Optimization Meta heuristic algorithm Munkres algorithm
|Integration of device-to-device (D2D) communication into cellular networks has gained significant attention as a means to enhance their capacity, coverage, and performance. Despite these advantages, D2D communication encounters various challenges, such as high interference, resource allocation, energy efficiency, and security. In this paper, we investigate the resource allocation problem in D2D communication underlaying cellular networks. Existing resource allocation schemes (e.g., game-theoretic and graph-theoretic approaches) offer no access control mechanism, which makes them computationally intensive and unable to converge to a globally optimal solution. Toward this goal, we propose a whale optimization algorithm (WOA)-based access control scheme to enhance the performance of resource allocation in D2D communication. In WOA, we create a signal-to-interference-plus-noise ratio (SINR)-based objective function that iteratively discovers the best D2D users, allowing them to participate in the resource allocation process. For resource allocation, we adopt the Munkres algorithm, which allows only the optimized D2D users (from WOA) to reuse the resources of cellular users (CUs). In the proposed work, WOA acts as an access control scheme that optimally finds the best D2D users and only allows them to reuse cellular resources in the Munkres resource assignment problem. Simulation results show that the proposed scheme significantly improves the system’s throughput compared to existing algorithms. Moreover, other evaluation parameters, such as convergence rate, fairness, WOA update positions, and execution time, confirm the superiority of the proposed scheme.
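The Munkres resource assignment step can be illustrated on a toy instance. The brute-force search below is only for exposition (the rate matrix is invented); real systems use the polynomial-time Munkres (Hungarian) algorithm, which finds the same optimum:

```python
from itertools import permutations

def best_assignment(rate_matrix):
    """Exhaustively solve a square CU-reuse assignment for small instances.

    rate_matrix[i][j]: achievable rate if D2D pair i reuses the resource
    of cellular user j. Returns (best_total_rate, assignment), where
    assignment[i] is the CU index given to D2D pair i. Brute force is
    O(n!), so this is illustrative only; Munkres achieves O(n^3).
    """
    n = len(rate_matrix)
    return max(
        ((sum(rate_matrix[i][p[i]] for i in range(n)), p)
         for p in permutations(range(n))),
        key=lambda x: x[0],
    )

# Invented rates (e.g. in Mbps) for 3 WOA-admitted D2D pairs and 3 CUs.
rates = [[5, 1, 2],
         [2, 6, 1],
         [1, 2, 7]]
total, assign = best_assignment(rates)
```

In the proposed scheme, only the D2D users admitted by the WOA access control step would appear as rows of such a matrix.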
|Ashutosh Balakrishnan, Swades De, Li-Chun Wang
|CASE: A Joint Traffic and Energy Optimization Framework Toward Grid Connected Green Future Networks
|Quality of service Computer aided software engineering Costs Green products Energy efficiency Nonhomogeneous media Load management Dual powered cellular network green communication network services coverage adjustment energy sharing operator profit energy sustainability
|Renewable power provisioning of base stations (BSs), in addition to traditional power grid connectivity, presents an interesting prospect for realizing green future network services. Designing such dual-powered systems is challenging due to the space-time varying stochasticity in the traffic and green energy harvest at each BS. These traffic and green energy imbalances result in non-optimal use of the network’s green energy and thus a higher grid energy purchase cost for the mobile operator. In this paper, we present a novel coverage adjustment and sharing of energy (CASE) framework that exploits the imbalances in user traffic load and green energy availability across the networked BSs to maximize operator profit and design an energy-sustainable system. The profit maximization problem is formulated considering that the networked BSs have the flexibility of load-aware coverage adjustment and can share green energy among themselves, in addition to trading energy with the grid. The proposed CASE framework first leverages the spatio-temporal traffic and energy inhomogeneities and performs load management to maximize user quality of service (QoS). The CASE strategy then distributes the residual energy imbalance across the BSs and maximizes the utilization of the temporal green energy harvest across the BSs. The proposed strategy is compared with coverage adjustment only, sharing of energy only, and a benchmark framework without CASE. Our simulation results indicate significant improvements in user QoS and operator profit, up to 18% and 39% respectively in the high-skewness scenario, in addition to fully utilizing the green energy potential in the network.
|Roberto Martínez, Pedro Reviriego, David Larrabeiti
|Supporting Dynamic Insertions in Xor and Binary Fuse Filters With the Integrated XOR/BIF-Bloom Filter
|Fingerprint recognition Matched filters Fuses Memory management Proposals Hash functions Malware Bloom filter xor filter binary fuse filter membership queries dynamic insertions
|Approximate membership check filters are widely used in networking applications to resolve membership queries at high speed with a low memory cost. Due to their extensive use, many filter types have been proposed. Two recent and interesting alternatives are the xor filter and the binary fuse filter, which in certain configurations have among the lowest false positive rates and are faster and use less memory than other filters. However, one of the main drawbacks of xor and binary fuse filters is that keys cannot be added once the filter has been built, which limits their use in many network-related applications where keys have to be added dynamically. This paper presents the Integrated xor-Bloom filter (IXOR) and the Integrated binary fuse-Bloom filter (IBIF); both schemes allow dynamic insertions in xor and binary fuse filters without reconstructing them. The schemes have been implemented and evaluated, showing that a large number of dynamic insertions can be supported with a limited memory overhead and a small impact on the false positive probability and lookup speed. Therefore, the proposed filters can bring the benefits of xor and binary fuse filters to networking applications that need to support dynamic insertions.
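One way to picture the "integrated" idea (a sketch of a plausible design, not the paper's exact construction) is a small dynamic Bloom filter sitting beside the static xor/binary fuse filter: a query is positive if either structure reports the key, and keys added after the static build go only into the Bloom side, which never yields false negatives.

```python
from hashlib import blake2b

class BloomFilter:
    """Minimal Bloom filter to absorb keys inserted after a static
    xor/binary fuse filter has been built (illustrative sketch).

    Sizes and hash count are arbitrary illustration values; a real
    deployment would dimension them for the expected insertion rate.
    """
    def __init__(self, m_bits=1024, k_hashes=4):
        self.m, self.k = m_bits, k_hashes
        self.bits = 0  # bit array packed into one big int

    def _positions(self, key: bytes):
        # Derive k independent positions by salting a keyed hash.
        for i in range(self.k):
            h = blake2b(key, salt=i.to_bytes(8, "little")).digest()
            yield int.from_bytes(h[:8], "little") % self.m

    def add(self, key: bytes):
        for p in self._positions(key):
            self.bits |= 1 << p

    def __contains__(self, key: bytes):
        # All k bits set -> "probably present" (no false negatives).
        return all(self.bits >> p & 1 for p in self._positions(key))

bf = BloomFilter()
bf.add(b"key-added-after-build")
```

The trade-off the paper quantifies is exactly the extra memory and added false-positive probability that such a dynamic side structure introduces.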
|Sam Maesschalck, Will Fantom, Vasileios Giotsas, Nicholas Race
|These Aren’t the PLCs You’re Looking For: Obfuscating PLCs to Mimic Honeypots
|Security Integrated circuits Industrial control Protocols Monitoring Software defined networking Control systems Industrial Control Systems ICS Programmable Logic Controllers PLC Honeypots Security Software-Defined Networking
|Industry 4.0 and the trend of connecting legacy Industrial Control Systems (ICSs) to public networks have exposed these systems to various online threats. To combat these threats, honeypots have been widely used to provide proactive monitoring, detection, and deception capabilities. However, skilled attackers are now adept at fingerprinting and avoiding honeypots. We therefore take a fundamentally different approach in this paper: instead of deploying a honeypot that represents a real system, we deploy it as a deterrent. Through obfuscation, the aim is to make an attacker believe the real system is a honeypot, and to collect threat intelligence data on the attacker. To achieve this, we introduce a new obfuscation technique that allows real ICSs to present themselves as honeypots. By taking advantage of honeypot fingerprinting techniques, we are able to deter attackers from interacting with the real Programmable Logic Controller (PLC) within the industrial network. The approach is implemented and evaluated using different penetration testing tools and an expert evaluation, highlighting the benefit of obfuscation: potential adversaries would be misled into assuming the PLC is a honeypot.
|Ziyi Teng, Juan Fang, Yaqi Liu
|Combining Lyapunov Optimization and Deep Reinforcement Learning for D2D Assisted Heterogeneous Collaborative Edge Caching
|Optimization Device-to-device communication Collaboration Wireless communication Reinforcement learning Costs Energy consumption Edge cache content sharing device-to-device communication deep reinforcement learning Lyapunov optimization
|The problem of shared node selection and cache placement in wireless networks is challenging due to the difficulty of finding low-complexity optimal solutions. This paper proposes a new approach combining Lyapunov optimization and reinforcement learning (LoRL) to address content sharing in heterogeneous mobile edge computing (MEC) networks with base station (BS) and device-to-device (D2D) communication. Devices in this network can either establish D2D links with neighboring devices for content sharing or request content directly from the base station. Content access and the energy consumption of shared nodes are modeled as a queuing system. The goal is to assign content sharing nodes so as to stabilize all queues while maximizing the D2D sharing gain and minimizing latency, even when the network state distribution and user sharing costs are unknown. The proposed approach enables each edge device to independently select associated nodes and make caching decisions, thereby minimizing the time-averaged network cost and stabilizing the queuing system. Experimental results show that the proposed algorithm converges to the optimal policy and outperforms other policies in terms of the total queue backlog trade-off and network cost.
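The queueing and drift-plus-penalty machinery behind Lyapunov optimization can be sketched generically (the arrival, service, and cost values below are invented; the paper's exact queue definitions and penalty are not reproduced):

```python
def step(queue, arrivals, service):
    """One slot of the standard queueing dynamics used in Lyapunov
    optimization: Q(t+1) = max(Q(t) - b(t), 0) + a(t), per node."""
    return [max(q - b, 0.0) + a for q, a, b in zip(queue, arrivals, service)]

def drift_plus_penalty(queue, arrivals, service, cost, V=10.0):
    """Score an action by V * cost plus the queue-weighted net load.

    Lower is better; the parameter V trades off cost minimization
    against queue stability (larger V favors lower cost, at the price
    of larger backlogs).
    """
    return V * cost + sum(q * (a - b) for q, a, b in zip(queue, arrivals, service))

# Two shared nodes with invented backlogs, arrivals, and service rates.
q = [3.0, 0.5]
q_next = step(q, arrivals=[1.0, 2.0], service=[2.0, 1.0])
```

An RL agent in such a framework would pick, each slot, the sharing/caching action minimizing this drift-plus-penalty bound.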
|Gustavo F. Camilo, Gabriel Antonio F. Rebello, Lucas Airam C. de Souza, Miguel Elias M. Campista, Luís Henrique M. K. Costa
|ProfitPilot: Enabling Rebalancing in Payment Channel Networks Through Profitable Cycle Creation
|Lightning Network topology Topology Blockchains Bitcoin Routing Robustness blockchain payment channel networks Lightning Network
|Payment Channel Networks (PCNs) have successfully replaced slow global consensus mechanisms with local cryptographic agreements between nodes. As PCN payments heavily depend on network topology for payment routing, strategic node positioning is critical to building cost-effective channels for users and enhancing network robustness against topological attacks. Nevertheless, existing node attachment strategies in the Lightning Network (LN), the most popular PCN, ignore crucial topology issues, such as network centralization and the scarcity of cycles for cheap off-chain rebalancing. In this paper, we first investigate the current state of the LN topology and show that the availability of topology cycles is highly unequal in the network, which exposes the network to several vulnerabilities. Then, we design ProfitPilot, a node positioning strategy that encourages cycle creation in PCNs to reverse the trend in centralization and enable cheap off-chain rebalancing. We compare our proposed algorithm with heuristics available in the Lightning Network and verify that even by focusing on creating cycles, ProfitPilot successfully increases the user’s probability of collecting fees by over 2× while reducing average paying fees. Furthermore, out of all the evaluated heuristics, ProfitPilot presents the fastest increase in network transitivity and mitigates the impact of targeted topological attacks by over 17% compared with the regular Lightning Network operation.
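Since off-chain rebalancing relies on routing a payment from a node back to itself, the underlying graph question is finding a directed cycle through that node. A hedged stdlib sketch (toy channel graph; ProfitPilot's actual fee-aware cycle creation is not reproduced here):

```python
def find_cycle(channels, start):
    """Depth-first search for a directed cycle through `start` in a PCN.

    channels: dict node -> list of neighbors reachable over an open
    payment channel. Returns one cycle as a node list ending back at
    `start`, or None. Cycles of at least three distinct nodes are
    required, since a two-node loop is just the same channel pair.
    """
    stack = [(start, [start])]
    while stack:
        node, path = stack.pop()
        for nxt in channels.get(node, []):
            if nxt == start and len(path) > 2:
                return path + [start]
            if nxt not in path:
                stack.append((nxt, path + [nxt]))
    return None

# Toy topology: a-b-c form a cycle; d only feeds into a.
pcn = {"a": ["b"], "b": ["c"], "c": ["a"], "d": ["a"]}
cycle = find_cycle(pcn, "a")
```

A node with no such cycle, as the paper observes is common in today's LN, cannot rebalance cheaply off-chain.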
|Nasrin Akhter, Redowan Mahmud, Jiong Jin, Jason But, Iftekhar Ahmad, Yong Xiang
|Configurable Harris Hawks Optimisation for Application Placement in Space-Air-Ground Integrated Networks
|Space-air-ground integrated networks Resource management Computer architecture Satellites Delays Computational modeling Australia Application placement Harris Hawks optimisation (HHO) Space-Air-Ground Integrated Network (SAGIN)
|Space-Air-Ground Integrated Network (SAGIN) has recently emerged as a viable solution for reliable transmission, high data rates, and seamless connectivity with extensive coverage. However, the characteristics of the computation and communication devices located at various levels of SAGIN make application placement within such environments a challenging task. Real-time service expectations and resource requirements of applications further intensify this issue, and push the domain to operate beyond its capacity, resulting in uneven delays and significant overhead. Taking these constraints into account, SAGIN’s application placement problem can be expressed as a multiobjective optimisation problem. This paper aims to solve such a problem using a Dynamic Weight-configurable Harris Hawks Optimisation (DW-HHO) algorithm, considering diverse application contexts such as deadlines, resource usage and the number of application activities. It simultaneously minimises application total service time and host resource overhead with a robust global search. The performance of the proposed solution is compared with benchmark metaheuristic solutions such as PSO, NSGA-II, Greedy and Random. Experimental results demonstrate that DW-HHO outperforms other benchmark metaheuristic solutions in optimising resource utilisation and service delivery time of applications in SAGIN environments. The proposed DW-HHO demonstrates notable improvements over existing methods. Specifically, when evaluating the total service time for PSO, NSGA-II, Greedy, and Random, DW-HHO outperforms these methods by 7.28%, 9.07%, 13.01%, and 14.97%, respectively.
|Debbarni Sarkar, Yogita, Satyendra Singh Yadav, Vipin Pal, Neeraj Kumar, Sarat Kumar Patra
|A Comprehensive Survey On IRS-Assisted NOMA-Based 6G Wireless Network: Design Perspectives, Challenges and Future Directions
|NOMA Wireless communication Resource management Array signal processing Throughput 6G mobile communication 5G mobile communication Intelligent reflecting surfaces non-orthogonal multiple access orthogonal multiple access reflecting units orthogonal frequency division multiplexing multiple input and multiple output sixth-generation
|The propagation environment was uncontrollable in first-generation to fifth-generation (5G) wireless technologies. This behavior of the wireless propagation environment is one of the prime constraints on harnessing the performance of wireless networks. This problem can be addressed in sixth-generation (6G) wireless networks by deploying intelligent reflecting surfaces (IRSs). The amplitude and phase reflection coefficients of an IRS’s reflecting units (RUs) can be adjusted via a programmable controller to meet the network requirements. On the other hand, in 5G and 6G wireless communication networks, non-orthogonal multiple access (NOMA) is a robust and widely admired multiple access scheme among its counterparts in terms of spectrum efficiency and link capacity. NOMA serves many user equipments (UEs) simultaneously by distributing resources non-orthogonally. Therefore, the combination of IRS and NOMA is one of the dominant technologies for 6G wireless networks. Given the importance of NOMA and IRS in the initial development of 6G wireless networks, this paper presents a comprehensive survey on IRS-assisted NOMA-based networks, considering their designs and challenges. In this work, the concept and structure of IRS-assisted NOMA are explained with an in-depth analysis of the frameworks. It also covers key challenges of IRS-assisted NOMA in wireless communication networks. Further, applications and future research directions of IRS-assisted NOMA networks are discussed.
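The two mechanisms this survey combines can be illustrated numerically. A minimal sketch, assuming ideal phase alignment at the IRS and perfect successive interference cancellation (SIC) at the near user; the channel values, power split, and function names are all illustrative:

```python
import math

def irs_aligned_gain(direct, cascades):
    """Effective channel magnitude when each IRS reflecting unit's phase
    shift co-phases its cascaded (BS-IRS-user) path with the direct path,
    so the path magnitudes add coherently."""
    return abs(direct) + sum(abs(g1) * abs(g2) for g1, g2 in cascades)

def noma_rates(h_near, h_far, p_total, alpha_far, noise):
    """Two-user downlink power-domain NOMA spectral efficiencies (bps/Hz).
    The far user decodes its signal treating the near user's as noise;
    the near user first cancels the far user's signal via SIC."""
    p_far, p_near = alpha_far * p_total, (1 - alpha_far) * p_total
    sinr_far = (h_far ** 2) * p_far / ((h_far ** 2) * p_near + noise)
    sinr_near = (h_near ** 2) * p_near / noise  # after ideal SIC
    return math.log2(1 + sinr_near), math.log2(1 + sinr_far)

# 16 reflecting units strengthen both users' effective channels
h_near = irs_aligned_gain(0.05, [(0.1, 0.1)] * 16)   # 0.05 + 16*0.01  = 0.21
h_far = irs_aligned_gain(0.02, [(0.05, 0.05)] * 16)  # 0.02 + 16*0.0025 = 0.06
r_near, r_far = noma_rates(h_near, h_far, p_total=1.0, alpha_far=0.8, noise=1e-3)
```

The sketch shows the coupling the survey studies: the IRS phase configuration sets the effective channel gains, which in turn determine the NOMA power allocation and SIC decoding order.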
|Ayman Younis, Sumit Maheshwari, Dario Pompili
|Energy-Latency Computation Offloading and Approximate Computing in Mobile-Edge Computing Networks
|Task analysis Servers Mobile handsets Approximate computing Real-time systems 5G mobile communication Performance evaluation Mobile Edge Computing Resource Allocation Testbed Computer-vision applications Convex Optimizations
|Task offloading with Mobile-Edge Computing (MEC) is envisioned as a promising technique to prolong battery lifetime and enhance the computational capacity of mobile devices. In this paper, we consider a multi-user MEC system with a Base Station (BS) equipped with a computation server that assists users in executing computation-intensive tasks via offloading. Exploiting approximate computing in MEC, we can trade off output accuracy by offloading a subset of the data instead of the entire dataset. We formulate the Energy-Latency-aware Task Offloading and Approximate Computing (ETORS) problem, aiming to optimize the trade-off between energy consumption and latency. Due to the mixed-integer nature of this problem, we employ the Dual-Decomposition Method (DDM) to decompose the original problem into three subproblems, namely Task-Offloading Decision (TOD), CPU Frequency Scaling (CFS), and Quality of Computation Control (QoCC). Our approach consists of two iterative layers: in the outer layer, we adopt the duality technique to find the optimal value of the Lagrangian multiplier associated with the primal problem; in the inner layer, we solve the resulting subproblems efficiently using convex optimization techniques. Simulation results, coupled with real-time experiments on a small-scale MEC testbed, show the effectiveness of our proposed resource allocation scheme and its advantages over existing approaches.
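The two-layer dual-decomposition structure described in this abstract can be sketched in miniature. A minimal example, assuming binary offload decisions and a single shared server-CPU constraint priced by one Lagrange multiplier; the cost model and all numbers are illustrative, not the paper's ETORS formulation:

```python
def dual_offloading(users, capacity, iters=200, step=0.05):
    """Dual-decomposition sketch for binary task offloading.
    users: list of (local_cost, offload_cost, cpu_demand) tuples, where each
    cost is a weighted energy+latency objective. lam prices server CPU."""
    lam = 0.0
    decisions = []
    for _ in range(iters):
        # inner layer: each user solves its own subproblem given the price lam
        decisions = [offload_cost + lam * demand < local_cost
                     for local_cost, offload_cost, demand in users]
        used = sum(d for (_, _, d), off in zip(users, decisions) if off)
        # outer layer: projected subgradient step on the Lagrange multiplier
        lam = max(0.0, lam + step * (used - capacity))
    return decisions, lam

# three users: (local_cost, offload_cost, cpu_demand)
users = [(5.0, 1.0, 2.0), (4.0, 1.5, 3.0), (3.0, 2.5, 4.0)]
decisions, lam = dual_offloading(users, capacity=5.0)
```

Raising the price lam until offloaded demand fits the server capacity mirrors the abstract's outer duality loop, while each user's per-iteration comparison is a trivial stand-in for the convex TOD/CFS/QoCC subproblems.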