Deploying AI on Edge: Advancement and Challenges in Edge Intelligence
Tianyu Wang, Jinyang Guo, Bowen Zhang

et al.

Mathematics, Journal Year: 2025, Volume and Issue: 13(11), P. 1878 - 1878

Published: June 4, 2025

In recent years, artificial intelligence (AI) has achieved significant progress and remarkable advancements across various disciplines, including biology, computer science, and industry. However, the increasing complexity of AI network structures and the vast number of associated parameters impose substantial computational and storage demands, severely limiting the practical deployment of these models on resource-constrained edge devices. Although many methods have been proposed to alleviate these burdens, they still face multiple persistent challenges, such as large-scale model deployment, poor interpretability, privacy and security vulnerabilities, and energy efficiency constraints. This article systematically reviews the current state of edge intelligence technologies, highlights key enabling techniques such as sparsity, quantization, knowledge distillation, neural architecture search, and federated learning, and explores their applications in the industrial, automotive, healthcare, and consumer domains. Furthermore, this paper presents a comparative analysis of these techniques, summarizes major trade-offs, and proposes decision frameworks to guide deployment strategies under different scenarios. Finally, it discusses future research directions to address remaining technical bottlenecks and promote the sustainable development of edge intelligence. Standing at the threshold of an exciting new era, we believe edge intelligence will play an increasingly critical role in transforming industries and enabling ubiquitous intelligent services.
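
To make one of these enabling techniques concrete, the following is a minimal, illustrative sketch of symmetric 8-bit post-training quantization in NumPy; it is not taken from the surveyed paper, and the layer shape and scale rule are assumptions chosen for demonstration.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric per-tensor post-training quantization to int8.

    Returns the quantized weights and the scale needed to dequantize.
    """
    scale = np.max(np.abs(weights)) / 127.0          # map the largest magnitude to 127
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

# Toy example: a small "layer" of float32 weights.
rng = np.random.default_rng(0)
w = rng.normal(scale=0.05, size=(64, 64)).astype(np.float32)
q, s = quantize_int8(w)
err = np.mean(np.abs(w - dequantize(q, s)))
print(f"int8 storage: {q.nbytes} B vs float32: {w.nbytes} B, mean abs error {err:.5f}")
```

The 4x storage reduction with small reconstruction error is the basic trade-off that quantization-based edge deployment exploits.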

Language: English

Edge AI: A survey
Raghubir Singh, Sukhpal Singh Gill

Internet of Things and Cyber-Physical Systems, Journal Year: 2023, Volume and Issue: 3, P. 71 - 92

Published: Jan. 1, 2023

Artificial Intelligence (AI) at the edge is the utilization of AI in real-world devices. Edge computing refers to the practice of doing computations near the users, at the network's edge, instead of at a centralised location such as a cloud service provider's data centre. With the latest innovations in AI efficiency, the proliferation of Internet of Things (IoT) devices, and the rise of edge computing, the potential of edge AI has now been unlocked. This study provides a thorough analysis of the approaches and capabilities as they pertain to edge computing or AI. Further, a detailed survey of edge computing and its paradigms, including the transition to Edge AI, is presented to explore the background of each variant proposed for implementing Edge Computing. Furthermore, we discuss the approach of deploying AI algorithms and models on edge devices, which are typically resource-constrained devices located close to the network edge. We also examine the technology used in various modern IoT applications, such as autonomous vehicles, smart homes, industrial automation, healthcare, and surveillance. Moreover, a discussion of leveraging machine learning in optimized edge environments is presented. Finally, important open challenges and research directions in the field are identified and investigated. We hope that this article will serve as a common goal and future blueprint that unites stakeholders and facilitates and accelerates the development of Edge AI.

Language: English

Citations

167

Anomaly Traffic Detection Based on Communication-Efficient Federated Learning in Space-Air-Ground Integration Network
Haitao Xu, Shuying Han, Xuhui Li

et al.

IEEE Transactions on Wireless Communications, Journal Year: 2023, Volume and Issue: 22(12), P. 9346 - 9360

Published: May 1, 2023

In this paper, we study the architectures of the space-air-ground integration network (SAGIN) proposed by domestic scientific research institutes and put forward a collaborative federated learning architecture suitable for SAGIN to solve the problems of insecurity and low timeliness caused by traffic backhaul. An anomaly traffic detection method is designed based on the requirements and characteristics of SAGIN. The problem that it is difficult to manually label data and extract features is solved through the improvement of a deep learning algorithm. The challenge of lacking professionals to label the training set is addressed by studying semi-supervision, and artificial feature engineering is replaced with end-to-end learning. Finally, we design a simulation environment for SAGIN and verify the feasibility and advanced nature of the proposed methods.
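
As a rough illustration of how communication-efficient federated learning can cut backhaul traffic, the sketch below shows top-k sparsification of a client's model update before upload; this is a generic technique, not necessarily the mechanism used in this paper, and the update size and k ratio are assumed values.

```python
import numpy as np

def topk_sparsify(update: np.ndarray, k_ratio: float = 0.01):
    """Keep only the k largest-magnitude entries of a model update.

    Returns (indices, values) so a client can upload a sparse delta
    instead of the full dense update.
    """
    flat = update.ravel()
    k = max(1, int(k_ratio * flat.size))
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    return idx, flat[idx]

def apply_sparse_update(weights: np.ndarray, idx, vals, lr: float = 1.0):
    """Server-side: add the sparse delta back into the global model."""
    flat = weights.ravel()
    flat[idx] += lr * vals
    return flat.reshape(weights.shape)

# Toy demonstration of the bandwidth saving.
rng = np.random.default_rng(1)
delta = rng.normal(size=(256, 256))
idx, vals = topk_sparsify(delta, k_ratio=0.01)
global_w = apply_sparse_update(np.zeros_like(delta), idx, vals)
print(f"upload {vals.size} of {delta.size} values "
      f"({100 * vals.size / delta.size:.1f}% of the dense update)")
```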

Language: English

Citations

65

Collaborative and privacy-preserving retired battery sorting for profitable direct recycling via federated machine learning
Shengyu Tao, Haizhou Liu, Chongbo Sun

et al.

Nature Communications, Journal Year: 2023, Volume and Issue: 14(1)

Published: Dec. 5, 2023

Unsorted retired batteries with varied cathode materials hinder the adoption of direct recycling due to their cathode-specific nature. The surge in retired batteries necessitates precise sorting for effective recycling, but challenges arise from varying operational histories, diverse manufacturers, and the data privacy concerns of collaborators (data owners). Here we show, on a unique dataset of 130 retired lithium-ion batteries spanning 5 cathode materials and 7 manufacturers, that a federated machine learning approach can classify these batteries without relying on past operational data, safeguarding the data privacy of collaborators. By utilizing features extracted from the end-of-life charge-discharge cycle, our model exhibits 1% and 3% sorting errors under homogeneous and heterogeneous battery settings respectively, attributed to an innovative Wasserstein-distance voting strategy. Economically, the proposed method underscores the value of precise sorting for a prosperous and sustainable direct recycling industry. This study heralds a new paradigm of using data from privacy-sensitive sources, facilitating collaborative and privacy-respecting decision-making in distributed systems.
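
The following toy sketch illustrates the general idea of Wasserstein-distance-based voting for classification; the cathode class names, feature values, and reference distributions are hypothetical and are not drawn from the paper's dataset.

```python
import numpy as np

def wasserstein_1d(a: np.ndarray, b: np.ndarray, n_quantiles: int = 100) -> float:
    """Empirical 1-D Wasserstein-1 distance via quantile functions."""
    qs = np.linspace(0.0, 1.0, n_quantiles)
    return float(np.mean(np.abs(np.quantile(a, qs) - np.quantile(b, qs))))

def vote_class(sample_feature: np.ndarray, class_references: dict) -> str:
    """Assign the class whose reference feature distribution is closest
    (smallest Wasserstein distance) to the measured feature samples."""
    dists = {c: wasserstein_1d(sample_feature, ref) for c, ref in class_references.items()}
    return min(dists, key=dists.get)

# Toy example: two hypothetical cathode chemistries with different voltage features.
rng = np.random.default_rng(0)
refs = {"LFP": rng.normal(3.2, 0.05, 500), "NMC": rng.normal(3.7, 0.08, 500)}
unknown = rng.normal(3.68, 0.07, 200)   # measurements from one retired cell
print("predicted cathode class:", vote_class(unknown, refs))
```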

Language: English

Citations

47

Spike-based dynamic computing with asynchronous sensing-computing neuromorphic chip
Man Yao, Ole Richter, Guangshe Zhao

et al.

Nature Communications, Journal Year: 2024, Volume and Issue: 15(1)

Published: May 25, 2024

By mimicking the neurons and synapses of the human brain and employing spiking neural networks on neuromorphic chips, neuromorphic computing offers a promising energy-efficient approach to machine intelligence. How to borrow high-level brain dynamic mechanisms to help achieve further energy advantages is a fundamental issue. This work presents an application-oriented algorithm-software-hardware co-designed system for this purpose. First, we design and fabricate an asynchronous chip called “Speck”, a sensing-computing neuromorphic chip. With a low processor resting power of 0.42 mW, Speck can satisfy the hardware requirement of dynamic computing: no input consumes no energy. Second, we uncover the “dynamic imbalance” in spiking neural networks and develop an attention-based framework for achieving the algorithmic requirement of dynamic computing: varied inputs consume energy with large variance. Together, we demonstrate a system with real-time power as low as 0.70 mW. This work exhibits the potential of neuromorphic computing with its event-driven, sparse, and dynamic nature.
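
For readers unfamiliar with spiking neurons, here is a minimal leaky integrate-and-fire sketch illustrating the event-driven property that no input produces no spikes; the time constants and drive currents are arbitrary, and the code does not model the Speck chip itself.

```python
import numpy as np

def lif_neuron(input_current: np.ndarray, tau: float = 20.0,
               v_thresh: float = 1.0, v_reset: float = 0.0, dt: float = 1.0):
    """Leaky integrate-and-fire neuron: integrates input, emits binary spikes.

    With zero input the membrane potential only decays, so no spikes
    (and, on event-driven hardware, essentially no activity) are produced.
    """
    v, spikes = v_reset, []
    for i_t in input_current:
        v += dt * (-(v - v_reset) / tau + i_t)   # leaky integration
        if v >= v_thresh:
            spikes.append(1)
            v = v_reset                          # reset after firing
        else:
            spikes.append(0)
    return np.array(spikes)

silent = lif_neuron(np.zeros(100))               # no input -> no spikes
active = lif_neuron(np.full(100, 0.08))          # constant drive -> sparse spikes
print("spikes with no input:", silent.sum(), "| spikes with input:", active.sum())
```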

Language: English

Citations

26

Federated Learning-Empowered Mobile Network Management for 5G and Beyond Networks: From Access to Core
Joohyung Lee, Faranaksadat Solat, Tae Yeon Kim

et al.

IEEE Communications Surveys & Tutorials, Journal Year: 2024, Volume and Issue: 26(3), P. 2176 - 2212

Published: Jan. 1, 2024

The fifth generation (5G) and beyond wireless networks are envisioned to provide an integrated communication and computing platform that will enable multipurpose and intelligent services, driven by a growing demand from both traditional end users and industry verticals. This evolution will be realized by innovations in core and access capabilities, mainly from virtualization technologies and ultra-dense networks, e.g., software-defined networking (SDN), network slicing, network function virtualization (NFV), multi-access edge computing (MEC), terahertz (THz) communications, etc. However, those innovations require increased complexity of resource management over large configurations of network slices. In this new milieu, with the help of artificial intelligence (AI), mobile network operators strive toward AI-empowered network management, automating radio and orchestration processes in a data-driven manner. In this regard, most previous approaches adopt a centralized training paradigm where the diverse data generated at network functions over distributed base stations and associated MEC servers are transferred to a central server. On the other hand, to exploit the parallel processing capabilities of distributed entities in a fast and secure manner, federated learning (FL) has emerged as a distributed AI approach that can support many management tasks by allowing model training without the need for raw data transmission. This article comprehensively surveys the field of FL-empowered mobile network management for 5G and beyond, from access to core. Specifically, we begin with an introduction to the state of the art of FL, exploring and analyzing recent advances in general. Then, an extensive survey of FL-empowered mobile network management is presented, including background on network functions, traffic prediction, and core/access network management, regarding standardization and research activities. We then present case studies highlighting how FL is adopted in network management. Important lessons learned from this review are also provided. Finally, we complement the survey by discussing open issues and possible research directions in this important emerging area.
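
To ground the federated learning workflow the survey builds on, the sketch below runs a few rounds of standard federated averaging (FedAvg) on a toy linear traffic-prediction task; the client data, model, and hyperparameters are assumptions for illustration only.

```python
import numpy as np

def local_update(global_w: np.ndarray, X: np.ndarray, y: np.ndarray,
                 lr: float = 0.1, epochs: int = 5) -> np.ndarray:
    """One client's local training: plain least-squares gradient descent."""
    w = global_w.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fed_avg(client_weights, client_sizes) -> np.ndarray:
    """Server-side federated averaging, weighted by local dataset size."""
    total = sum(client_sizes)
    return sum(n / total * w for w, n in zip(client_weights, client_sizes))

# Toy rounds: three "base stations" fit a shared traffic-prediction model
# without uploading their raw traffic samples.
rng = np.random.default_rng(0)
true_w = np.array([0.5, -0.2, 0.8])
clients = []
for n in (200, 150, 400):
    X = rng.normal(size=(n, 3))
    y = X @ true_w + 0.01 * rng.normal(size=n)
    clients.append((X, y))

w_global = np.zeros(3)
for _ in range(10):
    local = [local_update(w_global, X, y) for X, y in clients]
    w_global = fed_avg(local, [len(y) for _, y in clients])
print("learned weights after 10 rounds:", np.round(w_global, 3))
```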

Language: English

Citations

23

Decentral and Incentivized Federated Learning Frameworks: A Systematic Literature Review
Leon Witt, Mathis Heyer, Kentaroh Toyoda

et al.

IEEE Internet of Things Journal, Journal Year: 2022, Volume and Issue: 10(4), P. 3642 - 3663

Published: Dec. 22, 2022

The advent of federated learning (FL) has sparked a new paradigm of parallel and confidential decentralized machine learning (ML) with the potential of utilizing the computational power of a vast number of Internet of Things (IoT), mobile, and edge devices without data leaving the respective device, thus ensuring privacy by design. Yet, simple FL frameworks (FLFs) naively assume an honest central server and altruistic client participation. In order to scale this paradigm beyond small groups of already entrusted entities toward mass adoption, FLFs must be: 1) truly decentralized and 2) incentivized for participants. This systematic literature review is the first to analyze frameworks that holistically apply both blockchain technology to decentralize the process and reward mechanisms to incentivize participation. A total of 422 publications were retrieved by querying 12 major scientific databases. After the filtering process, 40 articles remained for in-depth examination following our five research questions. To ensure the correctness of the findings, we verified the results with the authors. Although having a direct impact on the future of distributed and secure artificial intelligence, none of the analyzed frameworks is production ready. The approaches vary heavily in terms of use cases, system design, solved issues, and thoroughness. We provide an approach to classify and quantify the differences between FLFs, expose the limitations of current works, and derive future research directions in this novel domain.
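
As a loose illustration of the incentive side, the snippet below splits a fixed reward pool in proportion to a per-round contribution score; this is a deliberately simple scheme for exposition and does not correspond to any specific framework analyzed in the review.

```python
def allocate_rewards(contributions: dict, reward_pool: float) -> dict:
    """Split a fixed reward pool among FL participants in proportion to a
    per-round contribution score (e.g., validation-accuracy gain of their update)."""
    total = sum(max(c, 0.0) for c in contributions.values())
    if total == 0.0:
        return {k: 0.0 for k in contributions}    # nothing useful contributed this round
    return {k: reward_pool * max(c, 0.0) / total for k, c in contributions.items()}

# Hypothetical round: accuracy gains measured for three clients' updates.
scores = {"client_a": 0.012, "client_b": 0.004, "client_c": -0.002}
print(allocate_rewards(scores, reward_pool=100.0))
```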

Language: English

Citations

56

Reinforcement Learning-Based Physical Cross-Layer Security and Privacy in 6G
Xiaozhen Lu, Liang Xiao, Pengmin Li

et al.

IEEE Communications Surveys & Tutorials, Journal Year: 2022, Volume and Issue: 25(1), P. 425 - 466

Published: Nov. 24, 2022

Sixth-generation (6G) cellular systems will have an inherent vulnerability to physical (PHY)-layer attacks and privacy leakage, due to the large-scale heterogeneous networks with booming time-sensitive applications. Important wireless techniques, including non-orthogonal multiple access, mobile edge computing, millimeter-wave, massive multiple-input multiple-output, visible light communication, terahertz, and intelligent reflecting surfaces, can improve spectrum efficiency and quality-of-service but raise challenges for 6G PHY cross-layer security and privacy protection. Existing optimization-based protection schemes, such as convex optimization methods, rely on accurate attack patterns and strategies and thus suffer from performance degradation in 6G systems that have shorter communication latency, more devices, and higher requirements than 5G. Reinforcement learning (RL) algorithms help wireless systems optimize their policies to enhance dynamic protection against smart attackers without depending on an attack model. Therefore, this article provides a comprehensive survey of RL-based PHY cross-layer security and privacy in 6G. In this article, we investigate the potential attacks and discuss the RL-based solutions. A brief overview of reinforcement learning is provided. Afterward, we review PHY-layer security and privacy issues and how to apply RL in these scenarios, especially focusing on the games against jammers, eavesdroppers, spoofers, and inference attackers. The solutions for unmanned aerial vehicle (UAV) scenarios are also reviewed. Finally, future research directions are identified and the corresponding challenges are discussed for 6G.
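
A minimal example of the RL idea: the toy Q-learning agent below learns to hop away from a sweeping jammer using only observed success/failure rewards, with no model of the jamming pattern; the channel count, rewards, and hyperparameters are assumptions, not the survey's formulation.

```python
import numpy as np

# Toy anti-jamming game: the agent picks one of n_ch channels each slot while a
# jammer sweeps channels cyclically; reward is 1 if the chosen channel is clear.
n_ch, n_slots, eps, alpha, gamma = 4, 5000, 0.1, 0.1, 0.9
rng = np.random.default_rng(0)
Q = np.zeros((n_ch, n_ch))     # state = channel jammed in the last slot, action = channel to use

state, hits = 0, 0.0
for t in range(n_slots):
    jammed = t % n_ch                                   # sweeping jammer
    a = rng.integers(n_ch) if rng.random() < eps else int(np.argmax(Q[state]))
    r = 0.0 if a == jammed else 1.0                     # success on a clear channel
    hits += r
    next_state = jammed
    Q[state, a] += alpha * (r + gamma * np.max(Q[next_state]) - Q[state, a])
    state = next_state

print(f"success rate over {n_slots} slots: {hits / n_slots:.2f}")
```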

Language: English

Citations

53

Energy Harvesting UAV-RIS-Assisted Maritime Communications Based on Deep Reinforcement Learning Against Jamming
Helin Yang, Kailong Lin, Liang Xiao

et al.

IEEE Transactions on Wireless Communications, Journal Year: 2024, Volume and Issue: 23(8), P. 9854 - 9868

Published: Feb. 27, 2024

With the rapid development of maritime activities, efficient and reliable maritime communications have attracted ever-increasing attention, and mounting a reconfigurable intelligent surface (RIS) on an unmanned aerial vehicle (UAV), called UAV-RIS, can provide flexible and adaptable services for maritime communications. In this paper, we investigate a UAV-RIS-assisted maritime communication system under a malicious jammer, where the UAV-RIS is deployed to jointly adjust its placement and RIS elements to maximize the energy efficiency (EE) and guarantee the quality-of-service requirements against jamming attacks. In addition, an adaptive energy harvesting scheme is developed to perform information transmission (IT) and energy harvesting (EH) simultaneously and enhance the endurance of the UAV by deploying different IT times for each RIS element. Considering the non-convex optimization problem in highly complex environments, a resource management approach based on deep reinforcement learning is proposed to optimize the base station's transmit power and the RIS reflecting beamforming. Furthermore, hindsight experience replay is adopted to improve the learning performance. The simulation results demonstrate that the proposed approach achieves better EE and EH performances in real-world settings compared with existing popular approaches.
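
The arithmetic behind the energy-efficiency objective can be sketched as follows; all link parameters, the reflected-power scaling, and the power figures are hypothetical, and the snippet only illustrates the IT/EH trade-off rather than the paper's optimization method.

```python
import numpy as np

def energy_efficiency(rate_bps: float, tx_power_w: float, circuit_power_w: float) -> float:
    """Energy efficiency in bits per joule: achievable rate over total consumed power."""
    return rate_bps / (tx_power_w + circuit_power_w)

def achievable_rate(bandwidth_hz: float, signal_w: float, jam_w: float, noise_w: float) -> float:
    """Shannon rate under jamming, treating jamming power as extra noise."""
    return bandwidth_hz * np.log2(1.0 + signal_w / (jam_w + noise_w))

# Hypothetical numbers: a RIS-aided link under a jammer; dedicating a fraction of
# elements to energy harvesting lowers the reflected signal but powers the UAV.
bw, noise = 1e6, 1e-12
for eh_fraction in (0.0, 0.3, 0.6):
    reflected = 2e-9 * (1.0 - eh_fraction) ** 2   # reflected power grows with the square of active elements
    rate = achievable_rate(bw, reflected, jam_w=5e-10, noise_w=noise)
    print(f"EH fraction {eh_fraction:.1f}: rate {rate / 1e6:.2f} Mbit/s, "
          f"EE {energy_efficiency(rate, tx_power_w=0.5, circuit_power_w=1.0) / 1e6:.2f} Mbit/J")
```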

Language: English

Citations

11

Brain-inspired computing systems: a systematic literature review
Mohamadreza Zolfagharinejad, Unai Alegre-Ibarra, Tao Chen

et al.

The European Physical Journal B, Journal Year: 2024, Volume and Issue: 97(6)

Published: June 1, 2024

Brain-inspired computing is a growing and interdisciplinary area of research that investigates how the computational principles of the biological brain can be translated into hardware design to achieve improved energy efficiency. It encompasses various subfields, including neuromorphic and in-memory computing, which have been shown to outperform traditional digital hardware in executing specific tasks. With the rising demand for more powerful yet energy-efficient hardware for large-scale artificial neural networks, brain-inspired computing is emerging as a promising solution for enabling energy-efficient AI and expanding AI to the edge. However, the vast scope of the field has made it challenging to compare and assess the effectiveness of these solutions against their state-of-the-art digital counterparts. This systematic literature review provides a comprehensive overview of the latest advances in brain-inspired computing hardware. To ensure accessibility to researchers from diverse backgrounds, we begin by introducing key concepts and pointing out the respective in-depth topical reviews. We continue with categorizing the dominant hardware platforms. We highlight studies of potential applications that could greatly benefit from brain-inspired computing systems and their reported accuracy. Finally, for a fair comparison of the performance of different approaches, we employ a standardized normalization approach for the energy efficiency reports in the literature.
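
A fair cross-platform comparison typically reduces reported figures to a common unit such as operations per joule (TOPS/W); the short sketch below shows that normalization on hypothetical chip figures and is not the review's exact procedure.

```python
def ops_per_joule(throughput_ops_per_s: float, power_w: float) -> float:
    """Normalize an efficiency report to operations per joule (TOPS/W * 1e12)."""
    return throughput_ops_per_s / power_w

# Hypothetical reported figures for two platforms, normalized to the same unit
# so they can be compared on an equal footing.
reports = {
    "chip_A (digital)":      {"ops_per_s": 4e12, "power_w": 2.0},    # 4 TOPS at 2 W
    "chip_B (neuromorphic)": {"ops_per_s": 3e11, "power_w": 0.05},   # 0.3 TOPS at 50 mW
}
for name, r in reports.items():
    print(f"{name}: {ops_per_joule(r['ops_per_s'], r['power_w']) / 1e12:.1f} TOPS/W")
```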

Language: English

Citations

10

Reinforcement Learning Based Energy-Efficient Collaborative Inference for Mobile Edge Computing
Yilin Xiao, Liang Xiao, Kunpeng Wan

et al.

IEEE Transactions on Communications, Journal Year: 2022, Volume and Issue: 71(2), P. 864 - 876

Published: Dec. 14, 2022

Collaborative inference in mobile edge computing (MEC) enables mobile devices to offload the computation tasks of computation-intensive perception services, and the offloading policy determines the inference latency and energy consumption. The optimal policy depends on the performance of the deep learning model and on the data generation and network conditions, which are rarely known by the devices in time. In this paper, we propose a multi-agent reinforcement learning (RL) based energy-efficient collaborative inference scheme for MEC, in which each device chooses both the model partition point and the image quantity based on the channel conditions and the previous inference performance. An experience exchange mechanism exploits the Q-values of neighboring devices to accelerate the policy optimization with less exploration. We also provide a deep RL based scheme for large-scale networks, in which an actor network yields the probability distribution of actions and a critic network guides the weight update to enhance the sample efficiency. We derive a performance bound and analyze the computational complexity. Both simulation and experimental results show that our proposed schemes reduce the inference latency and save energy.
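
To see the trade-off such an RL policy navigates, the sketch below exhaustively scores candidate partition points for a toy device-edge inference pipeline; all profiling numbers and the equal-weight latency/energy objective are assumptions, and the paper itself learns this choice with RL rather than enumerating it.

```python
# Toy partition-point search for device-edge collaborative inference: the device runs
# layers [0, p) locally, uploads the intermediate feature map, and the edge server runs
# the remaining layers. All profiling numbers below are hypothetical.
layer_flops       = [1e8, 2e8, 4e8, 8e8, 8e8]    # per-layer compute cost
feature_bits      = [8e6, 4e6, 1e6, 5e5, 2e5]    # activation size after each layer
device_flops_s    = 5e9                          # device compute speed
edge_flops_s      = 1e11                         # edge server compute speed
uplink_bps        = 2e6                          # current channel rate
device_w, radio_w = 1.5, 0.8                     # device compute / radio power (watts)

def cost(p: int):
    """Return (latency_s, device_energy_J) when partitioning before layer p."""
    t_local = sum(layer_flops[:p]) / device_flops_s
    t_tx    = (feature_bits[p - 1] if p > 0 else 8e6) / uplink_bps   # p=0: send the raw input
    t_edge  = sum(layer_flops[p:]) / edge_flops_s
    energy  = device_w * t_local + radio_w * t_tx
    return t_local + t_tx + t_edge, energy

best = min(range(len(layer_flops) + 1), key=lambda p: sum(cost(p)))  # equal-weight objective
lat, en = cost(best)
print(f"best partition point: {best}, latency {lat * 1e3:.0f} ms, device energy {en:.2f} J")
```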

Language: English

Citations

33