Neurocomputing, Journal Year: 2025, Issue: unknown, P. 130374 - 130374
Published: May 1, 2025
Language: English
Computers and Electronics in Agriculture, Journal Year: 2024, Issue: 223, P. 109090 - 109090
Published: May 31, 2024
Language: English
Cited: 63
ACM Transactions on Intelligent Systems and Technology, Journal Year: 2023, Issue: 14(6), P. 1 - 50
Published: Sep. 11, 2023
Recent advancements in machine learning achieved by Deep Neural Networks (DNNs) have been significant. While demonstrating high accuracy, DNNs are associated with a huge number of parameters and computations, which leads to high memory usage and energy consumption. As a result, deploying DNNs on devices with constrained hardware resources poses significant challenges. To overcome this, various compression techniques are widely employed to optimize DNN accelerators. A promising approach is quantization, in which the full-precision values are stored in low bit-width precision. Quantization not only reduces memory requirements but also replaces high-cost operations with low-cost ones. DNN quantization offers flexibility and efficiency in hardware design, making it a widely adopted technique in various methods. Since quantization has been extensively utilized in previous works, there is a need for an integrated report that provides an understanding, analysis, and comparison of the different quantization approaches. Consequently, we present a comprehensive survey of quantization concepts and methods, with a focus on image classification. We describe clustering-based quantization methods and explore the use of a scale factor parameter for approximating full-precision values. Moreover, we thoroughly review the training of a quantized DNN, including the straight-through estimator and quantization regularization. We explain the replacement of floating-point operations with low-cost bitwise operations in a quantized DNN and the sensitivity of different layers to quantization. Furthermore, we highlight the evaluation metrics for quantization methods and important benchmarks in the image classification task. We also present the accuracy of the state-of-the-art methods on CIFAR-10 and ImageNet. This article attempts to make readers familiar with the basic and advanced concepts of quantization, introduce important works in this area, and highlight challenges for future research in this field.
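As a reading aid for the scale-factor and straight-through-estimator (STE) concepts mentioned in this abstract, the following is a minimal sketch, not code from the surveyed works: per-tensor uniform fake quantization in PyTorch, with an illustrative bit-width and clipping scheme chosen as assumptions.

```python
# Minimal sketch of scale-factor quantization with a straight-through
# estimator (STE); bit-width, clipping range, and names are illustrative.
import torch

class STEQuantize(torch.autograd.Function):
    """Fake-quantize to signed k-bit values; gradients pass through unchanged."""

    @staticmethod
    def forward(ctx, x, num_bits=8):
        qmax = 2 ** (num_bits - 1) - 1
        scale = x.abs().max().clamp(min=1e-8) / qmax       # per-tensor scale factor
        q = torch.clamp(torch.round(x / scale), -qmax - 1, qmax)
        return q * scale                                   # dequantized value

    @staticmethod
    def backward(ctx, grad_output):
        # STE: treat the non-differentiable round() as identity in the backward pass.
        return grad_output, None

w = torch.randn(4, 4, requires_grad=True)
w_q = STEQuantize.apply(w, 4)      # 4-bit fake quantization
loss = (w_q ** 2).sum()
loss.backward()                    # gradients reach w through the STE
print(w.grad)
```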
Language: English
Cited: 58
Transactions of the Association for Computational Linguistics, Journal Year: 2024, Issue: 12, P. 1556 - 1577
Published: Jan. 1, 2024
Abstract: Large Language Models (LLMs) have transformed natural language processing tasks successfully. Yet, their large size and high computational needs pose challenges for practical use, especially in resource-limited settings. Model compression has emerged as a key research area to address these challenges. This paper presents a survey of model compression techniques for LLMs. We cover methods like quantization, pruning, and knowledge distillation, highlighting recent advancements. We also discuss benchmarking strategies and evaluation metrics crucial for assessing compressed LLMs. This survey offers valuable insights for researchers and practitioners, aiming to enhance the efficiency and real-world applicability of LLMs while laying a foundation for future work.
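Of the techniques listed in this abstract, knowledge distillation is easy to illustrate. The sketch below is a hypothetical, minimal distillation loss; the temperature and weighting are assumed values, not taken from the survey.

```python
# Minimal sketch of a temperature-scaled knowledge-distillation loss,
# combining a softened KL term with standard cross-entropy.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, targets, T=2.0, alpha=0.5):
    # Soft-label term: KL divergence between temperature-softened distributions.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Hard-label term: cross-entropy against the ground-truth labels.
    hard = F.cross_entropy(student_logits, targets)
    return alpha * soft + (1.0 - alpha) * hard

student_logits = torch.randn(8, 10)
teacher_logits = torch.randn(8, 10)
targets = torch.randint(0, 10, (8,))
print(distillation_loss(student_logits, teacher_logits, targets))
```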
Language: English
Cited: 23
Applied Intelligence, Journal Year: 2024, Issue: 54(22), P. 11804 - 11844
Published: Sep. 2, 2024
Abstract: This paper critically examines model compression techniques within the machine learning (ML) domain, emphasizing their role in enhancing efficiency for deployment in resource-constrained environments, such as mobile devices, edge computing, and Internet of Things (IoT) systems. By systematically exploring compression techniques and lightweight design architectures, it provides a comprehensive understanding of their operational contexts and effectiveness. The synthesis of these strategies reveals a dynamic interplay between model performance and computational demand, highlighting the balance required for optimal application. As models grow increasingly complex and data-intensive, the demand for computational resources and memory has surged accordingly. This escalation presents significant challenges for artificial intelligence (AI) systems in real-world applications, particularly where hardware capabilities are limited. Therefore, model compression is not merely advantageous but essential for ensuring that such models can be utilized across various domains, maintaining high performance without prohibitive resource requirements. Furthermore, this review underscores the importance of model compression for sustainable development. The introduction of hybrid methods, which combine multiple compression techniques, promises to deliver superior efficiency. Additionally, the development of intelligent frameworks capable of selecting the most appropriate compression strategy based on specific application needs is crucial for advancing the field. The practical examples and engineering applications discussed demonstrate the impact of these techniques. By optimizing the balance between model complexity and efficiency, compression ensures that advancements in AI technology remain widely applicable. This review thus contributes to the academic discourse and guides innovative solutions for efficient and responsible ML practices, paving the way for future work.
Language: English
Cited: 18
ISPRS Journal of Photogrammetry and Remote Sensing, Journal Year: 2024, Issue: 209, P. 368 - 382
Published: Feb. 20, 2024
Remote sensing image scene classification (RSI-SC) is crucial for various high-level applications, including RSI retrieval, captioning, and object detection. Deep learning-based methods can accurately predict scene categories. However, these approaches often require numerous labeled samples for training, limiting their practicality in real-world RS applications with scarce label resources. In contrast, few-shot remote sensing image scene classification (FS-RSI-SC) has garnered substantial research interest owing to its potential to mitigate the need for extensive training samples. In recent years, there has been a surge of studies on FS-RSI-SC. This paper presents a comprehensive overview of FS-RSI-SC research, categorizing existing methods into two groups. The first group comprises methods based on data augmentation, transfer learning, metric learning, and meta-learning. Our analysis reveals that most of these methods fall into the meta-learning category, employing attention mechanisms, self-supervised learning (SSL), and feature fusion techniques for enhanced performance, and that methods in this category consistently outperform the others. The second group is centered around large-scale pre-training, which has demonstrated remarkable competitiveness across tasks, has shown considerable promise, and is expected to attract more attention given the increasing popularity of pre-training unimodal and multimodal foundation models. Moreover, we proposed a pipeline that harnesses the capabilities of powerful large vision-language models (VLMs) as feature encoders, establishing new baselines on commonly used datasets under standard experimental settings. The empirical results validated the effectiveness of utilizing VLMs and highlighted their potential. Through a joint analysis of state-of-the-art methods and our experiments with VLMs, we identified prevailing challenges and outlined promising directions for future research.
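The VLM-as-encoder pipeline described in this abstract can be pictured as a prototype-style few-shot classifier over frozen embeddings. The sketch below is a hypothetical illustration, not the authors' implementation: `encode_images` is a stand-in for any frozen VLM image encoder (e.g., CLIP), and the episode sizes are arbitrary.

```python
# Hypothetical prototype-based few-shot classifier on top of frozen
# image embeddings; the encoder here is a random projection placeholder.
import torch
import torch.nn.functional as F

def encode_images(images: torch.Tensor) -> torch.Tensor:
    # Placeholder for a frozen VLM image encoder; fixed random projection.
    torch.manual_seed(0)
    proj = torch.randn(images.shape[-1], 512)
    return images @ proj

def few_shot_classify(support, support_labels, query, num_classes):
    s = F.normalize(encode_images(support), dim=-1)
    q = F.normalize(encode_images(query), dim=-1)
    # One prototype per class: mean of its normalized support embeddings.
    prototypes = torch.stack(
        [s[support_labels == c].mean(dim=0) for c in range(num_classes)]
    )
    prototypes = F.normalize(prototypes, dim=-1)
    return (q @ prototypes.T).argmax(dim=-1)   # nearest prototype by cosine similarity

# 5-way 1-shot toy episode with flattened "images" of dimension 64.
support = torch.randn(5, 64)
labels = torch.arange(5)
query = torch.randn(10, 64)
print(few_shot_classify(support, labels, query, num_classes=5))
```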
Language: English
Cited: 11
Measurement, Journal Year: 2025, Issue: unknown, P. 117144 - 117144
Published: Feb. 1, 2025
Language: English
Cited: 1
Internet of Things, Journal Year: 2025, Issue: unknown, P. 101553 - 101553
Published: March 1, 2025
Language: English
Cited: 1
Advanced Engineering Informatics, Journal Year: 2024, Issue: 62, P. 102786 - 102786
Published: Aug. 24, 2024
Language: English
Cited: 8
Signals and communication technology, Journal Year: 2024, Issue: unknown, P. 107 - 130
Published: Jan. 1, 2024
Language: English
Cited: 7
Sensors, Journal Year: 2024, Issue: 24(23), P. 7480 - 7480
Published: Nov. 23, 2024
This systematic review critically evaluates the current state and future potential of real-time, end-to-end smart, automated irrigation management systems, focusing on the integration of Internet of Things (IoT) and machine learning technologies for enhanced agricultural water use efficiency and crop productivity. In this review, the automation of each component is examined in the pipeline from data collection to irrigation application while analyzing its effectiveness, efficiency, and integration with various precision agriculture technologies. It also investigates the role of interoperability, standardization, and cybersecurity in IoT-based irrigation solutions and applications. Furthermore, existing gaps are identified and approaches are proposed for seamless integration across multiple sensor suites, aiming to achieve fully autonomous and scalable irrigation management. The findings highlight the transformative potential of such systems to address global food challenges by optimizing water use and maximizing yields.
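As a toy illustration of the data-to-application decision step such pipelines automate, the sketch below maps a soil-moisture reading and a rain forecast to an irrigation duration; the field names, thresholds, and linear rule are illustrative assumptions, not drawn from the reviewed systems.

```python
# Toy sensor-to-actuation decision step for an IoT irrigation pipeline;
# all thresholds and conversion factors are assumed, illustrative values.
from dataclasses import dataclass

@dataclass
class SensorReading:
    soil_moisture_pct: float   # volumetric water content, percent
    forecast_rain_mm: float    # predicted rainfall over the next 24 h, mm

def irrigation_minutes(reading: SensorReading,
                       target_moisture_pct: float = 30.0,
                       minutes_per_pct: float = 2.0) -> float:
    """Return an irrigation duration; skip if forecast rain covers the deficit."""
    deficit = max(0.0, target_moisture_pct - reading.soil_moisture_pct)
    if reading.forecast_rain_mm >= 5.0:   # assumed rain-skip threshold
        return 0.0
    return deficit * minutes_per_pct

print(irrigation_minutes(SensorReading(soil_moisture_pct=22.5, forecast_rain_mm=1.0)))
```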
Language: English
Cited: 7