Rewarded Meta-Pruning: Meta Learning with Rewards for Channel Pruning
Athul Shibu, Abhishek Kumar, Heechul Jung

et al.

Mathematics, Journal year: 2023, Issue 11(23), pp. 4849-4849

Published: Dec. 1, 2023

Convolutional neural networks (CNNs) have gained recognition for their remarkable performance across various tasks. However, their sheer number of parameters and computational demands pose challenges, particularly on edge devices with limited processing power. In response to these challenges, this paper presents a novel approach aimed at enhancing the efficiency of deep learning models. Our method introduces the concept of accuracy coefficients, offering a fine-grained control mechanism to balance the trade-off between network accuracy and efficiency. At its core is the Rewarded Meta-Pruning algorithm, which guides training to generate pruned model weight configurations. The selection is based on approximations of the final model’s parameters and is precisely controlled through a reward function. This function empowers us to tailor the optimization process, leading to more effective fine-tuning and improved performance. Extensive experiments and evaluations underscore the superiority of the proposed method when compared to state-of-the-art techniques. We conducted rigorous pruning of well-established architectures such as ResNet-50, MobileNetV1, and MobileNetV2. The results not only validate the method’s efficacy but also highlight its potential to significantly advance the field of model compression and deployment on resource-constrained devices.
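
The abstract does not give the reward's exact form, but the idea of weighting accuracy against compute savings with an accuracy coefficient can be sketched as below; this is a minimal illustration, and the function name pruning_reward, the coefficient alpha, and the toy numbers are hypothetical, not the paper's formulation.

def pruning_reward(accuracy, flops, baseline_flops, alpha=0.9):
    # accuracy       : top-1 validation accuracy of the candidate pruned network
    # flops          : FLOPs of the candidate
    # baseline_flops : FLOPs of the unpruned network
    # alpha          : hypothetical "accuracy coefficient"; larger values
    #                  favor accuracy over compute savings
    efficiency = 1.0 - flops / baseline_flops  # fraction of compute removed
    return alpha * accuracy + (1.0 - alpha) * efficiency

# Toy usage: score three candidates against a 4.1 GFLOP baseline, keep the best.
candidates = [(0.761, 4.1e9), (0.748, 2.0e9), (0.702, 1.2e9)]  # (accuracy, FLOPs)
best = max(candidates, key=lambda c: pruning_reward(c[0], c[1], baseline_flops=4.1e9))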

Language: English

EvolveNet: Evolving Networks by Learning Scale of Depth and Width
Athul Shibu, Dong-Gyu Lee

Published: July 26, 2023

Convolutional Neural Networks (CNNs) are largely hand-crafted, which leads to inefficiency in the constructed network. Various other algorithms have been proposed to address this issue, but the inefficiencies resulting from human intervention are not addressed. Our EvolveNet algorithm is a task-agnostic evolutionary search that can find optimal depth and width scales automatically in an efficient way. The configurations are not found using grid search; instead, they are evolved from an existing network. This eliminates the inefficiencies that emanate from hand-crafting, thus reducing the drop in accuracy. The framework searches through a large space of subnetworks until a suitable configuration is found. Extensive experiments on the ImageNet dataset demonstrate the superiority of the proposed method by outperforming state-of-the-art methods.
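
For intuition, a generic evolutionary loop over depth and width scales might look like the sketch below; evolve_scales and the evaluate callback (which would stand in for training or estimating the accuracy of a subnetwork at the given scales) are hypothetical, not EvolveNet's actual implementation.

import random

def evolve_scales(evaluate, generations=20, population=16, sigma=0.1):
    # Start from random (depth_scale, width_scale) pairs around 1.0.
    pop = [(random.uniform(0.5, 1.5), random.uniform(0.5, 1.5))
           for _ in range(population)]
    for _ in range(generations):
        ranked = sorted(pop, key=evaluate, reverse=True)
        parents = ranked[:population // 4]                   # keep the fittest quarter
        children = [(max(0.1, d + random.gauss(0, sigma)),   # mutate depth scale
                     max(0.1, w + random.gauss(0, sigma)))   # mutate width scale
                    for d, w in random.choices(parents, k=population - len(parents))]
        pop = parents + children
    return max(pop, key=evaluate)

# Toy objective standing in for subnetwork evaluation: prefers scales near (1.2, 0.8).
best = evolve_scales(lambda s: -((s[0] - 1.2) ** 2 + (s[1] - 0.8) ** 2))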

Language: English

Cited by

2
