Expert Systems with Applications, Journal year: 2025, Issue unknown, pp. 127736 - 127736
Published: April 1, 2025
Language: English
2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Journal year: 2024, Issue unknown, pp. 15731 - 15740
Published: June 16, 2024
Language: English
Cited: 38
IEEE Transactions on Multimedia, Journal year: 2024, Issue 26, pp. 7901 - 7916
Published: Jan. 1, 2024
Knowledge distillation (KD) is a prevalent model compression technique in deep learning, aiming to leverage knowledge from a large teacher model to enhance the training of a smaller student model. It has found success in deploying compact models in intelligent applications like transportation, smart health, and distributed intelligence. Current methods primarily fall into two categories: offline and online distillation. Offline methods involve a one-way process, transferring unvaried knowledge from teacher to student, while online methods enable the simultaneous training of multiple peer students. However, existing methods often face challenges: the student may not fully comprehend the teacher's knowledge due to capacity gaps, and there might be incongruence among the outputs of peer students without teacher guidance. To address these issues, we propose a novel reciprocal teacher-student learning scheme inspired by human teaching and examining, realized through forward and feedback knowledge distillation (FFKD). Forward distillation operates offline, while feedback distillation follows an online scheme. The rationale is that feedback distillation enables the pre-trained teacher to receive feedback from students, allowing it to refine its teaching strategies accordingly. To achieve this, we introduce a new weighting constraint to gauge the extent of students' understanding of the teacher's knowledge, which is then utilized to refine the teaching strategies. Experimental results on five visual recognition datasets demonstrate that the proposed FFKD outperforms current state-of-the-art methods.
Language: English
Cited: 20
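To make the forward/feedback idea concrete, here is a minimal PyTorch sketch of the two ingredients the abstract names: a standard forward KD loss and a per-sample weighting that gauges how well each student prediction already matches the teacher. The function names and the exact weighting formula are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def forward_kd_loss(student_logits, teacher_logits, T=4.0):
    # Standard offline (forward) KD: KL divergence between softened
    # teacher and student distributions, scaled by T^2.
    log_p_s = F.log_softmax(student_logits / T, dim=1)
    p_t = F.softmax(teacher_logits / T, dim=1)
    return F.kl_div(log_p_s, p_t, reduction="batchmean") * (T * T)

def understanding_weights(student_logits, teacher_logits, T=4.0):
    # Hypothetical weighting constraint (an illustrative assumption, not
    # the paper's formula): per-sample KL measures how poorly each sample
    # is currently understood by the student, normalized over the batch.
    with torch.no_grad():
        log_p_s = F.log_softmax(student_logits / T, dim=1)
        p_t = F.softmax(teacher_logits / T, dim=1)
        per_sample_kl = F.kl_div(log_p_s, p_t, reduction="none").sum(dim=1)
        return per_sample_kl / per_sample_kl.sum().clamp_min(1e-8)

def feedback_teacher_loss(teacher_logits, labels, weights):
    # Feedback step sketch: fine-tune the teacher on a loss reweighted
    # toward the samples its students understand least.
    per_sample_ce = F.cross_entropy(teacher_logits, labels, reduction="none")
    return (weights * per_sample_ce).sum()
```

In this reading, the weights computed from the students' outputs flow back into the teacher's objective, which is what distinguishes the feedback (online) phase from plain one-way distillation.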
International Journal of Computer Vision, Journal year: 2025, Issue unknown
Published: Jan. 25, 2025
Language: English
Cited: 4
Pattern Recognition, Journal year: 2024, Issue 151, pp. 110422 - 110422
Published: March 12, 2024
Language: English
Cited: 10
Neurocomputing, Journal year: 2025, Issue unknown, pp. 129477 - 129477
Published: Jan. 1, 2025
Language: English
Cited: 1
Neurocomputing, Journal year: 2025, Issue unknown, pp. 129481 - 129481
Published: Jan. 1, 2025
Language: English
Cited: 1
International Journal of Intelligent Networks, Journal year: 2025, Issue unknown
Published: Feb. 1, 2025
Language: English
Cited: 1
Computers & Graphics, Journal year: 2024, Issue 123, pp. 104015 - 104015
Published: July 19, 2024
Deep neural networks have consistently represented the state of the art in most computer vision problems. In these scenarios, larger and more complex models have demonstrated superior performance to smaller architectures, especially when trained with plenty of representative data. With the recent adoption of Vision Transformer (ViT) based architectures and advanced Convolutional Neural Networks (CNNs), the total number of parameters of leading backbones has increased from 62M (the 2012 AlexNet) to 7B (the 2024 AIM-7B). Consequently, deploying such deep models faces challenges in environments with processing and runtime constraints, particularly in embedded systems. This paper covers the main model compression techniques applied to computer vision tasks, enabling modern models to be used in embedded systems. We present the characteristics of the compression subareas, compare different approaches, and discuss how to choose the best technique and the expected variations when analyzing it on various devices. We also share codes to assist researchers and new practitioners in overcoming the initial implementation challenges of each subarea, and discuss trends in Model Compression. Case studies are available at https://github.com/venturusbr/cv-model-compression.
Language: English
Cited: 6
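As one concrete instance of the compression subareas such a survey covers, below is a minimal sketch of post-training dynamic quantization in PyTorch; the toy model and the size-measuring helper are illustrative assumptions, not code from the linked repository.

```python
import os
import torch
import torch.nn as nn

# Toy model standing in for a large vision backbone.
model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 10))

# Post-training dynamic quantization: nn.Linear weights are stored as
# int8 and dequantized on the fly, shrinking the model with no retraining.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

def size_mb(m, path="tmp_model.pt"):
    # Rough on-disk size of a model's state_dict, in megabytes.
    torch.save(m.state_dict(), path)
    mb = os.path.getsize(path) / 1e6
    os.remove(path)
    return mb

print(f"fp32: {size_mb(model):.2f} MB -> int8: {size_mb(quantized):.2f} MB")
```

Dynamic quantization is the lowest-effort entry point because it needs no calibration data or retraining; other subareas (pruning, distillation, low-rank factorization) trade more implementation work for larger savings.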
Lecture notes in computer science, Journal year: 2024, Issue unknown, pp. 431 - 450
Published: Nov. 20, 2024
Language: English
Cited: 5
Multimedia Systems, Journal year: 2024, Issue 30(5)
Published: Sep. 26, 2024
Language: English
Cited: 4