Knowledge Distillation in Image Classification: The Impact of Datasets
Ange Gabriel Belinga, Stéphane Cédric Koumetio Tekouabou, Mohamed El Haziti et al.

Computers, Journal Year: 2024, Volume and Issue: 13(8), P. 184 - 184

Published: July 24, 2024

As the demand for efficient and lightweight models in image classification grows, knowledge distillation has emerged as a promising technique to transfer expertise from complex teacher models to simpler student models. However, the efficacy of knowledge distillation is intricately linked to the choice of datasets used during training. Datasets are pivotal in shaping a model's learning process, influencing its ability to generalize and to discriminate between diverse patterns. While considerable research has independently explored knowledge distillation and image classification, a comprehensive understanding of how different datasets affect distillation remains a critical gap. This study systematically investigates the impact of datasets on knowledge distillation for image classification. By varying dataset characteristics such as size, domain specificity, and inherent biases, we aim to unravel the nuanced relationship between data and knowledge transfer. Our experiments employ a range of datasets to comprehensively explore their influence on the performance gains achieved through distillation. The study contributes valuable guidance for researchers and practitioners seeking to optimize knowledge distillation for their applications. By elucidating the intricate interplay between datasets and distillation outcomes, our findings empower the community to make informed decisions when selecting datasets, ultimately advancing the field toward more robust model development.
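For readers unfamiliar with the technique being evaluated, the distillation objective underlying such studies is the classic soft-label formulation: the student matches temperature-softened teacher probabilities alongside the usual hard-label loss. The sketch below is illustrative only and assumes a PyTorch setup; the temperature and weighting values are placeholders, not settings reported in the paper.

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.5):
    # Softened teacher targets and student log-probabilities at temperature T.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=1)
    log_student = F.log_softmax(student_logits / temperature, dim=1)
    # KL term scaled by T^2 so its gradients stay comparable to the CE term.
    kd = F.kl_div(log_student, soft_teacher, reduction="batchmean") * temperature ** 2
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce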

Language: English

A Survey on Knowledge Distillation: Recent Advancements
Amir Moslemi, Anna Briskina, ZhiChao Dang et al.

Machine Learning with Applications, Journal Year: 2024, Volume and Issue: 18, P. 100605 - 100605

Published: Nov. 10, 2024

Language: English

Citations: 4

An incremental intelligent fault diagnosis method based on dual-teacher knowledge distillation and dynamic residual fusion
Zhiwu Shang, Xunbo Wang, Cailu Pan et al.

Structural Health Monitoring, Journal Year: 2025, Volume and Issue: unknown

Published: Jan. 1, 2025

Intelligent fault diagnosis (IFD) methods based on incremental learning (IL) can expand to new categories without retraining the model, making them a research hotspot in the field of fault diagnosis. Currently, the combination of knowledge distillation (KD) and replay techniques has been widely used to alleviate catastrophic forgetting in IL. However, this approach still has some limitations: first, differences in data distribution across tasks may cause concept drift, hindering the model's adaptation to new tasks; second, the limited storage of the exemplar library may lead to an imbalance in the number of samples between old and new classes, resulting in classifier bias. To address these limitations, this article proposes an incremental IFD method based on dual-teacher knowledge distillation (DTKD) and dynamic residual fusion (DRF), termed IIFD-DDRF. First, a DTKD strategy is proposed, which transmits knowledge through two teacher models, helping the student model better adapt to new tasks while retaining old knowledge. Second, a DRF classifier is proposed to handle class imbalance. It incorporates lightweight residual branch layers specific to each task, encoding task information while performing residual fusion to optimize the output. Additionally, a layer merging mechanism is adopted to effectively prevent excessive growth of the model. Finally, the effectiveness and advancement of the proposed method are validated on three bearing and gearbox datasets.
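The abstract does not spell out the DTKD loss, but a two-teacher distillation step can be pictured as a weighted combination of per-teacher KL terms, with one teacher preserving old-task knowledge and the other guiding adaptation to the new task. The plain convex combination, weights, and temperature below are assumptions for illustration, not the paper's actual DTKD/DRF formulation.

import torch
import torch.nn.functional as F

def dual_teacher_kd_loss(student_logits, old_teacher_logits, new_teacher_logits,
                         labels, temperature=2.0, w_old=0.5, w_new=0.5, alpha=0.7):
    log_student = F.log_softmax(student_logits / temperature, dim=1)
    # One KL term per teacher: the old-task teacher counters forgetting,
    # the new-task teacher drives adaptation.
    kd_old = F.kl_div(log_student, F.softmax(old_teacher_logits / temperature, dim=1),
                      reduction="batchmean")
    kd_new = F.kl_div(log_student, F.softmax(new_teacher_logits / temperature, dim=1),
                      reduction="batchmean")
    kd = (w_old * kd_old + w_new * kd_new) * temperature ** 2
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce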

Language: English

Citations: 0

KED: A Deep-Supervised Knowledge Enhancement Self-Distillation Framework for Model Compression
Yutong Lai, Dejun Ning, Shipeng Liu et al.

IEEE Signal Processing Letters, Journal Year: 2025, Volume and Issue: 32, P. 831 - 835

Published: Jan. 1, 2025

Language: English

Citations: 0

Joint utilization of positive and negative pseudo-labels in semi-supervised facial expression recognition
Jing Lv, Yanli Ren, Guorui Feng et al.

Pattern Recognition, Journal Year: 2024, Volume and Issue: 159, P. 111147 - 111147

Published: Nov. 5, 2024

Language: English

Citations: 1

Dealing with partial labels by knowledge distillation
Guangtai Wang, Jintao Huang, Yiqiang Lai et al.

Pattern Recognition, Journal Year: 2024, Volume and Issue: 158, P. 110965 - 110965

Published: Sept. 3, 2024

Language: English

Citations: 1

Uncertainty-Aware Topological Persistence Guided Knowledge Distillation on Wearable Sensor Data
Eun Som Jeon, Matthew P. Buman, Pavan Turaga et al.

IEEE Internet of Things Journal, Journal Year: 2024, Volume and Issue: 11(18), P. 30413 - 30429

Published: June 11, 2024

In applications involving analysis of wearable sensor data, machine learning techniques that use features from topological data analysis (TDA) have demonstrated remarkable performance. Persistence images (PIs) generated through TDA prove effective in capturing robust features, especially under signal perturbations, thus complementing classical time-series features. Despite this promising performance, utilizing TDA to create PIs entails significant computational resources and time, posing challenges for deployment on small devices. Knowledge distillation (KD) emerges as a solution to these challenges, as it can produce a compact model. Using multiple teachers, one trained with raw time-series data and another with PIs, is a viable approach to distill a single student; in such a case, the features of the two teachers have different statistical characteristics and need some form of feature harmonization. To tackle these issues, we propose uncertainty-aware topological persistence guided knowledge distillation. This involves separating common and distinct feature components between the teachers and applying varying weights to control their effects. To enhance the information provided to the student, uncertain features are rectified using uncertainty scores. We leverage feature similarities to offer more valuable information and employ relationships computed based on orthogonal properties to prevent excessive feature transformation. Ultimately, our method yields a compact student model that operates solely on raw sensor data at test time. We validate the effectiveness of the proposed method through empirical evaluations across various combinations of models and datasets, demonstrating its robustness and efficacy in diverse scenarios. The proposed method enhances the classification performance of the student model by approximately 4.3% compared to a model learned from scratch on GENEActiv.
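One way to picture the uncertainty-aware weighting described above is a per-sample soft weighting of feature-matching losses from the two teachers (raw-signal and persistence-image), so that the less certain teacher contributes less guidance. The MSE feature matching and softmax weighting below are illustrative stand-ins under that assumption, not the paper's exact formulation.

import torch
import torch.nn.functional as F

def uncertainty_weighted_feature_kd(student_feat, raw_teacher_feat, pi_teacher_feat,
                                    raw_uncertainty, pi_uncertainty):
    # Higher uncertainty -> lower weight; normalize across the two teachers per sample.
    weights = F.softmax(torch.stack([-raw_uncertainty, -pi_uncertainty]), dim=0)
    # Per-sample feature-matching (MSE) losses against each teacher.
    loss_raw = ((student_feat - raw_teacher_feat) ** 2).mean(dim=1)
    loss_pi = ((student_feat - pi_teacher_feat) ** 2).mean(dim=1)
    return (weights[0] * loss_raw + weights[1] * loss_pi).mean()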

Language: English

Citations: 0

Dynamic multi teacher knowledge distillation for semantic parsing in KBQA
Ao Zou, Jun Zou, Shulin Cao et al.

Expert Systems with Applications, Journal Year: 2024, Volume and Issue: 263, P. 125599 - 125599

Published: Nov. 12, 2024

Language: English

Citations: 0

A Heterogeneous Federated Learning Method Based on Dual Teachers Knowledge Distillation
Siyuan Wu, Hao Tian, Weiran Zhang et al.

Lecture notes in computer science, Journal Year: 2024, Volume and Issue: unknown, P. 192 - 207

Published: Dec. 12, 2024

Language: English

Citations: 0

Increasing opportunities for component reuse on printed circuit boards using deep learning
Nguyen Ngoc Dinh, Van-Thuan Tran, Phan Hoang Lam et al.

International Journal of Environmental Science and Technology, Journal Year: 2024, Volume and Issue: unknown

Published: Dec. 29, 2024

Language: English

Citations: 0
