An Improved Sample Selection Framework for Learning with Noisy Labels DOI
Qian Zhang, Yi Zhu, Ming Yang

et al.

Published: Jan. 1, 2023

Owing to their powerful memorization capacity, deep neural networks tend to overfit noisy labels, resulting in degraded discrimination performance. Sample selection methods, which filter out possibly clean labels, have become the mainstream approach to learning with noisy labels. However, a large gap remains between the sizes of the filtered labeled subset and the unlabeled subset, which is particularly obvious under high noise rates; the label-free samples cannot be fully used, leaving room for performance improvement. This paper proposes an improved Sample selection framework with an OverSampling strategy, SOS, to overcome this deficiency. It mines the useful information carried by mislabeled instances to boost models' performance by combining an oversampling strategy with existing SOTA methods. We demonstrate the effectiveness of SOS through extensive experimental results on both synthetic and real-world datasets. The code will be available at https://github.com/LanXiaoPang613/SOS.
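As a rough illustration of the oversampling step the abstract describes, the sketch below replicates indices of the filtered clean subset until it matches the unlabeled subset in size. This is a minimal sketch under our own assumptions (the function name and resampling-with-replacement scheme are hypothetical), not the released SOS implementation.

```python
import numpy as np

def oversample_clean_subset(clean_idx, unlabeled_idx, rng=None):
    # Hypothetical helper (not the released SOS code): pad the filtered
    # clean subset by resampling with replacement until it matches the
    # unlabeled subset in size, so SSL mini-batches stay balanced.
    rng = rng if rng is not None else np.random.default_rng(0)
    gap = len(unlabeled_idx) - len(clean_idx)
    if gap <= 0:
        return np.asarray(clean_idx)
    extra = rng.choice(clean_idx, size=gap, replace=True)
    return np.concatenate([clean_idx, extra])

# Under e.g. 80% symmetric noise, selection may keep only 2,000 of
# 50,000 samples; oversampling closes that gap.
clean = np.arange(2000)
unlabeled = np.arange(2000, 50000)
print(len(oversample_clean_subset(clean, unlabeled)))  # 48000
```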

Language: English

Cross-to-merge training with class balance strategy for learning with noisy labels DOI Creative Commons
Qian Zhang, Yi Zhu, Ming Yang

et al.

Expert Systems with Applications, Journal Year: 2024, Volume and Issue: 249, P. 123846 - 123846

Published: March 29, 2024

The collection of large-scale datasets inevitably introduces noisy labels, leading to a substantial degradation in the performance of deep neural networks (DNNs). Although sample selection is the mainstream method in the field of learning with noisy labels, which aims to mitigate the impact of noisy labels during model training, the testing performance of these methods exhibits significant fluctuations across different noise rates and types. In this paper, we propose Cross-to-Merge Training (C2MT), a novel framework that is insensitive to prior information in the sample selection progress, enhancing model robustness. In practical implementation, using cross-divided training data, two different networks are first cross-trained with the co-teaching strategy for several local rounds, and are subsequently merged into a unified model by periodically performing federated averaging on the parameters of the two models. Additionally, we introduce a new class balance strategy, named Median Balance Strategy (MBS), during the cross-dividing process, which evenly divides the data into a labeled subset and an unlabeled subset based on the estimated loss distribution characteristics. Extensive experimental results on both synthetic and real-world datasets demonstrate the effectiveness of C2MT. Code will be available at: https://github.com/LanXiaoPang613/C2MT.
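The two mechanisms the abstract names, periodic parameter merging and the median-based split, can be sketched as below. This is a hedged reconstruction under our own assumptions (PyTorch; the helper names federated_average and median_balance_split are hypothetical), not the released C2MT code.

```python
import torch

@torch.no_grad()
def federated_average(model_a, model_b):
    # Two-client federated average: copy the element-wise mean of the
    # parameters back into both networks (run periodically, after several
    # local co-teaching rounds).
    for p_a, p_b in zip(model_a.parameters(), model_b.parameters()):
        mean = (p_a + p_b) / 2.0
        p_a.copy_(mean)
        p_b.copy_(mean)

def median_balance_split(losses, labels, num_classes):
    # Hypothetical MBS-style split: within each class, samples whose loss
    # falls below that class's median loss are treated as labeled (clean),
    # the rest as unlabeled, so both subsets stay class-balanced.
    labeled, unlabeled = [], []
    for c in range(num_classes):
        idx = (labels == c).nonzero(as_tuple=True)[0]
        med = losses[idx].median()
        labeled += idx[losses[idx] <= med].tolist()
        unlabeled += idx[losses[idx] > med].tolist()
    return labeled, unlabeled
```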

Language: English

Citations

19

BPT-PLR: A Balanced Partitioning and Training Framework with Pseudo-Label Relaxed Contrastive Loss for Noisy Label Learning DOI Creative Commons
Qian Zhang, Ge Jin, Yi Zhu

et al.

Entropy, Journal Year: 2024, Volume and Issue: 26(7), P. 589 - 589

Published: July 10, 2024

While collecting training data, even with manual verification by experts from crowdsourcing platforms, completely eliminating incorrect annotations (noisy labels) is difficult and expensive. When dealing with datasets that contain noisy labels, over-parameterized deep neural networks (DNNs) tend to overfit them, leading to poor generalization and classification performance. As a result, noisy label learning (NLL) has received significant attention in recent years. Existing research shows that although DNNs eventually fit all training data, they first prioritize fitting clean samples and only gradually overfit noisy samples. Mainstream methods utilize this characteristic to divide the training data, but they face two issues: class imbalance in the segmented subsets and an optimization conflict between unsupervised contrastive representation learning and supervised learning. To address these issues, we propose a Balanced Partitioning and Training framework with a Pseudo-Label Relaxed contrastive loss, called BPT-PLR, which includes two crucial processes: a balanced partitioning process with a two-dimensional Gaussian mixture model (BP-GMM) and a semi-supervised oversampling training process with a pseudo-label relaxed contrastive loss (SSO-PLR). The former utilizes both semantic feature information and model prediction results to identify noisy labels, introducing a balancing strategy to keep the divided subsets as balanced as possible. The latter adopts the latest pseudo-label relaxed contrastive loss to replace the unsupervised contrastive loss, reducing the conflict between unsupervised and supervised losses to improve performance. We validate the effectiveness of BPT-PLR on four benchmark datasets in the NLL field: CIFAR-10/100, Animal-10N, and Clothing1M. Extensive experiments comparing with state-of-the-art methods demonstrate that BPT-PLR can achieve optimal or near-optimal performance.
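The BP-GMM step fits a mixture in two dimensions rather than on the loss alone. A minimal sketch of that idea, assuming scikit-learn and a hypothetical per-sample semantic score (e.g., cosine similarity to a class prototype), could look like this; it is an illustration of the concept, not the authors' released code.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_partition(per_sample_loss, semantic_score):
    # Illustrative two-dimensional GMM partition: fit a 2-component GMM
    # on (loss, semantic-score) pairs and treat the component with the
    # lower mean loss as the clean set.
    feats = np.stack([per_sample_loss, semantic_score], axis=1)
    gmm = GaussianMixture(n_components=2, covariance_type="full",
                          reg_covar=5e-4, random_state=0).fit(feats)
    clean_comp = int(np.argmin(gmm.means_[:, 0]))  # lower mean loss
    prob_clean = gmm.predict_proba(feats)[:, clean_comp]
    return prob_clean > 0.5  # boolean mask over the training set

# Usage (both inputs are per-sample 1-D arrays):
# clean_mask = gmm_partition(losses, cosine_to_class_prototype)
```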

Language: English

Citations

6

NoRD: A framework for noise-resilient self-distillation through relative supervision DOI
Saurabh Sharma, Shikhar Singh Lodhi, V.J. Srivastava

et al.

Applied Intelligence, Journal Year: 2025, Volume and Issue: 55(6)

Published: Feb. 15, 2025

Language: English

Citations

0

Wave-based cross-phase representation for weakly supervised classification DOI
Heng Zhou, Ping Zhong

Image and Vision Computing, Journal Year: 2025, Volume and Issue: unknown, P. 105527 - 105527

Published: April 1, 2025

Language: English

Citations

0

Orthogonal and spherical quaternion features for weakly supervised learning with label confidence optimization DOI
Heng Zhou, Ping Zhong

Applied Intelligence, Journal Year: 2025, Volume and Issue: 55(7)

Published: April 28, 2025

Language: English

Citations

0

Robust support vector machine based on the bounded asymmetric least squares loss function and its applications in noise corrupted data DOI
Jiaqi Zhang, Hu Yang

Advanced Engineering Informatics, Journal Year: 2025, Volume and Issue: 65, P. 103371 - 103371

Published: April 29, 2025

Language: English

Citations

0

Suppressing label noise in medical image classification using mixup attention and self-supervised learning DOI Creative Commons
Mengdi Gao, Hongyang Jiang, Yan Hu

et al.

Physics in Medicine and Biology, Journal Year: 2024, Volume and Issue: 69(10), P. 105026 - 105026

Published: April 18, 2024

Deep neural networks (DNNs) have been widely applied in medical image classification and achieve remarkable classification performance. These achievements heavily depend on large-scale, accurately annotated training data. However, label noise is inevitably introduced during annotation, as the labeling process relies on the expertise and experience of annotators. Meanwhile, DNNs suffer from overfitting noisy labels, degrading the performance of models. Therefore, in this work, we innovatively devise a noise-robust training approach to mitigate the adverse effects of noisy labels in medical image classification. Specifically, we incorporate contrastive learning and intra-group mixup attention strategies into vanilla supervised learning. The contrastive learning for the feature extractor helps enhance the visual representation of DNNs. The intra-group mixup attention module constructs groups, assigns self-attention weights to group-wise samples, and subsequently interpolates massive noise-suppressed samples through a weighted mixup operation. We conduct comparative experiments on both synthetic and real-world noisy datasets under various noise levels. Rigorous experiments validate that our method with intra-group mixup attention can effectively handle label noise and is superior to state-of-the-art methods. An ablation study also shows that both components contribute to boosting model performance. The proposed method demonstrates its capability of curbing label noise and has certain potential toward clinical applications.
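A heavily simplified sketch of a group-wise weighted mixup of this kind is shown below. In the paper the weights come from a learned self-attention module; here they are taken as given, and the function and variable names are our own assumptions rather than the authors' implementation.

```python
import torch

def group_mixup(x, y_onehot, weights):
    # Hypothetical group-wise weighted mixup: normalize the given
    # attention weights and interpolate all samples in a group into one
    # noise-suppressed sample (and the matching soft label).
    w = torch.softmax(weights, dim=0)          # weights sum to 1
    x_mix = torch.einsum("g,gchw->chw", w, x)  # weighted image mixup
    y_mix = torch.einsum("g,gc->c", w, y_onehot)
    return x_mix, y_mix

# Mix a group of 4 images; one noisy label in the group is diluted
# into a soft target by the interpolation.
x = torch.randn(4, 3, 32, 32)
y = torch.eye(10)[torch.tensor([1, 1, 1, 3])]
x_mix, y_mix = group_mixup(x, y, torch.randn(4))
```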

Language: English

Citations

3

A Two-Stage Noisy Label Learning Framework with Uniform Consistency Selection and Robust Training DOI
Qian Zhang, Chen Qiu

Published: Jan. 1, 2024

Deep neural networks suffer from overfitting when training samples contain inaccurate annotations (noisy labels), leading to suboptimal performance. In addressing this challenge, current methods for learning with noisy labels employ specific criteria, such as small loss, historical prediction, etc., to distinguish clean from noisy instances. Subsequently, semi-supervised learning (SSL) techniques are introduced to boost performance. Most of these are one-stage frameworks that aim to achieve optimal sample partitioning and robust SSL within a single iteration, thereby increasing difficulty and complexity. To address this limitation, we propose a novel two-stage noisy label learning framework called UCRT, which consists of uniform consistency selection and robust training. In the first stage, the emphasis lies on creating a more accurate clean set, while the second stage uniformly extends this set to improve model performance by introducing SSL techniques. Comprehensive experiments conducted on both synthetic and real-world datasets demonstrate the stability of UCRT across various noise types, showcasing superior performance compared to state-of-the-art methods. The code will be available at: https://github.com/LanXiaoPang613/UCRT.
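A minimal sketch of a consistency-based small-loss selection of the kind the abstract mentions (combining the small-loss criterion with historical predictions across epochs) might look like this. The criterion and helper name are assumptions for illustration, not the released UCRT code.

```python
import torch

def consistency_select(loss_history, keep_ratio=0.5):
    # Illustrative selection criterion: rank samples by their mean loss
    # over recent epochs, so a sample must be consistently small-loss
    # (not just once) to enter the clean set.
    mean_loss = loss_history.mean(dim=1)   # (N, epochs) -> (N,)
    k = int(keep_ratio * mean_loss.numel())
    return torch.topk(-mean_loss, k).indices

# Track per-sample losses over the last 5 epochs and keep the half of
# the dataset whose averaged loss is smallest.
history = torch.rand(1000, 5)
clean_idx = consistency_select(history)
```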

Language: English

Citations

1

Unsupervised domain adaptation with weak source domain labels via bidirectional subdomain alignment DOI
Heng Zhou, Ping Zhong, Daoliang Li

et al.

Neural Networks, Journal Year: 2024, Volume and Issue: 178, P. 106418 - 106418

Published: May 31, 2024

Language: English

Citations

1

TBC-MI : Suppressing noise labels by maximizing cleaning samples for robust image classification DOI
Yanhong Li, Zhiqing Guo, Liejun Wang

et al.

Information Processing & Management, Journal Year: 2024, Volume and Issue: 61(5), P. 103801 - 103801

Published: June 12, 2024

Language: English

Citations

1