An Improved Sample Selection Framework for Learning with Noisy Labels
Qian Zhang, Yi Zhu, Ming Yang

et al.

Published: Jan. 1, 2023

Owing to the powerful memorization capability of deep neural networks, they tend to overfit noisy labels, resulting in degraded discrimination performance. Sample selection methods that filter out possibly clean labels have become the mainstream approach to learning with noisy labels. However, there is a large gap between the sizes of the filtered labeled subset and the unlabeled subset, which is particularly obvious under high noise rates, so the label-free samples cannot be fully used, leaving room for performance improvement. This paper proposes an improved sample Selection framework with an OverSampling strategy, SOS, to overcome this deficiency. It mines the useful information carried by mislabeled instances to boost models' performance by combining an oversampling strategy with existing SOTA methods. We demonstrate the effectiveness of SOS through extensive experimental results on both synthetic and real-world datasets. The code will be available at https://github.com/LanXiaoPang613/SOS.
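
As a rough illustration of the size-gap problem and the oversampling remedy this abstract describes, the sketch below resamples the small labeled subset with replacement until it matches the unlabeled subset's size. The function name and the resampling-with-replacement choice are assumptions made for illustration, not the paper's actual SOS procedure.

```python
import numpy as np

def oversample_labeled(labeled_idx, unlabeled_idx, rng=None):
    """Close the size gap between the filtered labeled subset and the
    unlabeled subset by resampling labeled indices with replacement.

    Illustrative sketch only; SOS itself may oversample differently.
    """
    rng = np.random.default_rng() if rng is None else rng
    labeled_idx = np.asarray(labeled_idx)
    gap = len(unlabeled_idx) - len(labeled_idx)
    if gap <= 0:  # labeled subset already large enough
        return labeled_idx
    extra = rng.choice(labeled_idx, size=gap, replace=True)
    return np.concatenate([labeled_idx, extra])

# Example: 200 filtered-clean vs. 800 unlabeled samples (high noise rate).
balanced = oversample_labeled(np.arange(200), np.arange(200, 1000))
print(len(balanced))  # 800
```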

Language: English

Cross-to-merge training with class balance strategy for learning with noisy labels
Qian Zhang, Yi Zhu, Ming Yang

et al.

Expert Systems with Applications, Journal year: 2024, No. 249, pp. 123846 - 123846

Published: March 29, 2024

The collection of large-scale datasets inevitably introduces noisy labels, leading to a substantial degradation in the performance of deep neural networks (DNNs). Although sample selection is the mainstream method in the field of learning with noisy labels, which aims to mitigate the impact of noisy labels during model training, the testing performance of these methods exhibits significant fluctuations across different noise rates and types. In this paper, we propose Cross-to-Merge Training (C2MT), a novel framework that is insensitive to prior information in the sample selection progress, enhancing model robustness. In the practical implementation, using cross-divided training data, two different networks are cross-trained with the co-teaching strategy for several local rounds, and subsequently merged into a unified model by performing federated averaging on the parameters of the two models periodically. Additionally, we introduce a new class balance strategy, named Median Balance Strategy (MBS), during the cross-dividing process, which evenly divides the training data into a labeled subset and an unlabeled subset based on the estimated loss distribution characteristics. Extensive experimental results on both synthetic and real-world datasets demonstrate the effectiveness of C2MT. Code will be available at: https://github.com/LanXiaoPang613/C2MT.
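
The periodic merging step described above (federated-style averaging of two co-trained networks) can be sketched in a few lines of PyTorch. The function name `merge_models` and the equal-weight averaging are illustrative assumptions, since the abstract does not give the exact schedule or weighting.

```python
import torch

@torch.no_grad()
def merge_models(model_a, model_b, alpha=0.5):
    """Merge two co-trained networks (same architecture) by element-wise
    averaging of their parameters, analogous to FedAvg with two clients.

    Illustrative sketch of the merge step, not C2MT's exact procedure.
    """
    state_a, state_b = model_a.state_dict(), model_b.state_dict()
    merged = {}
    for name, tensor_a in state_a.items():
        if tensor_a.is_floating_point():
            merged[name] = alpha * tensor_a + (1.0 - alpha) * state_b[name]
        else:
            # Integer buffers (e.g. BatchNorm step counters) are copied as-is.
            merged[name] = tensor_a.clone()
    # Both networks continue training from the shared average.
    model_a.load_state_dict(merged)
    model_b.load_state_dict(merged)
```

After several local co-teaching rounds, calling `merge_models(net1, net2)` would synchronize both networks to their parameter average before the next round of cross-training.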

Language: English

Cited

19

BPT-PLR: A Balanced Partitioning and Training Framework with Pseudo-Label Relaxed Contrastive Loss for Noisy Label Learning
Qian Zhang, Ge Jin, Yi Zhu

et al.

Entropy, Journal year: 2024, No. 26(7), pp. 589 - 589

Published: July 10, 2024

While collecting training data, even with the manual verification of experts from crowdsourcing platforms, eliminating incorrect annotations (noisy labels) completely is difficult and expensive. In dealing with datasets that contain noisy labels, over-parameterized deep neural networks (DNNs) tend to overfit them, leading to poor generalization and classification performance. As a result, noisy label learning (NLL) has received significant attention in recent years. Existing research shows that although DNNs eventually fit all training data, they first prioritize fitting clean samples, and then gradually overfit noisy samples. Mainstream methods utilize this characteristic to divide training data, but they face two issues: class imbalance in the segmented subsets and the optimization conflict between unsupervised contrastive representation learning and supervised learning. To address these issues, we propose a Balanced Partitioning and Training framework with a Pseudo-Label Relaxed contrastive loss, called BPT-PLR, which includes two crucial processes: a balanced partitioning process with a two-dimensional Gaussian mixture model (BP-GMM) and a semi-supervised oversampling training process with a pseudo-label relaxed contrastive loss (SSO-PLR). The former utilizes both semantic feature information and model prediction results to identify noisy labels, introducing a balancing strategy to maintain balance between the divided subsets as much as possible. The latter adopts the latest pseudo-label relaxed contrastive loss to replace the unsupervised contrastive loss, reducing conflicts between the losses to improve performance. We validate the effectiveness of BPT-PLR on four benchmark datasets in the NLL field: CIFAR-10/100, Animal-10N, and Clothing1M. Extensive experiments comparing with state-of-the-art methods demonstrate that BPT-PLR can achieve optimal or near-optimal performance.
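
A minimal sketch of the two-dimensional GMM partitioning idea (BP-GMM) follows, assuming the two per-sample statistics are a training loss and a semantic-feature score; the actual BP-GMM inputs and its class-balancing step differ from this simplification.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def partition_2d_gmm(losses, feat_scores, threshold=0.5):
    """Fit a two-component GMM on a 2-D per-sample statistic and treat
    the component with the smaller mean loss as the clean set.

    Sketch under assumptions; not the paper's exact BP-GMM.
    """
    X = np.stack([losses, feat_scores], axis=1)
    gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
    clean_comp = int(np.argmin(gmm.means_[:, 0]))  # lower mean loss
    prob_clean = gmm.predict_proba(X)[:, clean_comp]
    return prob_clean > threshold

# Toy demo: clean samples have low loss / high feature score.
rng = np.random.default_rng(0)
losses = np.concatenate([rng.normal(0.2, 0.05, 700), rng.normal(1.5, 0.3, 300)])
scores = np.concatenate([rng.normal(0.9, 0.05, 700), rng.normal(0.4, 0.1, 300)])
mask = partition_2d_gmm(losses, scores)
print(mask[:700].mean(), mask[700:].mean())  # mostly True vs. mostly False
```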

Language: English

Cited

6

NoRD: A framework for noise-resilient self-distillation through relative supervision
Saurabh Sharma, Shikhar Singh Lodhi, V.J. Srivastava

et al.

Applied Intelligence, Journal year: 2025, No. 55(6)

Published: Feb. 15, 2025

Language: English

Cited

0

Wave-based cross-phase representation for weakly supervised classification
Heng Zhou, Ping Zhong

Image and Vision Computing, Journal year: 2025, No. unknown, pp. 105527 - 105527

Published: April 1, 2025

Language: English

Cited

0

Orthogonal and spherical quaternion features for weakly supervised learning with label confidence optimization
Heng Zhou, Ping Zhong

Applied Intelligence, Journal year: 2025, No. 55(7)

Published: April 28, 2025

Language: English

Cited

0

Robust support vector machine based on the bounded asymmetric least squares loss function and its applications in noise corrupted data
Jiaqi Zhang, Hu Yang

Advanced Engineering Informatics, Journal year: 2025, No. 65, pp. 103371 - 103371

Published: April 29, 2025

Language: English

Cited

0

Suppressing label noise in medical image classification using mixup attention and self-supervised learning
Mengdi Gao, Hongyang Jiang, Yan Hu

et al.

Physics in Medicine and Biology, Journal year: 2024, No. 69(10), pp. 105026 - 105026

Published: April 18, 2024

Deep neural networks (DNNs) have been widely applied in medical image classification and achieve remarkable performance. These achievements heavily depend on large-scale, accurately annotated training data. However, label noise is inevitably introduced during annotation, as the labeling process relies on the expertise and experience of annotators. Meanwhile, DNNs suffer from overfitting noisy labels, degrading the performance of models. Therefore, in this work, we innovatively devise a noise-robust training approach to mitigate the adverse effects of noisy labels in medical image classification. Specifically, we incorporate contrastive learning and intra-group mixup attention strategies into vanilla supervised learning. The contrastive learning for the feature extractor helps enhance the visual representation of DNNs. The intra-group mixup attention module constructs groups and assigns self-attention weights to group-wise samples, subsequently interpolating massive noise-suppressed samples through a weighted mixup operation. We conduct comparative experiments on both synthetic and real-world noisy datasets under various noise levels. Rigorous experiments validate that our method with mixup attention can effectively handle label noise, and is superior to state-of-the-art methods. An ablation study also shows that both components contribute to boosting model performance. The proposed method demonstrates its capability of curbing label noise and has certain potential toward clinical applications.
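
A toy sketch of the intra-group weighted-mixup idea, assuming fixed rather than learned self-attention weights: a convex combination of a group's images and soft labels dampens any single noisy label. The function and shapes below are illustrative assumptions, not the paper's actual module.

```python
import torch
import torch.nn.functional as F

def intra_group_mixup(x, y_soft, weights=None):
    """Interpolate one group of samples into a single noise-suppressed
    sample via convex (attention-like) weights.

    x: (G, C, H, W) images in a group; y_soft: (G, K) soft labels.
    Illustrative sketch; the paper's attention module learns the weights.
    """
    if weights is None:
        weights = torch.ones(x.size(0))
    w = F.softmax(weights, dim=0)                # convex mixing weights
    x_mix = torch.einsum('g,gchw->chw', w, x)    # weighted image mixup
    y_mix = torch.einsum('g,gk->k', w, y_soft)   # weighted label mixup
    return x_mix, y_mix

# Demo: three agreeing labels plus one outlier label get averaged down.
x = torch.randn(4, 3, 32, 32)
y = F.one_hot(torch.tensor([1, 1, 1, 7]), 10).float()
xm, ym = intra_group_mixup(x, y)
print(xm.shape, ym)  # the outlier class 7 carries only 1/4 of the mass
```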

Language: English

Cited

3

A Two-Stage Noisy Label Learning Framework with Uniform Consistency Selection and Robust Training
Qian Zhang, Chen Qiu

Published: Jan. 1, 2024

Deep neural networks suffer from overfitting when training samples contain inaccurate annotations (noisy labels), leading to suboptimal performance. In addressing this challenge, current methods for learning with noisy labels employ specific criteria, such as small loss, historical prediction, etc., to distinguish clean and noisy instances. Subsequently, semi-supervised techniques are introduced to boost performance. Most of them are one-stage frameworks that aim to achieve optimal sample partitioning and robust SSL within a single iteration, thereby increasing the difficulty and complexity. To address this limitation, we propose a novel two-stage noisy label learning framework called UCRT, which consists of uniform consistency selection and robust training. In the first stage, the emphasis lies on creating a more accurate clean set, while the second stage uniformly extends the clean set and improves model performance by introducing semi-supervised techniques. Comprehensive experiments conducted on both synthetic and real-world datasets demonstrate the stability of UCRT across various noise types, showcasing superior performance compared to state-of-the-art methods. The code will be available at: https://github.com/LanXiaoPang613/UCRT.
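
A hypothetical sketch of a stage-one selection rule in the spirit of this abstract, combining historical prediction-label agreement with a small-loss criterion; the names and thresholds are assumptions, as UCRT's uniform consistency selection is not specified here.

```python
import numpy as np

def stage1_clean_set(losses, pred_history, labels, agree_thresh=0.9):
    """Stage-1 sketch: a sample enters the clean set only if its
    historical predictions agree with its given label in at least
    `agree_thresh` of the recorded epochs AND its current loss is
    below the median (a simple small-loss criterion).

    pred_history: (T, N) predicted classes over T epochs;
    labels: (N,) given, possibly noisy, labels. Hypothetical rule,
    not UCRT's actual uniform consistency criterion.
    """
    agreement = (pred_history == labels[None, :]).mean(axis=0)
    small_loss = losses < np.median(losses)
    return (agreement >= agree_thresh) & small_loss
```

Stage two would then extend this conservative clean set and train with standard semi-supervised techniques on the remainder.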

Language: English

Cited

1

Unsupervised domain adaptation with weak source domain labels via bidirectional subdomain alignment
Heng Zhou, Ping Zhong, Daoliang Li

et al.

Neural Networks, Journal year: 2024, No. 178, pp. 106418 - 106418

Published: May 31, 2024

Language: English

Cited

1

TBC-MI: Suppressing noise labels by maximizing cleaning samples for robust image classification
Yanhong Li, Zhiqing Guo, Liejun Wang

et al.

Information Processing & Management, Journal year: 2024, No. 61(5), pp. 103801 - 103801

Published: June 12, 2024

Language: English

Cited

1