Rethinking Maximum Mean Discrepancy for Visual Domain Adaptation
Wei Wang, Haojie Li, Zhengming Ding et al.

IEEE Transactions on Neural Networks and Learning Systems, Journal Year: 2021, Volume and Issue: 34(1), P. 264 - 277

Published: July 9, 2021

Existing domain adaptation approaches often try to reduce the distribution difference between the source and target domains while respecting domain-specific discriminative structures, using distribution distances [e.g., maximum mean discrepancy (MMD)] and discriminative distances (e.g., intra-class and inter-class distances). However, they usually consider these losses together and trade off their relative importance by estimating the weighting parameters empirically. The relationships of these losses to each other are still insufficiently explored, so we cannot manipulate them correctly and the model's performance degrades. To this end, this article theoretically proves two essential facts: 1) minimizing MMD is equivalent to jointly minimizing the data variances with implicit weights, whereby feature discriminability is degraded, and 2) the intra-class and inter-class distances hold a relationship in which, as one falls, the other rises. Based on this, we propose novel parallel strategies to restrain the degradation of feature discriminability and the expansion of the intra-class distance: specifically, we directly impose a tradeoff parameter on the implicit discriminative distance within MMD, and we reformulate MMD into a special form analogical to the explicit discriminative distances, which likewise avoids the falling-and-rising coupling in fact 2). The experiments on several benchmark datasets not only verify the validity of our theoretical results but also demonstrate that the proposed approach substantially outperforms compared state-of-the-art methods. Our preliminary MATLAB code will be available at https://github.com/WWLoveTransfer/.
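For reference, with a linear kernel the (biased) squared MMD the abstract discusses reduces to the squared distance between the source and target feature means. A minimal NumPy sketch under that assumption (function and variable names are ours, not from the paper's MATLAB code):

```python
import numpy as np

def linear_mmd2(Xs, Xt):
    """Biased squared MMD with a linear kernel: the squared distance
    between the source and target feature means."""
    diff = Xs.mean(axis=0) - Xt.mean(axis=0)
    return float(diff @ diff)

rng = np.random.default_rng(0)
Xs = rng.normal(0.0, 1.0, size=(200, 8))  # source features
Xt = rng.normal(0.5, 1.0, size=(200, 8))  # mean-shifted target features
shift_score = linear_mmd2(Xs, Xt)         # larger when the domains differ
```

Minimizing such a term pulls the domain means together, which is exactly the kind of implicit variance/discriminability effect the paper analyzes.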

Language: English

A Survey of Unsupervised Deep Domain Adaptation
Garrett Wilson, Diane J. Cook

ACM Transactions on Intelligent Systems and Technology, Journal Year: 2020, Volume and Issue: 11(5), P. 1 - 46

Published: July 5, 2020

Deep learning has produced state-of-the-art results for a variety of tasks. While such approaches for supervised learning have performed well, they assume that training and testing data are drawn from the same distribution, which may not always be the case. As a complement to this challenge, single-source unsupervised domain adaptation can handle situations where a network is trained on labeled data from a source domain and unlabeled data from a related but different target domain, with the goal of performing well at test time on the target domain. Many single-source and typically homogeneous unsupervised deep domain adaptation approaches have thus been developed, combining the powerful, hierarchical representations of deep learning with domain adaptation to reduce reliance on potentially costly target data labels. This survey will compare these approaches by examining alternative methods, the unique and common elements, results, and theoretical insights. We follow this with a look at application areas and open research directions.

Language: English

Citations

698

Harmonizing Transferability and Discriminability for Adapting Object Detectors
Chaoqi Chen, Zebiao Zheng, Xinghao Ding et al.

2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Journal Year: 2020, Volume and Issue: unknown, P. 8866 - 8875

Published: June 1, 2020

Recent advances in adaptive object detection have achieved compelling results by virtue of adversarial feature adaptation to mitigate the distributional shifts along the detection pipeline. Whilst adversarial adaptation significantly enhances the transferability of feature representations, the discriminability of object detectors remains less investigated. Moreover, transferability and discriminability may come at a contradiction in adversarial adaptation given the complex combinations of objects and the differentiated scene layouts between domains. In this paper, we propose a Hierarchical Transferability Calibration Network (HTCN) that hierarchically (local-region/image/instance) calibrates the transferability of feature representations for harmonizing transferability and discriminability. The proposed model consists of three components: (1) Importance Weighted Adversarial Training with input Interpolation (IWAT-I), which strengthens global discriminability by re-weighting the interpolated image-level features; (2) a Context-aware Instance-Level Alignment (CILA) module, which enhances local discriminability by capturing the underlying complementary effect between the instance-level feature and the global context information for instance-level feature alignment; and (3) local feature masks that calibrate local transferability to provide semantic guidance for the following discriminative pattern alignment. Experimental results show that HTCN outperforms state-of-the-art methods on benchmark datasets.

Language: English

Citations

276

Mining Cross-Image Semantics for Weakly Supervised Semantic Segmentation
Guolei Sun, Wenguan Wang, Jifeng Dai et al.

Lecture notes in computer science, Journal Year: 2020, Volume and Issue: unknown, P. 347 - 365

Published: Jan. 1, 2020

Language: English

Citations

274

Unsupervised Domain Adaptation via Structurally Regularized Deep Clustering
Hui Tang, Ke Chen, Kui Jia et al.

2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Journal Year: 2020, Volume and Issue: unknown, P. 8722 - 8732

Published: June 1, 2020

Unsupervised domain adaptation (UDA) is to make predictions for unlabeled data on a target domain, given labeled data on a source domain whose distribution shifts from the target one. Mainstream UDA methods learn aligned features between the two domains, such that a classifier trained on the source features can be readily applied to the target ones. However, such a transferring strategy has a potential risk of damaging the intrinsic discrimination of target data. To alleviate this risk, we are motivated by the assumption of structural domain similarity, and propose to directly uncover the intrinsic target discrimination via discriminative clustering of target data. We constrain the clustering solutions using structural source regularization that hinges on our assumed structural domain similarity. Technically, we use a flexible framework of deep-network-based discriminative clustering that minimizes the KL divergence between the predictive label distribution of the network and an introduced auxiliary one; replacing the auxiliary distribution with that formed by ground-truth labels of source data implements the structural source regularization via simple joint network training. We term the proposed method Structurally Regularized Deep Clustering (SRDC), where we also enhance target discrimination with clustering of intermediate network features, and enhance structural regularization with soft selection of less divergent source examples. Careful ablation studies show the efficacy of SRDC. Notably, with no explicit domain alignment, SRDC outperforms all existing methods on three benchmarks.
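The KL objective between a predictive label distribution and an auxiliary one can be sketched as follows. The squared-and-normalized auxiliary distribution here is the common DEC-style choice and is our assumption for illustration, not necessarily SRDC's exact construction; all names are ours:

```python
import numpy as np

def sharpen(P):
    """Auxiliary target distribution: square the soft assignments and
    renormalize rows, emphasizing confident cluster assignments."""
    Q = P ** 2 / P.sum(axis=0, keepdims=True)
    return Q / Q.sum(axis=1, keepdims=True)

def kl_clustering_loss(P, Q):
    """Mean KL(Q || P) between the auxiliary distribution Q and the
    network's predictive distribution P (each row sums to 1)."""
    eps = 1e-12
    return float(np.sum(Q * (np.log(Q + eps) - np.log(P + eps))) / len(P))

P = np.array([[0.9, 0.1], [0.6, 0.4], [0.2, 0.8]])  # soft predictions
Q = sharpen(P)                                       # auxiliary targets
loss = kl_clustering_loss(P, Q)
```

Minimizing this loss pushes the network's predictions toward their own sharpened version, which is the self-reinforcing clustering behavior the abstract describes; the source regularization then anchors that clustering with ground-truth labels.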

Language: English

Citations

263

Minimum Class Confusion for Versatile Domain Adaptation
Ying Jin, Ximei Wang, Mingsheng Long et al.

Lecture notes in computer science, Journal Year: 2020, Volume and Issue: unknown, P. 464 - 480

Published: Jan. 1, 2020

Language: English

Citations

261

Unsupervised Domain Adaptation via Structured Prediction Based Selective Pseudo-Labeling
Qian Wang, Toby P. Breckon

Proceedings of the AAAI Conference on Artificial Intelligence, Journal Year: 2020, Volume and Issue: 34(04), P. 6243 - 6250

Published: April 3, 2020

Unsupervised domain adaptation aims to address the problem of classifying unlabeled samples from the target domain whilst labeled samples are only available from the source domain, and the data distributions differ in these two domains. As a result, classifiers trained on source samples suffer a significant performance drop when directly applied to the target domain. To address this issue, different approaches have been proposed to learn domain-invariant features or domain-specific classifiers. In either case, the lack of labeled target samples can be an issue, which is usually overcome by pseudo-labeling. Inaccurate pseudo-labeling, however, could result in catastrophic error accumulation during learning. In this paper, we propose a novel selective pseudo-labeling strategy based on structured prediction. The idea of structured prediction is inspired by the fact that target samples are well clustered within the deep feature space, so unsupervised clustering analysis can be used to facilitate accurate pseudo-labeling. Experimental results on four datasets (i.e., Office-Caltech, Office31, ImageCLEF-DA, and Office-Home) validate that our approach outperforms contemporary state-of-the-art methods.
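The selective pseudo-labeling idea can be sketched as nearest-center assignment in feature space plus confidence-based selection. This is a simplified illustration under our own assumptions (nearest source class means, distance as confidence), not the paper's structured-prediction formulation:

```python
import numpy as np

def cluster_pseudo_labels(Xt, class_centers, keep_ratio=0.5):
    """Label each target sample by its nearest class center in feature
    space, then keep only the most confident fraction (smallest
    distances) to limit pseudo-label error accumulation."""
    d = np.linalg.norm(Xt[:, None, :] - class_centers[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    confidence = -d.min(axis=1)
    keep = np.argsort(confidence)[::-1][: int(len(Xt) * keep_ratio)]
    return labels, keep

centers = np.array([[0.0, 0.0], [5.0, 5.0]])  # e.g., source class means
Xt = np.vstack([np.random.default_rng(1).normal(0.0, 0.3, (10, 2)),
                np.random.default_rng(2).normal(5.0, 0.3, (10, 2))])
labels, keep = cluster_pseudo_labels(Xt, centers)
```

Training then proceeds iteratively: only the selected subset is treated as labeled, features are refined, and the labeling and selection are repeated with a growing `keep_ratio`.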

Language: English

Citations

195

Drop to Adapt: Learning Discriminative Features for Unsupervised Domain Adaptation
Seungmin Lee, Dong-Wan Kim, Namil Kim et al.

2019 IEEE/CVF International Conference on Computer Vision (ICCV), Journal Year: 2019, Volume and Issue: unknown

Published: Oct. 1, 2019

Recent works on domain adaptation exploit adversarial training to obtain domain-invariant feature representations from the joint learning of the feature extractor and the domain discriminator networks. However, such methods render suboptimal performances since they attempt to match the distributions among the domains without considering the task at hand. We propose Drop to Adapt (DTA), which leverages adversarial dropout to learn strongly discriminative features by enforcing the cluster assumption. Accordingly, we design objective functions to support robust domain adaptation. We demonstrate the efficacy of the proposed method in various experiments and achieve consistent improvements in both image classification and semantic segmentation tasks. Our source code is available at https://github.com/postBG/DTA.pytorch.

Language: English

Citations

181

HoMM: Higher-Order Moment Matching for Unsupervised Domain Adaptation
Chao Chen, Zhihang Fu, Zhihong Chen et al.

Proceedings of the AAAI Conference on Artificial Intelligence, Journal Year: 2020, Volume and Issue: 34(04), P. 3422 - 3429

Published: April 3, 2020

Minimizing the discrepancy of feature distributions between different domains is one of the most promising directions in unsupervised domain adaptation. From the perspective of moment matching, most existing discrepancy-based methods are designed to match the second-order or lower moments, which, however, have limited expression of statistical characteristics for non-Gaussian distributions. In this work, we propose a Higher-order Moment Matching (HoMM) method, and further extend HoMM into reproducing kernel Hilbert spaces (RKHS). In particular, the proposed HoMM can perform arbitrary-order moment matching; we show that the first-order HoMM is equivalent to Maximum Mean Discrepancy (MMD) and the second-order HoMM is equivalent to Correlation Alignment (CORAL). Moreover, HoMM (order ≥ 3) is expected to perform fine-grained domain alignment, as higher-order statistics can approximate more complex, non-Gaussian distributions. Besides, we also exploit pseudo-labeled target samples to learn discriminative representations in the target domain, which further improves the transfer performance. Extensive experiments are conducted, showing that HoMM consistently outperforms existing moment-matching methods by a large margin. Codes are available at https://github.com/chenchao666/HoMM-Master
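A toy NumPy sketch of order-p moment matching on raw features, using full tensor powers (without the RKHS extension or the sampling tricks needed at scale; the function names are ours). With `order=1` this reduces to matching domain means, i.e., linear-kernel MMD, as the abstract states:

```python
import numpy as np

def tensor_power(x, p):
    """x ⊗ x ⊗ ... ⊗ x with p factors (order-p tensor power of one sample)."""
    out = x
    for _ in range(p - 1):
        out = np.multiply.outer(out, x)
    return out

def homm_loss(Xs, Xt, order=3):
    """Squared distance between the mean order-p tensor powers of the
    source and target features; order 1 matches domain means."""
    ms = np.mean([tensor_power(x, order) for x in Xs], axis=0)
    mt = np.mean([tensor_power(x, order) for x in Xt], axis=0)
    return float(np.sum((ms - mt) ** 2))

rng = np.random.default_rng(0)
Xs = rng.normal(size=(100, 4))        # source features
Xt = rng.normal(size=(100, 4)) + 0.5  # shifted target features
third_order = homm_loss(Xs, Xt, order=3)
```

The full order-p tensor has d^p entries, which is why the paper resorts to RKHS and sampling strategies rather than materializing it for high-dimensional features.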

Language: English

Citations

163

Discriminative Adversarial Domain Adaptation
Hui Tang, Kui Jia

Proceedings of the AAAI Conference on Artificial Intelligence, Journal Year: 2020, Volume and Issue: 34(04), P. 5940 - 5947

Published: April 3, 2020

Given labeled instances on a source domain and unlabeled ones on a target domain, unsupervised domain adaptation aims to learn a task classifier that can well classify target instances. Recent advances rely on domain-adversarial training of deep networks to learn domain-invariant features. However, due to an issue of mode collapse induced by the separate design of task and domain classifiers, these methods are limited in aligning the joint distributions of feature and category across domains. To overcome it, we propose a novel adversarial learning method termed Discriminative Adversarial Domain Adaptation (DADA). Based on an integrated category and domain classifier, DADA has a novel adversarial objective that encourages a mutually inhibitory relation between category and domain predictions for any input instance. We show that under practical conditions, it defines a minimax game that can promote joint distribution alignment. Beyond traditional closed-set domain adaptation, we also extend DADA to the extremely challenging problem settings of partial and open-set domain adaptation. Experiments show the efficacy of the proposed method, and we achieve the new state of the art on all three benchmark datasets.

Language: English

Citations

163

Temporal Attentive Alignment for Large-Scale Video Domain Adaptation
Min-Hung Chen, Zsolt Kira, Ghassan AlRegib et al.

2019 IEEE/CVF International Conference on Computer Vision (ICCV), Journal Year: 2019, Volume and Issue: unknown

Published: Oct. 1, 2019

Although various image-based domain adaptation (DA) techniques have been proposed in recent years, domain shift in videos is still not well explored. Most previous works only evaluate performance on small-scale datasets, which are saturated. Therefore, we first propose two large-scale video DA datasets with much larger domain discrepancy: UCF-HMDB_full and Kinetics-Gameplay. Second, we investigate different DA integration methods for videos, and show that simultaneously aligning and learning temporal dynamics achieves effective alignment even without sophisticated DA methods. Finally, we propose the Temporal Attentive Adversarial Adaptation Network (TA3N), which explicitly attends to the temporal dynamics using domain discrepancy for more effective domain alignment, achieving state-of-the-art performance on four video DA datasets (e.g., a 7.9% accuracy gain over “Source only” from 73.9% to 81.8% on “HMDB → UCF”, and a 10.3% gain on “Kinetics → Gameplay”). The code and data are released at http://github.com/cmhungsteve/TA3N.

Language: English

Citations

158