Editorial: Transformer Models for Multi-source Visual Fusion and Understanding
Xin Ning, Xiao Bai, Prayag Tiwari, et al.

Information Fusion, Journal Year: 2025, Volume and Issue: unknown, P. 103112 - 103112

Published: March 1, 2025

Language: English

Reducing Cross-Sensor Domain Gaps in Tactile Sensing via Few-Sample-Driven Style-to-Content Unsupervised Domain Adaptation
Xingshuo Jing, Kun Qian

Sensors, Journal Year: 2025, Volume and Issue: 25(1), P. 256 - 256

Published: Jan. 5, 2025

Transferring knowledge learned from standard GelSight sensors to other visuotactile sensors is appealing for reducing data collection and annotation effort. However, such cross-sensor transfer is challenging due to differences in internal light sources, imaging effects, and elastomer properties. By treating the data collected from each type of sensor as a distinct domain, we propose a few-sample-driven style-to-content unsupervised domain adaptation method to reduce these gaps. We first propose a Global and Local Aggregation Bottleneck (GLAB) layer to compress features extracted by an encoder, enabling the extraction of compact features containing key information and facilitating learning from unlabeled data. We then introduce a Fourier-style transformation (FST) module and a prototype-constrained learning loss to promote global conditional domain-adversarial adaptation, bridging style-level gaps. We also develop a high-confidence-guided teacher–student network that utilizes a self-distillation mechanism to further reduce content-level gaps between the two domains. Experiments on three real-world robotic shape recognition tasks demonstrate that our method outperforms state-of-the-art approaches, notably achieving 89.8% accuracy on the DIGIT dataset.
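The paper does not detail the FST module here, but Fourier-based style transfer of this kind (in the spirit of Fourier Domain Adaptation) typically swaps the low-frequency amplitude spectrum of one domain's image into another's while preserving phase, so that sensor-specific appearance changes but scene content survives. A minimal NumPy sketch under that assumption (the function name and the `beta` parameter are illustrative, not taken from the paper):

```python
import numpy as np

def fourier_style_transfer(content_img, style_img, beta=0.05):
    """Replace the low-frequency amplitude of content_img with that of
    style_img, keeping the content image's phase.  beta sets the half-width
    of the swapped low-frequency square as a fraction of the image size.
    Images are H x W or H x W x C float arrays of equal shape."""
    # 2-D FFT per channel; shift so low frequencies sit at the center
    fft_c = np.fft.fftshift(np.fft.fft2(content_img, axes=(0, 1)), axes=(0, 1))
    fft_s = np.fft.fftshift(np.fft.fft2(style_img, axes=(0, 1)), axes=(0, 1))

    amp_c, pha_c = np.abs(fft_c), np.angle(fft_c)
    amp_s = np.abs(fft_s)

    h, w = content_img.shape[:2]
    b = int(min(h, w) * beta)          # half-width of the swapped square
    ch, cw = h // 2, w // 2
    amp_c[ch - b:ch + b, cw - b:cw + b] = amp_s[ch - b:ch + b, cw - b:cw + b]

    # Recombine the swapped amplitude with the original phase and invert
    fft_mix = amp_c * np.exp(1j * pha_c)
    out = np.fft.ifft2(np.fft.ifftshift(fft_mix, axes=(0, 1)), axes=(0, 1))
    return np.real(out)
```

With `beta=0` the function is an identity up to FFT round-off, so the style strength can be annealed; the actual FST module in the paper may differ in how the swapped region and loss are defined.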

Language: English

Citations

0
