Editorial: Transformer Models for Multi-source Visual Fusion and Understanding
Xin Ning, Xiao Bai, Prayag Tiwari

et al.

Information Fusion, Journal Year: 2025, Issue: unknown, P. 103112

Published: March 1, 2025

Language: English

Reducing Cross-Sensor Domain Gaps in Tactile Sensing via Few-Sample-Driven Style-to-Content Unsupervised Domain Adaptation
Xingshuo Jing, Kun Qian

Sensors, Journal Year: 2025, Issue: 25(1), P. 256

Published: Jan. 5, 2025

Transferring knowledge learned from standard GelSight sensors to other visuotactile sensors is appealing for reducing data collection and annotation effort. However, such cross-sensor transfer is challenging due to differences between sensors in internal light sources, imaging effects, and elastomer properties. By treating the data collected with each type of sensor as separate domains, we propose a few-sample-driven style-to-content unsupervised domain adaptation method to reduce cross-sensor domain gaps. We first design a Global and Local Aggregation Bottleneck (GLAB) layer to compress the features extracted by an encoder, enabling the extraction of features containing key information and facilitating learning from unlabeled data. We then introduce a Fourier-style transformation (FST) module and a prototype-constrained learning loss to promote global conditional domain-adversarial adaptation, bridging style-level gaps. We also propose a high-confidence-guided teacher–student network that utilizes a self-distillation mechanism to further reduce content-level gaps between the two domains. Experiments on three real-world robotic shape recognition tasks demonstrate that our method outperforms state-of-the-art approaches, in particular achieving 89.8% accuracy on the DIGIT dataset.
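The FST module described above swaps sensor-specific "style" statistics in the frequency domain while preserving image content. Below is a minimal sketch of that general idea, assuming an FDA-like low-frequency amplitude swap (as in Fourier Domain Adaptation); the function name fourier_style_transfer and the band-size parameter beta are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def fourier_style_transfer(source: np.ndarray, target: np.ndarray,
                           beta: float = 0.05) -> np.ndarray:
    """Transfer the low-frequency amplitude ("style") of `target` onto `source`.

    Both inputs are float images of shape (H, W, C). `beta` sets the size of
    the swapped low-frequency band. This is an illustrative FDA-style sketch;
    the paper's FST module may differ in detail.
    """
    # Per-channel 2D FFT; shift so low frequencies sit at the spectrum center.
    src_fft = np.fft.fftshift(np.fft.fft2(source, axes=(0, 1)), axes=(0, 1))
    tgt_fft = np.fft.fftshift(np.fft.fft2(target, axes=(0, 1)), axes=(0, 1))

    src_amp, src_phase = np.abs(src_fft), np.angle(src_fft)
    tgt_amp = np.abs(tgt_fft)

    # Replace a small centered low-frequency block of the source amplitude
    # with the target's: the amplitude carries sensor-specific appearance
    # (illumination, color response), while the phase keeps the geometry.
    h, w = source.shape[:2]
    bh, bw = max(1, int(beta * h)), max(1, int(beta * w))
    ch, cw = h // 2, w // 2
    src_amp[ch - bh:ch + bh, cw - bw:cw + bw] = \
        tgt_amp[ch - bh:ch + bh, cw - bw:cw + bw]

    # Recombine amplitude and phase, invert the FFT, keep the real part.
    mixed = src_amp * np.exp(1j * src_phase)
    out = np.fft.ifft2(np.fft.ifftshift(mixed, axes=(0, 1)), axes=(0, 1))
    return np.real(out)
```

Applied to tactile images, such a transform lets source-sensor images adopt target-sensor appearance before adaptation training, so the downstream network can focus on content-level (shape) cues rather than sensor-specific style.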

Language: English

Cited by

0
