SANet: Selective Aggregation Network for unsupervised object re-identification DOI
Minghui Lin, Jianhua Tang, Longbin Fu, et al.
Computer Vision and Image Understanding, Year: 2024, Vol. 250, pp. 104232-104232
Published: Nov. 15, 2024
Language: English
Cited by: 0

MoSCE-ReID: Mixture of semantic clustering experts for person re-identification DOI
Kai Ren, Chuanping Hu, Hao Xi, et al.
Neurocomputing, Year: 2025, Vol. unknown, pp. 129587-129587
Published: Feb. 1, 2025
Language: English
Cited by: 1

MambaReID: Exploiting Vision Mamba for Multi-Modal Object Re-Identification DOI Creative Commons
Ruijuan Zhang, Lizhong Xu, Yang Song, et al.
Sensors, Year: 2024, Vol. 24(14), pp. 4639-4639
Published: July 17, 2024

Multi-modal object re-identification (ReID) is a challenging task that seeks to identify objects across different image modalities by leveraging their complementary information. Traditional CNN-based methods are constrained by limited receptive fields, whereas Transformer-based approaches are hindered by high computational demands and a lack of convolutional inductive biases. To overcome these limitations, we propose a novel fusion framework named MambaReID, integrating the strengths of both architectures with the efficient VMamba. Specifically, our MambaReID consists of three components: Three-Stage VMamba (TSV), Dense Mamba (DM), and Consistent VMamba Fusion (CVF). TSV efficiently captures global context information and local details with low complexity. DM enhances feature discriminability by fully integrating inter-modality shallow and deep features through dense connections. Additionally, with well-aligned multi-modal images, CVF provides more granular modal aggregation, thereby improving robustness. The framework, with its innovative components, not only achieves superior performance in multi-modal ReID tasks but also does so with fewer parameters and lower computational costs. The effectiveness of our proposed MambaReID is validated by extensive experiments conducted on multi-modal object ReID benchmarks.

Language: English
Cited by: 3
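The MambaReID abstract above describes a three-part design: a three-stage backbone per modality, dense mixing of shallow and deep features, and a consistent fusion step across aligned modalities. Below is a minimal PyTorch sketch of how such a pipeline could be wired together, with a plain convolutional stub standing in for the VMamba blocks; all module and attribute names (StageStub, MambaReIDSketch, dense_mix, fuse) are illustrative placeholders, not the authors' implementation.

```python
# Structural sketch only: a convolutional stub replaces the VMamba stages.
import torch
import torch.nn as nn


class StageStub(nn.Module):
    """One backbone stage (stand-in for a VMamba stage)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, stride=2, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.GELU(),
        )

    def forward(self, x):
        return self.block(x)


class MambaReIDSketch(nn.Module):
    """Per-modality three-stage backbone, dense shallow/deep mixing, and fusion."""
    def __init__(self, modalities=("rgb", "nir", "tir"), dims=(32, 64, 128), num_ids=100):
        super().__init__()
        self.backbones = nn.ModuleDict({
            m: nn.ModuleList([StageStub(3 if i == 0 else dims[i - 1], dims[i])
                              for i in range(3)])
            for m in modalities
        })
        # Dense-connection analogue: mix pooled shallow and deep features per modality.
        self.dense_mix = nn.Linear(sum(dims), dims[-1])
        # Fusion analogue: aggregate the aligned per-modality embeddings.
        self.fuse = nn.Linear(len(modalities) * dims[-1], dims[-1])
        self.classifier = nn.Linear(dims[-1], num_ids)

    def forward(self, images):  # images: dict {modality: (B, 3, H, W) tensor}
        per_modality = []
        for m, stages in self.backbones.items():
            x, pooled = images[m], []
            for stage in stages:
                x = stage(x)
                pooled.append(x.mean(dim=(2, 3)))  # global average pool each stage
            per_modality.append(self.dense_mix(torch.cat(pooled, dim=1)))
        fused = self.fuse(torch.cat(per_modality, dim=1))
        return fused, self.classifier(fused)


if __name__ == "__main__":
    model = MambaReIDSketch()
    batch = {m: torch.randn(2, 3, 128, 64) for m in ("rgb", "nir", "tir")}
    embedding, logits = model(batch)
    print(embedding.shape, logits.shape)  # (2, 128) and (2, 100)
```

The structural point the sketch tries to capture is that every stage's pooled output feeds the dense mix, and the single fused embedding serves both retrieval and identity classification.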

Towards Effective Rotation Generalization in UAV Object Re-Identification DOI
Shuoyi Chen, Mang Ye, Yan Huang, et al.
IEEE Transactions on Information Forensics and Security, Year: 2025, Vol. 20, pp. 2593-2606
Published: Jan. 1, 2025
Language: English
Cited by: 0

Inter-block ladder-style transformer model with multi-subspace feature adjustment for object re-identification DOI
Zhi Yu, Zhiyong Huang, Mingyang Hou, et al.
Applied Soft Computing, Year: 2025, Vol. unknown, pp. 112961-112961
Published: March 1, 2025
Language: English
Cited by: 0

Zebrafish identification with deep CNN and ViT architectures using a rolling training window DOI Creative Commons
Jason Puchalla, Aaron Serianni, Bo Deng, et al.
Scientific Reports, Year: 2025, Vol. 15(1)
Published: March 12, 2025

Zebrafish are widely used in vertebrate studies, yet minimally invasive individual tracking and identification in the lab setting remain challenging due to complex and time-variable conditions. Advancements in machine learning, particularly neural networks, offer new possibilities for developing simple and robust protocols that adapt to changing conditions. We demonstrate a rolling window training technique suitable for use with open-source convolutional neural networks (CNN) and vision transformers (ViT) that shows promise for robustly identifying maturing zebrafish in groups over several weeks. The technique provides a high-fidelity method for monitoring temporally evolving classes, potentially significantly reducing the need for new training images in both CNN and ViT architectures. To understand the success of the classifier and to inform future real-time identification of zebrafish, we analyzed the impact of shape, pattern, and color by modifying the training set and compared the test results with other prevalent machine learning models.

Language: English
Cited by: 0
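The rolling-window idea in the abstract above, retraining only on the most recent imaging sessions so the classifier keeps up with gradual appearance change, can be illustrated with a small synthetic example. The sketch below uses NumPy features with artificial drift and scikit-learn's LogisticRegression as a stand-in for the CNN/ViT classifiers; the session generator and all parameter values (window size, drift rate, feature dimension) are assumptions for illustration only.

```python
# Synthetic demonstration of a rolling training window for identification.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
num_fish, feat_dim, window = 5, 64, 3

# Fixed identity centers plus a per-fish drift direction emulate slow
# appearance change as the fish mature (purely synthetic stand-in data).
base_centers = rng.standard_normal((num_fish, feat_dim))
drift = rng.standard_normal((num_fish, feat_dim))

def make_session(t, samples_per_fish=20):
    """Synthetic imaging session t: each fish's features drift a little further."""
    X, y = [], []
    for fish in range(num_fish):
        center = base_centers[fish] + 0.15 * t * drift[fish]
        X.append(center + 0.5 * rng.standard_normal((samples_per_fish, feat_dim)))
        y.extend([fish] * samples_per_fish)
    return np.vstack(X), np.array(y)

sessions = [make_session(t) for t in range(10)]

for t in range(window, len(sessions)):
    # Rolling window: retrain only on the most recent `window` sessions.
    recent = sessions[t - window:t]
    X_train = np.vstack([X for X, _ in recent])
    y_train = np.concatenate([y for _, y in recent])
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    # Evaluate on the newest, previously unseen session.
    X_test, y_test = sessions[t]
    print(f"session {t}: accuracy on new session = {clf.score(X_test, y_test):.2f}")
```

Because only recent sessions enter each retraining round, the classifier follows the drifting feature distribution without ever needing the full image history, which is the practical appeal of the rolling window.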

Feature-Tuning Hierarchical Transformer via token communication and sample aggregation constraint for object re-identification DOI
Zhi Yu, Zhiyong Huang, Mingyang Hou, et al.
Neural Networks, Year: 2025, Vol. unknown, pp. 107394-107394
Published: March 1, 2025
Language: English
Cited by: 0

Nystromformer based cross-modality transformer for visible-infrared person re-identification DOI Creative Commons
Ranjit Kumar Mishra, Arijit Mondal, Jimson Mathew, et al.
Scientific Reports, Year: 2025, Vol. 15(1)
Published: May 9, 2025
Language: English
Cited by: 0

Vehicle Re-Identification Method Based on Multi-Task Learning in Foggy Scenarios DOI Creative Commons
Wenchao Gao, Yifan Chen, Chuanrui Cui, et al.
Mathematics, Year: 2024, Vol. 12(14), pp. 2247-2247
Published: July 19, 2024

Vehicle re-identification employs computer vision to determine the presence of specific vehicles in images or video sequences, often using vehicle appearance for identification due to the challenge of capturing complete license plate information. Addressing performance issues caused by fog, such as image blur and the loss of key positional information, this paper introduces a multi-task learning framework incorporating a multi-scale fusion defogging method (MsF). This method effectively mitigates fog to produce clearer images, which are then processed by the re-identification branch. Additionally, a phase attention mechanism is introduced to adaptively preserve crucial image details. Utilizing advanced artificial intelligence techniques and deep learning algorithms, the framework is evaluated on both synthetic and real datasets, showing significant improvements in mean average precision (mAP): an increase of 2.5% to 87.8% on the synthetic dataset and of 1.4% to 84.1% on the real dataset. These enhancements demonstrate the method's superior performance over the semi-supervised joint defogging learning (SJDL) model, particularly under challenging foggy conditions, thus enhancing re-identification accuracy and deepening the understanding of applying multi-task learning frameworks in adverse visual environments.

Language: English
Cited by: 1
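The abstract above describes a multi-task arrangement: a defogging branch restores the foggy image and a re-identification branch operates on the restored output, with both trained jointly. The following PyTorch sketch shows one plausible wiring of that idea under simple assumptions; DefogBranch, ReIDBranch, the toy layer sizes, and the L1-plus-cross-entropy joint loss are illustrative choices, not the paper's MsF module or phase attention mechanism.

```python
# Minimal sketch of joint defogging + re-identification training.
import torch
import torch.nn as nn


class DefogBranch(nn.Module):
    """Toy encoder-decoder mapping a foggy image to a restored image."""
    def __init__(self, ch=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())
        self.dec = nn.Conv2d(ch, 3, 3, padding=1)

    def forward(self, x):
        return torch.sigmoid(self.dec(self.enc(x)))


class ReIDBranch(nn.Module):
    """Toy ReID head: embeds the restored image and predicts a vehicle identity."""
    def __init__(self, num_ids=200, dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, dim))
        self.classifier = nn.Linear(dim, num_ids)

    def forward(self, x):
        emb = self.features(x)
        return emb, self.classifier(emb)


defog, reid = DefogBranch(), ReIDBranch()
opt = torch.optim.Adam(list(defog.parameters()) + list(reid.parameters()), lr=1e-3)

# One joint training step on synthetic data: foggy input, clear target, vehicle ID.
foggy = torch.rand(4, 3, 128, 128)
clear = torch.rand(4, 3, 128, 128)
vehicle_id = torch.randint(0, 200, (4,))

restored = defog(foggy)                     # defogging branch output
_, logits = reid(restored)                  # re-identification on restored image
loss = (nn.functional.l1_loss(restored, clear)
        + nn.functional.cross_entropy(logits, vehicle_id))
opt.zero_grad()
loss.backward()
opt.step()
print(f"joint loss: {loss.item():.3f}")
```

The design choice the sketch highlights is that the ReID loss backpropagates through the defogging branch, so restoration is optimized for identification rather than for image quality alone.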

Synergy of Sight and Semantics: Visual Intention Understanding with CLIP DOI
Qu Yang, Mang Ye, Dacheng Tao, et al.
Lecture notes in computer science, Year: 2024, Vol. unknown, pp. 144-160
Published: Oct. 31, 2024
Language: English
Cited by: 1
