Unified Multi-Modal Multi-Agent Cooperative Perception Framework for Intelligent Transportation Systems
Zonglin Meng, Xin Xia, Zhaoliang Zheng et al.

SAE Technical Paper Series, Journal Year: 2024, Volume and Issue: 1

Published: Dec. 13, 2024

Cooperative perception has attracted wide attention given its capability to leverage shared information across connected automated vehicles (CAVs) and smart infrastructure to address occlusion and sensing range limitation issues. To date, existing research has mainly focused on prototyping cooperative perception with only one type of sensor, such as LiDAR or camera, so performance is constrained by the limitations of the individual sensor. To exploit multi-modality sensors and further improve distant object detection accuracy, in this paper we propose a unified multi-modal multi-agent framework that integrates LiDAR and camera data to enhance intelligent transportation systems. By leveraging the complementary strengths of the sensors, our framework utilizes geometric information from LiDAR and semantic information from cameras to achieve an accurate cooperative perception system. To fuse features, we use the bird’s-eye view (BEV) space for consistent feature representations and employ a transformer-based network for effective BEV fusion. We validate our method on the OPV2V and V2XSet benchmarks, achieving state-of-the-art performance on 3D object detection tasks. The proposed framework significantly improves accuracy and robustness, especially in complex traffic scenarios with occlusions and dense intersections.
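The abstract above describes fusing per-agent LiDAR and camera features in a shared BEV space with a transformer-based network. As a rough, hypothetical sketch only (the module name, tensor layout, and mean aggregation over agents are assumptions, not the paper's architecture), one way such an agent-wise BEV fusion step could look in PyTorch:

import torch
import torch.nn as nn


class BEVFusionTransformer(nn.Module):
    """Illustrative fusion of per-agent BEV feature maps via self-attention over agents."""

    def __init__(self, channels: int = 64, num_heads: int = 4, num_layers: int = 2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(
            d_model=channels, nhead=num_heads, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)

    def forward(self, bev_feats: torch.Tensor) -> torch.Tensor:
        # bev_feats: (batch, num_agents, channels, H, W), assumed already aligned to the ego frame.
        b, n, c, h, w = bev_feats.shape
        # Treat the agent axis as the attention sequence at every BEV grid cell.
        tokens = bev_feats.permute(0, 3, 4, 1, 2).reshape(b * h * w, n, c)
        fused = self.encoder(tokens)   # cross-agent attention per cell
        fused = fused.mean(dim=1)      # collapse agents into a single ego BEV map
        return fused.reshape(b, h, w, c).permute(0, 3, 1, 2)


if __name__ == "__main__":
    # Two agents (e.g., ego CAV plus a roadside unit), 64-channel 100x100 BEV grid.
    feats = torch.randn(1, 2, 64, 100, 100)
    print(BEVFusionTransformer()(feats).shape)  # torch.Size([1, 64, 100, 100])

In practice each collaborator's BEV map would first be warped into the ego coordinate frame before fusion; that projection step is omitted in this sketch.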

Language: English

Designing Hybrid Nested GAN with Block Attention Mechanisms for Accurate Crowd Density Mapping and Fake Image Detection Using Remote Sensor Imaging

B. Ganga, B. T. Lata, K. R. Venugopal et al.

Remote Sensing in Earth Systems Sciences, Journal Year: 2025, Volume and Issue: unknown

Published: Feb. 19, 2025

Language: English

Citations: 1
