UV Gaussians: Joint learning of mesh deformation and Gaussian textures for human avatar modeling DOI
Yujiao Jiang, Qingmin Liao, Xiaoyu Li

et al.

Knowledge-Based Systems, Year: 2025, Issue: unknown, pp. 113470-113470

Published: May 1, 2025

Language: English

Dynamic 3D Gaussians: Tracking by Persistent Dynamic View Synthesis DOI
Jonathon Luiten, Georgios Kopanas, Bastian Leibe

et al.

2024 International Conference on 3D Vision (3DV), Year: 2024, Issue: 35, pp. 800-809

Published: March 18, 2024

We present a method that simultaneously addresses the tasks of dynamic scene novel-view synthesis and six degree-of-freedom (6-DOF) tracking of all dense scene elements. We follow an analysis-by-synthesis framework, inspired by recent work that models scenes as collections of 3D Gaussians which are optimized to reconstruct input images via differentiable rendering. To model dynamic scenes, we allow Gaussians to move and rotate over time while enforcing that they have persistent color, opacity, and size. By regularizing the Gaussians' motion and rotation with local-rigidity constraints, we show that our Dynamic 3D Gaussians correctly model the same area of physical space over time, including the rotation of that space. Dense 6-DOF tracking and dynamic reconstruction emerge naturally from persistent dynamic view synthesis, without requiring any correspondence or flow as input. We demonstrate a large number of downstream applications enabled by our representation, including first-person view synthesis, compositional scene editing, and 4D video editing. Project website: dynamic3dgaussians.github.io
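The local-rigidity regularization mentioned in this abstract can be sketched as a loss over neighboring Gaussians: the offset of each neighbor, expressed in a Gaussian's own rotating local frame, should stay fixed across timesteps. The function name, data layout, and plain-NumPy formulation below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def rigidity_loss(pos_t, rot_t, pos_t1, rot_t1, neighbors):
    """Local-rigidity penalty in the spirit of Dynamic 3D Gaussians.

    pos_t, pos_t1: (N, 3) Gaussian centers at two consecutive timesteps.
    rot_t, rot_t1: (N, 3, 3) rotation matrices at the same timesteps.
    neighbors: list of (i, j) index pairs of nearby Gaussians.

    For each pair, the offset of j seen from i's local frame should be
    unchanged between timesteps if the neighborhood moved rigidly.
    """
    loss = 0.0
    for i, j in neighbors:
        off_t = rot_t[i].T @ (pos_t[j] - pos_t[i])      # offset in i's frame, time t
        off_t1 = rot_t1[i].T @ (pos_t1[j] - pos_t1[i])  # same offset at time t+1
        loss += np.sum((off_t1 - off_t) ** 2)
    return loss / max(len(neighbors), 1)
```

For a globally rigid motion (one rotation and translation applied to every Gaussian, with the per-Gaussian rotations composed accordingly), this loss is exactly zero; any non-rigid deviation of a neighbor raises it.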

Language: English

Cited by

151

GaussianAvatars: Photorealistic Head Avatars with Rigged 3D Gaussians DOI
Shenhan Qian, Tobias Kirschstein, Liam Schoneveld

et al.

2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Year: 2024, Issue: unknown, pp. 20299-20309

Published: June 16, 2024

Language: English

Cited by

42

Animatable Gaussians: Learning Pose-Dependent Gaussian Maps for High-Fidelity Human Avatar Modeling DOI
Zhe Li, Zerong Zheng, Lizhen Wang

et al.

2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Year: 2024, Issue: 40, pp. 19711-19722

Published: June 16, 2024

Language: English

Cited by

33

State of the Art on Diffusion Models for Visual Computing DOI
Ryan Po, Yifan Wang, Vladislav Golyanik

et al.

Computer Graphics Forum, Year: 2024, Issue: 43(2)

Published: April 30, 2024

Abstract The field of visual computing is rapidly advancing due to the emergence of generative artificial intelligence (AI), which unlocks unprecedented capabilities for the generation, editing, and reconstruction of images, videos, and 3D scenes. In these domains, diffusion models are the generative AI architecture of choice. Within the last year alone, the literature on diffusion-based tools and applications has seen exponential growth, with relevant papers published across the computer graphics, computer vision, and AI communities and new works appearing daily on arXiv. This rapid growth makes it difficult to keep up with all recent developments. The goal of this state-of-the-art report (STAR) is to introduce the basic mathematical concepts of diffusion models, implementation details and design choices of the popular Stable Diffusion model, as well as an overview of important aspects of these generative AI tools, including personalization, conditioning, and inversion, among others. Moreover, we give a comprehensive overview of the rapidly growing literature on diffusion-based generation, categorized by the type of generated medium, including 2D images, videos, 3D objects, locomotion, and 4D scenes. Finally, we discuss available datasets, metrics, open challenges, and social implications. This STAR provides an intuitive starting point to explore this exciting topic for researchers, artists, and practitioners alike.

Language: English

Cited by

30

3D Gaussian Splatting as New Era: A Survey DOI
Ben Fei, Jingyi Xu, Rui Zhang

et al.

IEEE Transactions on Visualization and Computer Graphics, Year: 2024, Issue: unknown, pp. 1-20

Published: Jan. 1, 2024

3D Gaussian Splatting (3D-GS) has emerged as a significant advancement in the field of Computer Graphics, offering explicit scene representation and novel view synthesis without reliance on neural networks, such as Neural Radiance Fields (NeRF). This technique has found diverse applications in areas such as robotics, urban mapping, autonomous navigation, and virtual/augmented reality, to name a few. Given the growing popularity of and expanding research in 3D Gaussian Splatting, this paper presents a comprehensive survey of relevant papers from the past year. We organize the survey into taxonomies based on characteristics and applications, providing an introduction to the theoretical underpinnings of 3D Gaussian Splatting. Our goal through this survey is to acquaint new researchers with 3D Gaussian Splatting, serve as a valuable reference for seminal works in the field, and inspire future research directions, as discussed in our concluding section.
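The explicit representation and alpha-composited rendering that this survey covers can be illustrated with a deliberately simplified 2D sketch. The function name, isotropic covariance, and pre-sorted input order are my assumptions; real 3D-GS projects anisotropic 3D covariances into screen space and sorts splats by depth:

```python
import numpy as np

def splat(means, colors, opacities, scales, H, W):
    """Minimal 2D analogue of Gaussian splatting: isotropic Gaussians
    composited front-to-back with alpha blending.  Inputs are assumed
    to be already ordered near-to-far."""
    ys, xs = np.mgrid[0:H, 0:W].astype(float)
    img = np.zeros((H, W, 3))
    transmit = np.ones((H, W))                    # light not yet absorbed
    for mu, c, o, s in zip(means, colors, opacities, scales):
        d2 = (xs - mu[0]) ** 2 + (ys - mu[1]) ** 2
        alpha = o * np.exp(-0.5 * d2 / s ** 2)    # Gaussian falloff
        img += (transmit * alpha)[..., None] * np.asarray(c)
        transmit *= 1.0 - alpha                   # occlusion by this splat
    return img
```

An opaque splat fully occludes anything composited after it at its center, which is the behavior the front-to-back transmittance product encodes.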

Language: English

Cited by

27

TeCH: Text-Guided Reconstruction of Lifelike Clothed Humans DOI
Yangyi Huang, Hongwei Yi, Yuliang Xiu

et al.

2024 International Conference on 3D Vision (3DV), Year: 2024, Issue: 74, pp. 1531-1542

Published: March 18, 2024

Despite recent research advancements in reconstructing clothed humans from a single image, accurately restoring the "unseen regions" with high-level details remains an unsolved challenge that lacks attention. Existing methods often generate overly smooth back-side surfaces with blurry texture. But how can we effectively capture all the visual attributes of an individual, from a single image, which are sufficient to reconstruct unseen areas (e.g., the back view)? Motivated by the power of foundation models, TeCH reconstructs the 3D human by leveraging 1) descriptive text prompts (e.g., garments, colors, hairstyles) automatically generated via a garment parsing model and Visual Question Answering (VQA), and 2) a personalized fine-tuned Text-to-Image diffusion model (T2I) which learns the "indescribable" appearance. To represent high-resolution clothed humans at an affordable cost, we propose a hybrid 3D representation based on DMTet, which consists of an explicit body shape grid and an implicit distance field. Guided by the descriptive prompts and the personalized T2I model, the geometry and texture are optimized through multi-view Score Distillation Sampling (SDS) and reconstruction losses based on the original observation. TeCH produces high-fidelity 3D clothed humans with consistent & delicate texture and detailed full-body geometry. Quantitative and qualitative experiments demonstrate that TeCH outperforms the state-of-the-art in terms of reconstruction accuracy and rendering quality. The code will be publicly available for research purposes at huangyangyi.github.io/TeCH
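Score Distillation Sampling, which TeCH uses to optimize geometry and texture, can be sketched in a toy form: noise the rendered image, ask a denoiser to predict that noise, and use the prediction error as a gradient on the image. The identity renderer, unit weighting w(t)=1, and the stub denoiser in the test are illustrative assumptions, not TeCH's multi-view SDS:

```python
import numpy as np

def sds_grad(theta, render, denoise, sigma=0.5, rng=None):
    """One toy Score Distillation Sampling (SDS) step.

    render(theta) -> image x; denoise(x_noisy, sigma) -> predicted noise.
    SDS pushes the rendering toward images the denoiser considers likely
    via grad ~= w(t) * (eps_hat - eps) * dx/dtheta.  Here the renderer is
    taken to be the identity (theta *is* the image), so the gradient
    acts directly on the pixels and w(t) = 1.
    """
    if rng is None:
        rng = np.random.default_rng()
    x = render(theta)
    eps = rng.normal(size=x.shape)        # sampled noise
    x_noisy = x + sigma * eps             # one forward-diffusion level
    eps_hat = denoise(x_noisy, sigma)     # denoiser's noise prediction
    return eps_hat - eps                  # SDS gradient w.r.t. the image
```

With a stub denoiser that knows a target image, repeated descent on this gradient pulls the rendering toward that target, which is the basic mechanism SDS exploits with a pretrained diffusion model in place of the stub.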

Language: English

Cited by

24

ASH: Animatable Gaussian Splats for Efficient and Photoreal Human Rendering DOI

Haokai Pang, Heming Zhu, Adam Kortylewski

et al.

2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Year: 2024, Issue: 35, pp. 1165-1175

Published: June 16, 2024

Language: English

Cited by

21

Human Gaussian Splatting: Real-Time Rendering of Animatable Avatars DOI

Arthur Moreau, Jifei Song, Helisa Dhamo

et al.

2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Year: 2024, Issue: 30, pp. 788-798

Published: June 16, 2024

Language: English

Cited by

18

Spacetime Gaussian Feature Splatting for Real-Time Dynamic View Synthesis DOI
Zhan Li, Zhang Chen, Zhong Li

et al.

2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Year: 2024, Issue: 35, pp. 8508-8520

Published: June 16, 2024

Language: English

Cited by

17

AvatarVerse: High-Quality & Stable 3D Avatar Creation from Text and Pose DOI Open Access
Huichao Zhang, Bowen Chen, Yang Hao

et al.

Proceedings of the AAAI Conference on Artificial Intelligence, Year: 2024, Issue: 38(7), pp. 7124-7132

Published: March 24, 2024

Creating expressive, diverse, and high-quality 3D avatars from highly customized text descriptions and pose guidance is a challenging task, due to the intricacy of modeling and texturing in 3D that ensures detail and supports various styles (realistic, fictional, etc.). We present AvatarVerse, a stable pipeline for generating expressive high-quality 3D avatars from nothing but text descriptions and pose guidance. Specifically, we introduce a 2D diffusion model conditioned on DensePose signals to establish 3D pose control through 2D images, which enhances view consistency in partially observed scenarios. It addresses the infamous Janus Problem and significantly stabilizes the generation process. Moreover, we propose a progressive high-resolution 3D synthesis strategy, which obtains substantial improvement in the quality of the created avatars. To this end, the proposed AvatarVerse achieves zero-shot avatar creation that is not only more expressive but also of higher quality and fidelity than previous works. Rigorous qualitative evaluations and user studies showcase AvatarVerse's superiority in synthesizing high-fidelity 3D avatars, leading to a new standard in avatar creation. Our project page is: https://avatarverse3d.github.io/

Language: English

Cited by

16