
PERF: Panoramic Neural Radiance Field From a Single Panorama DOI
Guangcong Wang, Peng Wang, Zhaoxi Chen

et al.

IEEE Transactions on Pattern Analysis and Machine Intelligence, Journal Year: 2024, Volume and Issue: 46(10), P. 6905 - 6918

Published: April 10, 2024

Neural Radiance Field (NeRF) has achieved substantial progress in novel view synthesis given multi-view images. Recently, some works have attempted to train a NeRF from a single image with 3D priors. They mainly focus on a limited field of view with a few occlusions, which greatly limits their scalability to real-world 360-degree panoramic scenarios with large-size occlusions. In this paper, we present PERF, a 360-degree novel view synthesis framework that trains a panoramic neural radiance field from a single panorama. Notably, PERF allows 3D roaming in a complex scene without expensive and tedious image collection. To achieve this goal, we propose a novel collaborative RGBD inpainting method and a progressive inpainting-and-erasing method to lift up a 360-degree 2D scene to a 3D scene. Specifically, we first predict a panoramic depth map as initialization for the given single panorama and reconstruct visible 3D regions with volume rendering. Then we introduce a collaborative RGBD inpainting approach into a NeRF for completing RGB images and depth maps from random views, which is derived from an RGB Stable Diffusion model and a monocular depth estimator. Finally, we introduce an inpainting-and-erasing strategy to avoid inconsistent geometry between a newly-sampled view and reference views. The two components are integrated into the learning of NeRFs in a unified optimization framework and achieve promising results. Extensive experiments on Replica and a new dataset PERF-in-the-wild demonstrate the superiority of our PERF over state-of-the-art methods. Our PERF can be widely used for real-world applications, such as panorama-to-3D, text-to-3D, and 3D scene stylization applications. Project page and code are available at https://github.com/perf-project/PeRF.
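The inpainting-and-erasing strategy discards inpainted content that conflicts with geometry already reconstructed from the reference views. The relative depth-tolerance rule below is a simplified stand-in, not the paper's actual consistency criterion; the `rel_tol` parameter and the flat-list depth maps are purely illustrative assumptions:

```python
def erase_inconsistent(inpainted_depth, rendered_depth, rel_tol=0.05):
    """Return a keep-mask over pixels of a newly inpainted view.

    A pixel is kept when its inpainted depth agrees with the depth rendered
    from the current NeRF within a relative tolerance, or when the NeRF has
    no observation there (rendered depth <= 0). Disagreeing pixels are
    "erased", i.e. excluded from supervision.
    """
    mask = []
    for d_new, d_ref in zip(inpainted_depth, rendered_depth):
        if d_ref <= 0:  # unobserved region: trust the inpainted value
            mask.append(True)
        else:
            mask.append(abs(d_new - d_ref) <= rel_tol * d_ref)
    return mask
```

In a full pipeline this mask would gate the RGB and depth losses for the newly sampled view, so only consistent inpainted content trains the NeRF.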

Language: English

Citations: 8

CityGaussian: Real-Time High-Quality Large-Scale Scene Rendering with Gaussians DOI
Yang Liu, Chuanchen Luo, Lue Fan

et al.

Lecture notes in computer science, Journal Year: 2024, Volume and Issue: unknown, P. 265 - 282

Published: Oct. 28, 2024

Language: English

Citations: 8

FastSR-NeRF: Improving NeRF Efficiency on Consumer Devices with A Simple Super-Resolution Pipeline DOI
Chien‐Yu Lin, Qichen Fu, Thomas Merth

2024 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), Journal Year: 2024, Volume and Issue: unknown, P. 6024 - 6033

Published: Jan. 3, 2024

Super-resolution (SR) techniques have recently been proposed to upscale the outputs of neural radiance fields (NeRF) and generate high-quality images with enhanced inference speeds. However, existing NeRF+SR methods increase training overhead by using extra input features, loss functions, and/or expensive training procedures such as knowledge distillation. In this paper, we aim to leverage SR for efficiency gains without costly training or architectural changes. Specifically, we build a simple NeRF+SR pipeline that directly combines existing modules, and we propose a lightweight augmentation technique, random patch sampling, for training. Compared to existing NeRF+SR methods, our pipeline mitigates the SR computing overhead and can be trained up to 23× faster, making it feasible to run on consumer devices such as the Apple MacBook. Experiments show that our pipeline can upscale NeRF outputs by 2-4× while maintaining high quality, increasing inference speeds by up to 18× on an NVIDIA V100 GPU and 12.8× on an M1 Pro chip. We conclude that SR can be a simple but effective technique for improving the efficiency of NeRF models on consumer devices.
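The random patch sampling augmentation can be sketched as a plain random crop applied to frames during SR training. The nested-list image representation and the `patch` size below are illustrative assumptions (a real pipeline would crop tensors and pair each low-resolution patch with its high-resolution counterpart):

```python
import random

def sample_random_patch(image, patch, rng=random):
    """Crop a random patch x patch window from an H x W image given as
    nested lists (rows of pixels). Training the SR module on small random
    patches instead of full frames keeps per-step cost low, which is the
    spirit of the lightweight augmentation described above."""
    h, w = len(image), len(image[0])
    top = rng.randrange(h - patch + 1)
    left = rng.randrange(w - patch + 1)
    return [row[left:left + patch] for row in image[top:top + patch]]
```

Each training step would then run the SR module on one such patch rather than a full rendered frame.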

Language: English

Citations: 7

MatrixCity: A Large-scale City Dataset for City-scale Neural Rendering and Beyond DOI
Yixuan Li, Lihan Jiang, Linning Xu

et al.

2023 IEEE/CVF International Conference on Computer Vision (ICCV), Journal Year: 2023, Volume and Issue: unknown, P. 3182 - 3192

Published: Oct. 1, 2023

Neural radiance fields (NeRF) and its subsequent variants have led to remarkable progress in neural rendering. While most recent neural rendering works focus on objects and small-scale scenes, developing neural rendering methods for city-scale scenes is of great potential in many real-world applications. However, this line of research is impeded by the absence of a comprehensive and high-quality dataset, yet collecting such a dataset over real city-scale scenes is costly, sensitive, and technically infeasible. To this end, we build a large-scale, comprehensive, and high-quality synthetic dataset for city-scale neural rendering research. Leveraging the Unreal Engine 5 City Sample project, we develop a pipeline to easily collect aerial and street city views, accompanied by ground-truth camera poses and a range of additional data modalities. Flexible controls over environmental factors like light, weather, and human and car crowds are also available in our pipeline, supporting the needs of various tasks covering city-scale neural rendering and beyond. The resulting pilot dataset, MatrixCity, contains 67k aerial images and 452k street images from two city maps of total size 28 km². On top of MatrixCity, a thorough benchmark is also conducted, which not only reveals the unique challenges of the task of city-scale neural rendering, but also highlights potential improvements for future works. The dataset and code will be publicly available at the project page: https://city-super.github.io/matrixcity/.
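One practical detail when consuming data captured from Unreal Engine: the engine works natively in centimeters, while most NeRF toolchains expect camera poses in meters. The helper below sketches that unit conversion for a 4×4 camera-to-world matrix; the list-of-lists matrix representation is an illustrative assumption, and MatrixCity's released poses may already be normalized:

```python
def pose_cm_to_m(c2w):
    """Scale the translation column of a 4x4 camera-to-world matrix from
    centimeters (Unreal Engine's native unit) to meters, leaving the
    rotation block untouched."""
    scaled = [row[:] for row in c2w]  # copy so the input matrix is preserved
    for i in range(3):
        scaled[i][3] = scaled[i][3] / 100.0
    return scaled
```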

Language: English

Citations: 16

VR-NeRF: High-Fidelity Virtualized Walkable Spaces DOI Creative Commons
Linning Xu, Vasu Agrawal, William Laney

et al.

Published: Dec. 10, 2023

We present an end-to-end system for the high-fidelity capture, model reconstruction, and real-time rendering of walkable spaces in virtual reality using neural radiance fields. To this end, we designed and built a custom multi-camera rig to densely capture walkable spaces in high fidelity and with multi-view high dynamic range images of unprecedented quality and density. We extend instant neural graphics primitives with a novel perceptual color space for learning accurate HDR appearance, and an efficient mip-mapping mechanism for level-of-detail rendering with anti-aliasing, while carefully optimizing the trade-off between quality and speed. Our multi-GPU renderer enables high-fidelity volume rendering of our neural radiance field model at the full VR resolution of dual 2K$\times$2K at 36 Hz on our custom demo machine. We demonstrate the quality of our results on our challenging high-fidelity datasets, and compare our method and datasets to existing baselines. We release our dataset on our project website.
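The abstract mentions learning accurate HDR appearance in a novel perceptual color space. The paper's actual transform is its own; as a stand-in, a μ-law style log encoding illustrates the general idea of compressing linear HDR radiance into a perceptually more uniform range before applying a reconstruction loss (the `mu` value here is an arbitrary assumption):

```python
import math

def mu_law_encode(x, mu=5000.0):
    """Map a linear HDR radiance value in [0, 1] into a perceptually more
    uniform range: dark values are expanded and bright values compressed,
    so an L2 loss in the encoded space weights them more evenly."""
    return math.log1p(mu * x) / math.log1p(mu)

def mu_law_decode(y, mu=5000.0):
    """Invert mu_law_encode, recovering linear radiance."""
    return math.expm1(y * math.log1p(mu)) / mu
```

A training loop would compare `mu_law_encode(rendered)` against `mu_law_encode(ground_truth)` rather than comparing linear radiance directly.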

Language: English

Citations: 14