VR-NeRF: High-Fidelity Virtualized Walkable Spaces
Linning Xu, Vasu Agrawal, William Laney

et al.

Published: Dec. 10, 2023

We present an end-to-end system for the high-fidelity capture, model reconstruction, and real-time rendering of walkable spaces in virtual reality using neural radiance fields. To this end, we designed and built a custom multi-camera rig to densely capture walkable spaces in high fidelity and with multi-view high dynamic range images of unprecedented quality and density. We extend instant neural graphics primitives with a novel perceptual color space for learning accurate HDR appearance and an efficient mip-mapping mechanism for level-of-detail rendering with anti-aliasing, while carefully optimizing the trade-off between quality and speed. Our multi-GPU renderer enables high-fidelity volume rendering of our neural radiance field at the full VR resolution of dual 2K$\times$2K at 36 Hz on our demo machine. We demonstrate the quality of our results on challenging high-fidelity datasets, and we compare our method and datasets to existing baselines. We release our dataset on the project website.
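The abstract mentions learning accurate HDR appearance in a novel perceptual color space. As a rough illustration of that general idea only, the sketch below uses a plain log encoding as a stand-in for the paper's actual color space; all names and values are illustrative assumptions, not the authors' code.

```python
import numpy as np

def linear_to_perceptual(hdr_rgb, eps=1e-6):
    """Map linear HDR radiance into a roughly perceptually uniform domain.
    A simple log encoding stands in for the paper's perceptual color space."""
    return np.log2(hdr_rgb + eps)

def perceptual_to_linear(encoded, eps=1e-6):
    """Invert the encoding to recover linear HDR radiance."""
    return np.exp2(encoded) - eps

# Toy supervision: compute the loss in the perceptual domain so equal error
# magnitudes correspond to roughly equal visible differences across exposures.
target = np.array([0.01, 0.5, 12.0])   # linear HDR radiance samples
pred = np.array([0.012, 0.48, 11.0])
loss = np.mean((linear_to_perceptual(pred) - linear_to_perceptual(target)) ** 2)
print(f"perceptual-space MSE: {loss:.4f}")
```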

Language: English

NERF FOR HERITAGE 3D RECONSTRUCTION
G. Mazzacca, Ali Karami, Simone Rigon

et al.

The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Journal Year: 2023, Volume and Issue: XLVIII-M-2-2023, P. 1051 - 1058

Published: June 26, 2023

Abstract. Conventional and learning-based 3D reconstruction methods from images have clearly shown their potential for heritage documentation. Nevertheless, Neural Radiance Field (NeRF) approaches are recently revolutionising the way a scene can be rendered and reconstructed from a set of oriented images. This paper therefore reviews some of the latest NeRF methods applied to various cultural heritage datasets collected with smartphone videos and touristic and reflex cameras. Firstly, several NeRF methods are evaluated; it turned out that Instant-NGP and Nerfacto achieved the best outcomes, outperforming all the other methods significantly. Successively, qualitative and quantitative analyses are performed on the datasets, revealing the good performance of the methods, in particular in areas with uniform texture or shining surfaces, as well as for small or lost artefacts. This is surely opening new frontiers for the documentation, visualization and communication of digital heritage.

Language: English

Citations: 29

DisCoScene: Spatially Disentangled Generative Radiance Fields for Controllable 3D-aware Scene Synthesis
Yinghao Xu, Menglei Chai, Zifan Shi

et al.

2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Journal Year: 2023, Volume and Issue: unknown, P. 4402 - 4412

Published: June 1, 2023

Existing 3D-aware image synthesis approaches mainly focus on generating a single canonical object and show limited capacity in composing a complex scene containing a variety of objects. This work presents DisCoScene: a 3D-aware generative model for high-quality and controllable scene synthesis. The key ingredient of our method is a very abstract object-level representation (i.e., 3D bounding boxes without semantic annotation) as the scene layout prior, which is simple to obtain, general enough to describe various scene contents, and yet informative enough to disentangle objects and background. Moreover, it serves as an intuitive user control for scene editing. Based on such a prior, the proposed model spatially disentangles the whole scene into object-centric generative radiance fields by learning on only 2D images with global-local discrimination. Our model obtains the generation fidelity and editing flexibility of individual objects while being able to efficiently compose objects and the background into a complete scene. We demonstrate state-of-the-art performance on many scene datasets, including the challenging Waymo outdoor dataset. Project page can be found here.
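The abstract describes disentangling a scene into object-centric radiance fields arranged by 3D bounding boxes and composing them with a background field. The sketch below shows one common compositing rule for such fields (density-weighted color averaging at shared ray samples); the function name and shapes are assumptions for illustration, not the paper's code.

```python
import numpy as np

def composite_object_fields(sigmas, colors):
    """Merge per-object (and background) radiance fields at shared ray samples.

    sigmas: (num_fields, num_samples) volume densities
    colors: (num_fields, num_samples, 3) emitted RGB
    Returns one density and a density-weighted color per sample, ready for
    standard volume rendering along the ray.
    """
    sigma_total = sigmas.sum(axis=0)                         # (num_samples,)
    weights = sigmas / np.clip(sigma_total, 1e-8, None)      # avoid divide-by-zero
    color_total = (weights[..., None] * colors).sum(axis=0)  # (num_samples, 3)
    return sigma_total, color_total

# Toy usage: two object fields plus a background field, four samples per ray.
sigmas = np.random.rand(3, 4)
colors = np.random.rand(3, 4, 3)
sigma, rgb = composite_object_fields(sigmas, colors)
```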

Language: English

Citations: 26

Image-based 3D reconstruction for Multi-Scale civil and infrastructure Projects: A review from 2012 to 2022 with new perspective from deep learning methods
Yujie Lu, Shuo Wang, Sensen Fan

et al.

Advanced Engineering Informatics, Journal Year: 2023, Volume and Issue: 59, P. 102268 - 102268

Published: Nov. 22, 2023

Language: English

Citations: 24

StegaNeRF: Embedding Invisible Information within Neural Radiance Fields
Chenxin Li, Brandon Y. Feng, Zhiwen Fan

et al.

2021 IEEE/CVF International Conference on Computer Vision (ICCV), Journal Year: 2023, Volume and Issue: unknown, P. 441 - 453

Published: Oct. 1, 2023

Recent advancements in neural rendering have paved the way for a future marked by the widespread distribution of visual data through the sharing of Neural Radiance Field (NeRF) model weights. However, while established techniques exist for embedding ownership or copyright information within conventional media such as images and videos, the challenges posed by the emerging NeRF format have remained unaddressed. In this paper, we introduce StegaNeRF, an innovative approach to steganographic information embedding in NeRF renderings. We meticulously developed an optimization framework that enables the precise retrieval of hidden information from images generated by NeRF, while ensuring that the original quality of the rendered images remains intact. Through rigorous experimentation, we assess the efficacy of our methodology across various potential deployment scenarios. Furthermore, we delve into the insights gleaned from our analysis. StegaNeRF represents an initial foray into the intriguing realm of infusing NeRF renderings with customizable, imperceptible, and recoverable information, all while minimizing any discernible impact on the rendered images. For more details, please visit the project page: https://xggnet.github.io/StegaNeRF/
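The abstract describes an optimization framework that preserves rendering quality while allowing hidden information to be retrieved from NeRF renderings. Below is a minimal sketch of one plausible joint objective, assuming a detector that outputs per-bit probabilities; the names, weights, and loss terms are illustrative assumptions, not StegaNeRF's actual formulation.

```python
import numpy as np

def stega_style_objective(rendered, reference, decoded_probs, hidden_bits,
                          fidelity_weight=1.0, recovery_weight=0.1):
    """Toy joint objective: keep the steganographic render close to the
    original NeRF output while a detector recovers the hidden bit string."""
    fidelity = np.mean((rendered - reference) ** 2)          # visual quality term
    eps = 1e-8                                               # numerical safety
    recovery = -np.mean(hidden_bits * np.log(decoded_probs + eps) +
                        (1 - hidden_bits) * np.log(1 - decoded_probs + eps))
    return fidelity_weight * fidelity + recovery_weight * recovery

# Toy example: a 4x4 RGB render and an 8-bit hidden message.
rendered = np.random.rand(3, 4, 4)
reference = rendered + 0.01 * np.random.randn(3, 4, 4)   # nearly identical render
decoded_probs = np.random.rand(8)                         # detector outputs in (0, 1)
hidden_bits = np.random.randint(0, 2, size=8)
print(stega_style_objective(rendered, reference, decoded_probs, hidden_bits))
```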

Language: English

Citations: 23

PLGSLAM: Progressive Neural Scene Representation with Local to Global Bundle Adjustment
Tianchen Deng, Guole Shen, Tong Qin

et al.

2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Journal Year: 2024, Volume and Issue: 34, P. 19657 - 19666

Published: June 16, 2024

Language: English

Citations: 13

From data to action in flood forecasting leveraging graph neural networks and digital twin visualization
Naghmeh Shafiee Roudbari, Shubham Rajeev Punekar, Zachary Patterson

et al.

Scientific Reports, Journal Year: 2024, Volume and Issue: 14(1)

Published: Aug. 10, 2024

Forecasting floods involves significant complexity due to the nonlinear nature of hydrological systems, which involve intricate interactions among precipitation, landscapes, and river networks. Recent efforts in hydrology have aimed at predicting water flow, floods, and water quality, yet most methodologies overlook the influence of adjacent areas and lack advanced visualization for water level assessment. Our contribution is two-fold: firstly, we introduce a graph neural network model (LocalFLoodNet) equipped with a learning module that captures the interconnections of water systems and the connectivity between stations to predict future water levels. Secondly, we develop a simulation prototype offering visual insights for decision-making in disaster prevention and policy-making. This tool visualizes the predicted water levels and facilitates data analysis using decades of historical information. Focusing on the Greater Montreal Area (GMA), particularly Terrebonne, Quebec, Canada, we apply LocalFLoodNet and demonstrate a comprehensive method for assessing flood impacts. By utilizing a digital twin, our tool allows users to interactively modify the landscape and simulate various scenarios, thereby providing valuable insight into preventive strategies. This research aims to enhance flood prediction and evaluation measures, setting a benchmark for similar applications across different geographic areas.
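The abstract describes a graph neural network (LocalFLoodNet) that captures connectivity between gauging stations to predict future water levels. The sketch below shows one generic message-passing step over a station graph; the adjacency, weights, and values are toy placeholders, not the paper's model.

```python
import numpy as np

def gcn_step(levels, adjacency, weight):
    """One message-passing step: each station aggregates the water levels of
    connected stations (row-normalized adjacency) before a linear update."""
    deg = adjacency.sum(axis=1, keepdims=True)
    norm_adj = adjacency / np.clip(deg, 1, None)
    aggregated = norm_adj @ levels          # neighbor average per station
    return np.tanh(aggregated @ weight)     # learned update (weights assumed trained)

# Toy example: 3 stations, river connectivity as a symmetric adjacency matrix.
levels = np.array([[1.2], [0.8], [2.1]])    # current water levels (m)
adjacency = np.array([[0, 1, 0],
                      [1, 0, 1],
                      [0, 1, 0]], dtype=float)
weight = np.array([[0.9]])                  # placeholder parameters
print(gcn_step(levels, adjacency, weight))
```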

Language: English

Citations: 11

Large-Scale 3D Reconstruction from Multi-View Imagery: A Comprehensive Review
Haitao Luo, Jinming Zhang, Xiongfei Liu

et al.

Remote Sensing, Journal Year: 2024, Volume and Issue: 16(5), P. 773 - 773

Published: Feb. 22, 2024

Three-dimensional reconstruction is a key technology employed to represent the real world in virtual form, and it is valuable in computer vision. Large-scale 3D models have broad application prospects in the fields of smart cities, navigation, tourism, disaster warning, and search-and-rescue missions. Unfortunately, most image-based studies currently prioritize the speed and accuracy of reconstruction in indoor scenes. While there are some studies that address large-scale scenes, there has been a lack of systematic and comprehensive efforts to bring together the advancements made in this field. Hence, this paper presents an overview of 3D reconstruction techniques that utilize multi-view imagery of large-scale scenes. In this article, a summary and analysis of vision-based 3D reconstruction methods for large-scale scenes are presented. The algorithms are extensively categorized into traditional and learning-based methods. Furthermore, these methods can be divided based on whether the sensor actively illuminates objects with light sources, resulting in two categories: active and passive methods. Two active methods, namely structured light and laser scanning, are briefly introduced. The focus then shifts to structure from motion (SfM), stereo matching, and multi-view stereo (MVS), encompassing both traditional and learning-based approaches. Additionally, a novel neural-radiance-field-based approach is introduced, and the workflow and improvements over traditional methods are elaborated upon. Subsequently, well-known datasets and evaluation metrics for various reconstruction tasks are discussed. Lastly, a summary of the challenges encountered in large-scale outdoor scenes is provided, along with predictions for future trends in development.

Language: English

Citations: 10

Recursive-NeRF: An Efficient and Dynamically Growing NeRF
Guowei Yang, Wenyang Zhou, Hao-Yang Peng

et al.

IEEE Transactions on Visualization and Computer Graphics, Journal Year: 2022, Volume and Issue: 29(12), P. 5124 - 5136

Published: Oct. 4, 2022

View synthesis methods using implicit continuous shape representations learned from a set of images, such as the Neural Radiance Field (NeRF) method, have gained increasing attention due to their high-quality imagery and scalability to high resolution. However, the heavy computation required by its volumetric approach prevents NeRF from being useful in practice; minutes are taken to render a single image of a few megapixels. Since an image of a scene can be rendered in a level-of-detail manner, we posit that a complicated region of the scene should be represented by a large neural network while a small neural network is capable of encoding a simple region, enabling a balance between efficiency and quality. Recursive-NeRF is our embodiment of this idea, providing efficient and adaptive rendering and training approaches for NeRF. The core of Recursive-NeRF learns uncertainties for query coordinates, representing the quality of the predicted color and intensity at each level. Only query coordinates with high uncertainties are forwarded to the next level, to a bigger neural network with more powerful representational capability. The final rendering is a composition of the results from the networks of all levels. Our evaluation on public datasets and a large-scale scene dataset we collected shows the advantages of our method over the state of the art. The code will be available at https://github.com/Gword/Recursive-NeRF.
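The abstract explains the core mechanism: each level predicts an uncertainty per query coordinate, and only uncertain coordinates are forwarded to the next, larger network. Below is a hedged sketch of that routing logic; the dummy networks and thresholds are placeholders, not the paper's implementation.

```python
import numpy as np

def recursive_query(coords, networks, thresholds):
    """Route query points through successively larger networks; points whose
    predicted uncertainty is already below the level's threshold stop early."""
    colors = np.zeros((coords.shape[0], 3))
    active = np.arange(coords.shape[0])
    for net, tau in zip(networks, thresholds):
        rgb, uncertainty = net(coords[active])   # each net returns (N, 3), (N,)
        done = uncertainty <= tau
        colors[active[done]] = rgb[done]
        active = active[~done]
        if active.size == 0:
            break
    if active.size:                              # deepest level keeps whatever is left
        rgb, _ = networks[-1](coords[active])
        colors[active] = rgb
    return colors

# Dummy "networks": constant color plus random uncertainty, for illustration only.
def make_dummy_net(level):
    def net(x):
        return np.full((len(x), 3), level / 3.0), np.random.rand(len(x))
    return net

coords = np.random.rand(16, 3)
print(recursive_query(coords, [make_dummy_net(i) for i in range(3)], [0.2, 0.5, 1.0]))
```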

Language: English

Citations: 31

Interactive Segmentation of Radiance Fields
Rahul Goel, Dhawal Sirikonda, Saurabh Saini

et al.

2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Journal Year: 2023, Volume and Issue: unknown, P. 4201 - 4211

Published: June 1, 2023

Radiance Fields (RF) are popular for representing casually-captured scenes for new view synthesis and several applications beyond it. Mixed reality on personal spaces needs understanding and manipulating scenes represented as RFs, with semantic segmentation of objects as an important step. Prior segmentation efforts show promise but don't scale to complex objects with diverse appearance. We present the ISRF method to interactively segment objects with fine structure and appearance. Nearest-neighbor feature matching using distilled semantic features identifies high-confidence seed regions. Bilateral search in a joint spatio-semantic space grows the region to recover an accurate segmentation. We show state-of-the-art results for segmenting objects from RFs and compositing them into another scene, changing their appearance, etc., and provide an interactive tool that others can use.
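The abstract outlines the first step of ISRF: nearest-neighbor matching of distilled semantic features against user-stroke features to obtain a high-confidence seed region, which bilateral spatio-semantic search then grows. The small sketch below shows that seeding step under assumed feature shapes; it is illustrative, not the authors' code.

```python
import numpy as np

def seed_region(voxel_features, stroke_features, threshold=0.8):
    """Flag voxels whose distilled semantic feature is close (cosine similarity)
    to any feature sampled under the user's stroke; these voxels form the
    high-confidence seed region that a later region-growing step expands."""
    v = voxel_features / np.linalg.norm(voxel_features, axis=1, keepdims=True)
    s = stroke_features / np.linalg.norm(stroke_features, axis=1, keepdims=True)
    similarity = v @ s.T                       # (num_voxels, num_strokes)
    return similarity.max(axis=1) >= threshold

# Toy usage: 1000 voxels with 64-D distilled features, 5 stroke samples.
voxels = np.random.randn(1000, 64)
strokes = np.random.randn(5, 64)
mask = seed_region(voxels, strokes)
print(mask.sum(), "seed voxels")
```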

Language: English

Citations: 22

Estimating energy consumption of residential buildings at scale with drive-by image capture
Wil O. C. Ward, Xinzhou Li, Yuxi Sun

et al.

Building and Environment, Journal Year: 2023, Volume and Issue: 234, P. 110188 - 110188

Published: March 9, 2023

Data-driven approaches to addressing climate change are increasingly becoming a necessary solution to deal with the scope and scale of the interventions required to reach net zero. In the UK, housing contributes over 30% of national energy consumption, and a massive rollout of retrofit is needed to meet government targets for net zero by 2050. This paper introduces a modular framework for quantifying building features using drive-by image capture and utilising them to estimate energy consumption. The framework is demonstrated on a case study of houses in a UK neighbourhood, showing that it can perform comparably to gold standard datasets. The paper reflects on the modularity of the proposed framework and its potential extensions and applications, and highlights the need for robust data collection in pursuit of efficient, large-scale interventions.

Language: English

Citations: 19