Evaluation of Deep Neural Operator Models Toward Ocean Forecasting

Ellery Rajagopal, Anantha Babu, Tony Ryu et al.

Published: Sept. 25, 2023

Data-driven, deep-learning modeling frameworks have recently been developed for forecasting time series data. Such machine learning models may be useful in multiple domains, including the atmospheric and oceanic domains and the general, larger fluids community. The present work investigates the possible effectiveness of such deep neural operator models for reproducing and predicting classic fluid flow simulations and realistic ocean dynamics. We first briefly evaluate their capabilities when trained on a simulated two-dimensional flow past a cylinder. We then investigate their application to the surface circulation in the Middle Atlantic Bight and Massachusetts Bay, learning from high-resolution data-assimilative simulations employed in real sea experiments. We confirm that the trained models are capable of predicting idealized periodic eddy shedding. For our preliminary study of realistic ocean surface flows, they can predict several features and show some skill, providing potential for future research and applications.

Language: English

Promising directions of machine learning for partial differential equations
Steven L. Brunton, J. Nathan Kutz

Nature Computational Science, Journal Year: 2024, Volume and Issue: 4(7), P. 483 - 494

Published: June 28, 2024

Language: English

Citations

20

A mathematical guide to operator learning
Nicolas Boullé, Alex Townsend

Handbook of numerical analysis, Journal Year: 2024, Volume and Issue: unknown, P. 83 - 125

Published: Jan. 1, 2024

Language: English

Citations

12

U-DeepONet: U-Net enhanced deep operator network for geologic carbon sequestration
Waleed Diab, Mohammed Al Kobaisi

Scientific Reports, Journal Year: 2024, Volume and Issue: 14(1)

Published: Sept. 12, 2024

Learning operators with deep neural networks is an emerging paradigm for scientific computing. The Deep Operator Network (DeepONet) is a modular operator-learning framework that allows flexibility in choosing the kind of neural network to be used in the trunk and/or branch of the DeepONet. This is beneficial, as it has been shown many times that different types of problems require different kinds of network architectures for effective learning. In this work, we design an efficient neural operator based on the DeepONet architecture. We introduce a U-Net enhanced DeepONet (U-DeepONet) for the solution of the highly complex CO2-water two-phase flow problem in geologic carbon sequestration.
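The branch/trunk decomposition at the heart of a DeepONet can be illustrated in a few lines. The sketch below is a minimal, untrained NumPy illustration with random weights and hypothetical layer sizes, not the paper's U-DeepONet: the branch net encodes the input function sampled at sensor points, the trunk net encodes a query coordinate, and the prediction is their inner product.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(sizes):
    """Untrained MLP with tanh hidden activations (illustrative weights only)."""
    ws = [rng.standard_normal((m, n)) / np.sqrt(m)
          for m, n in zip(sizes[:-1], sizes[1:])]
    def forward(x):
        for w in ws[:-1]:
            x = np.tanh(x @ w)
        return x @ ws[-1]
    return forward

m, p = 32, 16                    # sensor count and latent basis size (assumed)
branch = mlp([m, 64, p])         # encodes the sampled input function u
trunk = mlp([1, 64, p])          # encodes the query coordinate y

u_sensors = np.sin(np.linspace(0, np.pi, m))   # example input function
y = np.array([[0.5]])                          # query location
# DeepONet prediction: G(u)(y) ~ sum_k branch_k(u) * trunk_k(y)
prediction = branch(u_sensors[None, :]) @ trunk(y).T
print(prediction.shape)  # (1, 1)
```

Swapping the branch MLP for a U-Net, as U-DeepONet does, changes only the `branch` component; the inner-product readout stays the same.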

Language: English

Citations

11

Multi-fidelity Fourier neural operator for fast modeling of large-scale geological carbon storage
Hewei Tang, Qingkai Kong, Joseph P. Morris et al.

Journal of Hydrology, Journal Year: 2024, Volume and Issue: 629, P. 130641 - 130641

Published: Jan. 14, 2024

Language: English

Citations

10

Enhancing complex Fourier characterization for temperature field reconstruction via multi-scale modulation and demodulation
Ruofan Zhang, Xingchen Li, Ning Wang et al.

International Journal of Thermal Sciences, Journal Year: 2025, Volume and Issue: 211, P. 109694 - 109694

Published: Jan. 15, 2025

Language: English

Citations

1

Enhancing convergence speed with feature enforcing physics-informed neural networks using boundary conditions as prior knowledge

Mahyar Jahaninasab, Mohamad Ali Bijarchi

Scientific Reports, Journal Year: 2024, Volume and Issue: 14(1)

Published: Oct. 11, 2024

This research introduces an accelerated training approach for Vanilla Physics-Informed Neural Networks (PINNs) that addresses three factors affecting the loss function: the initial weight state of the neural network, the ratio of domain to boundary points, and the loss-term weighting factor. The proposed method involves two phases. In the first phase, a unique loss function is created using a subset of the boundary conditions and partial differential equation terms. Furthermore, we introduce preprocessing procedures that aim to decrease the variance during initialization and to choose domain points appropriate to the different networks. The second phase resembles Vanilla-PINN training, but a portion of the random weights is substituted with the weights from the first phase. This implies that the network's structure is designed to prioritize the boundary conditions, which subsequently affects the overall convergence. The study evaluates three benchmarks: two-dimensional flow over a cylinder, the inverse problem of inlet velocity determination, and the Burgers equation. Incorporating the weights generated in the first phase neutralizes imbalance effects in the loss function. Notably, the method outperforms Vanilla-PINN in terms of speed and convergence likelihood, and it eliminates the need for hyperparameter tuning to balance the loss function.
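The two-phase idea can be illustrated on a deliberately tiny problem. The sketch below is a hedged toy, not the paper's method or benchmarks: it fits a quadratic model u(x) = c0 + c1*x + c2*x^2 to the assumed problem u''(x) = 2 with u(0) = 0, u(1) = 1 (exact solution u(x) = x^2), first by gradient descent on the boundary-condition loss alone (phase 1), then on the full physics-informed loss starting from those warm-started weights (phase 2).

```python
import numpy as np

# Toy problem (assumed for illustration): u''(x) = 2 on [0, 1],
# u(0) = 0, u(1) = 1, exact solution u(x) = x**2.
# Model: u(x) = c0 + c1*x + c2*x**2, so u'' = 2*c2; the PDE residual
# is (2*c2 - 2) and the boundary misfits are c0 and (c0 + c1 + c2 - 1).

def grad(c, w_pde):
    """Gradient of boundary loss + w_pde * PDE-residual loss."""
    c0, c1, c2 = c
    s = c0 + c1 + c2 - 1.0
    g_bc = np.array([2.0 * c0 + 2.0 * s, 2.0 * s, 2.0 * s])
    g_pde = np.array([0.0, 0.0, 4.0 * (2.0 * c2 - 2.0)])
    return g_bc + w_pde * g_pde

c = np.zeros(3)
for _ in range(2000):              # phase 1: boundary conditions only
    c -= 0.05 * grad(c, w_pde=0.0)
for _ in range(2000):              # phase 2: full loss from the warm start
    c -= 0.05 * grad(c, w_pde=1.0)

print(np.allclose(c, [0.0, 0.0, 1.0], atol=1e-6))  # True: u(x) = x**2
```

Phase 1 alone lands on a boundary-satisfying but PDE-violating model; phase 2 then converges to the exact coefficients, mirroring the paper's claim that boundary-first initialization aids convergence.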

Language: English

Citations

7

Variable-fidelity surrogate model based on transfer learning and its application in multidisciplinary design optimization of aircraft
Jun-xue Leng, Yuan Feng, Wei Huang et al.

Physics of Fluids, Journal Year: 2024, Volume and Issue: 36(1)

Published: Jan. 1, 2024

Variable-fidelity surrogate models leverage low-cost, low-fidelity data to assist in constructing high-precision models, thereby improving modeling efficiency. However, traditional machine learning methods require a high correlation between the low-fidelity and high-fidelity data. To address this issue, a variable-fidelity deep neural network surrogate model based on transfer learning (VDNN-TL) is proposed. VDNN-TL selects and retains the information encapsulated in data of different fidelity through transferred layers, reducing the model's demand for data correlation and enhancing robustness. Two case studies are used to simulate scenarios with poor data correlation, and the predictive accuracy of VDNN-TL is compared with that of traditional surrogate models (e.g., Kriging and Co-Kriging). The obtained results demonstrate that, under the same modeling cost, VDNN-TL achieves higher predictive accuracy. Furthermore, in a waverider shape multidisciplinary design optimization practice, the application of VDNN-TL improves optimization efficiency by 98.9%. After optimization, the lift-to-drag ratio increases by 7.86% and the volume by 26.2%. Moreover, the performance evaluation error of both the initial and optimized configurations is less than 2%, further validating the effectiveness of VDNN-TL.
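The general flavour of variable-fidelity transfer can be sketched without any deep-learning stack. The toy below is an illustration under assumed response functions, not the paper's VDNN-TL: a random-feature surrogate is fitted on abundant low-fidelity data, its feature layer is reused as-is, and only a small additive correction is fitted on a handful of high-fidelity samples.

```python
import numpy as np

rng = np.random.default_rng(1)
W, b = rng.standard_normal(50), rng.standard_normal(50)

def features(x):
    """Shared random-feature layer, reused across both fidelities."""
    return np.tanh(np.outer(x, W) + b)

f_lo = lambda x: np.sin(2 * x)               # cheap low-fidelity response (assumed)
f_hi = lambda x: np.sin(2 * x) + 0.3 * x     # costly high-fidelity truth (assumed)

x_lo = np.linspace(-1, 1, 200)               # abundant low-fidelity samples
c_lo, *_ = np.linalg.lstsq(features(x_lo), f_lo(x_lo), rcond=None)

x_hi = np.linspace(-1, 1, 10)                # scarce high-fidelity samples
delta = f_hi(x_hi) - features(x_hi) @ c_lo   # discrepancy left to learn
c_corr, *_ = np.linalg.lstsq(features(x_hi), delta, rcond=None)

def surrogate(x):
    """Transferred surrogate: low-fidelity fit plus learned correction."""
    return features(x) @ (c_lo + c_corr)

print(np.max(np.abs(surrogate(x_hi) - f_hi(x_hi))) < 1e-6)  # True
```

Only the small correction needs the expensive data, which is the cost argument the abstract makes for VDNN-TL, albeit here in a drastically simplified linear-in-features form.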

Language: English

Citations

6

Prediction of turbulent channel flow using Fourier neural operator-based machine-learning strategy
Yunpeng Wang, Zhijie Li, Zelong Yuan et al.

Physical Review Fluids, Journal Year: 2024, Volume and Issue: 9(8)

Published: Aug. 12, 2024

The implicit U-Net enhanced Fourier neural operator (IUFNO) combines the loop structure of the implicit FNO (IFNO) with the U-Net, leading to long-term predictive ability in large-eddy simulations (LES) of turbulent channel flow. It is found that the IUFNO outperforms the traditional dynamic Smagorinsky model (DSM) and the wall-adapted local eddy-viscosity (WALE) model at coarse LES grids. The predictions of both mean and fluctuating quantities by the IUFNO are closer to the filtered direct numerical simulation (fDNS) benchmark than those of the traditional models, while the computational cost is much lower.
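The core building block shared by FNO variants such as the IUFNO is the spectral convolution layer. The sketch below is a minimal one-dimensional NumPy illustration with random (untrained) weights, not the IUFNO itself: transform the field to Fourier space, scale the lowest few modes by learnable complex weights, truncate the rest, and transform back.

```python
import numpy as np

rng = np.random.default_rng(0)

def fourier_layer(u, weights, modes):
    """Spectral convolution: keep the lowest `modes` frequencies, scale
    them by learnable complex weights, zero the rest, transform back."""
    u_hat = np.fft.rfft(u)
    out_hat = np.zeros_like(u_hat)
    out_hat[:modes] = u_hat[:modes] * weights
    return np.fft.irfft(out_hat, n=u.size)

n, modes = 128, 12                           # grid size, retained modes (assumed)
weights = rng.standard_normal(modes) + 1j * rng.standard_normal(modes)

x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
u = np.sin(3 * x) + 0.5 * np.cos(7 * x)      # example velocity-like field
v = fourier_layer(u, weights, modes)
print(v.shape)  # (128,)
```

A full FNO stacks such layers with pointwise nonlinearities and a local linear path; the IFNO/IUFNO variants additionally loop a shared layer, which is what enables the stable long-term rollouts described above.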

Language: English

Citations

6

In-context operator learning with data prompts for differential equation problems
Liu Yang, Siting Liu, Tingwei Meng et al.

Proceedings of the National Academy of Sciences, Journal Year: 2023, Volume and Issue: 120(39)

Published: Sept. 19, 2023

This paper introduces the paradigm of "in-context operator learning" and the corresponding model "In-Context Operator Networks," which simultaneously learn operators from prompted data and apply them to new questions during the inference stage, without any weight update. Existing methods are limited to using a neural network to approximate a specific equation solution or a specific operator, requiring retraining when switching to a new problem with different equations. By training a single neural network as an operator learner, rather than a solution/operator approximator, we can not only get rid of retraining (even fine-tuning) for new problems, but also leverage the commonalities shared across operators, so that only a few examples in the prompt are needed when learning a new operator. Our numerical results show the capability of a single neural network as a few-shot operator learner for a diversified type of differential equation problems, including forward and inverse problems of ordinary differential equations, partial differential equations, and mean-field control, and show its ability to generalize its learning capability to operators beyond the training distribution.

Language: English

Citations

12

Inductive transfer-learning of high-fidelity aerodynamics from inviscid panel methods
Benjamin Wong, Boo Cheong Khoo

Advances in Aerodynamics, Journal Year: 2025, Volume and Issue: 7(1)

Published: Jan. 9, 2025

Building accurate and generalizable machine-learning models requires large training datasets. In aerodynamics, quantities of interest are typically governed by complex, non-linear mechanisms that neural networks are well suited to address. However, the acquisition of large, high-fidelity datasets from either simulations or experiments can be expensive. In this work, a transfer-learning framework is explored to reduce reliance on these expensive data by exploiting the cost-effectiveness of low-fidelity analyses, such as the inviscid panel method, for constructing extensive datasets. By first developing a robust base model on low-fidelity distributions, target models can "learn" high-fidelity quantities by simply transferring the relevant embedded features, instead of relying solely on access to scarce high-fidelity samples. Assessment reveals performance gains over conventional learning schemes in (1) fidelity enhancement of pressure distributions; (2) generalizing prior knowledge to learn adjacent skin-friction properties even without a low-fidelity equivalent; and (3) extrapolation to yet-to-be-seen operating conditions. Under conditions of limited training samples, test MSE evaluations improved by magnitudes of up to 10² and 10¹ for the three respective tasks. As such, the findings motivate further investigations to support data-scarce surrogate modelling in more empirical settings.

Language: English

Citations

0