Fast prediction of compressor flow field based on a deep attention symmetrical neural network DOI

Yue Wu,

Dun Ba, Juan Du

et al.

Physics of Fluids, Journal Year: 2024, Volume and Issue: 36(11)

Published: Nov. 1, 2024

Accurate and rapid prediction of compressor performance and key flow characteristics is critical for digital design, digital twin modeling, and virtual–real interaction. However, traditional methods that obtain flow field parameters by solving the Navier–Stokes equations are computationally intensive and time-consuming. To establish a fast prediction model for a transonic three-stage axial compressor, this study proposes a novel data-driven deep attention symmetrical neural network for flow field reconstruction at different blade rows and spanwise positions. The network integrates a vision transformer (ViT) and a symmetrical convolutional neural network (SCNN). The ViT extracts geometric features from the blade passages, while the SCNN performs deeper extraction of input parameters such as boundary conditions and coordinates, enabling precise predictions. Results indicate that the trained model can efficiently and accurately reconstruct the internal flow field within 0.5 s, capturing phenomena such as flow separation and the wake. Compared with numerical simulations, the current model offers significant advantages in computational speed, delivering a three-order-of-magnitude speedup over computational fluid dynamics simulations. It shows strong potential for engineering applications and provides robust support for building digital models in the turbomachinery field.
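A minimal sketch of the two-branch idea described above: one transformer branch encodes the blade-passage geometry while a second branch encodes operating conditions, and the fused features are decoded into a flow field. This is not the authors' code; image size, layer widths, the use of an MLP condition branch, and the deconvolution decoder are all assumptions.

```python
# Illustrative two-branch "geometry + condition" flow-field predictor (assumed sizes).
import torch
import torch.nn as nn

class ViTGeometryEncoder(nn.Module):
    """Encodes a blade-passage geometry image into a global feature vector."""
    def __init__(self, img=64, patch=8, dim=128, depth=4, heads=4):
        super().__init__()
        self.patchify = nn.Conv2d(1, dim, kernel_size=patch, stride=patch)
        n_patches = (img // patch) ** 2
        self.pos = nn.Parameter(torch.zeros(1, n_patches, dim))
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)

    def forward(self, x):                          # x: (B, 1, 64, 64)
        tokens = self.patchify(x).flatten(2).transpose(1, 2) + self.pos
        return self.encoder(tokens).mean(dim=1)    # (B, dim)

class FlowFieldPredictor(nn.Module):
    """Fuses geometry features with boundary conditions and decodes a 2D field."""
    def __init__(self, dim=128, n_cond=4):
        super().__init__()
        self.geom = ViTGeometryEncoder(dim=dim)
        self.cond = nn.Sequential(nn.Linear(n_cond, dim), nn.GELU(), nn.Linear(dim, dim))
        self.decode = nn.Sequential(
            nn.Linear(2 * dim, 16 * 16 * 32), nn.GELU(), nn.Unflatten(1, (32, 16, 16)),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.GELU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),   # (B, 1, 64, 64) field
        )

    def forward(self, geometry, conditions):
        z = torch.cat([self.geom(geometry), self.cond(conditions)], dim=-1)
        return self.decode(z)

pred = FlowFieldPredictor()(torch.randn(2, 1, 64, 64), torch.randn(2, 4))
print(pred.shape)  # torch.Size([2, 1, 64, 64])
```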

Language: English

Multi-depth branch network for efficient image super-resolution DOI

Huiyuan Tian,

Li Zhang, Shijian Li

et al.

Image and Vision Computing, Journal Year: 2024, Volume and Issue: 144, P. 104949 - 104949

Published: Feb. 18, 2024

Language: English

Citations

9

Data-driven modeling of unsteady flow based on deep operator network DOI

Heming Bai,

Zhicheng Wang, Xuesen Chu

et al.

Physics of Fluids, Journal Year: 2024, Volume and Issue: 36(6)

Published: June 1, 2024

Time-dependent flow fields are typically generated by a computational fluid dynamics method, which is an extremely time-consuming process. However, the latent relationship between flow fields governed by the Navier–Stokes equations can be described by an operator. We therefore train a deep operator network (DeepONet) to learn the temporal evolution of flow snapshots. Once properly trained, given a few consecutive snapshots as input, the network has great potential to generate the next snapshot accurately and quickly. Using the output as part of a new input and iterating this process, a series of successive snapshots can be generated with little wall time. Specifically, we consider the two-dimensional flow around a circular cylinder at Reynolds number 1000 and prepare a set of high-fidelity data using a high-order spectral/hp element method as the ground truth. Although the flow is periodic, there are many small-scale features in the wake that are difficult to predict accurately. Furthermore, any discrepancy between the prediction and the ground truth for the first snapshot can easily accumulate during the iterative process and eventually amplify the overall deviations. Therefore, we propose two alternative techniques to improve the training of DeepONet. The first enhances feature extraction by harnessing a "multi-head non-local block." The second refines the network parameters by leveraging a local smooth optimization technique. Both prove highly effective in reducing cumulative errors, and our results outperform those of the dynamic mode decomposition method.
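A hedged sketch of the DeepONet structure and the autoregressive rollout described above: a branch net encodes a few previous snapshots sampled at sensor points, a trunk net encodes query coordinates, and their inner product gives the next-step field. Sensor counts, network widths, and the rollout bookkeeping are assumptions, not the authors' implementation.

```python
# Minimal DeepONet sketch with iterative snapshot generation (assumed sizes).
import torch
import torch.nn as nn

def mlp(sizes):
    layers = []
    for i in range(len(sizes) - 1):
        layers.append(nn.Linear(sizes[i], sizes[i + 1]))
        if i < len(sizes) - 2:
            layers.append(nn.Tanh())
    return nn.Sequential(*layers)

class DeepONet(nn.Module):
    def __init__(self, n_sensors=256, n_steps=3, width=128, p=64):
        super().__init__()
        self.branch = mlp([n_steps * n_sensors, width, width, p])  # encodes input snapshots
        self.trunk = mlp([2, width, width, p])                     # encodes (x, y) query points

    def forward(self, snapshots, xy):
        # snapshots: (B, n_steps * n_sensors); xy: (N, 2) query coordinates
        b = self.branch(snapshots)          # (B, p)
        t = self.trunk(xy)                  # (N, p)
        return b @ t.T                      # (B, N): next-snapshot values at the queries

model = DeepONet()
xy = torch.rand(1024, 2)                    # query grid
history = torch.randn(1, 3 * 256)           # three previous snapshots at 256 sensors

# Autoregressive rollout: append each prediction (restricted to the sensor
# locations, here assumed to be the first 256 query points) to the history.
for _ in range(5):
    nxt = model(history, xy)                # (1, 1024)
    history = torch.cat([history[:, 256:], nxt[:, :256]], dim=1)
```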

Language: English

Citations

6

Informers for turbulent time series data forecast DOI
Dimitris Drikakis, Ioannis W. Kokkinakis,

Daryl L. X. Fung

et al.

Physics of Fluids, Journal Year: 2025, Volume and Issue: 37(1)

Published: Jan. 1, 2025

Long-sequence time-series forecasting requires deep learning models with high predictive capacity to capture long-range dependencies between inputs and outputs effectively. This study presents a forecasting methodology for pressure time series in shock-wave/turbulent boundary layer interaction flows. Pressure signals were extracted below the λ-shock foot for six deformed rigid panel surface cases, where the low-frequency unsteadiness of the shock–boundary layer interaction is most prominent. The Informer model demonstrated superior performance in accurately predicting the pressure signals. Comparative numerical experiments revealed that the Informer generally outperformed the Transformer, as indicated by lower root mean square errors and a more accurate power spectrum. It effectively resolved the signals and better matched the ground truth's low- and mid-frequency content. The forecast accuracy remained robust across all deformation cases, though subtle yet noticeable discrepancies still manifested. Accuracy was heavily dependent on the forecasting step size. A step size of four provided a closer match to the ground truth in a deterministic manner, while a step size of eight achieved agreement in a stochastic sense. Larger step sizes resulted in a gradual decline in accuracy.
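A brief sketch of the windowed multi-step forecasting setup the abstract discusses, with a vanilla transformer encoder standing in for the Informer (its ProbSparse attention is not reproduced here). The context length and the step size of four forecast samples per call are assumptions drawn from the abstract.

```python
# Hedged multi-step pressure-signal forecaster (vanilla encoder as a stand-in).
import torch
import torch.nn as nn

class SeriesForecaster(nn.Module):
    def __init__(self, context=128, horizon=4, dim=64, heads=4, depth=3):
        super().__init__()
        self.embed = nn.Linear(1, dim)
        self.pos = nn.Parameter(torch.zeros(1, context, dim))
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)
        self.head = nn.Linear(dim, horizon)       # predict the next `horizon` samples

    def forward(self, x):                          # x: (B, context) pressure window
        h = self.encoder(self.embed(x.unsqueeze(-1)) + self.pos)
        return self.head(h[:, -1])                 # (B, horizon)

model = SeriesForecaster()
window = torch.randn(8, 128)                       # batch of normalized pressure windows
print(model(window).shape)                         # torch.Size([8, 4])
```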

Language: English

Citations

0

A multi-scale hybrid attention Swin-transformer-based model for the super-resolution reconstruction of turbulence DOI
Xiuyan Liu, Yufei Zhang, Tingting Guo

et al.

Nonlinear Dynamics, Journal Year: 2025, Volume and Issue: unknown

Published: Feb. 17, 2025

Language: English

Citations

0

UTransNet: An efficient hybrid architecture of convolutional neural networks and transformer for the approximation of non-uniform steady laminar flow DOI

W. P. Wang,

Tianle Yin, Jing Pang

et al.

Physics of Fluids, Journal Year: 2025, Volume and Issue: 37(3)

Published: March 1, 2025

Computational fluid dynamics (CFD) is crucial in various fields but computationally expensive and time-consuming, largely due to the complex nonlinear partial differential terms that complicate its equations. A data-driven surrogate model integrating convolutional neural networks and a Transformer, named UTransNet, is proposed to effectively approximate solutions for two-dimensional incompressible non-uniform steady laminar flows that have traditionally been solved by mesh-dependent numerical methods. The encoder module, based on depthwise separable convolution, extracts local geometric features within the flow region. Subsequently, the attention mechanism of the Transformer integrates these features, enabling the capture of global information. A decoder module constructed from deconvolution layers then restores the field dimensions. The integration of local feature extraction and global perception capabilities enables UTransNet to predict the velocity and pressure fields more effectively. Experimental results show that the total mean square error is reduced by about a factor of 12 compared with previous works. Also, the model achieves a speedup of over 3 orders of magnitude relative to a CFD solver running on a Central Processing Unit (CPU) or Graphics Processing Unit. Qualitative and quantitative analyses reveal a high level of similarity between the predicted and ground-truth data.
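A sketch of the encoder–attention–decoder hybrid in the spirit of UTransNet: depthwise separable convolutions downsample a geometry mask, a transformer encoder mixes the resulting tokens globally, and transposed convolutions restore the (u, v, p) fields. The 64×64 resolution, channel counts, and depths are assumptions for illustration only.

```python
# Hedged CNN-encoder / transformer-bottleneck / deconvolution-decoder sketch.
import torch
import torch.nn as nn

def ds_conv(cin, cout):
    """Depthwise separable convolution block with 2x downsampling."""
    return nn.Sequential(
        nn.Conv2d(cin, cin, 3, stride=2, padding=1, groups=cin),  # depthwise
        nn.Conv2d(cin, cout, 1), nn.GELU(),                       # pointwise
    )

class UTransNetSketch(nn.Module):
    def __init__(self, dim=64, heads=4, depth=2):
        super().__init__()
        self.encoder = nn.Sequential(ds_conv(1, 32), ds_conv(32, dim))   # 64 -> 16
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.bottleneck = nn.TransformerEncoder(layer, depth)            # global attention
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(dim, 32, 4, stride=2, padding=1), nn.GELU(),  # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),               # 32 -> 64, (u, v, p)
        )

    def forward(self, mask):                        # mask: (B, 1, 64, 64) geometry
        f = self.encoder(mask)                      # (B, dim, 16, 16)
        tokens = f.flatten(2).transpose(1, 2)       # (B, 256, dim)
        f = self.bottleneck(tokens).transpose(1, 2).reshape_as(f)
        return self.decoder(f)                      # (B, 3, 64, 64)

print(UTransNetSketch()(torch.randn(2, 1, 64, 64)).shape)
```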

Language: English

Citations

0

Flow field prediction with self-supervised learning and graph transformer: A high performance solution for limited data and complex flow scenarios DOI
Hang Shen, Dan Zhang, Akira Rinoshika

et al.

Physics of Fluids, Journal Year: 2025, Volume and Issue: 37(4)

Published: April 1, 2025

To address the challenges of limited labeled data and insufficient global feature extraction in flow field prediction, this paper proposes a modeling approach that combines self-supervised learning with a Graph Transformer. The self-supervised module leverages reconstruction and contrastive tasks to fully exploit the latent information in unlabeled data, thereby enhancing the joint extraction capability for local features. The Graph Transformer incorporates a self-attention mechanism, enabling effective modeling of long-range dependencies and multiscale features in complex flow fields. Experimental results demonstrate that, under 100% labeled-data conditions, the proposed method reduces the root mean squared error achieved by a graph convolutional neural network model on the cylinder and airfoil datasets from 0.970 and 0.561 to 0.616 and 0.305, respectively, achieving significant accuracy improvements of 36.5% and 45.6%. Under 50% labeled-data conditions, the method still exhibits outstanding robustness, with RMSEs of 0.792 and 0.390. Ablation studies reveal that the two components exhibit strong complementarity, with optimal performance when jointly employed. Furthermore, the self-attention mechanism significantly enhances global features, demonstrating its effectiveness in capturing long-range dependencies. The approach demonstrates superior prediction accuracy and robustness, providing an efficient solution with broad application potential.
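An illustrative sketch of the self-supervised pretext idea described above: mesh nodes are treated as tokens carrying coordinates and flow features, a transformer encoder attends over them, and a masked-node reconstruction task supplies the unlabeled-data training signal. Graph edge biases, the contrastive objective, feature sizes, and the masking ratio are all assumptions here, not the paper's exact formulation.

```python
# Hedged masked-node reconstruction pretext for a graph-transformer-style encoder.
import torch
import torch.nn as nn

class GraphTransformerSketch(nn.Module):
    def __init__(self, n_feat=5, dim=64, heads=4, depth=3):
        super().__init__()
        self.embed = nn.Linear(n_feat, dim)
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)
        self.reconstruct = nn.Linear(dim, n_feat)

    def forward(self, nodes):                  # nodes: (B, N, n_feat)
        return self.reconstruct(self.encoder(self.embed(nodes)))

model = GraphTransformerSketch()
nodes = torch.randn(4, 200, 5)                 # e.g. (x, y, u, v, p) per mesh node
mask = torch.rand(4, 200, 1) < 0.3             # hide 30% of the nodes
recon = model(nodes * (~mask))                 # pretext: reconstruct the masked nodes
loss = ((recon - nodes) ** 2 * mask).mean()
loss.backward()
```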

Language: English

Citations

0

Machine learning for modelling unstructured grid data in computational physics: A review DOI
Sibo Cheng, Marc Bocquet, Weiping Ding

et al.

Information Fusion, Journal Year: 2025, Volume and Issue: unknown, P. 103255 - 103255

Published: May 1, 2025

Language: English

Citations

0

Self-supervised transformers for turbulent flow time series DOI
Dimitris Drikakis, Ioannis W. Kokkinakis,

Daryl L. X. Fung

et al.

Physics of Fluids, Journal Year: 2024, Volume and Issue: 36(6)

Published: June 1, 2024

There has been rapid advancement in deep learning models for diverse research fields and, more recently, fluid dynamics. This study presents self-supervised transformers for complex turbulent flow signals across various test problems. Self-supervision aims to leverage the ability to extract meaningful representations from sparse time-series data to improve transformer model accuracy and computational efficiency. Two high-speed cases are considered: a supersonic compression ramp and shock-boundary layer interaction over a statically deformed surface. Several training scenarios are investigated across two different configurations. The signals concern wall pressure fluctuations due to their importance in aerodynamics, aeroelasticity, noise, and acoustic fatigue. The results provide insight into transformers and self-supervision, with application to turbulent time series. The architecture is extendable to other domains where time series are essential.
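A brief sketch of the two-stage self-supervised recipe the abstract describes for 1D pressure signals: pretrain a transformer backbone on a masked-reconstruction pretext with unlabeled windows, then reuse it with a small head for the downstream forecasting task. Window length, masking ratio, and the one-step forecasting head are assumptions.

```python
# Hedged pretrain-then-finetune workflow for turbulent pressure time series.
import torch
import torch.nn as nn

dim, context = 64, 128
embed = nn.Linear(1, dim)                       # token embedding for scalar samples
layer = nn.TransformerEncoderLayer(dim, 4, dim * 4, batch_first=True)
backbone = nn.TransformerEncoder(layer, 3)
pretext_head = nn.Linear(dim, 1)                # reconstruct masked samples
forecast_head = nn.Linear(dim, 1)               # predict the next sample

x = torch.randn(16, context, 1)                 # unlabeled pressure windows
mask = torch.rand(16, context, 1) < 0.25

# Stage 1: self-supervised pretraining (masked reconstruction).
recon = pretext_head(backbone(embed(x * (~mask))))
pretrain_loss = ((recon - x) ** 2 * mask).mean()

# Stage 2: supervised fine-tuning for one-step-ahead forecasting.
target = torch.randn(16, 1)                     # next pressure sample (labeled data)
pred = forecast_head(backbone(embed(x))[:, -1])
finetune_loss = ((pred - target) ** 2).mean()
```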

Language: English

Citations

3

A variable fidelity approach for predicting aerodynamic wall quantities of hypersonic vehicles using the ConvNeXt Encoder-Decoder framework DOI
Yuxin Yang, Shaobo Yao, Youtao Xue

et al.

Aerospace Science and Technology, Journal Year: 2024, Volume and Issue: unknown, P. 109605 - 109605

Published: Sept. 1, 2024

Language: English

Citations

3

Performance comparison of prediction of hydraulic jump length under Multiple neural network models DOI Creative Commons
Ziyuan Xu, Zirui Liu,

Yingzi Peng

et al.

IEEE Access, Journal Year: 2024, Volume and Issue: 12, P. 122888 - 122901

Published: Jan. 1, 2024

The hydraulic jump is a common physical phenomenon in the field of hydraulic engineering. Its essence is the conversion and dissipation of a large amount of energy through the interaction between vortex structures, mainly released in the form of turbulence and water waves. This process significantly reduces the kinetic energy of the flow, thereby mitigating downstream erosion and protecting structures, which in turn extends their service life. As a crucial factor in discharge structure design, the hydraulic jump length is influenced by various factors, including flow velocity, upstream water depth, riverbed roughness height, and Froude number. In this study, we applied dimensional analysis to identify the key parameters influencing hydraulic jumps in a dataset provided in the literature. We utilized a multi-task learning strategy, incorporating a shared feature extraction layer for characteristic modeling within Physics-Informed Neural Networks (PINNs). Furthermore, we compared the performance of PINNs with other data-driven models such as Deep Neural Networks (DNNs), Convolutional Neural Networks (CNNs), and Transformers. The results demonstrated that these models are effective in estimating hydraulic jump transitions and distinguishing steady from unsteady processes. Notably, the PINN model exhibited better performance than the other models, achieving an R² score of 0.8818, an RMSE of 4.4627 cm, an MAE of 3.3784 cm, and a precision of 0.9677 on the test set. These findings are significant for elucidating the characteristics and effects of hydraulic jumps and provide a scientific basis for the safe operation of practical engineering projects.
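A hedged sketch of the multi-task structure the abstract describes: a shared feature-extraction trunk fed with physical/dimensionless inputs (Froude number, upstream depth, velocity, roughness height are assumed here), a regression head for jump length, and a classification head for steady versus unsteady flow. The physics-informed residual terms of the actual PINN are omitted; only the shared-layer multi-task layout is illustrated.

```python
# Hedged multi-task regression/classification sketch for hydraulic jump data.
import torch
import torch.nn as nn

class MultiTaskJumpModel(nn.Module):
    def __init__(self, n_inputs=4, width=64):
        super().__init__()
        self.shared = nn.Sequential(                 # shared feature-extraction layers
            nn.Linear(n_inputs, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
        )
        self.length_head = nn.Linear(width, 1)       # hydraulic jump length (cm)
        self.state_head = nn.Linear(width, 1)        # logit: steady vs. unsteady

    def forward(self, x):
        h = self.shared(x)
        return self.length_head(h), self.state_head(h)

model = MultiTaskJumpModel()
x = torch.randn(32, 4)                               # (Fr, upstream depth, velocity, roughness), normalized
length_true = torch.randn(32, 1)
state_true = torch.randint(0, 2, (32, 1)).float()
length_pred, state_logit = model(x)
loss = nn.functional.mse_loss(length_pred, length_true) \
     + nn.functional.binary_cross_entropy_with_logits(state_logit, state_true)
loss.backward()
```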

Language: English

Citations

2