Research on runoff process vectorization and integration of deep learning algorithms for flood forecasting
Chengshuai Liu, Wenzhong Li, Caihong Hu

et al.

Journal of Environmental Management, Journal Year: 2024, Volume and Issue: 362, P. 121260 - 121260

Published: June 1, 2024

Language: English

Differentiable modelling to unify machine learning and physical models for geosciences
Chaopeng Shen, Alison P. Appling, Pierre Gentine

et al.

Nature Reviews Earth & Environment, Journal Year: 2023, Volume and Issue: 4(8), P. 552 - 567

Published: July 11, 2023

Language: English

Citations

169

An ensemble CNN-LSTM and GRU adaptive weighting model based improved sparrow search algorithm for predicting runoff using historical meteorological and runoff data as input
Zhiyuan Yao, Zhaocai Wang, Dangwei Wang

et al.

Journal of Hydrology, Journal Year: 2023, Volume and Issue: 625, P. 129977 - 129977

Published: July 22, 2023

Language: English

Citations

104

Deep transfer learning based on transformer for flood forecasting in data-sparse basins

Yuanhao Xu, Kairong Lin, Caihong Hu

et al.

Journal of Hydrology, Journal Year: 2023, Volume and Issue: 625, P. 129956 - 129956

Published: July 19, 2023

Language: English

Citations

65

A Novel Runoff Prediction Model Based on Support Vector Machine and Gate Recurrent unit with Secondary Mode Decomposition
Jinghan Dong, Zhaocai Wang, Tunhua Wu

et al.

Water Resources Management, Journal Year: 2024, Volume and Issue: 38(5), P. 1655 - 1674

Published: Feb. 6, 2024

Language: English

Citations

30

Deep learning for cross-region streamflow and flood forecasting at a global scale
Binlan Zhang, Chaojun Ouyang, Peng Cui

et al.

The Innovation, Journal Year: 2024, Volume and Issue: 5(3), P. 100617 - 100617

Published: March 26, 2024

Language: English

Citations

28

An interpretable hybrid deep learning model for flood forecasting based on Transformer and LSTM
Wenzhong Li, Chengshuai Liu, Yingying Xu

et al.

Journal of Hydrology: Regional Studies, Journal Year: 2024, Volume and Issue: 54, P. 101873 - 101873

Published: June 27, 2024

Language: English

Citations

20

Ensemble learning using multivariate variational mode decomposition based on the Transformer for multi-step-ahead streamflow forecasting

Jinjie Fang, Linshan Yang, Xiaohu Wen

et al.

Journal of Hydrology, Journal Year: 2024, Volume and Issue: 636, P. 131275 - 131275

Published: May 7, 2024

Language: English

Citations

17

TLT: Recurrent fine-tuning transfer learning for water quality long-term prediction
Peng Lin, Huan Wu, Min Gao

et al.

Water Research, Journal Year: 2022, Volume and Issue: 225, P. 119171 - 119171

Published: Sept. 29, 2022

Language: English

Citations

52

Runoff predictions in new-gauged basins using two transformer-based models
Hanlin Yin, Wu Zhu, Xiuwei Zhang

et al.

Journal of Hydrology, Journal Year: 2023, Volume and Issue: 622, P. 129684 - 129684

Published: May 18, 2023

Language: English

Citations

31

Evaluation of Transformer model and Self-Attention mechanism in the Yangtze River basin runoff prediction

Xikun Wei, Guojie Wang, Britta Schmalz

et al.

Journal of Hydrology: Regional Studies, Journal Year: 2023, Volume and Issue: 47, P. 101438 - 101438

Published: June 1, 2023

This study focuses on the Yangtze River basin of China. We applied a recently popular deep learning (DL) algorithm, the Transformer (TSF), and two commonly used DL methods, Long Short-Term Memory (LSTM) and the Gated Recurrent Unit (GRU), to evaluate the performance of TSF in predicting runoff in the basin. We also added the main structure of TSF, Self-Attention (SA), to the LSTM and GRU models (namely LSTM-SA and GRU-SA) to investigate whether the inclusion of the SA mechanism can improve prediction capability. Seven climatic observations (mean temperature, maximum temperature, precipitation, etc.) served as the input data in our study. The whole dataset was divided into training, validation and test datasets. In addition, we investigated the relationship between model performance and time steps. Our experimental results show that LSTM-SA has the best performance with the fewest parameters, while TSF performs worst due to the lack of sufficient data. The LSTM and GRU models perform better than TSF when training samples are limited (such as when the number of model parameters is ten times larger than the number of samples). Furthermore, the SA mechanism improves prediction accuracy when added to the LSTM and GRU structures. Different time steps (5 d, 10 d, 15 d, 20 d, 25 d and 30 d) were used to train models with different input lengths, showing that an appropriate time step significantly improves performance.
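The self-attention (SA) block that this abstract describes appending to LSTM and GRU hidden states can be sketched compactly. The following is a minimal, illustrative single-head version in NumPy, not the authors' implementation; the window length, hidden size, and identity query/key/value projections are simplifying assumptions.

```python
import numpy as np

def self_attention(h):
    """Minimal single-head self-attention over a sequence of hidden
    states h of shape (time_steps, d). Query, key, and value are all
    taken as h itself (identity projections), a simplifying assumption."""
    scores = h @ h.T / np.sqrt(h.shape[1])           # (T, T) scaled similarities
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)                # row-wise softmax weights
    return w @ h                                     # attended hidden states

h = np.random.randn(10, 4)   # e.g. a 10-day input window, 4-dim hidden states
out = self_attention(h)
print(out.shape)             # (10, 4): same shape, each step now context-aware
```

In an LSTM-SA or GRU-SA variant of the kind the paper compares, a block like this would sit between the recurrent layer's output sequence and the final prediction head, letting every time step re-weight information from the whole input window.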

Language: English

Citations

29