A Comprehensive Review of Deep Learning: Architectures, Recent Advances, and Applications
Ibomoiye Domor Mienye, Theo G. Swart

Information, Journal Year: 2024, Volume and Issue: 15(12), P. 755 - 755

Published: Nov. 27, 2024

Deep learning (DL) has become a core component of modern artificial intelligence (AI), driving significant advancements across diverse fields by facilitating the analysis of complex systems, from protein folding in biology to molecular discovery in chemistry and particle interactions in physics. However, the field of deep learning is constantly evolving, with recent innovations in both architectures and applications. Therefore, this paper provides a comprehensive review of recent DL advances, covering the evolution and applications of foundational models such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs), as well as newer architectures such as transformers, generative adversarial networks (GANs), capsule networks, and graph neural networks (GNNs). Additionally, the paper discusses novel training techniques, including self-supervised learning, federated learning, and reinforcement learning, which further enhance the capabilities of DL models. By synthesizing recent developments and identifying current challenges, the review offers insights into the state of the art and future research directions, providing valuable guidance for researchers and industry experts.

Language: English

Citations: 10