iEnhancer-GDM: A Deep Learning Framework Based on Generative Adversarial Network and Multi-head Attention Mechanism to Identify Enhancers and Their Strength
Xiaomei Yang, Meng Liao, Bin Ye

et al.

Interdisciplinary Sciences: Computational Life Sciences, Journal Year: 2025, Volume and Issue: unknown

Published: May 7, 2025

Language: English
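No abstract is indexed for this entry, but the title names two architectural ingredients: a generative adversarial network and a multi-head attention mechanism for classifying enhancers. As a minimal sketch of the attention ingredient only (the paper's actual GAN, layer sizes, and training details are not reproduced here; the class name, dimensions, and sequence length below are all illustrative assumptions), a self-attention classifier over one-hot DNA might look like:

```python
# Illustrative sketch only: shows multi-head self-attention applied to
# one-hot DNA for binary enhancer classification. Not the paper's model.
import torch
import torch.nn as nn

class AttentionEnhancerClassifier(nn.Module):
    def __init__(self, d_model: int = 32, n_heads: int = 4):
        super().__init__()
        self.embed = nn.Linear(4, d_model)            # per-position nucleotide embedding
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.head = nn.Linear(d_model, 1)             # enhancer vs. non-enhancer logit

    def forward(self, x):                             # x: (batch, length, 4) one-hot
        h = self.embed(x)
        h, _ = self.attn(h, h, h)                     # self-attention over positions
        return self.head(h.mean(dim=1)).squeeze(-1)   # pool positions -> logit

model = AttentionEnhancerClassifier()
batch = torch.eye(4)[torch.randint(0, 4, (8, 200))]   # (8, 200, 4) random one-hot DNA
print(torch.sigmoid(model(batch)).shape)              # torch.Size([8])
```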

Citations: 0

Designing interpretable deep learning applications for functional genomics: a quantitative analysis
Arno van Hilten, Sonja Katz, Edoardo Saccenti

et al.

Briefings in Bioinformatics, Journal Year: 2024, Volume and Issue: 25(5)

Published: July 25, 2024

Deep learning applications have had a profound impact on many scientific fields, including functional genomics. Deep learning models can learn complex interactions between and within omics data; however, interpreting and explaining these models can be challenging. Interpretability is essential not only to help progress our understanding of the biological mechanisms underlying traits and diseases but also for establishing trust in a model's efficacy for healthcare applications. Recognizing this importance, recent years have seen the development of numerous and diverse interpretability strategies, making it increasingly difficult to navigate the field. In this review, we present a quantitative analysis of the challenges arising when designing interpretable deep learning solutions in functional genomics. We explore design choices related to the characteristics of genomics data, the neural network architectures applied, and strategies for interpretation. By quantifying the current state of the field with a predefined set of criteria, we find the most frequent solutions, highlight exceptional examples, and identify unexplored opportunities for developing novel interpretable deep learning applications.
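Many of the interpretation strategies such a review covers are post hoc attribution methods applied after training. As a minimal, hedged sketch of one widespread example (gradient-times-input saliency on a toy sequence CNN; the model, hyperparameters, and function names below are illustrative assumptions, not taken from the review):

```python
# Minimal sketch of gradient-based saliency for a CNN on one-hot DNA.
# All names and sizes are illustrative, not from the reviewed paper.
import torch
import torch.nn as nn

class TinySeqCNN(nn.Module):
    """Toy 1D CNN classifier over one-hot DNA of shape (batch, 4, length)."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv1d(4, 16, kernel_size=9, padding=4)
        self.pool = nn.AdaptiveMaxPool1d(1)
        self.fc = nn.Linear(16, 1)

    def forward(self, x):
        h = torch.relu(self.conv(x))
        return self.fc(self.pool(h).squeeze(-1)).squeeze(-1)

def saliency(model: nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Input-times-gradient attribution per nucleotide position."""
    x = x.clone().requires_grad_(True)
    model(x).sum().backward()          # d(logit)/d(input)
    return (x * x.grad).sum(dim=1)     # collapse 4 channels -> (batch, length)

model = TinySeqCNN()
seqs = torch.eye(4)[torch.randint(0, 4, (2, 200))].transpose(1, 2)  # random one-hot batch
print(saliency(model, seqs).shape)     # torch.Size([2, 200])
```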

Language: English

Citations: 1