
Briefings in Bioinformatics, Journal Year: 2024, Volume and Issue: 25(5)
Published: July 25, 2024
Deep learning applications have had a profound impact on many scientific fields, including functional genomics. Deep learning models can learn complex interactions between and within omics data; however, interpreting and explaining these models can be challenging. Interpretability is essential not only to help progress our understanding of the biological mechanisms underlying traits and diseases but also for establishing trust in a model's efficacy for healthcare applications. Recognizing this importance, recent years have seen the development of numerous diverse interpretability strategies, making it increasingly difficult to navigate the field. In this review, we present a quantitative analysis of the challenges arising when designing interpretable deep learning solutions. We explore design choices related to the characteristics of genomics data, the neural network architectures applied, and strategies for interpretation. By quantifying the current state of the field with a predefined set of criteria, we find the most frequent solutions, highlight exceptional examples, and identify unexplored opportunities for developing interpretable deep learning solutions.