Teaching Neural Networks Using Comic Strips DOI

Guido Camerlingo, Paolo Fantozzi, Luigi Laura et al.

Lecture Notes in Networks and Systems, Journal Year: 2024, Volume and Issue: unknown, P. 1 - 10

Published: Jan. 1, 2024

Language: English

Evaluating the effectiveness of XAI techniques for encoder-based language models DOI
Melkamu Mersha, Mesay Gemeda Yigezu, Jugal Kalita et al.

Knowledge-Based Systems, Journal Year: 2025, Volume and Issue: 310, P. 113042 - 113042

Published: Jan. 23, 2025

Language: English

Citations: 1

Machine Learning Explainability as a Service: Service Description and Economics DOI
Paolo Fantozzi, Luigi Laura, Maurizio Naldi et al.

Lecture Notes in Computer Science, Journal Year: 2025, Volume and Issue: unknown, P. 244 - 253

Published: Jan. 1, 2025

Language: English

Citations: 0

The university research assessment dilemma: a decision support system for the next evaluation campaigns DOI
Paolo Fantozzi, Valerio Ficcadenti, Maurizio Naldi et al.

Scientometrics, Journal Year: 2025, Volume and Issue: unknown

Published: Feb. 27, 2025

Language: English

Citations: 0

Developing a novel Temporal Air-quality Risk Index using LSTM autoencoder: A case study with South Korean air quality data DOI
Hyerim Park, Wonho Sohn, Eunjin Kang et al.

The Science of The Total Environment, Journal Year: 2025, Volume and Issue: 978, P. 179303 - 179303

Published: April 16, 2025

Language: English

Citations: 0

Large Language Models in Genomics—A Perspective on Personalized Medicine DOI Creative Commons
Shahid Ali, Yazdan Ahmad Qadri, Khurshid Ahmad et al.

Bioengineering, Journal Year: 2025, Volume and Issue: 12(5), P. 440 - 440

Published: April 23, 2025

Integrating artificial intelligence (AI), particularly large language models (LLMs), into the healthcare industry is revolutionizing the field of medicine. LLMs possess the capability to analyze scientific literature and genomic data by comprehending and producing human-like text. This enhances the accuracy, precision, and efficiency of extensive analyses through contextualization. LLMs have made significant advancements in their ability to understand complex genetic terminology and accurately predict medical outcomes. These capabilities allow for a more thorough understanding of influences on health issues and the creation of effective therapies. This review emphasizes LLMs’ impact on healthcare, evaluates their triumphs and limitations in genomic data processing, and makes recommendations for addressing these limitations in order to enhance the healthcare system. It explores the latest developments in genomic analysis, focusing on enhancing disease diagnosis and treatment accuracy while taking into account an individual’s genetic composition. It also anticipates a future in which AI-driven genomic analysis is commonplace in clinical practice, suggesting potential research areas. To effectively leverage AI for personalized medicine, it is vital to actively support innovation across multiple sectors, ensuring that AI developments directly contribute to solutions tailored to individual patients.

Language: English

Citations: 0

Explainable Pre-Trained Language Models for Sentiment Analysis in Low-Resourced Languages DOI Creative Commons
Koena Ronny Mabokela, Mpho Primus, Turgay Çelik et al.

Big Data and Cognitive Computing, Journal Year: 2024, Volume and Issue: 8(11), P. 160 - 160

Published: Nov. 15, 2024

Sentiment analysis is a crucial tool for measuring public opinion and understanding human communication across digital social media platforms. However, due to linguistic complexities and limited data or computational resources, it is under-represented in many African languages. While state-of-the-art Afrocentric pre-trained language models (PLMs) have been developed for various natural language processing (NLP) tasks, their applications in eXplainable Artificial Intelligence (XAI) remain largely unexplored. In this study, we propose a novel approach that combines PLMs with XAI techniques for sentiment analysis. We demonstrate the effectiveness of incorporating attention mechanisms and visualization in improving the transparency, trustworthiness, and decision-making capabilities of transformer-based models when making predictions. To validate our approach, we employ the SAfriSenti corpus, a multilingual dataset of under-resourced South African languages, and perform a series of experiments. These experiments enable comprehensive evaluations, comparing the performance of our models against mainstream PLMs. Our results show that the Afro-XLMR model outperforms all other models, achieving an average F1-score of 71.04% across the five tested languages and the lowest error rate among the evaluated models. Additionally, we enhance interpretability and explainability using Local Interpretable Model-Agnostic Explanations (LIME) and Shapley Additive Explanations (SHAP). These techniques ensure that predictions are not only accurate and interpretable but also understandable, fostering trust and reliability in AI-driven NLP technologies, particularly in the context of under-resourced African languages.

Language: English

Citations: 0
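
The entry above reports using LIME and SHAP to explain the predictions of transformer-based sentiment classifiers. As a rough, hedged illustration of that general technique only (not the authors' code, and not the Afro-XLMR checkpoints or SAfriSenti data used in the paper), the Python sketch below wraps a Hugging Face text-classification pipeline in the probability function that LIME's LimeTextExplainer expects; the checkpoint name, example sentence, and sampling parameters are placeholder assumptions.

```python
# Minimal sketch: explaining a transformer sentiment classifier with LIME.
# Assumptions: `pip install transformers torch lime numpy`; the checkpoint
# below is a placeholder multilingual sentiment model, not the paper's.
import numpy as np
from lime.lime_text import LimeTextExplainer
from transformers import pipeline

MODEL_NAME = "cardiffnlp/twitter-xlm-roberta-base-sentiment"  # assumed checkpoint
classifier = pipeline("text-classification", model=MODEL_NAME, top_k=None)

# Fix a stable class order taken from the model's own label map.
CLASS_NAMES = sorted(classifier.model.config.id2label.values())

def predict_proba(texts):
    """Return an (n_texts, n_classes) probability matrix, as LIME expects."""
    outputs = classifier(list(texts))            # one list of {label, score} per text
    rows = []
    for scores in outputs:
        by_label = {s["label"]: s["score"] for s in scores}
        rows.append([by_label[name] for name in CLASS_NAMES])
    return np.array(rows)

explainer = LimeTextExplainer(class_names=CLASS_NAMES)
explanation = explainer.explain_instance(
    "The service was friendly and the food was great.",  # toy example sentence
    predict_proba,
    num_features=6,    # number of tokens to attribute
    num_samples=500,   # perturbed samples used to fit the local surrogate
    top_labels=1,      # explain the class the model actually predicts
)
top = explanation.available_labels()[0]
print(CLASS_NAMES[top], explanation.as_list(label=top))  # (token, weight) pairs
```

The only nontrivial step is the predict_proba wrapper: LIME perturbs the input text and needs class probabilities in a fixed column order, so the pipeline's per-label scores are reshaped into a matrix whose columns follow CLASS_NAMES. A SHAP-based variant would follow the same pattern, with a SHAP text explainer computing the token attributions instead of LIME's local surrogate.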
