Advancements and implications of artificial intelligence for early detection, diagnosis and tailored treatment of cancer DOI
Sonia Chadha, Sayali Mukherjee, Somali Sanyal

et al.

Seminars in Oncology, Journal year: 2025, Issue 52(3), Pages 152349 - 152349

Published: May 8, 2025

Language: English

Advanced insights through systematic analysis: Mapping future research directions and opportunities for xAI in deep learning and artificial intelligence used in cybersecurity DOI Creative Commons
Marek Pawlicki, Aleksandra Pawlicka, Rafał Kozik

et al.

Neurocomputing, Journal year: 2024, Issue 590, Pages 127759 - 127759

Published: April 25, 2024

This paper engages in a comprehensive investigation of the application of Explainable Artificial Intelligence (xAI) within the context of deep learning and artificial intelligence, with a specific focus on its implications for cybersecurity. Firstly, it gives an overview of xAI techniques and of their significance and benefits when applied to cybersecurity. Subsequently, the authors methodically delineate a systematic mapping study, which serves as an investigative tool for discerning the potential trajectory of the field. This strategic methodological framework lets one identify the future research directions and opportunities that underlie the integration of xAI into the realm of deep learning and cybersecurity, and these are described in depth. The paper then brings together all the insights gathered from this extensive mapping and closes with final conclusions.

Language: English

Cited by

15

Predictors of Healthcare Practitioners' Intention to Use AI-Enabled Clinical Decision Support Systems (AI-CDSSs): A Meta-Analysis Based on the Unified Theory of Acceptance and Use of Technology (UTAUT) (Preprint) DOI Creative Commons

Julius Dingel, Anne‐Kathrin Kleine, Julia Cecil

et al.

Journal of Medical Internet Research, Journal year: 2024, Issue unknown

Published: Feb. 15, 2024

Artificial intelligence-enabled clinical decision support systems (AI-CDSSs) offer potential for improving health care outcomes, but their adoption among practitioners remains limited.

Language: English

Cited by

12

Explainable deep learning approach for advanced persistent threats (APTs) detection in cybersecurity: a review DOI Creative Commons

Noor Hazlina Abdul Mutalib, Aznul Qalid Md Sabri, Ainuddin Wahid Abdul Wahab

et al.

Artificial Intelligence Review, Journal year: 2024, Issue 57(11)

Published: Sep. 18, 2024

In recent years, Advanced Persistent Threat (APT) attacks on network systems have increased through sophisticated fraud tactics. Traditional Intrusion Detection Systems (IDSs) suffer from low detection accuracy, high false-positive rates, and difficulty identifying unknown attacks such as remote-to-local (R2L) and user-to-root (U2R) attacks. This paper addresses these challenges by providing a foundational discussion of APTs and of the limitations of existing detection methods. It then pivots to explore the novel integration of deep learning techniques with Explainable Artificial Intelligence (XAI) to improve APT detection. The review aims to fill gaps in current research through a thorough analysis of how XAI methods, such as Shapley Additive Explanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME), can make black-box detection models more transparent and interpretable. The objective is to demonstrate the necessity of explainability and to propose solutions that enhance the trustworthiness and effectiveness of these models. The paper critically reviews existing approaches, highlights their strengths and limitations, and identifies open issues that require further research. It also suggests future directions to combat evolving threats, paving the way for effective and reliable cybersecurity solutions. Overall, this work emphasizes the importance of explainability in enhancing the performance of detection systems.
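
As an illustration of the kind of post-hoc explanation this review surveys, the hedged sketch below applies SHAP and LIME to a generic random-forest intrusion-detection classifier. The synthetic "flow" features, class names, and model are illustrative assumptions, not the pipeline of the cited paper.

```python
# Hedged sketch: SHAP and LIME explanations for a black-box intrusion-detection
# classifier. Synthetic data stands in for network-flow features.
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic "flow records": 20 numeric features, binary label (benign vs. APT-like).
X, y = make_classification(n_samples=2000, n_features=20, n_informative=8,
                           random_state=0)
feature_names = [f"flow_feat_{i}" for i in range(X.shape[1])]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Global and local attributions with SHAP (TreeExplainer is exact for tree models).
shap_values = shap.TreeExplainer(model).shap_values(X_te)  # per-feature attributions

# Local explanation of a single alert with LIME.
lime_explainer = LimeTabularExplainer(X_tr, feature_names=feature_names,
                                      class_names=["benign", "apt"],
                                      mode="classification")
lime_exp = lime_explainer.explain_instance(X_te[0], model.predict_proba,
                                           num_features=5)
print(lime_exp.as_list())  # top features driving this one prediction
```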

Language: English

Cited by

10

Eight quick tips for biologically and medically informed machine learning DOI Creative Commons
Luca Oneto, Davide Chicco

PLoS Computational Biology, Journal year: 2025, Issue 21(1), Pages e1012711 - e1012711

Published: Jan. 9, 2025

Machine learning has become a powerful tool for computational analysis in the biomedical sciences, with its effectiveness significantly enhanced by integrating domain-specific knowledge. This integration gives rise to informed machine learning, in contrast to studies that lack domain knowledge and treat all variables equally (uninformed machine learning). While the application of informed machine learning to bioinformatics and health informatics datasets is more seamless, the likelihood of errors is also increased. To address this drawback, we present eight guidelines outlining best practices for employing informed machine learning methods in the biomedical sciences. These quick tips offer recommendations on various aspects of the analysis, aiming to assist researchers in generating robust, explainable, and dependable results. Even if these simple suggestions were originally crafted for novices, we believe they are relevant for experts as well.
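
A minimal sketch of one practice in the spirit of these tips, biologically informed feature restriction: instead of treating all variables equally, the model is limited to a curated marker panel. The data, gene names, and marker list below are hypothetical placeholders, not taken from the article.

```python
# Hedged sketch: "informed" vs. "uninformed" modeling on a mock expression matrix.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
# shuffle=False keeps the informative columns first, so the toy "panel" overlaps them.
X, y = make_classification(n_samples=300, n_features=500, n_informative=15,
                           shuffle=False, random_state=42)
perm = rng.permutation(len(y))          # shuffle samples but keep column order
X, y = X[perm], y[perm]
genes = [f"gene_{i}" for i in range(X.shape[1])]

# Hypothetical prior knowledge: a curated panel of genes reported as relevant.
known_markers = [f"gene_{i}" for i in range(30)]
marker_idx = [genes.index(g) for g in known_markers]

uninformed = cross_val_score(LogisticRegression(max_iter=5000), X, y, cv=5)
informed = cross_val_score(LogisticRegression(max_iter=5000),
                           X[:, marker_idx], y, cv=5)
print(f"all {X.shape[1]} features      : CV accuracy {uninformed.mean():.2f}")
print(f"{len(marker_idx)}-gene marker panel : CV accuracy {informed.mean():.2f}")
```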

Language: English

Cited by

2

Artificial Intelligence and Non-Destructive Testing Data to Assess Concrete Sustainability of Civil Engineering Infrastructures DOI Open Access

Cédric Baudrit, Sylvain Dufau, Géraldine Villain

et al.

Materials, Journal year: 2025, Issue 18(4), Pages 826 - 826

Published: Feb. 13, 2025

The sustainable development and preservation of natural resources have highlighted the critical need for effective maintenance of civil engineering infrastructures. Recent advancements in technology and data digitization enable the acquisition of data from sensors on structures like bridges, tunnels, and energy production facilities. This paper explores "smart" uses of these data to optimize maintenance actions through interdisciplinary approaches, integrating artificial intelligence and civil engineering. Corrosion, a key factor affecting infrastructure health, underscores the need for robust predictive models. Supervised machine learning regression methods, particularly Random Forest (RF) and Artificial Neural Networks (ANNs), are investigated for predicting structural properties based on Non-Destructive Testing (NDT) data. The dataset includes various measurements, such as ultrasonic, electromagnetic, and electrical measurements on concrete samples. The study compares the performances of RF and ANN models in predicting characteristics such as compressive strength, elastic modulus, porosity, density, and saturation rate. The results show that, while both models exhibit strong predictive capabilities, one of them generally outperforms the other in most metrics. Additionally, SHapley Additive exPlanation (SHAP) provides insights into model decisions, ensuring transparency and interpretability. This research emphasizes the potential of integrating artificial intelligence with empirical and mechanical methods to enhance infrastructure maintenance, providing a comprehensive framework for future applications.
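
The following hedged sketch mirrors the kind of comparison the abstract describes: a Random Forest regressor versus a small neural-network regressor on NDT-style inputs, with SHAP attributions for the tree model. The synthetic features and target are placeholders, not the study's measurements.

```python
# Hedged sketch: RF vs. ANN regression on mock NDT features, plus SHAP attributions.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
n = 400
# Mock NDT observables (e.g., ultrasonic velocity, resistivity, permittivity, ...).
X = rng.normal(size=(n, 4))
y = 30 + 5 * X[:, 0] - 3 * X[:, 1] + rng.normal(scale=1.0, size=n)  # "strength"
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=7)

rf = RandomForestRegressor(n_estimators=300, random_state=7).fit(X_tr, y_tr)
ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000,
                                 random_state=7)).fit(X_tr, y_tr)

print("RF  R2:", round(r2_score(y_te, rf.predict(X_te)), 3))
print("ANN R2:", round(r2_score(y_te, ann.predict(X_te)), 3))

# SHAP values show which observables drive each predicted strength.
shap_values = shap.TreeExplainer(rf).shap_values(X_te)
print("mean |SHAP| per feature:", np.abs(shap_values).mean(axis=0).round(2))
```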

Language: English

Cited by

2

FAIR and Beyond: Evolving Principles for Modern Data Ecosystems DOI

M. A. Bashir

Lecture notes in networks and systems, Journal year: 2025, Issue unknown, Pages 170 - 189

Published: Jan. 1, 2025

Language: English

Cited by

1

Enhancing interpretability and accuracy of AI models in healthcare: a comprehensive review on challenges and future directions DOI Creative Commons
Mohammad Ennab, Hamid Mcheick

Frontiers in Robotics and AI, Journal year: 2024, Issue 11

Published: Nov. 28, 2024

Artificial Intelligence (AI) has demonstrated exceptional performance in automating critical healthcare tasks, such as diagnostic imaging analysis and predictive modeling, often surpassing human capabilities. The integration of AI promises substantial improvements in patient outcomes, including faster diagnosis and personalized treatment plans. However, AI models frequently lack interpretability, leading to significant challenges concerning their generalizability across diverse populations. These opaque technologies raise serious safety concerns, as non-interpretable models can result in improper decisions due to misinterpretations by providers. Our systematic review explores various applications of AI in healthcare, focusing on the assessment of model interpretability and accuracy. We identify and elucidate the most critical limitations of current AI systems, such as the black-box nature of deep learning and variability across different clinical settings. By addressing these challenges, our objective is to provide healthcare providers with well-informed strategies to develop innovative and safe AI solutions. This review aims to ensure that future AI implementations not only enhance outcomes but also maintain transparency and safety.

Language: English

Cited by

6

Explainable Artificial Intelligence in Hydrology: Interpreting Black-Box Snowmelt-Driven Streamflow Predictions in an Arid Andean Basin of North-Central Chile DOI Open Access
Jorge Núñez, Catalina B. Cortés, Marjorie A. Yáñez

et al.

Water, Journal year: 2023, Issue 15(19), Pages 3369 - 3369

Published: Sep. 26, 2023

In recent years, a new discipline known as Explainable Artificial Intelligence (XAI) has emerged, following the growing trend experienced by artificial intelligence over the last decades. There are, however, important gaps in the adoption of XAI in hydrology research, in terms of application studies in the southern hemisphere or of studies associated with snowmelt-driven streamflow prediction in arid regions, to mention a few. This paper seeks to contribute to filling these knowledge gaps through the application of XAI techniques in an arid basin located in the north-central region of Chile, in South America. For this, two streamflow prediction models were built using the Random Forest algorithm, for one and four months in advance. The models show good performance in the training set (RMSE: 1.33, R2: 0.94, MAE: 0.55) and (RMSE: 5.67, R2: 0.94, MAE: 1.51), respectively, and the selected interpretation techniques (variable importance, partial dependence plots, accumulated local effects, Shapley values, and local interpretable model-agnostic explanations) show that the hydrometeorological variables in the vicinity of the basin are more important than the climate variables, and that this occurs both at the dataset level and for the lowest records. The importance of the approach adopted in this study is discussed in terms of its contribution to the understanding of hydrological processes, as well as its role in high-stakes decision-making.
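
A hedged sketch of the workflow outlined above, assuming scikit-learn: a Random Forest regressor scored with RMSE, R2, and MAE and inspected with permutation importance and partial dependence. The predictors are random placeholders rather than the basin's hydrometeorological records, and only two of the listed interpretation methods are shown.

```python
# Hedged sketch: RF streamflow regression with error metrics and two XAI diagnostics.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import partial_dependence, permutation_importance
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 500
# Mock predictors (e.g., snow-water equivalent, precipitation, temperature, a climate index).
X = rng.normal(size=(n, 4))
y = 2.0 * X[:, 0] + 0.8 * X[:, 1] + rng.normal(scale=0.5, size=n)  # "streamflow"
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=3)

model = RandomForestRegressor(n_estimators=400, random_state=3).fit(X_tr, y_tr)
pred = model.predict(X_te)

rmse = mean_squared_error(y_te, pred) ** 0.5
print(f"RMSE={rmse:.2f}  R2={r2_score(y_te, pred):.2f}  "
      f"MAE={mean_absolute_error(y_te, pred):.2f}")

# Model-agnostic variable importance and a 1-D partial dependence curve.
imp = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=3)
print("permutation importance:", imp.importances_mean.round(3))
pd_result = partial_dependence(model, X_te, features=[0])
print("partial dependence of feature 0 (first grid points):",
      pd_result["average"][0][:5].round(2))
```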

Language: English

Cited by

12

Evolving Feature Selection: Synergistic Backward and Forward Deletion Method Utilizing Global Feature Importance DOI Creative Commons
Takafumi Nakanishi, Ponlawat Chophuk, Krisana Chinnasarn

et al.

IEEE Access, Journal year: 2024, Issue 12, Pages 88696 - 88714

Published: Jan. 1, 2024
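
No abstract accompanies this entry, so the sketch below illustrates only generic importance-guided backward elimination with a cross-validation guard, not the authors' synergistic backward-and-forward method; the data, model, and tolerance are assumptions.

```python
# Generic sketch: drop the globally least-important feature while CV accuracy holds.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=25, n_informative=6,
                           random_state=1)
kept = list(range(X.shape[1]))
best = cross_val_score(RandomForestClassifier(random_state=1), X, y, cv=5).mean()

while len(kept) > 1:
    # Global importance from a forest fitted on the currently kept features.
    rf = RandomForestClassifier(random_state=1).fit(X[:, kept], y)
    weakest = kept[int(np.argmin(rf.feature_importances_))]
    trial = [j for j in kept if j != weakest]
    score = cross_val_score(RandomForestClassifier(random_state=1),
                            X[:, trial], y, cv=5).mean()
    if score + 1e-3 < best:          # stop once dropping a feature clearly hurts
        break
    kept, best = trial, max(best, score)

print(f"kept {len(kept)} features, CV accuracy {best:.3f}")
```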

Language: English

Cited by

4

Unveiling the Depths of Explainable AI DOI
Wasim Khan, Mohammad Ishrat

Advances in systems analysis, software engineering, and high performance computing book series, Journal year: 2024, Issue unknown, Pages 78 - 106

Published: March 18, 2024

Explainable AI (XAI) has become increasingly important in the fast-evolving fields of AI and ML. The complexity and obscurity of AI, especially in the context of deep learning, pose unique issues that are explored in this chapter. While deep learning has shown impressive performance, it has been criticised for its opaque reasoning. The fundamental motivation behind this research was to compile a comprehensive, cutting-edge survey of XAI methods applicable to a wide variety of fields. This review is achieved through a meticulous examination and analysis of the various methodologies and techniques employed in XAI, along with their ramifications within specific application contexts. In addition to highlighting the existing state of the art, the authors recognize the imperative for continuous advancement by delving into the limitations inherent in current methods. Furthermore, they offer a succinct glimpse into the future trajectory of XAI research, emphasizing emerging avenues and promising directions poised for significant progress.

Language: English

Cited by

3