Future implications of ChatGPT in pharmaceutical industry: drug discovery and development
Ailin Zhao, Yijun Wu

Frontiers in Pharmacology, Journal Year: 2023, Volume and Issue: 14

Published: July 17, 2023

OPINION article, Sec. Experimental Pharmacology and Drug Discovery | https://doi.org/10.3389/fphar.2023.1194216

Language: English

A systematic review of Explainable Artificial Intelligence models and applications: Recent developments and future trends

A. Saranya, R. Subhashini

Decision Analytics Journal, Journal Year: 2023, Volume and Issue: 7, P. 100230 - 100230

Published: April 17, 2023

Artificial Intelligence (AI) uses systems and machines to simulate human intelligence and solve common real-world problems. Machine learning and deep learning are AI technologies that use algorithms to predict outcomes more accurately without relying on human intervention. However, these models are often opaque: their black-box nature and cumulative complexity obscure how their results are achieved. Explainable AI (XAI) refers to AI systems that can provide explanations for their decisions or predictions to human users. XAI aims to increase the transparency, trustworthiness, and accountability of AI systems, especially when they are used in high-stakes applications such as healthcare, finance, and security. This paper offers a systematic literature review of XAI approaches across different domains and observes 91 recently published articles describing the development of XAI applications in healthcare, manufacturing, transportation, and finance. We investigated the Scopus, Web of Science, IEEE Xplore, and PubMed databases to find pertinent publications published between January 2018 and October 2022. It covers research on XAI modelling retrieved from scholarly databases using keyword searches. We think our review extends existing work by providing a roadmap for further research in the field.

Language: English

Citations: 206

Explainable, trustworthy, and ethical machine learning for healthcare: A survey
Khansa Rasheed, Adnan Qayyum, Mohammed Ghaly et al.

Computers in Biology and Medicine, Journal Year: 2022, Volume and Issue: 149, P. 106043 - 106043

Published: Sept. 7, 2022

With the advent of machine learning (ML) and deep learning (DL) empowered applications for critical domains like healthcare, questions about the liability, trust, and interpretability of their outputs are being raised. The black-box nature of various DL models is a roadblock to their clinical utilization. Therefore, to gain the trust of clinicians and patients, we need to provide explanations for the decisions of the models. With the promise of enhancing the transparency of black-box models, researchers are in the phase of maturing the field of eXplainable ML (XML). In this paper, we provide a comprehensive review of explainable and interpretable ML techniques for healthcare applications. Along with highlighting the security, safety, and robustness challenges that hinder the trustworthiness of ML, we also discuss the ethical issues arising from the use of ML/DL in healthcare. We describe how trustworthy ML can resolve these problems. Finally, we elaborate on the limitations of existing approaches and highlight open research problems that require further development.

Language: English

Citations: 181

Transparency of AI in Healthcare as a Multilayered System of Accountabilities: Between Legal Requirements and Technical Limitations
Anastasiya Kiseleva, Dimitris Kotzinos, Paul De Hert et al.

Frontiers in Artificial Intelligence, Journal Year: 2022, Volume and Issue: 5

Published: May 30, 2022

The lack of transparency is one of artificial intelligence (AI)'s fundamental challenges, but the concept of transparency might be even more opaque than AI itself. Researchers in different fields who attempt to provide solutions to improve AI's transparency articulate neighboring concepts that include, besides transparency, explainability and interpretability. Yet there is no common taxonomy, neither within one field (such as data science) nor between different fields (law and data science). In certain areas like healthcare, transparency requirements are crucial since decisions directly affect people's lives. In this paper, we suggest an interdisciplinary vision on how to tackle the issue and propose a single point of reference for both legal scholars and data scientists on transparency and related concepts. Based on an analysis of European Union (EU) legislation and literature in computer science, we submit that transparency shall be considered a "way of thinking" and an umbrella concept characterizing the process of AI's development and use. Transparency is achieved through a set of measures such as interpretability and explainability, communication, auditability, traceability, information provision, record-keeping, data governance and management, and documentation. This approach to dealing with transparency is of a general nature, but transparency measures shall always be contextualized. By analyzing the healthcare context, we submit that transparency shall be viewed as a system of accountabilities of the involved subjects (AI developers, healthcare professionals, and patients) distributed at different layers (insider, internal, and external layers, respectively). The transparency-related accountabilities shall be built into the existing accountability picture, which justifies the need to investigate the relevant legal frameworks. These frameworks correspond to different layers of the transparency system: the requirement of informed medical consent correlates to the external layer, and the Medical Devices Framework is relevant to the insider and internal layers. We investigated the said frameworks to inform AI developers about what is already expected from them with regard to transparency. We also discovered gaps in the existing legislative frameworks concerning AI's transparency in healthcare and suggested solutions to fill them in.

Language: English

Citations: 116

Artificial intelligence-driven radiomics study in cancer: the role of feature engineering and modeling
Yuanpeng Zhang, Xinyun Zhang, Yu‐Ting Cheng et al.

Military Medical Research, Journal Year: 2023, Volume and Issue: 10(1)

Published: May 16, 2023

Modern medicine relies on various medical imaging technologies for non-invasively observing patients' anatomy. However, the interpretation of medical images can be highly subjective and dependent on the expertise of clinicians. Moreover, some potentially useful quantitative information in medical images, especially that which is not visible to the naked eye, is often ignored during clinical practice. In contrast, radiomics performs high-throughput feature extraction from medical images, which enables quantitative analysis and the prediction of clinical endpoints. Studies have reported that radiomics exhibits promising performance in diagnosis and in predicting treatment responses and prognosis, demonstrating its potential as a non-invasive auxiliary tool for personalized medicine. However, radiomics remains in a developmental phase, as numerous technical challenges have yet to be solved, especially in feature engineering and statistical modeling. In this review, we introduce the current utility of radiomics by summarizing research on its application to the diagnosis, prognosis, and prediction of treatment responses in patients with cancer. We focus on machine learning approaches, including feature selection, the handling of imbalanced datasets, and multi-modality fusion. Furthermore, we discuss the stability, reproducibility, and interpretability of features, as well as the generalizability of models. Finally, we offer possible solutions to current challenges in radiomics research.

Language: English

Citations: 91

BIM-supported automatic energy performance analysis for green building design using explainable machine learning and multi-objective optimization
Yuxuan Shen, Yue Pan

Applied Energy, Journal Year: 2023, Volume and Issue: 333, P. 120575 - 120575

Published: Jan. 5, 2023

Language: English

Citations: 86

Application of artificial intelligence in medical technologies: A systematic review of main trends

Olga Vl. Bitkina, Jaehyun Park, Hyun K. Kim et al.

Digital Health, Journal Year: 2023, Volume and Issue: 9

Published: Jan. 1, 2023

Objective: Artificial intelligence (AI) has been increasingly applied in various fields of science and technology. In line with current research, medicine involves an increasing number of AI technologies, and their rapid introduction can lead to both positive and negative effects. This is a multilateral analytical literature review aimed at identifying the main branches of and trends in the use of AI in medical technologies. Methods: The total number of sources reviewed is n = 89; they are analyzed based on the evidence-based reporting guideline PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) for systematic reviews. Results: From the initially selected 198 references, 155 were obtained from databases and the remaining 43 were found on the open internet as direct links to publications. Finally, 89 references were evaluated after the exclusion of unsuitable or duplicated sources and of generalized information without a focus on users. Conclusions: The article describes the current state and prospects of future AI use in medicine. The findings of this review will be useful to healthcare professionals for improving the design and implementation of AI-based medical technologies.

Language: English

Citations: 44

LORIS robustly predicts patient outcomes with immune checkpoint blockade therapy using common clinical, pathologic and genomic features
Tiangen Chang, Yingying Cao, Hannah J. Sfreddo et al.

Nature Cancer, Journal Year: 2024, Volume and Issue: 5(8), P. 1158 - 1175

Published: June 3, 2024

Language: English

Citations: 33

Designing interpretable ML system to enhance trust in healthcare: A systematic review to proposed responsible clinician-AI-collaboration framework
Elham Nasarian, Roohallah Alizadehsani, U. Rajendra Acharya et al.

Information Fusion, Journal Year: 2024, Volume and Issue: 108, P. 102412 - 102412

Published: April 6, 2024

Language: English

Citations: 30

Explainability and Interpretability in Electric Load Forecasting Using Machine Learning Techniques – A Review
Lukas Baur, Konstantin Ditschuneit, Maximilian Schambach et al.

Energy and AI, Journal Year: 2024, Volume and Issue: 16, P. 100358 - 100358

Published: March 12, 2024

Electric Load Forecasting (ELF) is the central instrument for planning and controlling demand response programs, electricity trading, and consumption optimization. Due to the increasing automation of these processes, meaningful and transparent forecasts become more and more important. Still, at the same time, the complexity of the machine learning models and architectures used increases. Because there is an interest in interpretable and explainable load forecasting methods, this work conducts a literature review to present the approaches already applied regarding explainability and interpretability of load forecasting using Machine Learning. Based on extensive research covering eight publication portals, recurring modeling approaches, trends, and techniques are identified and clustered by the properties they aim to achieve for interpretable or explainable forecasts. The results show an increase in the use of probabilistic models and of methods for time series decomposition and fuzzy logic, in addition to classically used models. Dominant techniques are Feature Importance and Attention mechanisms. The discussion shows that a lot of knowledge from related fields still needs to be adapted to the problems of ELF. Compared to other applications such as clustering, there are currently relatively few results, but with an increasing trend.

Language: English

Citations: 28

Evaluating capabilities of large language models: Performance of GPT-4 on surgical knowledge assessments
Brendin R. Beaulieu‐Jones, Margaret T. Berrigan, Sahaj S. Shah et al.

Surgery, Journal Year: 2024, Volume and Issue: 175(4), P. 936 - 942

Published: Jan. 21, 2024

Language: English

Citations: 27