Towards Transparent Healthcare: Advancing Local Explanation Methods in Explainable Artificial Intelligence
Carlo Metta, Andrea Beretta, Roberto Pellungrini

et al.

Bioengineering, Year: 2024, Volume 11(4), pp. 369–369

Published: April 12, 2024

This paper focuses on the use of local Explainable Artificial Intelligence (XAI) methods, particularly the Local Rule-Based Explanations (LORE) technique, within healthcare and medical settings. It emphasizes the critical role of interpretability and transparency in AI systems for diagnosing diseases, predicting patient outcomes, and creating personalized treatment plans. While acknowledging the complexities and inherent trade-offs with model performance, our work underscores the significance of XAI methods in enhancing decision-making processes in healthcare. By providing granular, case-specific insights, methods like LORE enhance physicians' and patients' understanding of machine learning models and their outcomes. Our paper reviews LORE's significant contributions to healthcare, highlighting its potential to improve clinical decision making, ensure fairness, and comply with regulatory standards.
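The core idea behind LORE-style local explanations can be sketched as follows. This is an illustrative sketch with synthetic data, not the paper's implementation: the real LORE generates the local neighbourhood with a genetic algorithm, whereas plain Gaussian perturbation is used here for brevity, and all dataset and model choices are hypothetical.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for a clinical dataset and a black-box model.
X, y = make_classification(n_samples=400, n_features=4, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# Instance to explain.
x0 = X[0]

# Local neighbourhood: Gaussian perturbations around the instance
# (LORE proper uses a genetic algorithm here).
rng = np.random.default_rng(0)
Z = x0 + rng.normal(scale=0.3, size=(500, 4))

# Fit a shallow tree on the black box's labels for the neighbourhood;
# the decision path of x0 through this tree is the local rule.
local_tree = DecisionTreeClassifier(max_depth=3, random_state=0)
local_tree.fit(Z, black_box.predict(Z))

path = local_tree.decision_path(x0.reshape(1, -1))
print("black-box prediction:", black_box.predict(x0.reshape(1, -1))[0])
print("local rule (tree nodes):", path.indices.tolist())
```

Counterfactual rules, the other half of a LORE explanation, would be read off the tree branches leading to the opposite class.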

Language: English

Incorporation of "Artificial Intelligence" for Objective Pain Assessment: A Comprehensive Review
Salah N. El-Tallawy, Joseph V. Pergolizzi, Ingrid Vasiliu-Feltes

et al.

Pain and Therapy, Year: 2024, Volume 13(3), pp. 293–317

Published: March 2, 2024

Pain is a significant health issue, and pain assessment is essential for proper diagnosis, follow-up, and effective management of pain. The conventional methods of pain assessment often suffer from subjectivity and variability. The main issue is to understand better how people experience pain. In recent years, artificial intelligence (AI) has been playing a growing role in improving clinical diagnosis and decision-making. The application of AI offers promising opportunities to improve the accuracy and efficiency of pain assessment. This review article provides an overview of the current state of AI in pain assessment and explores its potential for improving accuracy, efficiency, and personalized care. By examining the existing literature, research gaps, and future directions, this review aims to guide further advancements in the field of pain management. An online database search was conducted via multiple websites to identify relevant articles. The inclusion criteria were English articles published between January 2014 and 2024. Articles available as full text, including clinical trials, observational studies, review articles, systemic reviews, and meta-analyses, were included in this review. The exclusion criteria were articles not in the English language, not available as free full text, those involving pediatric patients, case reports, and editorials. A total of (47) articles were included. In conclusion, AI could present potential solutions that can increase the precision and objectivity of pain assessment.

Language: English

Cited by

22

Explainable AI in Healthcare Application
Siva Raja Sindiramutty, Wee Jing Tee, Sumathi Balakrishnan

et al.

Advances in computational intelligence and robotics book series, Year: 2024, Volume unknown, pp. 123–176

Published: January 18, 2024

Given the inherent risks in medical decision-making, professionals carefully evaluate a patient's symptoms before arriving at a plausible diagnosis. For AI to be a widely accepted and useful technology, it must replicate human judgment and interpretation abilities. XAI attempts to describe the data and the underlying black-box approaches of deep learning (DL), machine learning (ML), and natural language processing (NLP) in order to explain how judgments are made. This chapter provides a survey of the most recent XAI methods employed in medical imaging and related fields, categorizes and lists the types of XAI, and highlights the methods used to make these topics more interpretable. Additionally, it focuses on challenging issues in XAI applications and guides the development of better explanations for deep-learning systems by applying XAI principles to the analysis of medical pictures and text.

Language: English

Cited by

18

Explainable artificial intelligence: A survey of needs, techniques, applications, and future direction
Melkamu Mersha, Khang Nhứt Lâm, Joseph Wood

et al.

Neurocomputing, Year: 2024, Volume 599, pp. 128111–128111

Published: September 1, 2024

Language: English

Cited by

18

From Machine Learning to Patient Outcomes: A Comprehensive Review of AI in Pancreatic Cancer
Satvik Tripathi, Azadeh Tabari, Arian Mansur

et al.

Diagnostics, Year: 2024, Volume 14(2), pp. 174–174

Published: January 12, 2024

Pancreatic cancer is a highly aggressive and difficult-to-detect cancer with a poor prognosis. Late diagnosis is common due to a lack of early symptoms, specific markers, and the challenging location of the pancreas. Imaging technologies have improved diagnosis, but there is still room for improvement in standardizing guidelines. Biopsies and histopathological analysis are complicated by tumor heterogeneity. Artificial Intelligence (AI) is revolutionizing healthcare by improving diagnosis, treatment, and patient care. AI algorithms can analyze medical images with precision, aiding early disease detection. AI also plays a role in personalized medicine by analyzing patient data to tailor treatment plans. It streamlines administrative tasks, such as coding and documentation, and provides patient assistance through chatbots. However, challenges include data privacy, security, and ethical considerations. This review article focuses on the potential of AI in transforming pancreatic cancer care, offering improved diagnostics, personalized treatments, and operational efficiency, leading to better patient outcomes.

Language: English

Cited by

17

Advancements in MRI-Based Radiomics and Artificial Intelligence for Prostate Cancer: A Comprehensive Review and Future Prospects
Ahmad Chaddad, Guina Tan, Xiaojuan Liang

et al.

Cancers, Year: 2023, Volume 15(15), pp. 3839–3839

Published: July 28, 2023

The use of multiparametric magnetic resonance imaging (mpMRI) has become a common technique in guiding biopsy and developing treatment plans for prostate lesions. While this technique is effective, non-invasive methods such as radiomics have gained popularity for extracting imaging features and developing predictive models for clinical tasks. The aim is to minimize invasive processes for the improved management of prostate cancer (PCa). This study reviews recent research progress in MRI-based radiomics for PCa, including the radiomics pipeline and the potential factors affecting personalized diagnosis. The integration of artificial intelligence (AI) with medical imaging is also discussed, in line with the development trend of radiogenomics and multi-omics. The survey highlights the need for more data from multiple institutions in order to avoid bias and generalize the predictive model. The AI-based radiomics model is considered a promising tool with good prospects for clinical application.

Language: English

Cited by

28

Call for the responsible artificial intelligence in the healthcare
Umashankar Upadhyay, Anton Gradišek, Usman Iqbal

et al.

BMJ Health & Care Informatics, Year: 2023, Volume 30(1), pp. e100920–e100920

Published: December 1, 2023

The integration of artificial intelligence (AI) into healthcare is progressively becoming pivotal, especially with its potential to enhance patient care and operational workflows. This paper navigates through the complexities and potentials of AI in healthcare, emphasising the necessity for explainability, trustworthiness, usability, transparency and fairness in developing and implementing AI models. It underscores the 'black box' challenge, highlighting the gap between algorithmic outputs and human interpretability, and articulates the pivotal role of explainable AI in enhancing the accountability of AI applications in healthcare. The discourse extends to ethical considerations, exploring the biases and dilemmas that may arise in AI application, with a keen focus on ensuring equitable use across diverse global regions. Furthermore, it explores the concept of responsible AI, advocating for a balanced approach that leverages AI's capabilities for enhanced healthcare delivery and ensures ethical, transparent and accountable use of the technology, particularly in clinical decision-making and patient care.

Language: English

Cited by

28

A systematic approach to enhance the explainability of artificial intelligence in healthcare with application to diagnosis of diabetes
Yu-Cheng Wang, Toly Chen, Min-Chi Chiu

et al.

Healthcare Analytics, Year: 2023, Volume 3, pp. 100183–100183

Published: April 25, 2023

Explainable artificial intelligence (XAI) tools are used to enhance the applications of existing artificial intelligence (AI) technologies by explaining their execution processes and results. In most past research, XAI techniques were typically applied only to the inference part of an AI application. This study proposes a systematic approach to enhance explainability in healthcare. Several XAI techniques for type 2 diabetes diagnosis are taken as examples to illustrate the applicability of the proposed methodology. According to the experimental results, the XAI techniques applied under the proposed methodology were more diverse than those in past research. In addition, an artificial neural network was approximated by a simpler and more intuitive classification and regression tree (CART) using local interpretable model-agnostic explanation (LIME). The rules extracted from the CART recommend actions for users to restore their health.
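The idea of approximating a neural network with a CART surrogate can be sketched as follows. This is a minimal illustration with synthetic data and a global surrogate, not the study's actual pipeline (which used LIME for a local approximation); the dataset, feature names, and model sizes are all assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-in for a diabetes dataset (hypothetical features).
X, y = make_classification(n_samples=500, n_features=6, random_state=0)

# Black-box model: a small neural network.
nn = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                   random_state=0).fit(X, y)

# Surrogate: fit a shallow CART on the network's *predictions*, so the
# tree mimics the black box rather than the raw labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, nn.predict(X))

# Human-readable rules approximating the network's behaviour.
print(export_text(surrogate,
                  feature_names=[f"feat_{i}" for i in range(6)]))

# Fidelity: how often the tree agrees with the network.
fidelity = (surrogate.predict(X) == nn.predict(X)).mean()
print(f"surrogate fidelity: {fidelity:.2f}")
```

Reading the printed rules gives the kind of actionable, threshold-based recommendations the abstract describes.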

Language: English

Cited by

25

XAI-IDS: Toward Proposing an Explainable Artificial Intelligence Framework for Enhancing Network Intrusion Detection Systems
Osvaldo Arreche, Tanish Guntur, Mustafa Abdallah

et al.

Applied Sciences, Year: 2024, Volume 14(10), pp. 4170–4170

Published: May 14, 2024

The exponential growth of network intrusions necessitates the development of advanced artificial intelligence (AI) techniques for intrusion detection systems (IDSs). However, the reliance on AI for IDSs presents several challenges, including the performance variability of different AI models and the opacity of their decision-making processes, hindering comprehension by human security analysts. In response, we propose an end-to-end explainable AI (XAI) framework tailored to enhance interpretability in network intrusion detection tasks. Our framework commences with benchmarking seven black-box AI models across three real-world datasets, each characterized by distinct features and challenges. Subsequently, we leverage various XAI models to generate both local and global explanations, shedding light on the underlying rationale behind the models' decisions. Furthermore, we employ feature extraction to discern crucial model-specific and intrusion-specific features, aiding in understanding the discriminative factors influencing detection outcomes. Additionally, our framework identifies overlapping and significant features that impact multiple AI models, providing insights into common patterns across detection approaches. Notably, we demonstrate that the computational overhead incurred by generating explanations is minimal for most AI models, ensuring practical applicability in real-time scenarios. By offering multi-faceted explanations, our framework equips security analysts with actionable insights to make informed decisions for threat mitigation. To facilitate widespread adoption and further research, we have made the source code publicly available, serving as a foundational XAI framework for IDSs within the research community.
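The local/global distinction the abstract draws can be illustrated with a minimal sketch. This is not the authors' framework (which benchmarks dedicated XAI methods); the dataset is synthetic, the feature names are hypothetical, and the local explanation here is a crude occlusion test rather than a principled attribution method.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in for a network-flow dataset (hypothetical features).
feature_names = ["duration", "bytes_sent", "bytes_recv",
                 "pkt_rate", "flag_cnt"]
X, y = make_classification(n_samples=600, n_features=5, random_state=1)

model = RandomForestClassifier(n_estimators=100, random_state=1).fit(X, y)

# Global explanation: which features matter across the whole dataset.
global_imp = permutation_importance(model, X, y, n_repeats=5,
                                    random_state=1)
for name, imp in sorted(zip(feature_names, global_imp.importances_mean),
                        key=lambda t: -t[1]):
    print(f"{name:>10}: {imp:.3f}")

# Local explanation for one flow: how the predicted attack probability
# shifts when each feature is replaced by its dataset mean.
x0 = X[:1]
base = model.predict_proba(x0)[0, 1]
local = {}
for j, name in enumerate(feature_names):
    x_pert = x0.copy()
    x_pert[0, j] = X[:, j].mean()
    local[name] = base - model.predict_proba(x_pert)[0, 1]
print("local contributions:", local)
```

Production XAI-for-IDS pipelines typically replace the occlusion loop with SHAP- or LIME-style attributions, but the global/local split is the same.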

Language: English

Cited by

14

An End-to-End Lightweight Multi-Scale CNN for the Classification of Lung and Colon Cancer with XAI Integration
Mohammad Asif Hasan, Fariha Haque, Saifur Rahman Sabuj

et al.

Technologies, Year: 2024, Volume 12(4), pp. 56–56

Published: April 21, 2024

To effectively treat lung and colon cancer and save lives, early and accurate identification is essential. Conventional diagnosis takes a long time and requires the manual expertise of radiologists. The rising number of new cancer cases makes it challenging to process massive volumes of data quickly. Different machine learning approaches for classification and detection have been proposed by multiple research studies. However, when it comes to self-learning tasks, deep learning (DL) excels. This paper suggests a novel DL convolutional neural network (CNN) model for detecting lung and colon cancer. The model is lightweight and multi-scale, using only 1.1 million parameters, making it appropriate for real-time applications as it provides an end-to-end solution. By incorporating features extracted at multiple scales, the model can capture both local and global patterns within the input data. Explainability tools such as gradient-weighted class activation mapping and Shapley additive explanations identify potential problems by highlighting the specific input areas that impact the model's choice. The experimental findings demonstrate that, for lung and colon cancer detection, the competition was outperformed, with accuracy rates of 99.20% achieved for multi-class (containing five classes) predictions.
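The multi-scale idea, parallel convolutions with different receptive fields whose feature maps are concatenated, can be sketched as follows. This is a hypothetical toy block in PyTorch, not the authors' 1.1M-parameter architecture; kernel sizes, channel counts, and input shape are all assumptions.

```python
import torch
import torch.nn as nn

class MultiScaleBlock(nn.Module):
    """Parallel convolutions at three kernel sizes; small kernels catch
    local texture, large kernels catch broader context."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.branch3 = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.branch5 = nn.Conv2d(in_ch, out_ch, kernel_size=5, padding=2)
        self.branch7 = nn.Conv2d(in_ch, out_ch, kernel_size=7, padding=3)
        self.act = nn.ReLU()

    def forward(self, x):
        # Concatenate the three scales along the channel dimension.
        return self.act(torch.cat(
            [self.branch3(x), self.branch5(x), self.branch7(x)], dim=1))

class TinyMultiScaleCNN(nn.Module):
    def __init__(self, n_classes=5):  # five classes, as in the abstract
        super().__init__()
        self.block = MultiScaleBlock(3, 8)       # 3x8 channels per branch
        self.pool = nn.AdaptiveAvgPool2d(1)      # global average pooling
        self.fc = nn.Linear(24, n_classes)       # 8 channels x 3 branches

    def forward(self, x):
        z = self.pool(self.block(x)).flatten(1)
        return self.fc(z)

model = TinyMultiScaleCNN()
logits = model(torch.randn(2, 3, 64, 64))
print(logits.shape)  # torch.Size([2, 5])
```

Grad-CAM heatmaps would then be computed from the gradients of the target class score with respect to the block's feature maps.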

Language: English

Cited by

12

The Mediating Role of Explainable Artificial Intelligence in the Relationship between Data Governance and Institutional Performance

مديح ناير الجداوي

المجلة العربية للعلوم الإدارية (Arab Journal of Administrative Sciences), Year: 2024, Volume 30(1), pp. 67 - 13

Published: June 4, 2024

Study aim: The study aims to identify the effect of data governance on institutional performance through explainable artificial intelligence as a mediating variable. Design/methodology/approach: This study belongs to descriptive-analytical research, which helps analyze the phenomenon under study by obtaining information about it, describing its variables, and determining the relationships between them. Sample and data: The study adopted the social survey method with a random sample of 384 IT executives familiar with artificial intelligence technologies, and data were collected using a questionnaire. Results: The study reached several conclusions, most notably that the practices related to the three study variables are present at a high level in organizations; that there is a significant difference in experts' assessments of the degree to which their organizations practice explainable AI according to years of experience; and that explainable AI plays a complementary partial mediating role in the relationship between data governance and institutional performance: the indirect effect through the mediator was (0.144) and the direct effect of data governance was (0.452), giving a total effect of (0.596). Originality: To the best of our knowledge, the effect of the independent variable (data governance) on the dependent variable (institutional performance) has not been measured in previous Arabic or English studies. Limitations and applications: Human limits: the study was applied to IT executives familiar with AI. Temporal limits: the study was conducted over a defined period of three months. Subject limits: the study was restricted to three variables: data governance, explainable AI, and institutional performance.

Cited by

11