Adam Optimization of Burger's Equation Using Physics-Informed Neural Networks
Soumyendra Singh, Dharminder Chaudhary, Bhagavatula Yogiraj

et al.

Published: May 5, 2023

In this paper, physics-informed neural networks are used for the numerical approximation of partial differential equations. The training data, generated in the process by Latin hypercube sampling, is discussed. The Adam optimization technique is implemented to minimize the loss of the discussed equation. The proposed methodology is then applied to Burgers' equation, and the obtained results are presented in Section 5. Loss function graphs are also provided to showcase the efficiency of the methodologies.
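
As a rough illustration of the method this abstract describes, here is a minimal PINN sketch in PyTorch for the 1-D viscous Burgers' equation trained with Adam. The network size, learning rate, point counts, and sampling scheme (uniform here rather than Latin hypercube, with boundary conditions omitted) are illustrative assumptions, not the authors' configuration.

```python
# Minimal PINN sketch for u_t + u u_x - (0.01/pi) u_xx = 0, trained with Adam.
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(2, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
nu = 0.01 / torch.pi  # viscosity, as in the standard Burgers' benchmark

def grad(y, v):
    """First derivative of y w.r.t. v, keeping the graph for higher orders."""
    return torch.autograd.grad(y, v, torch.ones_like(y), create_graph=True)[0]

def pde_residual(x, t):
    """Residual of Burgers' equation at collocation points (x, t)."""
    x.requires_grad_(True); t.requires_grad_(True)
    u = net(torch.cat([x, t], dim=1))
    u_t, u_x = grad(u, t), grad(u, x)
    u_xx = grad(u_x, x)
    return u_t + u * u_x - nu * u_xx

# Collocation points: the paper uses Latin hypercube sampling; uniform
# sampling is used here only for brevity.
x_f = torch.rand(2000, 1) * 2 - 1            # x in [-1, 1]
t_f = torch.rand(2000, 1)                    # t in [0, 1]
# Initial condition u(x, 0) = -sin(pi x), the common benchmark choice.
x_0 = torch.rand(200, 1) * 2 - 1
u_0 = -torch.sin(torch.pi * x_0)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(5000):
    opt.zero_grad()
    loss_f = pde_residual(x_f.clone(), t_f.clone()).pow(2).mean()  # PDE loss
    loss_0 = (net(torch.cat([x_0, torch.zeros_like(x_0)], 1)) - u_0).pow(2).mean()
    loss = loss_f + loss_0                   # combined loss minimized by Adam
    loss.backward()
    opt.step()
```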

Language: English

Photoplethysmogram Analysis and Applications: An Integrative Review
Junyung Park, Hyeon Seok Seok, Sang-Su Kim

et al.

Frontiers in Physiology, Journal Year: 2022, Volume and Issue: 12

Published: March 1, 2022

Beyond its use in a clinical environment, the photoplethysmogram (PPG) is increasingly used for measuring the physiological state of an individual in daily life. This review aims to examine existing research on the PPG concerning its generation mechanisms, measurement principles, clinical applications, noise definition, pre-processing techniques, feature detection, and post-processing techniques for signal processing, especially from an engineering point of view. We performed an extensive search of the PubMed, Google Scholar, Institute of Electrical and Electronics Engineers (IEEE), ScienceDirect, and Web of Science databases. Exclusion conditions did not include the year of publication, but articles not published in English were excluded. Based on 118 articles, we identified four main topics of enabling PPG: (A) the PPG waveform, (B) PPG features and applications, including basic features based on the original and combined PPG as well as derivative features, (C) PPG noise, including motion artifact, baseline wandering, and hypoperfusion, and (D) PPG signal processing, including preprocessing, peak detection, and signal quality indices. The application field of PPG has been extending to the mobile environment. Although there is no standardized processing pipeline, as data are acquired and accumulated in various ways, recently proposed machine learning-based methods are expected to offer a promising solution.
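
As an illustration of the preprocessing and peak-detection stages the review surveys, here is a minimal sketch in Python. The sampling rate, filter band, peak-picking thresholds, and synthetic signal are assumptions for demonstration only, not a pipeline from the review.

```python
# Minimal PPG sketch: band-pass filtering to suppress baseline wander,
# then systolic-peak picking and a heart-rate estimate.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

fs = 100.0                                   # sampling rate (Hz), assumed
t = np.arange(0, 10, 1 / fs)
# Toy "PPG": a 1.2 Hz pulse wave plus slow baseline wander.
ppg = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.sin(2 * np.pi * 0.2 * t)

# Band-pass 0.5-8 Hz: removes baseline wander and high-frequency noise.
b, a = butter(3, [0.5, 8.0], btype="band", fs=fs)
clean = filtfilt(b, a, ppg)

# Systolic peaks: refractory period of ~0.4 s caps the rate near 150 bpm.
peaks, _ = find_peaks(clean, distance=int(0.4 * fs), prominence=0.5)
hr_bpm = 60.0 * fs / np.diff(peaks).mean()
print(f"estimated heart rate: {hr_bpm:.1f} bpm")
```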

Language: English

Citations

183

LightX3ECG: A Lightweight and eXplainable Deep Learning System for 3-lead Electrocardiogram Classification
Khiem H. Le, Hieu H. Pham, Thao Nguyen

et al.

Biomedical Signal Processing and Control, Journal Year: 2023, Volume and Issue: 85, P. 104963 - 104963

Published: April 14, 2023

Language: English

Citations

27

Cardiovascular Diseases
Sabyasachi Pramanik, Alex Khang

Advances in medical diagnosis, treatment, and care (AMDTC) book series, Journal Year: 2024, Volume and Issue: unknown, P. 274 - 287

Published: Jan. 5, 2024

The artificial intelligence clinical decision support system, or AI-CDSS, is a potent tool that helps medical practitioners make well-informed, evidence-based choices about patient care. To provide individualised advice and insights, it makes use of data analysis methods and algorithms. The advantages and features of the AI-CDSS are examined in this chapter, including real-time alerts and monitoring, continuous learning and improvement, identification of medication interactions and adverse events, diagnostic and treatment recommendations, data analysis, and predictive analytics. Additionally, the chapter addresses AI-driven decision-making systems in the healthcare industry, with particular attention to the diagnosis of cancer, management of chronic diseases, treatment optimisation, surgical support, control of infectious disease outbreaks, radiology and imaging, mental health, and clinical trials and research.

Language: English

Citations

13

Unlocking the black box: an in-depth review on interpretability, explainability, and reliability in deep learning
Emrullah Şahin, Naciye Nur Arslan, Durmuş Özdemir

et al.

Neural Computing and Applications, Journal Year: 2024, Volume and Issue: unknown

Published: Nov. 18, 2024

Language: English

Citations

10

Deep learning and electrocardiography: systematic review of current techniques in cardiovascular disease diagnosis and management

Z. Wu, Caixia Guo

BioMedical Engineering OnLine, Journal Year: 2025, Volume and Issue: 24(1)

Published: Feb. 23, 2025

Language: English

Citations

1

Building an Explainable Diagnostic Classification Model for Brain Tumor using Discharge Summaries
Priyanka C Nair, Deepa Gupta, Bhagavatula Indira Devi

et al.

Procedia Computer Science, Journal Year: 2023, Volume and Issue: 218, P. 2058 - 2070

Published: Jan. 1, 2023

A brain tumor is a mass of cells growing abnormally in the brain. The lesions formed in the suprasellar region of the brain, called suprasellar lesions, affect common anatomical locations, causing an array of symptoms including headache and blurred or low vision. These symptoms lead to misdiagnosis as issues like refractive index problems, so the tumor gets diagnosed very late. This study focuses on these suprasellar lesions (namely Pituitary adenoma, Craniopharyngioma, and Meningioma), which have not been explored much using machine learning. We collected 422 discharge summaries of patients admitted to the neurosurgery department of the National Institute of Mental Health and Neurosciences (NIMHANS), Bangalore, India, during 2014-2019. The work aims to build a model for classifying the lesions into the three categories. Features are clinical concepts identified from the discharge summary using Natural Language Processing (NLP) and regular expression-based rules. The features and corresponding values thus extracted are represented in an Analytical Base Table and fed to the classification model after processing. The study utilizes XGBoost, Local Cascade Ensemble, Histogram-based gradient boosting, LightGBM, and CatBoost classifiers, chosen for their ability to inherently handle missing data. Though machine learning models perform well in classification, their interpretability and generalizability are often questioned, especially in critical domains such as medical healthcare. Hence, the performance has been analyzed with the ELI5 tool, a Python package for explainable AI. The tool explains predictions on a per-patient basis, providing output that is more interpretable for clinicians.
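
Below is a minimal sketch of the kind of pipeline this abstract outlines: hypothetical regular-expression rules pull clinical concepts out of free-text summaries into a feature table, and a histogram-based gradient-boosting classifier (one of the listed models, chosen because it natively handles missing values) predicts the category. The rules, texts, and labels are toy placeholders, not the NIMHANS data.

```python
# Regex-based concept extraction into a feature table, then classification
# with a model that tolerates the NaNs left by unmatched rules.
import re
import numpy as np
from sklearn.ensemble import HistGradientBoostingClassifier

RULES = {  # hypothetical clinical-concept rules for illustration
    "headache": re.compile(r"\bheadaches?\b", re.I),
    "blurred_vision": re.compile(r"\b(blurred|low)\s+vision\b", re.I),
    "vomiting": re.compile(r"\bvomit(ing)?\b", re.I),
}

def extract_features(summary: str) -> list:
    """1.0 if the concept is mentioned, NaN if the rule finds nothing."""
    return [1.0 if rx.search(summary) else np.nan for rx in RULES.values()]

summaries = [  # toy discharge-summary snippets
    "Patient reports severe headache and blurred vision.",
    "Presented with vomiting; vision normal.",
    "Low vision for six months, occasional headaches.",
]
labels = ["pituitary_adenoma", "craniopharyngioma", "meningioma"]  # toy

X = np.array([extract_features(s) for s in summaries])  # Analytical Base Table
clf = HistGradientBoostingClassifier().fit(X, labels)   # handles NaN natively
print(clf.predict(X))
```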

Language: English

Citations

18

Explainable Artificial Intelligence: Counterfactual Explanations for Risk-Based Decision-Making in Construction
Jianjiang Zhan, Weili Fang, Peter E.D. Love

et al.

IEEE Transactions on Engineering Management, Journal Year: 2024, Volume and Issue: 71, P. 10667 - 10685

Published: Jan. 1, 2024

Artificial intelligence (AI) approaches, such as deep learning models, are increasingly used to determine risks in construction. However, the black-box nature of AI models makes their inner workings difficult to understand and interpret. Deploying explainable artificial intelligence (XAI) can help explain why and how an output is generated. This article addresses the following research question: How can we accurately identify critical factors influencing tunnel-induced ground settlement and provide counterfactual explanations to support risk-based decision-making? We apply an XAI approach to support decision-making when considering the control of ground settlement. Our approach consists of: 1) construction of a kernel principal component analysis-based deep neural network (KPCA-DNN) model; 2) generation of counterfactual explanations; and 3) analysis of risk prediction and assessment of the factors' importance, necessity, and sufficiency. We apply our approach to the San-yang road tunnel project in Wuhan, China. The results demonstrate that the KPCA-DNN model predicted ground settlement from high-dimensional input features better than the baselines (i.e., AdaBoost and RandomForest). The chain bubble chamber pressure → cutter-head speed → equipment inclination was also identified as the primary causal path. The findings indicate that counterfactual explanation enables transparency in, and trust of, AI-based decision-making to be acquired. Moreover, it helps site managers, engineers, and tunnel-boring machine operators manage and mitigate ground settlement risks.
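
To make the idea of counterfactual explanation concrete, here is a minimal gradient-based counterfactual search sketch: starting from an observed input, it looks for the smallest perturbation that moves the model's predicted risk to a target level. The stand-in risk model, feature ordering, target, and penalty weight are assumptions and do not reproduce the paper's KPCA-DNN.

```python
# Wachter-style counterfactual search: minimise prediction gap plus an
# L1 proximity penalty to keep the counterfactual close to the original.
import torch

model = torch.nn.Sequential(torch.nn.Linear(3, 16), torch.nn.ReLU(),
                            torch.nn.Linear(16, 1))   # stand-in risk model
x_obs = torch.tensor([[0.8, 0.6, 0.4]])   # e.g. pressure, speed, inclination
target = torch.tensor([[0.0]])            # desired "low settlement risk"

x_cf = x_obs.clone().requires_grad_(True)
opt = torch.optim.Adam([x_cf], lr=0.01)
lam = 0.1                                 # weight on proximity to the original
for _ in range(500):
    opt.zero_grad()
    pred = model(x_cf)
    loss = (pred - target).pow(2).mean() + lam * (x_cf - x_obs).abs().sum()
    loss.backward()
    opt.step()

# The signed changes below suggest which factors to adjust, and by how much.
print("counterfactual change:", (x_cf - x_obs).detach())
```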

Language: English

Citations

6

Reviewing CAM-Based Deep Explainable Methods in Healthcare
Dan Tang, J.J. Chen, Lijuan Ren

et al.

Applied Sciences, Journal Year: 2024, Volume and Issue: 14(10), P. 4124 - 4124

Published: May 13, 2024

The use of artificial intelligence within the healthcare sector is consistently growing. However, the majority of deep learning-based AI systems are of a black-box nature, causing these systems to suffer from a lack of transparency and credibility. Due to the widespread adoption of medical imaging for diagnostic purposes, the healthcare industry frequently relies on methods that provide visual explanations, enhancing interpretability. Existing research has summarized and explored the usage of visual explanation methods in the medical domain, providing introductions to the methods that have been employed. However, existing reviews of interpretable analysis methods in the healthcare field ignore a comprehensive treatment of Class Activation Mapping (CAM), because researchers typically categorize CAM under the broader umbrella of visual explanations without delving into its specific applications in the healthcare sector. Therefore, this study primarily aims to analyze CAM-based explainable methods in the healthcare industry, following the PICO (Population, Intervention, Comparison, Outcome) framework. Specifically, we selected 45 articles for systematic review and comparative analysis from three databases (PubMed, ScienceDirect, and Web of Science) and then compared eight advanced CAM-based methods using five datasets to assist in method selection. Finally, we summarize the current hotspots and future challenges of applying CAM in the healthcare field.
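
As a concrete example of the CAM family this review compares, here is a minimal Grad-CAM sketch in PyTorch: class-score gradients are spatially averaged to weight the last convolutional feature maps. The ResNet-18 backbone and random input are stand-ins, not a model from the review.

```python
# Grad-CAM: weight the last conv stage's feature maps by the spatially
# averaged gradient of the class score, then ReLU and normalise.
import torch
import torchvision.models as models

model = models.resnet18(weights=None).eval()
feats, grads = {}, {}

layer = model.layer4                                   # last conv stage
layer.register_forward_hook(lambda m, i, o: feats.update(a=o))
layer.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))

x = torch.randn(1, 3, 224, 224)                        # stand-in image
score = model(x)[0].max()                              # top-class logit
score.backward()                                       # gradients to layer4

w = grads["a"].mean(dim=(2, 3), keepdim=True)          # per-channel weights
cam = torch.relu((w * feats["a"]).sum(dim=1)).squeeze()
cam = cam / (cam.max() + 1e-8)                         # normalised heat map
print(cam.shape)                                       # 7x7 spatial grid
```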

Language: English

Citations

6

An Interpretable and Accurate Deep-Learning Diagnosis Framework Modeled With Fully and Semi-Supervised Reciprocal Learning
Chong Wang, Yuanhong Chen, Fengbei Liu

et al.

IEEE Transactions on Medical Imaging, Journal Year: 2023, Volume and Issue: 43(1), P. 392 - 404

Published: Aug. 21, 2023

The deployment of automated deep-learning classifiers in clinical practice has the potential to streamline the diagnosis process and improve diagnostic accuracy, but the acceptance of those classifiers relies on both their accuracy and interpretability. In general, accurate deep-learning classifiers provide little model interpretability, while interpretable models do not have competitive classification accuracy. In this paper, we introduce a new framework, called InterNRL, that is designed to be highly accurate and interpretable. InterNRL consists of a student-teacher framework, where the student is an interpretable prototype-based classifier (ProtoPNet) and the teacher is an accurate global image classifier (GlobalNet). The two classifiers are mutually optimised with a novel reciprocal learning paradigm in which ProtoPNet learns from optimal pseudo labels produced by GlobalNet, while GlobalNet learns from ProtoPNet's classification performance and pseudo labels. This enables InterNRL to be flexibly trained under both fully- and semi-supervised scenarios, reaching state-of-the-art classification performance in both scenarios for the tasks of breast cancer and retinal disease diagnosis. Moreover, relying on weakly-labelled training images, InterNRL also achieves superior breast cancer localisation and brain tumour segmentation results than other competing methods.
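
A minimal sketch of the pseudo-label exchange at the core of such reciprocal learning is shown below: a "teacher" network labels unlabelled data for the "student", which then fits those labels. The bi-level update of the teacher from the student's performance is omitted, and the tiny networks are stand-ins for ProtoPNet and GlobalNet, not the paper's architectures.

```python
# One direction of reciprocal learning: teacher-generated pseudo labels
# supervise the student on unlabelled data.
import torch

teacher = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(64, 2))
student = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(64, 2))
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
ce = torch.nn.CrossEntropyLoss()

unlabelled = torch.randn(32, 1, 8, 8)        # stand-in unlabelled batch
for _ in range(100):
    with torch.no_grad():                    # teacher produces pseudo labels
        pseudo = teacher(unlabelled).argmax(dim=1)
    opt.zero_grad()
    loss = ce(student(unlabelled), pseudo)   # student fits the pseudo labels
    loss.backward()
    opt.step()
```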

Language: English

Citations

14

Explainable Transfer Learning for Modeling and Assessing Risks in Tunnel Construction
Hanbin Luo, Jialin Chen, Peter E.D. Love

et al.

IEEE Transactions on Engineering Management, Journal Year: 2024, Volume and Issue: 71, P. 8339 - 8355

Published: Jan. 1, 2024

Deep learning models are black boxes. Thus, determining the source domain data contributing to transfer learning for ground settlement prediction is impossible. The research presented in this article aims to determine the source domain (i.e., the dataset used for model pre-training) that contributes most to risk prediction in tunnel construction and to quantify its contribution to improving prediction accuracy. We propose a novel explainable transfer learning approach for the selection of degraded knowledge from sub-source domains. Our approach comprises: (1) feature space point clustering; (2) a similarity metric between the target and each sub-source domain; and (3) a stacked Neural Network with selective transfer learning. We apply our approach to a real-life tunnel project to demonstrate its feasibility and effectiveness. The results indicate that: (1) the proposed approach outperforms other transparent and opaque models, with an R² above 0.5, by adjusting the clustering, transferring, and freezing strategy; and (2) the optimal number of frozen layers should be less than half the total layers, with 1 being best. The findings show that explaining transfer learning enables transparency in training, understanding of the source data, and improved prediction.
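
To illustrate the layer-freezing strategy the article tunes, here is a minimal sketch of transferring a pre-trained network and freezing its first k layers during fine-tuning. The stand-in network, the hypothetical checkpoint path, and k = 1 (the value the abstract reports as best) are assumptions for demonstration.

```python
# Selective transfer: freeze the first k Linear layers of a pre-trained
# network, then fine-tune only the remaining parameters.
import torch

net = torch.nn.Sequential(                     # stand-in pre-trained model
    torch.nn.Linear(10, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, 1),
)
# net.load_state_dict(torch.load("source_pretrained.pt"))  # hypothetical path

k = 1                                          # layers to freeze (best per abstract)
frozen = 0
for module in net:
    if isinstance(module, torch.nn.Linear) and frozen < k:
        for p in module.parameters():
            p.requires_grad = False            # keep transferred knowledge fixed
        frozen += 1

opt = torch.optim.Adam(
    (p for p in net.parameters() if p.requires_grad), lr=1e-3)
# ...fine-tune on the target-domain settlement data with `opt`...
```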

Language: English

Citations

5