Diagnostics and Therapy for Malignant Tumors DOI Creative Commons
Chih‐Yang Tsai, Chunyu Wang,

Hasok Chang

et al.

Biomedicines, Journal Year: 2024, Volume and Issue: 12(12), P. 2659 - 2659

Published: Nov. 21, 2024

Malignant tumors remain one of the most significant global health challenges and contribute to high mortality rates across various cancer types. The complex nature of these tumors requires multifaceted diagnostic and therapeutic approaches. This review explores current advancements in diagnostic methods, including molecular imaging, biomarkers, and liquid biopsies. It also delves into the evolution of treatment strategies, including surgery, chemotherapy, radiation therapy, and novel targeted therapies such as immunotherapy and gene therapy. Although progress has been made in understanding tumor biology, the future of oncology lies in the integration of precision medicine, improved diagnostic tools, and personalized approaches that address tumor heterogeneity. This review aims to provide a comprehensive overview of the current state of cancer diagnostics and treatments while highlighting emerging trends and the challenges that lie ahead.

Language: English

The Interaction Quality of AI and User Acceptance: The Chain Mediation Effect of Psychological Distance and Trust DOI

雅萍 林

Advances in Psychology, Journal Year: 2025, Volume and Issue: 15(02), P. 47 - 57

Published: Jan. 1, 2025

Language: English

Citations

0

Opinion: Mental health research: to augment or not to augment DOI Creative Commons

Argyrios Perivolaris,

Alice Rueda,

Karisa Parkington

et al.

Frontiers in Psychiatry, Journal Year: 2025, Volume and Issue: 16

Published: Feb. 18, 2025

The integration of artificial intelligence (AI) and machine learning (ML) into healthcare is growing, with tools often limited by data scarcity and biases, issues that are particularly pronounced in mental health research. Data augmentation, a method of artificially expanding datasets, holds promise for addressing these challenges by creating synthetic data, improving diversity, and reducing costs. This approach has shown success in medical imaging, yet mental health datasets face unique barriers, including subjective measurements, privacy concerns, and the underrepresentation of marginalized groups. Augmented data can help balance datasets, enhance diagnostic accuracy, and improve generalizability, enabling more equitable AI models. However, risks such as replicating existing biases, losing cultural context, and producing clinically unreliable augmented data require careful consideration. In mental health, where small variations can influence outcomes significantly, poorly designed augmentation could oversimplify complex experiences. To harmonize potential with caution, augmented data should complement real-world data and be rigorously evaluated by clinicians for alignment with clinical expertise. Ethical implications, including consent and privacy, demand frameworks to ensure synthetic data are used responsibly. While augmentation offers exciting opportunities to advance mental health research, its implementation must prioritize transparency, clinical fidelity, and equity.

In recent years, there has been significant growth in the use of AI in healthcare. AI-based tools are increasingly used to predict diagnoses, personalize treatment plans, and assess risk factors, aiming to enable scalable care solutions. In mental health research, however, the availability of high-quality, large-sample datasets is confounded by the multifaceted complexities of human behaviors and emotions. 1 To address this gap, researchers have begun to utilize data augmentation techniques to expand available datasets. Generating new data enables models to train on larger, more complete datasets. Consider the medical imaging field, where augmentation has become a prominent fixture in practice. It has demonstrated benefits across organs and modalities, promoting model performance without investing the time and resources required to collect new samples. 2 Mental health research, however, presents many barriers to integrating augmentation.
Biases inherent in the original data set remain and can result in overfitting, leaving the model unable to make accurate predictions on any data other than the training data. This article explores how to overcome barriers that stem from a lack of representative data and how mental health datasets interact with ML advancements. We explore augmentation as a tool to bridge this gap, offering an integrative perspective on its ethical and practical challenges. As we consider augmentation, it is critical to evaluate it through rigorous methodologies and decide whether 'to augment or not to augment'.

As ML algorithms have advanced exponentially, one of the most limiting factors remains the data that determines their performance. 3 In contrast to synthetic data generation, which creates data from scratch, augmentation is a technique that creates new samples based on existing data points, thereby expanding the dataset. There are several methods, some incorporating simple transformations of images or text (rotating images by random degrees, flipping them horizontally, back-translation for language). Expanding on this, the generative adversarial network (GAN) is a more sophisticated strategy that uses neural networks to generate novel samples from pre-existing data. For example, GANs applied to chest X-rays not only improved classification accuracy but performed better than simple transformation methods. 4 Large language models (LLMs), such as GPT-4o, can also generate synthetic transcript data. 5 With various strategies available, many fields are moving augmentation forward. Augmentation can address class imbalance while preserving anonymity, facilitate cross-lingual research, and yield robust models with better generalizability; it also introduces concerns, because realism must be balanced against mitigating biases.

To Augment: Overcoming Data Scarceness. Unlike areas of medicine with objective biomarkers, mental health relies on qualitative interviews, self-reported surveys, questionnaires, and clinical notes. The subjective nature of concepts such as emotional well-being 6 makes developing universally accepted definitions challenging. Despite these measurements being cost-efficient, flexible, and valuable for uncovering personal perceptions, 7 they do not provide the comprehensive, diverse, and sufficient data necessary for generalizable and reliable models. Furthermore, data collection is hindered by high costs, stigma, and recruitment difficulties. Augmentation offers a promising opportunity to address these issues. By generating synthetic text and audio, researchers can increase usable data, mitigate dependency on self-reports of experiences, and improve the scalability of studies.
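The simple transformations described above can be sketched in a few lines. The example below is illustrative only (it is not code from the article): it doubles a toy image set by horizontal flips and then adds Gaussian-noise copies, two of the transformations the literature commonly starts with.

```python
import numpy as np

def augment(images, sigma=0.05, seed=0):
    """Expand a small image dataset: mirror each image horizontally,
    then append Gaussian-noise copies of the enlarged set."""
    rng = np.random.default_rng(seed)
    flipped = images[:, :, ::-1]                 # mirror each H x W image
    base = np.concatenate([images, flipped])     # originals + flips
    noisy = base + rng.normal(0.0, sigma, size=base.shape)
    return np.concatenate([base, noisy])

images = np.zeros((4, 8, 8))   # four tiny placeholder "scans"
augmented = augment(images)
print(augmented.shape)          # (16, 8, 8): 4 originals -> 16 samples
```

The 4x expansion (flip, then noisy copies of everything) mirrors the paper's point that augmentation grows a dataset without collecting new samples; the original images are preserved unchanged at the front of the array.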
8 It is a cost-effective alternative to reliance on expensive longitudinal studies, and using synthetic data can limit personally identifiable information, enhancing privacy protection. As well, instead of directing resources at collection, researchers can focus their efforts on insights and testing hypotheses with readily available data. Mental health datasets are highly imbalanced: certain conditions are underrepresented (e.g., borderline personality disorder) compared to others (e.g., depression), and there are gender disparities in diagnosis and treatment. Rare disorders present uncommon symptoms, which complicate diagnoses. 9 Moreover, populations such as children, seniors, racial minorities, and LGBTQ2+ groups are underrepresented, which can lead to biased conclusions from predictive models, perpetuate biases, and further marginalize underserved populations. Addressing these issues, balanced datasets that increase the representation of minority classes allow models to detect and treat underrepresented conditions, and AI-generated data can impute missing information, leading to more inclusive models. For instance, psychiatric symptoms manifest differently across age groups and genders, with adolescents and adults experiencing distinct presentations of similar conditions. 10 Augmentation allows models to represent these subgroups, improving outcomes. It is also crucial for scaling: introducing variations, such as noise injection, makes models less prone to overfitting. The increased variability helps models learn general patterns rather than memorizing specific instances, improving their performance on different patient data. This is beneficial in behavioral research; consider a team studying a depressive population skewed towards high symptom severity levels. A model trained on such data may fail when applied to patients with lower severity; 11 augmentation can achieve balance by mimicking such cases. 12 Incorporating augmentation into practice can improve outcomes, as decision-support tools can be developed with more reliable recommendations.

Not to Augment: Bias and Clinician Fidelity. Mental health data are nuanced and profoundly contextual, with presentations shaped by multifactorial and complicated biological, psychological, and social components. One risk is amplifying existing biases: if a dataset underrepresents cultural, gender, or ethnic groups, those biases become embedded in the model. Poorly designed augmentation, if not inspected by professionals, may fail to respect the interplay of symptomatology, mistakenly intensify imbalances in the dataset, or introduce new biases. 13,14 Historical biases regarding race and socioeconomic status are well-documented and must be mitigated. 15 Creating synthetic data also risks a loss of meaning, especially when individual differences are simplified.
Generalized stereotypes produce poor models. 16 The complexity of intersecting identities can lead to inaccurate predictions and inconsistencies; coping mechanisms vary greatly across cultures in language, values, and the stigma around mental illness. Data generated without consideration of these contexts might hinder development or lead models to misinterpret presentations. Traditional datasets that treat diverse populations as homogeneous risk over-generalization and misrepresentation. Clinical expertise is needed to judge the ability of synthetic data to reproduce an authentic presentation. 17 Clinicians rely on subtle cues for diagnosis and treatment; adding synthetic data may alter key features, causing a loss of context. Understanding transcripts of interviews requires subtle linguistic cues and a clinician's judgment. 18 This disconnect may make augmentation appear sound in theory yet fail to translate into clinical support. Evaluating the quality of augmented data is also challenging, as psychological assessments lack consistent validation benchmarks.

Given the perspectives on either side of the argument, how should we proceed with augmented data? With cautious optimism. Augmentation should not be dismissed outright, but integrated in a way that preserves transparency and mitigates bias. The approach should supplement, not replace, real-world data. Combining traditional and synthetic data improves diversity without over-relying on generated information. Models should be reviewed by professionals using standard metrics and clinical appraisals; this will ensure data-driven tools align with clinical expertise. To integrate effectively, pilot feasibility studies should assess practicality, and collaborative findings should address bias, equity, and implementation. Societal norms heavily influence how mental illness is expressed, understood, and treated; some cultures emphasize physical symptoms like headaches, others behavioural aspects. 19 Augmented data should match the distributional properties of the real data, including sample sizes, imbalances, and features. Models trained on well-augmented datasets are more likely to generalize effectively and exhibit reduced bias. Importantly, if cultural nuances are captured unevenly, representation suffers, and these subtleties affect fairness and applicability. Ethnographic methods and consulting experts during data creation improve the data's validity. 20 The financial implications are also important for promoting global equity. Researchers in wealthier regions have greater access to funding, exacerbating inequalities; 21 in contrast, open approaches can help lower-resourced groups adopt these technologies. Open Science initiatives and the sharing of tools enable broader access. 21,22 Publicly available platforms democratize opportunities, and protocols requiring researchers to disclose their methods foster collaboration and reduce disparities. Equity demands that benefits be distributed equitably across communities. 22 Finally, consent must not be overlooked; realistic synthetic data raises questions of transparency.
In addition, ethicists need to develop clear guidelines, and clinicians' preferred practices should inform efforts to optimize confidentiality. Frameworks for positive clinician-AI interactions should undergo the same inspection as those successfully implemented in other settings. 23,24 Data augmentation is a promising frontier, contributing to resolving the long-standing data imbalance in mental health research. It has proven useful in other fields, notably imaging; introducing it to the mental health field must be handled with caution. While enhanced performance is desirable, unreliability would undermine feasibility. We extend our gratitude to those whose foundational work inspired this commentary. Special thanks to the Interventional Psychiatry Program lab members at St. Michael's Hospital for their feedback.

Language: English

Citations

0

The PERFORM Study: AI Versus Human Residents in Cross-Sectional Obstetrics-Gynecology Scenarios Across Languages and Time Constraints DOI Creative Commons
Canio Martinelli, Antonio Giordano, Vincenzo Carnevale

et al.

Mayo Clinic Proceedings Digital Health, Journal Year: 2025, Volume and Issue: unknown, P. 100206 - 100206

Published: March 1, 2025

Language: English

Citations

0

Neonatal nurses’ experiences with generative AI in clinical decision-making: a qualitative exploration in high-risk nicus DOI Creative Commons
Abeer Nuwayfi Alruwaili, Afrah Madyan Alshammari,

Ali Alhaiti

et al.

BMC Nursing, Journal Year: 2025, Volume and Issue: 24(1)

Published: April 7, 2025

Neonatal nurses in high-risk Neonatal Intensive Care Units (NICUs) navigate complex, time-sensitive clinical decisions where accuracy and judgment are critical. Generative artificial intelligence (AI) has emerged as a supportive tool, yet its integration raises concerns about its impact on nurses' decision-making, professional autonomy, and organizational workflows. This study explored how neonatal nurses experience and integrate generative AI, examining its influence on nursing practice, organizational dynamics, and cultural adaptation in Saudi Arabian NICUs. An interpretive phenomenological approach, guided by Complexity Science, Normalization Process Theory, and Tanner's Clinical Judgment Model, was employed. A purposive sample of 33 nurses participated in semi-structured interviews and focus groups. Thematic analysis was used to code and interpret the data, supported by an inter-rater reliability of 0.88. Simple frequency counts were included to illustrate the prevalence of themes but not as quantitative measures. Trustworthiness was ensured through reflexive journaling, peer debriefing, and member checking. Five themes emerged: (1) Decision-Making, with 93.9% reporting that AI-enhanced decisions required human validation; (2) Professional Practice Transformation, with 84.8% noting evolving role boundaries and workflow changes; (3) Organizational Factors, with 97.0% emphasizing the necessity of infrastructure, training, and policy integration; (4) Cultural Influences, with 87.9% highlighting AI's alignment with family-centered care; and (5) Implementation Challenges, with 90.9% identifying technical barriers and mitigation strategies. Generative AI can support neonatal nursing, but its effectiveness depends on structured, reliable, and culturally sensitive implementation. These findings provide evidence-based insights for policymakers and healthcare leaders to ensure AI enhances nursing expertise while maintaining safe, patient-centered care.
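The reported inter-rater reliability of 0.88 is typically a chance-corrected agreement statistic between coders; the abstract does not name the measure, so as an assumption this sketch uses Cohen's kappa for two coders, with invented theme labels:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa: observed agreement between two coders,
    corrected for the agreement expected by chance."""
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical codes assigned by two independent reviewers
a = ["theme1", "theme2", "theme1", "theme3", "theme1"]
b = ["theme1", "theme2", "theme1", "theme1", "theme1"]
print(round(cohens_kappa(a, b), 2))  # → 0.58
```

Values near 0.88, as in the study, are conventionally read as strong agreement; raw percent agreement (here 0.80) overstates reliability because it ignores chance.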

Language: English

Citations

0

The Transformative Impact of Artificial Intelligence on the Digital Maturity of Hospitals DOI

John Gachago

Published: Jan. 1, 2025

Language: English

Citations

0

AI-Enhanced Neurophysiological Assessment DOI
Deepak Kumar, Punet Kumar,

Sushma Pal

et al.

Advances in psychology, mental health, and behavioral studies (APMHBS) book series, Journal Year: 2025, Volume and Issue: unknown, P. 33 - 64

Published: Jan. 3, 2025

Advancements in artificial intelligence (AI) are revolutionizing neurophysiology, enhancing precision and efficiency in assessing brain and nervous system function. AI-driven neurophysiological assessment integrates machine learning, deep neural networks, and advanced data analytics to process complex data from electroencephalography, electromyography, and related techniques. This technology enables earlier diagnosis of neurological disorders like epilepsy and Alzheimer's disease by detecting subtle patterns that may be missed by human analysis. AI also facilitates real-time monitoring and predictive analytics, improving outcomes in critical care and neurorehabilitation. Challenges include ensuring data quality, addressing ethical concerns, and overcoming computational limits. The integration of AI into neurophysiology offers a precise, scalable, and accessible approach to diagnosing and treating neurological disorders. The chapter discusses the methodologies, applications, and future directions of AI-enhanced neurophysiological assessment, emphasizing its transformative impact on clinical and research fields.
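Pipelines of the kind the chapter describes usually begin by turning raw EEG into features such as spectral band power. The chapter provides no code, so the sketch below is an invented minimal illustration: a band-power extractor applied to a synthetic two-second "EEG" trace that oscillates at 10 Hz (the alpha band).

```python
import numpy as np

def band_power(signal, fs, low, high):
    """Average spectral power of `signal` in the [low, high] Hz band."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= low) & (freqs <= high)
    return psd[mask].mean()

fs = 256                               # sampling rate in Hz
t = np.arange(0, 2, 1 / fs)            # two seconds of samples
eeg = np.sin(2 * np.pi * 10 * t)       # pure 10 Hz alpha-band oscillation
alpha = band_power(eeg, fs, 8, 12)     # should dominate
delta = band_power(eeg, fs, 1, 4)      # should be near zero
print(alpha > delta)                    # → True
```

Band powers like these (delta, theta, alpha, beta) are the sort of compact features a downstream classifier can use to flag the subtle spectral patterns the chapter refers to.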

Language: English

Citations

0

When AI Eats the Healthcare World - Is Trusting AI Fed, or Earned? (Preprint) DOI Creative Commons
Syaheerah Lebai Lutfi,

Adhari Al Zaabi,

Geshina Ayu Mat Saat

et al.

Published: March 23, 2025

BACKGROUND Perception-based studies are susceptible to bias introduced through the design of the instruments used. We demonstrate the need to shift from perception-based to usage-based trust evaluation, emphasizing that trust must be earned through demonstrated reliability rather than assumed in pre-adoption surveys. Our findings suggest that successful AI implementation requires a proactive approach that addresses the complex interplay of human, technical, and organizational factors, grounded in real-world usage data rather than theoretical, perception-driven acceptance measures. OBJECTIVE To examine the disconnect between pre-adoption expectations and post-implementation realities in healthcare AI systems. METHODS We assessed key perception-driven models, namely the Unified Theory of Acceptance and Use of Technology (UTAUT), the Technology Acceptance Model (TAM), and Diffusion of Innovation (DOI), with regard to healthcare. We then matched predictions made using these models against real post-usage evidence. RESULTS Through empirical and anecdotal evidence, this paper demonstrates where technology adoption frameworks diverge from actual usage, focusing on the human factors that influence adoption and the shortcomings of current perception-focused research. CONCLUSIONS Real-world results fall short of the hype, underlying the reluctance or resistance of healthcare providers to fully adopt AI.

Language: English

Citations

0

AI-Enhanced Multi-Algorithm R Shiny App for Predictive Modeling and Analytics- A Case study of Alzheimer’s Disease Diagnostics (Preprint) DOI
Samuel Kakraba, Wenzheng Han, Sudesh Srivastav

et al.

Published: Dec. 18, 2024

BACKGROUND Recent studies have demonstrated that AI can surpass medical practitioners in diagnostic accuracy, underscoring the increasing importance of AI-assisted diagnosis in healthcare. This research introduces SMART-Pred (Shiny Multi-Algorithm R Tool for Predictive Modeling), an innovative AI-based application for Alzheimer's disease (AD) prediction utilizing handwriting analysis. OBJECTIVE Our objective is to develop and evaluate a non-invasive, cost-effective, and efficient tool for early AD detection, addressing the need for accessible and accurate screening methods. METHODS The methodology employs a comprehensive approach to AI-driven AD prediction. We begin with Principal Component Analysis for dimensionality reduction, ensuring efficient processing of complex handwriting data. This is followed by training and evaluation of ten diverse, highly optimized models, including logistic regression, Naïve Bayes, random forest, AdaBoost, Support Vector Machine, and neural networks. This multi-model approach allows robust comparison of different machine learning techniques. To rigorously assess model performance, we utilize a range of metrics including sensitivity, specificity, F1-score, and ROC-AUC. These provide a holistic view of each model's predictive capabilities. For validation, we leveraged the DARWIN dataset, which comprises handwriting samples from 174 participants (89 AD patients and 85 healthy controls). This balanced dataset ensures a fair assessment of our models' ability to distinguish between individuals based on handwriting characteristics. RESULTS The random forest model achieved strong performance, with an accuracy of 88.68% on the test set during analysis. Meanwhile, the AdaBoost algorithm exhibited even higher accuracy, reaching 92.00% after leveraging the models to identify the most significant variables for predicting the disease. These results surpass current clinical tools, which typically achieve around 81.00% accuracy. SMART-Pred's performance aligns with recent advancements in AD prediction, such as Cambridge scientists' 82.00% accuracy in identifying AD progression within three years using cognitive tests and MRI scans. Furthermore, the analysis revealed a consistent pattern across all models employed: "air_time" and "paper_time" consistently stood out as critical predictors of AD. These two factors were repeatedly identified as influential in assessing the probability of AD onset, underscoring their potential for early detection and risk assessment. CONCLUSIONS Even though some limitations exist in SMART-Pred, it offers several advantages, being efficient and customizable to other datasets and diagnostics. The study demonstrates the transformative potential of AI in healthcare, and it may contribute to improved patient outcomes through early intervention. Clinical validation is necessary to confirm whether the key predictors identified in this study are sufficient to accurately detect AD in real-world settings. This step is crucial to ensure the practical applicability and reliability of these findings in clinical practice.

Language: English

Citations

1

Trust in AI-Based Clinical Decision Support Systems Among Healthcare Workers: A Systematic Review (Preprint) DOI Creative Commons
Hein Minn Tun, Hanif Abdul Rahman, Lin Naing

et al.

Published: Dec. 5, 2024

BACKGROUND Artificial intelligence-based Clinical Decision Support Systems (AI-CDSS) offer personalized medicine and improved efficiency to healthcare workers. Despite these opportunities, trust in these tools remains a critical factor for their successful integration. Existing research lacks synthesized insights and actionable recommendations regarding healthcare workers' trust in AI-CDSS. OBJECTIVE The study aims to identify and synthesize the factors influencing trust, guiding the design of systems that foster healthcare worker trust. METHODS We performed a systematic review of studies published from January 2020 to November 2024, retrieved from PubMed, Scopus, and Google Scholar, focusing on healthcare workers' perceptions, experiences, and trust. Two independent reviewers utilized the Cochrane Collaboration Handbook and PRISMA guidelines to develop a data charter and extract data. The CASP tool was applied to assess the quality of included studies and evaluate risk of bias, ensuring a rigorous process. RESULTS 27 studies met the inclusion criteria, spanning diverse healthcare workers, predominantly in hospital settings. Qualitative methods dominated (n=16, 59%), with sample sizes ranging from small focus groups to over 1,000 participants. Seven key themes were identified: System Transparency, Training and Familiarity, Usability, Clinical Reliability, Credibility and Validation, Ethical Considerations, and Customization and Control, acting through enablers and barriers that impact trust in AI-based CDSS. CONCLUSIONS Across the seven thematic areas, enablers include transparency, training, usability, and clinical reliability, while barriers include algorithmic opacity and ethical concerns. Recommendations emphasize explainability of AI models, comprehensive training, stakeholder involvement, and human-centered design.

Language: English

Citations

0

Trust in AI-Based Clinical Decision Support Systems Among Healthcare Workers: A Systematic Review (Preprint) DOI Creative Commons
Hein Minn Tun, Hanif Abdul Rahman, Lin Naing

et al.

Journal of Medical Internet Research, Journal Year: 2024, Volume and Issue: unknown

Published: Dec. 5, 2024

Language: English

Citations

0