Applications of Generative AI in Healthcare: algorithmic, ethical, legal and societal considerations DOI
Onyekachukwu R. Okonji, Kamol Yunusov, Bonnie Gordon

et al.

Published: May 9, 2024

Generative AI is rapidly transforming medical imaging and text analysis, offering immense potential for enhanced diagnosis and personalized care. However, this transformative technology raises crucial ethical, societal, and legal questions. This paper delves into these complexities, examining issues of accuracy, informed consent, data privacy, and algorithmic limitations in the context of generative AI's application to medical imaging and text. We explore the landscape surrounding liability and accountability, emphasizing the need for robust regulatory frameworks. Furthermore, we dissect practical challenges, including biases, model limitations, and workflow integration. By critically analyzing these challenges and proposing responsible solutions, we aim to foster a roadmap for the ethical implementation of generative AI in healthcare, ensuring it serves humanity with the utmost care and precision.

Language: English

Large Language Models in Healthcare and Medical Domain: A Review DOI Creative Commons
Zabir Al Nazi, Wei Peng

Informatics, Journal Year: 2024, Volume and Issue: 11(3), P. 57 - 57

Published: Aug. 7, 2024

The deployment of large language models (LLMs) within the healthcare sector has sparked both enthusiasm and apprehension. These models exhibit a remarkable ability to provide proficient responses to free-text queries, demonstrating a nuanced understanding of professional medical knowledge. This comprehensive survey delves into the functionalities of existing LLMs designed for healthcare applications and elucidates the trajectory of their development, starting with traditional Pretrained Language Models (PLMs) and then moving to the present state of LLMs in the healthcare sector. First, we explore the potential of LLMs to amplify the efficiency and effectiveness of diverse healthcare applications, particularly focusing on clinical language understanding tasks. These tasks encompass a wide spectrum, ranging from named entity recognition and relation extraction to natural language inference, multimodal medical applications, document classification, and question-answering. Additionally, we conduct an extensive comparison of the most recent state-of-the-art LLMs in the healthcare domain, while also assessing the utilization of various open-source LLMs and highlighting their significance in healthcare applications. Furthermore, we present the essential performance metrics employed to evaluate LLMs in the biomedical domain, shedding light on their effectiveness and limitations. Finally, we summarize the prominent challenges and constraints faced by LLMs in the healthcare sector, offering a holistic perspective on their potential benefits and shortcomings. This review provides an exploration of the current landscape of LLMs in healthcare, addressing their role in transforming medical applications and the areas that warrant further research and development.

Language: English

Citations

77

Large language models empowered agent-based modeling and simulation: a survey and perspectives DOI Creative Commons
Chen Gao, Xiaochong Lan, Nian Li

et al.

Humanities and Social Sciences Communications, Journal Year: 2024, Volume and Issue: 11(1)

Published: Sept. 27, 2024

Language: English

Citations

36

Multimodal Large Language Models in Healthcare: Applications, Challenges, and Future Outlook (Preprint) DOI Creative Commons
Rawan AlSaad, Alaa Abd‐Alrazaq, Sabri Boughorbel

et al.

Journal of Medical Internet Research, Journal Year: 2024, Volume and Issue: 26, P. e59505 - e59505

Published: Aug. 20, 2024

In the complex and multidimensional field of medicine, multimodal data are prevalent and crucial for informed clinical decisions. Multimodal data span a broad spectrum of types, including medical images (eg, MRI and CT scans), time-series data (eg, sensor data from wearable devices and electronic health records), audio recordings (eg, heart and respiratory sounds and patient interviews), text (eg, clinical notes and research articles), videos (eg, surgical procedures), and omics data (eg, genomics and proteomics). While advancements in large language models (LLMs) have enabled new applications for knowledge retrieval and processing in the medical field, most LLMs remain limited to unimodal data, typically text-based content, and often overlook the importance of integrating the diverse data modalities encountered in clinical practice. This paper aims to present a detailed, practical, and solution-oriented perspective on the use of multimodal LLMs (M-LLMs) in the medical field. Our investigation spanned M-LLM foundational principles, current and potential applications, technical and ethical challenges, and future research directions. By connecting these elements, we aimed to provide a comprehensive framework that links the diverse aspects of M-LLMs, offering a unified vision for their future in health care. This approach could guide both future research and practical implementations of M-LLMs in health care, positioning them as a paradigm shift toward integrated, multimodal data–driven medicine. We anticipate that this work will spark further discussion and inspire the development of innovative approaches in the next generation of medical systems.

Language: English

Citations

33

Performance of an Open-Source Large Language Model in Extracting Information from Free-Text Radiology Reports DOI
Bastien Le Guellec, Alexandre Lefèvre, Charlotte Geay

et al.

Radiology Artificial Intelligence, Journal Year: 2024, Volume and Issue: 6(4)

Published: May 8, 2024

Purpose: To assess the performance of a local open-source large language model (LLM) in various information extraction tasks from real-life emergency brain MRI reports. Materials and Methods: All consecutive emergency brain MRI reports written in 2022 at a French quaternary center were retrospectively reviewed. Two radiologists identified the scans that were performed in the emergency department for headaches. Four radiologists scored the reports' conclusions as either normal or abnormal, and abnormalities were labeled as either headache-causing or incidental. Vicuna (LMSYS Org), an open-source LLM, performed the same tasks. Vicuna's performance metrics were evaluated using the radiologists' consensus as the reference standard. Results: Among the 2398 reports written during the study period, 595 were included with headaches as the indication (median patient age, 35 years [IQR, 26-51 years]; 68% [403 of 595] women). A positive finding was reported in 227 of 595 (38%) cases, 136 of which could explain the headache. The LLM had a sensitivity of 98.0% (95% CI: 96.5, 99.0) and a specificity of 99.3% (95% CI: 98.8, 99.7) for detecting the presence of headache in the clinical context, a sensitivity of 99.4% (95% CI: 98.3, 99.9) and a specificity of 98.6% (95% CI: 92.2, 100.0) for the use of contrast medium injection, a sensitivity of 96.0% (95% CI: 92.5, 98.2) and a specificity of 98.9% (95% CI lower bound: 97.2) for categorization of the studies as normal or abnormal, and a sensitivity of 88.2% (95% CI: 81.6, 93.1) and a specificity of 73% (95% CI: 62, 81) for causal inference between the findings and the headache. Conclusion: An open-source LLM was able to extract information from free-text radiology reports with excellent accuracy without requiring further training.
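The evaluation described above reduces to comparing the LLM's binary labels against the radiologists' consensus, so a minimal sketch of how sensitivity and specificity are computed may help. This is not the authors' code; the example labels and any prompting step feeding them are hypothetical:

```python
from typing import List, Tuple

def sensitivity_specificity(preds: List[bool], reference: List[bool]) -> Tuple[float, float]:
    """Compute sensitivity and specificity of binary predictions
    against a reference standard (e.g., radiologists' consensus)."""
    tp = sum(p and r for p, r in zip(preds, reference))
    tn = sum((not p) and (not r) for p, r in zip(preds, reference))
    fp = sum(p and (not r) for p, r in zip(preds, reference))
    fn = sum((not p) and r for p, r in zip(preds, reference))
    sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
    specificity = tn / (tn + fp) if (tn + fp) else float("nan")
    return sensitivity, specificity

# Hypothetical usage: `llm_labels` would come from prompting a local LLM
# (e.g., "Is the report conclusion abnormal? Answer yes or no."),
# and `consensus_labels` from the radiologists' consensus reading.
llm_labels = [True, False, True, True, False]
consensus_labels = [True, False, False, True, False]
sens, spec = sensitivity_specificity(llm_labels, consensus_labels)
print(f"sensitivity={sens:.3f}, specificity={spec:.3f}")
```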

Language: English

Citations

27

(A)I Am Not a Lawyer, But...: Engaging Legal Experts towards Responsible LLM Policies for Legal Advice DOI Creative Commons
Inyoung Cheong, King Xia, K. J. Kevin Feng

et al.

2024 ACM Conference on Fairness, Accountability, and Transparency, Journal Year: 2024, Volume and Issue: 67, P. 2454 - 2469

Published: June 3, 2024

Large language models (LLMs) are increasingly capable of providing users with advice in a wide range of professional domains, including legal advice. However, relying on LLMs for legal queries raises concerns due to the significant expertise required and the potential real-world consequences of the advice. To explore when and why LLMs should or should not provide advice to users, we conducted workshops with 20 legal experts using methods inspired by case-based reasoning. The realistic queries ("cases") provided to the experts allowed us to examine granular, situation-specific concerns and overarching technical constraints, producing a concrete set of contextual considerations for LLM developers. By synthesizing the factors that impacted response appropriateness, we present a 4-dimension framework: (1) User attributes and behaviors, (2) Nature of queries, (3) AI capabilities, and (4) Social impacts. We share the experts' recommendations for LLM response strategies, which center around helping users identify the 'right questions to ask' and relevant information rather than offering definitive judgments. Our findings reveal novel considerations, such as unauthorized practice of law, confidentiality, and liability for inaccurate advice, that have been overlooked in the literature. The case-based deliberation method enabled us to elicit fine-grained, practice-informed insights that surpass those from de-contextualized surveys or speculative principles. These findings underscore the applicability of our method for translating domain-specific professional knowledge and practices into policies that can guide LLM behavior in a more responsible direction.

Language: English

Citations

24

A systematic review of the first year of publications on ChatGPT and language education: Examining research on ChatGPT’s use in language learning and teaching DOI Creative Commons

Belle Li, Victoria L. Lowell, Chaoran Wang

et al.

Computers and Education Artificial Intelligence, Journal Year: 2024, Volume and Issue: 7, P. 100266 - 100266

Published: July 17, 2024

This systematic review aims to explore published research on the use of ChatGPT in language learning and teaching between November 2022 and 2023, outlining the types of papers, the methodologies adopted, the publishing journals, major trends, topics of interest, and existing gaps demanding attention. The PRISMA framework was utilized to capture the latest articles, selecting 36 articles that met the inclusion criteria. Findings extracted from this review include the following: (1) authors worldwide contribute to this topic, with Asia and North America leading; (2) a wide distribution across various journals underscores the interdisciplinary nature of the topic, spanning fields such as computer science, psychology, linguistics, education, and other social sciences; (3) empirical research dominates the published literature, with the majority focusing on higher education and ethical considerations. Other findings indicate that ChatGPT plays multifaceted roles, supporting self-directed learning, content generation, and teacher workflows. Research gaps include the need for diversified research scopes, longitudinal studies, exploration of stakeholders' perceptions, and assessments of feedback quality.

Language: English

Citations

24

Survey on Explainable AI: Techniques, challenges and open issues DOI
Adel Abusitta, Miles Q. Li, Benjamin C. M. Fung

et al.

Expert Systems with Applications, Journal Year: 2024, Volume and Issue: 255, P. 124710 - 124710

Published: July 7, 2024

Language: English

Citations

22

Explainable Generative AI (GenXAI): a survey, conceptualization, and research agenda DOI Creative Commons
Johannes Schneider

Artificial Intelligence Review, Journal Year: 2024, Volume and Issue: 57(11)

Published: Sept. 15, 2024

Language: English

Citations

21

Reducing Hallucinations in Large Language Models Through Contextual Position Encoding DOI Open Access

Sarah Desrochers, James Wilson, Matthew Beauchesne

et al.

Published: May 31, 2024

In natural language processing, maintaining factual accuracy and minimizing hallucinations in text generation remain significant challenges. Contextual Position Encoding (CPE) presents a novel approach by dynamically encoding positional information based on the context of each token, significantly enhancing the model's ability to generate accurate and coherent text. The integration of CPE into the Mistral Large model resulted in marked improvements in precision, recall, and F1-score, demonstrating superior performance over traditional methods. Furthermore, the enhanced architecture effectively reduced hallucination rates, increasing the reliability of generated outputs. Comparative analysis with baseline models such as GPT-3 and BERT confirmed the efficacy of CPE, highlighting its potential to influence future developments in LLM architecture. The results underscore the importance of advanced encoding techniques in improving the applicability of large language models across various domains requiring high factual accuracy.
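The abstract does not spell out how CPE conditions positions on context. Purely as an illustration of the general idea, and not the paper's actual mechanism, one could gate a learned positional embedding by a function of each token's hidden state; the PyTorch sketch below makes that assumption explicit:

```python
import torch
import torch.nn as nn

class ContextualPositionEncoding(nn.Module):
    """Hypothetical sketch: a learned positional embedding is scaled by a
    gate computed from each token's hidden state, so the amount of
    positional signal added depends on the token's context."""
    def __init__(self, d_model: int, max_len: int = 2048):
        super().__init__()
        self.pos_emb = nn.Embedding(max_len, d_model)  # learned absolute positions
        self.gate = nn.Sequential(nn.Linear(d_model, d_model), nn.Sigmoid())

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        # hidden: (batch, seq_len, d_model) token representations
        _, seq_len, _ = hidden.shape
        positions = torch.arange(seq_len, device=hidden.device)
        pos = self.pos_emb(positions).unsqueeze(0)     # (1, seq_len, d_model)
        return hidden + self.gate(hidden) * pos        # content-dependent mixing

# Toy usage
x = torch.randn(2, 16, 64)
cpe = ContextualPositionEncoding(d_model=64)
print(cpe(x).shape)  # torch.Size([2, 16, 64])
```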

Language: English

Citations

20

A Comparative Analysis of Large Language Models to Evaluate Robustness and Reliability in Adversarial Conditions DOI Creative Commons

Takeshi Goto, Kensuke Ono, Akira Morita

et al.

Published: March 29, 2024

This study conducted a comprehensive evaluation of four prominent Large Language Models (LLMs): Google Gemini, Mistral 8x7B, ChatGPT-4, and Microsoft Phi-1.5, assessing their robustness and reliability under a variety of adversarial conditions. Utilizing the PromptBench dataset, the research investigates each model's performance against syntactic manipulations, semantic alterations, and contextually misleading cues. The findings reveal notable differences in model resilience, highlighting the distinct strengths and weaknesses of each LLM in responding to adversarial challenges. Comparative analysis underscores the necessity of multifaceted approaches to enhance robustness, suggesting future directions that involve augmenting training datasets with adversarial examples and exploring advanced natural language understanding algorithms. This work contributes to the ongoing discourse by providing insights into model vulnerabilities and advocating strategies to bolster reliability in an evolving landscape of adversarial threats.
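As a rough illustration of this style of evaluation, and not the PromptBench toolkit itself, the sketch below perturbs prompts and measures how often a model's answer still contains the expected string; `query_model`, `perturb_typos`, and the test case are hypothetical stand-ins:

```python
import random
from typing import Callable, Dict, List

def perturb_typos(prompt: str, rate: float = 0.05, seed: int = 0) -> str:
    """Simple character-level noise standing in for the syntactic
    manipulations used in adversarial prompt benchmarks."""
    rng = random.Random(seed)
    chars = list(prompt)
    for i in range(len(chars)):
        if chars[i].isalpha() and rng.random() < rate:
            chars[i] = rng.choice("abcdefghijklmnopqrstuvwxyz")
    return "".join(chars)

def robustness_score(query_model: Callable[[str], str],
                     cases: List[Dict[str, str]]) -> float:
    """Fraction of cases where the model's answer to a perturbed prompt
    still contains the expected answer (a crude consistency proxy)."""
    hits = 0
    for case in cases:
        answer = query_model(perturb_typos(case["prompt"]))
        hits += case["expected"].lower() in answer.lower()
    return hits / len(cases) if cases else float("nan")

# Hypothetical usage with a stand-in model function:
cases = [{"prompt": "What is the capital of France?", "expected": "Paris"}]
print(robustness_score(lambda p: "The capital is Paris.", cases))
```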

Language: English

Citations

19