Transforming Healthcare: The AI Revolution in the Comprehensive Care of Hypertension
Sreyoshi F. Alam, Maria Lourdes Gonzalez Suarez

Clinics and Practice, Journal Year: 2024, Volume and Issue: 14(4), P. 1357 - 1374

Published: July 10, 2024

This review explores the transformative role of artificial intelligence (AI) in hypertension care, summarizing and analyzing published works from the last three years in this field. Hypertension contributes to a significant healthcare burden at both an individual and a global level. We focus on five key areas: risk prediction, diagnosis, education, monitoring, and management of hypertension, supplemented with a brief look into hypertensive disease of pregnancy. For each area, we discuss the advantages and disadvantages of integrating AI. While AI, in its current rudimentary form, cannot replace sound clinical judgment, it can still support faster prevention and management. The integration of AI is poised to revolutionize hypertension care, although careful implementation and ongoing research are essential to mitigate risks.

Language: English

Accuracy of ChatGPT on Medical Questions in the National Medical Licensing Examination in Japan: Evaluation Study
Yasutaka Yanagita, Daiki Yokokawa, Shun Uchida, et al.

JMIR Formative Research, Journal Year: 2023, Volume and Issue: 7, P. e48023 - e48023

Published: Oct. 3, 2023

ChatGPT (OpenAI) has gained considerable attention because of its natural and intuitive responses. It sometimes writes plausible-sounding but incorrect or nonsensical answers, which OpenAI acknowledges as a limitation. However, considering that ChatGPT is an interactive AI that has been trained to reduce the output of unethical sentences, the reliability of its training data is high and the usefulness of its output is promising. Fortunately, in March 2023, a new version of ChatGPT, GPT-4, was released, which, according to internal evaluations, is expected to increase the likelihood of producing factual responses by 40% compared with its predecessor, GPT-3.5. The usefulness of this version in English is widely appreciated, and it is also increasingly being evaluated as a system for obtaining medical information in languages other than English. Although it does not reach the passing score on the national medical licensing examination in Chinese, its accuracy is expected to gradually improve. Evaluation with Japanese input is limited, although there have been reports on the accuracy of ChatGPT's answers to clinical questions regarding the Japanese Society of Hypertension guidelines and on its performance on the National Nursing Examination.

The objective of this study was to evaluate whether ChatGPT can provide accurate diagnoses and medical knowledge in response to Japanese input.

Questions from the National Medical Licensing Examination (NMLE) in Japan, administered by the Ministry of Health, Labour and Welfare in 2022, were used. All 400 questions were considered for inclusion; the exclusion criteria were figures and tables that ChatGPT could not recognize, so only the text was extracted. We instructed GPT-3.5 and GPT-4 to output the answers they considered correct for each question, and the answers were verified by 2 general practice physicians. In case of discrepancies, they were checked by another physician to make the final decision. Overall performance was evaluated by calculating the percentage of correct answers from GPT-3.5 and GPT-4.

Of the 400 questions, 292 were analyzed; questions containing charts, which are not supported, were excluded. The correct response rate for GPT-4 was 81.5% (237/292), significantly higher than that of GPT-3.5 at 42.8% (125/292). Moreover, GPT-4 surpassed the passing standard (>72%) of the NMLE, indicating its potential as a diagnostic and therapeutic decision aid for physicians.

GPT-4 reached the passing standard of the NMLE when questions were entered in Japanese, although this was limited to written questions. As the accelerated progress of the past few months has shown, performance will improve as the large language model continues to learn more, and it may well become a support tool for medical professionals by providing more accurate information.

Language: English

Citations: 64

Generative artificial intelligence in healthcare: A scoping review on benefits, challenges and applications
Khadijeh Moulaei, Atiye Yadegari, Mahdi Baharestani, et al.

International Journal of Medical Informatics, Journal Year: 2024, Volume and Issue: 188, P. 105474 - 105474

Published: May 8, 2024

Language: English

Citations: 46

A Systematic Review of ChatGPT and Other Conversational Large Language Models in Healthcare

Leyao Wang, Zhiyu Wan, Congning Ni, et al.

medRxiv (Cold Spring Harbor Laboratory), Journal Year: 2024, Volume and Issue: unknown

Published: April 27, 2024

Background: The launch of the Chat Generative Pre-trained Transformer (ChatGPT) in November 2022 has attracted public attention and academic interest to large language models (LLMs), facilitating the emergence of many other innovative LLMs. These LLMs have been applied in various fields, including healthcare. Numerous studies have since been conducted regarding how to employ state-of-the-art LLMs in health-related scenarios to assist patients, doctors, and health administrators.

Objective: This review aims to summarize the applications of and concerns about applying conversational LLMs in healthcare and to provide an agenda for future research on LLMs in healthcare.

Methods: We utilized the PubMed, ACM, and IEEE digital libraries as primary sources for this review. We followed the guidance of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) to screen and select peer-reviewed articles that (1) were related to both LLMs and healthcare and (2) were published before September 1st, 2023, the date when we started paper collection and screening. We investigated these papers and classified them according to their applications and concerns.

Results: Our search initially identified 820 papers according to targeted keywords, out of which 65 met our criteria and were included. The most popular LLM was ChatGPT from OpenAI (60), followed by Bard from Google (1) and Large Language Model Meta AI (LLaMA) (5). The papers were classified into four categories in terms of their applications: 1) summarization, 2) medical knowledge inquiry, 3) prediction, and 4) administration, and four categories in terms of their concerns: 1) reliability, 2) bias, 3) privacy, and 4) acceptability. There were 49 (75%) papers using LLMs for summarization and/or medical knowledge inquiry, and 58 (89%) papers expressing concerns about reliability and/or bias. We found that conversational LLMs exhibit promising results in providing information to patients with relatively high accuracy. However, LLMs like ChatGPT are not able to provide reliable answers to complex tasks that require specialized domain expertise. Additionally, no experiments in the reviewed papers thoughtfully examined how conversational LLMs lead to bias or privacy issues in healthcare research.

Conclusions: Future research should focus on improving the reliability of LLMs on complex tasks, as well as investigating the mechanisms by which they lead to bias or privacy issues. Considering the vast accessibility of LLMs, legal, social, and technical efforts are all needed to address these concerns and to promote, improve, and regularize their application in healthcare.

Language: English

Citations: 18

Large Language Models for Chatbot Health Advice Studies
Bright Huo, Amy Boyle, Nana Marfo, et al.

JAMA Network Open, Journal Year: 2025, Volume and Issue: 8(2), P. e2457879 - e2457879

Published: Feb. 4, 2025

Importance: There is much interest in the clinical integration of large language models (LLMs) in health care. Many studies have assessed the ability of LLMs to provide health advice, but the quality of their reporting is uncertain.

Objective: To perform a systematic review to examine the reporting variability among peer-reviewed studies evaluating the performance of generative artificial intelligence (AI)–driven chatbots for summarizing evidence and providing health advice, to inform the development of the Chatbot Assessment Reporting Tool (CHART).

Evidence Review: A search of MEDLINE via Ovid, Embase via Elsevier, and Web of Science from inception to October 27, 2023, was conducted with the help of a health sciences librarian and yielded 7752 articles. Two reviewers screened articles by title and abstract, followed by full-text review, to identify primary studies evaluating the accuracy of AI-driven chatbots in providing health advice (chatbot studies). Two reviewers then performed data extraction for the 137 eligible studies.

Findings: A total of 137 studies were included. Studies examined topics in surgery (55 [40.1%]), medicine (51 [37.2%]), and primary care (13 [9.5%]). Most studies focused on treatment (91 [66.4%]), diagnosis (60 [43.8%]), or disease prevention (29 [21.2%]). Most studies (136 [99.3%]) evaluated inaccessible, closed-source LLMs and did not provide enough information on the version of the LLM under evaluation. All studies lacked a sufficient description of LLM characteristics, including temperature, token length, fine-tuning availability, layers, and other details. Most did not describe a prompt engineering phase in their study. The date of querying was reported in 54 (39.4%) studies. Most studies (89 [65.0%]) used subjective means to define the successful performance of the chatbot, while less than one-third addressed the ethical, regulatory, and patient safety implications of the clinical integration of LLMs.

Conclusions and Relevance: In this systematic review of chatbot health advice studies, reporting quality was heterogeneous and may inform the development of the CHART standards. Ethical, regulatory, and patient safety considerations are crucial as interest in the clinical integration of LLMs grows.

Language: English

Citations: 5

Beyond the Scalpel: Assessing ChatGPT's potential as an auxiliary intelligent virtual assistant in oral surgery
Ana Suárez, J. Jiménez, María Llorente de Pedro, et al.

Computational and Structural Biotechnology Journal, Journal Year: 2023, Volume and Issue: 24, P. 46 - 52

Published: Dec. 6, 2023

Language: English

Citations: 29

Applications and Concerns of ChatGPT and Other Conversational Large Language Models in Healthcare: A Systematic Review (Preprint)

Leyao Wang, Zhiyu Wan, Congning Ni, et al.

Journal of Medical Internet Research, Journal Year: 2024, Volume and Issue: 26, P. e22769 - e22769

Published: Oct. 4, 2024

The launch of ChatGPT (OpenAI) in November 2022 attracted public attention and academic interest to large language models (LLMs), facilitating the emergence of many other innovative LLMs. These LLMs have been applied in various fields, including health care. Numerous studies have since been conducted regarding how to use state-of-the-art LLMs in health-related scenarios.

Language: English

Citations: 14

Appropriateness of Artificial Intelligence Chatbots in Diabetic Foot Ulcer Management
Makoto Shiraishi, Haesu Lee, Koji Kanayama, et al.

The International Journal of Lower Extremity Wounds, Journal Year: 2024, Volume and Issue: unknown

Published: Feb. 28, 2024

Type 2 diabetes is a significant global health concern. It often causes diabetic foot ulcers (DFUs), which affect millions of people and increase amputation and mortality rates. Despite existing guidelines, the complexity of DFU treatment makes clinical decisions challenging. Large language models, such as the chat generative pretrained transformer (ChatGPT), which are adept at natural language processing, have emerged as valuable resources in the medical field. However, concerns about the accuracy and reliability of the information they provide remain. We aimed to assess various artificial intelligence (AI) chatbots, including ChatGPT, in providing information on DFUs based on established guidelines. Seven AI chatbots were asked clinical questions (CQs). Their responses were analyzed in terms of the accuracy of the answers to the CQs, the grade of recommendation, the level of evidence, agreement with the reference, and verification of the authenticity of the references provided by the chatbots. The chatbots showed a mean accuracy of 91.2%, with discrepancies noted in the grade of recommendation and level of evidence. Claude-2 outperformed the other chatbots in the number of verified references (99.6%), whereas ChatGPT had the lowest rate of verified references (66.3%). This study highlights the potential of AI chatbots as tools for disseminating medical information and demonstrates their high degree of accuracy in answering CQs related to DFUs. However, the variability among these chatbots and problems like hallucinations necessitate cautious use and further optimization for medical applications. The study underscores the evolving role of AI in healthcare and the importance of refining these technologies for effective clinical decision-making and patient education.

Language: English

Citations: 11

Enhancing clinical decision‐making: Optimizing ChatGPT's performance in hypertension care
Jing Miao, Charat Thongprayoon, Tibor Fülöp, et al.

Journal of Clinical Hypertension, Journal Year: 2024, Volume and Issue: 26(5), P. 588 - 593

Published: April 22, 2024

Language: English

Citations: 10

Performance of ChatGPT in Answering Clinical Questions on the Practical Guideline of Blepharoptosis
Makoto Shiraishi, Yoko Tomioka, Ami Miyakuni, et al.

Aesthetic Plastic Surgery, Journal Year: 2024, Volume and Issue: 48(13), P. 2389 - 2398

Published: April 29, 2024

Language: English

Citations: 10

Evaluating the Reliability of ChatGPT for Health-Related Questions: A Systematic Review
Mohammad Beheshti, Imad Eddine Toubal, Khuder Alaboud, et al.

Informatics, Journal Year: 2025, Volume and Issue: 12(1), P. 9 - 9

Published: Jan. 17, 2025

The rapid advancement of large language models like ChatGPT has significantly impacted natural language processing, expanding its applications across various fields, including healthcare. However, there remains a significant gap in understanding the consistency and reliability of ChatGPT's performance across different medical domains. We conducted this systematic review according to an LLM-assisted PRISMA setup. The high-recall search term "ChatGPT" yielded 1101 articles from 2023 onwards. Through a dual-phase screening process, initially automated and subsequently reviewed manually by human reviewers, 128 studies were included. The studies covered a range of medical specialties, focusing on diagnosis, disease management, and patient education. The assessment metrics varied, but most studies compared ChatGPT's accuracy against evaluations by clinicians or reliable references. In several areas, ChatGPT demonstrated high accuracy, underscoring its effectiveness. However, some contexts revealed lower accuracy. The mixed outcomes across domains emphasize the challenges and opportunities of integrating AI into healthcare. The high accuracy in certain areas suggests that ChatGPT has substantial utility, yet the inconsistent performance across all domains indicates a need for ongoing evaluation and refinement. This review highlights ChatGPT's potential to improve healthcare delivery alongside the necessity of continued research to ensure its reliability.

Language: English

Citations: 1