Introduction DOI
Marco Cascella

Published: Jan. 1, 2024

Assessing the readability, quality and reliability of responses produced by ChatGPT, Gemini, and Perplexity regarding most frequently asked keywords about low back pain DOI Creative Commons
Erkan Özduran, Volkan Hancı,

Yüksel Erkin

et al.

PeerJ, Year: 2025, Number 13, P. e18847 - e18847

Published: Jan. 22, 2025

Background: Patients who are informed about the causes, pathophysiology, treatment, and prevention of a disease are better able to participate in treatment procedures in the event of illness. Artificial intelligence (AI), which has gained popularity in recent years, is defined as the study of algorithms that give machines the ability to reason and perform cognitive functions, including object and word recognition, problem solving, and decision making. This study aimed to examine the readability, reliability, and quality of responses to frequently asked keywords about low back pain (LBP) given by three different AI-based chatbots (ChatGPT, Perplexity, and Gemini), which are popular applications for online information presentation today. Methods: All AI chatbots were asked the 25 most frequently used keywords related to LBP, determined with the help of Google Trends. To prevent possible bias that could arise from sequential processing of the answers by the chatbots, the study was designed so that input was provided by separate users (EO, VH) for each keyword. Readability was evaluated with the Simple Measure of Gobbledygook (SMOG), Flesch Reading Ease Score (FRES), and Gunning Fog (GFG) scores. Quality was assessed using the Global Quality Score (GQS) and the Ensuring Quality Information for Patients (EQIP) score. Reliability was assessed using the DISCERN and Journal of the American Medical Association (JAMA) scales. Results: The first three keywords detected in the Google Trends search were "Lower Back Pain", "ICD 10 Low Back Pain", and "Low Back Pain Symptoms". The readability of all chatbot responses was higher than the recommended 6th-grade reading level (p < 0.001). In the EQIP, JAMA, modified DISCERN, and GQS score evaluations, one chatbot was found to have significantly higher scores than the others. Conclusion: The chatbot responses were found to be difficult to read in the readability assessment. It is clear that, as new chatbots are introduced, they can offer guidance to patients with increased text clarity and quality. This study may provide inspiration for future studies on improving chatbots.
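The readability formulas named in this abstract (FRES, Gunning Fog, SMOG) are published fixed-coefficient formulas and can be sketched directly. A minimal Python illustration follows; the vowel-group syllable counter is a naive assumption (real readability tools use pronunciation dictionaries or better heuristics), so scores will only approximate those of validated implementations.

```python
import math
import re

def count_syllables(word: str) -> int:
    """Naive syllable estimate: count contiguous vowel groups.
    Assumption for illustration; not dictionary-accurate."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def _split(text: str):
    # Split into non-empty sentences and alphabetic words.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    return sentences, words

def flesch_reading_ease(text: str) -> float:
    # FRES = 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)
    sentences, words = _split(text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

def gunning_fog(text: str) -> float:
    # GFG = 0.4 * (words/sentences + 100 * complex_words/words),
    # where "complex" means three or more syllables.
    sentences, words = _split(text)
    complex_words = [w for w in words if count_syllables(w) >= 3]
    return 0.4 * (len(words) / len(sentences)
                  + 100 * len(complex_words) / len(words))

def smog(text: str) -> float:
    # SMOG = 1.0430 * sqrt(polysyllables * 30/sentences) + 3.1291
    sentences, words = _split(text)
    poly = sum(1 for w in words if count_syllables(w) >= 3)
    return 1.0430 * math.sqrt(poly * 30 / len(sentences)) + 3.1291
```

Higher FRES means easier text, while higher Gunning Fog and SMOG values correspond to higher required grade levels, which is how the study compares chatbot output against the recommended 6th-grade level.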

Language: English

Cited

3

Human versus Artificial Intelligence: ChatGPT-4 Outperforming Bing, Bard, ChatGPT-3.5 and Humans in Clinical Chemistry Multiple-Choice Questions DOI Creative Commons
Malik Sallam,

Khaled Al‐Salahat,

Huda Eid

et al.

Advances in Medical Education and Practice, Year: 2024, Volume 15, P. 857 - 871

Published: Sep. 1, 2024

Artificial intelligence (AI) chatbots excel in language understanding and generation. These models can transform healthcare education and practice. However, it is important to assess the performance of such AI models on various topics to highlight their strengths and possible limitations. This study aimed to evaluate the performance of ChatGPT (GPT-3.5 and GPT-4), Bing, and Bard compared with human students at a postgraduate master's level in Medical Laboratory Sciences.

Language: English

Cited

6

Language discrepancies in the performance of generative artificial intelligence models: an examination of infectious disease queries in English and Arabic DOI Creative Commons
Malik Sallam,

Kholoud Al-Mahzoum,

Omaima Alshuaib

et al.

BMC Infectious Diseases, Year: 2024, Number 24(1)

Published: Aug. 8, 2024

Assessment of artificial intelligence (AI)-based models across languages is crucial to ensure equitable access to, and accuracy of, information in multilingual contexts. This study aimed to compare AI model efficiency in English and Arabic for infectious disease queries.

Language: English

Cited

4

The performance of OpenAI ChatGPT-4 and Google Gemini in virology multiple-choice questions: a comparative analysis of English and Arabic responses DOI Creative Commons
Malik Sallam,

Kholoud Al-Mahzoum,

Rawan Ahmad Almutawaa

et al.

BMC Research Notes, Year: 2024, Number 17(1)

Published: Sep. 3, 2024

Language: English

Cited

4
