AI in Accounting: Can AI Models Like ChatGPT and Gemini Successfully Pass the Portuguese Chartered Accountant Exam?
Agostinho Sousa Pinto, António Abreu, Eusébio Costa, et al.

Lecture notes in networks and systems, Journal Year: 2024, Volume and Issue: unknown, P. 429 - 438

Published: Dec. 16, 2024

Language: English

Comparison of ChatGPT, Gemini, and Le Chat with physician interpretations of medical laboratory questions from an online health forum
A. Meyer, Ari Soleman, Janik Riese, et al.

Clinical Chemistry and Laboratory Medicine (CCLM), Journal Year: 2024, Volume and Issue: 62(12), P. 2425 - 2434

Published: May 28, 2024

Abstract
Objectives: Laboratory medical reports are often not intuitively comprehensible to non-medical professionals. Given their recent advancements, easier accessibility, and remarkable performance on licensing exams, patients are likely to turn to artificial intelligence-based chatbots to understand their laboratory results. However, empirical studies assessing the efficacy of these chatbots in responding to real-life patient queries regarding laboratory medicine are scarce.
Methods: This investigation therefore included 100 inquiries from an online health forum, specifically addressing Complete Blood Count interpretation. The aim was to evaluate the proficiency of three chatbots (ChatGPT, Gemini, and Le Chat) against the responses of certified physicians.
Results: The findings revealed that the chatbots’ interpretations of laboratory results were inferior to those of the physicians. While the chatbots exhibited a higher degree of empathetic communication, they frequently produced erroneous or overly generalized answers to complex questions. The appropriateness of chatbot responses ranged from 51 to 64 %, with 22 to 33 % of responses overestimating medical conditions. A notable positive aspect was the consistent inclusion of disclaimers about the chatbots’ artificial nature and recommendations to seek professional advice.
Conclusions: These real-life queries highlight a dangerous dichotomy: perceived trustworthiness potentially obscuring factual inaccuracies. Given the growing inclination towards self-diagnosis using AI platforms, further research and improvement are imperative to increase patients’ awareness and avoid future burdens on the healthcare system.

Language: English

Citations: 7

Evaluating the Efficacy of Artificial Intelligence-Driven Chatbots in Addressing Queries on Vernal Conjunctivitis
Muhammad Saad, Muhammad A Moqeet, Hassan Mansoor, et al.

Cureus, Journal Year: 2025, Volume and Issue: unknown

Published: Feb. 26, 2025

Background: Vernal keratoconjunctivitis (VKC) is a recurrent allergic eye disease that requires accurate patient education to ensure proper management. AI-driven chatbots, such as Google Gemini Advanced (Mountain View, California, US), are increasingly being explored as potential tools for providing medical information. This study evaluates the accuracy, reliability, and clinical applicability of the chatbot in addressing VKC-related queries.
Objective: To assess the chatbot's performance in delivering medically relevant information about VKC and to evaluate its reliability based on expert ratings.
Methods: A total of 125 responses generated from 25 questions were assessed by two independent cornea specialists. Responses were rated for accuracy, completeness, and harm using a 5-point Likert scale (1-5). Inter-rater reliability was measured with Cronbach's alpha. Responses were categorized as highly accurate (score 5), having minor inconsistencies (score 4), or inaccurate (scores 1-3).
Results: The ratings demonstrated high inter-rater reliability (Cronbach's alpha = 0.92, 95% CI: 0.87-0.94). Of the responses, 108 (86.4%) were highly accurate (score 5), while 17 (13.6%) had minor inconsistencies (score 4) but posed no harm. No responses were classified as inaccurate or potentially harmful. The combined mean score was 4.88 ± 0.31, reflecting strong agreement between raters. The chatbot consistently provided reliable information across diagnostic, treatment, and prognosis-related queries, with gaps in complex grading and treatment-related discussions.
Discussion: The findings support the use of chatbots like Google Gemini Advanced in ophthalmology. The chatbot exhibited accuracy and consistency, particularly for general queries. However, areas for improvement remain, especially in detailed guidance on treatment protocols and in ensuring completeness for complex questions.
Conclusion: Google Gemini Advanced demonstrates reliable performance on VKC-related queries, making it a valuable tool for patient education. While its responses are consistent and generally accurate, expert oversight remains necessary to refine AI-generated content for clinical applications. Further research is needed to enhance chatbots' ability to provide nuanced advice and to integrate them safely into ophthalmic decision-making.
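
As a side note on the methodology above: inter-rater reliability for two raters scoring responses on a Likert scale, as reported here (Cronbach's alpha = 0.92), can be computed directly from the score matrix. The following is a minimal sketch, not the study's code; the rater scores are invented for illustration.

import numpy as np

def cronbach_alpha(ratings: np.ndarray) -> float:
    """Cronbach's alpha for an (n_responses x n_raters) matrix of scores.

    alpha = k / (k - 1) * (1 - sum of per-rater variances / variance of summed scores),
    treating each rater as one 'item'.
    """
    k = ratings.shape[1]                          # number of raters
    item_vars = ratings.var(axis=0, ddof=1)       # variance of each rater's scores
    total_var = ratings.sum(axis=1).var(ddof=1)   # variance of the summed scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical 1-5 Likert scores from two raters for eight responses.
rater_a = np.array([5, 4, 5, 3, 5, 4, 5, 3])
rater_b = np.array([5, 4, 5, 4, 5, 4, 5, 3])
scores = np.column_stack([rater_a, rater_b])

print(f"Cronbach's alpha: {cronbach_alpha(scores):.2f}")  # ~0.95 for this made-up data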

Language: English

Citations: 0

Comparative Analysis of ChatGPT and Google Gemini in Generating Patient Educational Resources on Cardiac Health: A Focus on Exercise-Induced Arrhythmia, Sleep Habits, and Dietary Habits

Nithin Karnan, Sumaiya Fatima, Palwasha Nasir, et al.

Cureus, Journal Year: 2025, Volume and Issue: unknown

Published: March 18, 2025

Language: English

Citations: 0

Artificial intelligence in academic writing: Enhancing or replacing human expertise?

Ria Resti Fauziah, Ari Metalin Ika Puspita, Ivo Yuliana, et al.

Journal of Clinical Neuroscience, Journal Year: 2025, Volume and Issue: unknown, P. 111193 - 111193

Published: March 1, 2025

Language: English

Citations: 0

Analysis of Patient Education Guides Generated by ChatGPT and Gemini on Common Anti-diabetic Drugs: A Cross-Sectional Study
Jude Saji, Aswini Balagangatharan, Sarita Bajaj, et al.

Cureus, Journal Year: 2025, Volume and Issue: unknown

Published: March 25, 2025

Language: English

Citations: 0

The performance of OpenAI ChatGPT-4 and Google Gemini in virology multiple-choice questions: a comparative analysis of English and Arabic responses
Malik Sallam, Kholoud Al-Mahzoum, Rawan Ahmad Almutawaa, et al.

BMC Research Notes, Journal Year: 2024, Volume and Issue: 17(1)

Published: Sept. 3, 2024

Language: English

Citations: 4

Beyond Green Labels: Assessing Mutual Funds’ ESG Commitments through Large Language Models
Katherine Wood, Chaehyun Pyun, Hieu Pham, et al.

Finance research letters, Journal Year: 2025, Volume and Issue: 74, P. 106713 - 106713

Published: Jan. 5, 2025

Language: English

Citations: 0

Chat GPT 4o VS Residents: French Language Evaluation in Ophthalmology
Leah Attal, Elad Shvartz, Nakhoul Nakhoul, et al.

Deleted Journal, Journal Year: 2025, Volume and Issue: 2(1), P. 100104 - 100104

Published: Jan. 31, 2025

Language: English

Citations: 0

Using Large Language Models in the Diagnosis of Acute Cholecystitis: Assessing Accuracy and Guidelines Compliance
Marta Goglia, Arianna Cicolani, Francesco Maria Carrano, et al.

The American Surgeon, Journal Year: 2025, Volume and Issue: unknown

Published: March 12, 2025

Background: Large language models (LLMs) are advanced tools capable of understanding and generating human-like text. This study evaluated the accuracy of several commercial LLMs in addressing clinical questions related to the diagnosis and management of acute cholecystitis, as outlined in the Tokyo Guidelines 2018 (TG18). We assessed their congruence with the expert panel discussions presented in the guidelines.
Methods: ChatGPT4.0, Gemini Advanced, and GPTo1-preview were tested on ten questions. Eight were derived from TG18, and two were formulated by the authors. Two authors independently rated each LLM’s responses on a four-point scale: (1) accurate and comprehensive, (2) accurate but not comprehensive, (3) partially accurate, partially inaccurate, and (4) entirely inaccurate. A third author resolved any scoring discrepancies. We then comparatively analyzed the performance of ChatGPT4.0 against the newer large language models (LLMs), specifically Gemini Advanced and GPTo1-preview, on the same question set to delineate their respective strengths and limitations.
Results: ChatGPT4.0 provided consistent responses for 90% of the questions. It delivered “accurate and comprehensive” answers for 4/10 (40%) and “accurate but not comprehensive” answers for 5/10 (50%). One response (10%) was “partially accurate, partially inaccurate.” The newer models demonstrated higher accuracy on some questions but yielded a similar percentage of “partially accurate, partially inaccurate” responses. Notably, neither model produced “entirely inaccurate” answers.
Discussion: LLMs such as ChatGPT demonstrate the potential to accurately address clinical questions regarding acute cholecystitis. With awareness of their limitations, careful implementation, and ongoing refinement, LLMs could serve as valuable resources for physician education and patient information, potentially improving clinical decision-making in the future.
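
To make the scoring protocol concrete, the sketch below shows one way two raters' four-point scores could be adjudicated by a third rater and summarised as category percentages. It is an illustration only, not the authors' code; the rating lists are invented and merely echo the proportions reported above.

from collections import Counter

# Four-point scale described in the study.
SCALE = {
    1: "accurate and comprehensive",
    2: "accurate but not comprehensive",
    3: "partially accurate, partially inaccurate",
    4: "entirely inaccurate",
}

def adjudicate(rater_1, rater_2, rater_3):
    """Keep a score when the first two raters agree; otherwise defer to the third rater."""
    return [a if a == b else c for a, b, c in zip(rater_1, rater_2, rater_3)]

def summarise(final_scores):
    """Share of answers falling into each category of the scale."""
    counts = Counter(final_scores)
    n = len(final_scores)
    return {label: f"{counts.get(score, 0) / n:.0%}" for score, label in SCALE.items()}

# Hypothetical ratings for ten answers.
r1 = [1, 2, 1, 2, 2, 1, 3, 2, 1, 2]
r2 = [1, 2, 1, 2, 2, 2, 3, 2, 1, 2]
r3 = [1, 2, 1, 2, 2, 1, 3, 2, 1, 2]

print(summarise(adjudicate(r1, r2, r3)))
# {'accurate and comprehensive': '40%', 'accurate but not comprehensive': '50%', ...}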

Language: English

Citations: 0

Generative AI and Prompt Engineering: Transforming Rockburst Prediction in Underground Construction
Muhammad Kamran, Muhammad Faizan, Shuhong Wang, et al.

Buildings, Journal Year: 2025, Volume and Issue: 15(8), P. 1281 - 1281

Published: April 14, 2025

The construction industry is undergoing a transformative shift through automation, with advancements in Generative AI (GenAI) and prompt engineering enhancing safety and efficiency, particularly in high-risk fields like underground construction, geotechnics, and mining. GenAI-powered prompts are revolutionizing practices by enabling a move from reactive to predictive approaches, leading to improved design, project planning, and site management. This study explores the use of Google Gemini, a recent advancement in GenAI, for the prediction of rockburst intensity levels in underground construction. The Python programming language and the Gemini tool were combined to generate prompts that incorporate essential variables related to rockburst. A comprehensive database of 93 documented rockburst cases was compiled. Subsequently, a systematic method was established that involves data categorization, data visualization, and factor analysis in order to identify a reduced number of unobservable underlying factors. Furthermore, K-means clustering was utilized to identify patterns. A gradient boosting classifier was then employed to predict rockburst intensity. The results demonstrate that GenAI offers an effective approach to accurately predicting rockburst events, achieving an accuracy rate of 89 percent. Through predictive modeling, experts can proactively evaluate the likelihood of rockburst, allowing improved risk management, optimized excavation strategies, and enhanced safety protocols. GenAI enables the automation of complex analyses and provides powerful real-time decision-making insights, offering significant benefits to industries reliant on underground operations. However, despite its considerable potential in the sector, challenges such as output accuracy, the dynamic nature of projects, and the need for human oversight must be carefully addressed to ensure successful implementation.
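
For readers who want a feel for the pipeline sketched in this abstract (factor analysis, K-means clustering, then a gradient boosting classifier), a minimal scikit-learn version is outlined below. The feature names, synthetic data, and parameter choices are assumptions for illustration; the paper's actual prompts, database, and Gemini integration are not reproduced, and the printed accuracy will not match the reported 89 percent.

import numpy as np
import pandas as pd
from sklearn.decomposition import FactorAnalysis
from sklearn.cluster import KMeans
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic stand-in for the 93-case rockburst database; the columns mimic
# commonly used rockburst indicators, but the values are random.
X = pd.DataFrame({
    "max_tangential_stress": rng.uniform(20, 150, 93),
    "uniaxial_compressive_strength": rng.uniform(50, 250, 93),
    "uniaxial_tensile_strength": rng.uniform(2, 20, 93),
    "elastic_energy_index": rng.uniform(1, 10, 93),
})
y = rng.integers(0, 4, 93)  # intensity classes, e.g. none / light / moderate / strong

# 1) Factor analysis: compress the indicators into a few latent factors.
factors = FactorAnalysis(n_components=2, random_state=0).fit_transform(X)

# 2) K-means clustering on the factor scores to expose grouping patterns;
#    the cluster label is kept as an additional feature.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(factors)
features = np.column_stack([factors, clusters])

# 3) Gradient boosting classifier to predict rockburst intensity.
X_train, X_test, y_train, y_test = train_test_split(
    features, y, test_size=0.2, random_state=0
)
clf = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
print("hold-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))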

Language: English

Citations: 0