Disagreements in Medical Ethics Question Answering Between Large Language Models and Physicians
Shelly Soffer, Dafna Nesselroth, Keren Pragier et al.

Research Square, Journal Year: 2024, Volume and Issue: unknown

Published: Nov. 15, 2024

Medical ethics is inherently complex, shaped by a broad spectrum of opinions, experiences, and cultural perspectives. The integration of large language models (LLMs) into healthcare is new and requires an understanding of their consistent adherence to ethical standards.

Language: English

AI language model rivals expert ethicist in perceived moral expertise
Danica Wilbanks, Debanjan Mondal, Niket Tandon et al.

Scientific Reports, Journal Year: 2025, Volume and Issue: 15(1)

Published: Feb. 3, 2025

People view AI as possessing expertise across various fields, but the perceived quality of AI-generated moral guidance remains uncertain. Recent work suggests that large language models (LLMs) perform well on tasks designed to assess moral alignment, reflecting moral judgments with relatively high accuracy. As LLMs are increasingly employed in decision-making roles, there is a growing expectation for them not just to offer aligned judgments but also to demonstrate sound moral reasoning. Here, we advance upon the Moral Turing Test and find that Americans rate ethical advice from GPT-4o as slightly more moral, trustworthy, thoughtful, and correct than that of the popular New York Times advice column, The Ethicist. Participants perceived GPT as surpassing both a representative sample of Americans and a renowned ethicist in delivering justifications and advice, suggesting that people may view LLM outputs as viable sources of moral expertise. This suggests people might see LLMs as valuable complements to human guidance in moral decision-making. It also underscores the importance of carefully programming ethical guidelines into LLMs, considering their potential to influence users' moral reasoning.

Language: English

Citations

1

Large Language Models as Moral Experts? GPT-4o Outperforms Expert Ethicist in Providing Moral Guidance
Danica Wilbanks, Debanjan Mondal, Niket Tandon et al.

Published: May 29, 2024

AI has demonstrated expertise across various fields, but its potential as a moral expert remains unclear. Recent work suggests that Large Language Models (LLMs) can reflect moral judgments with high accuracy. But as LLMs are increasingly used in complex decision-making roles, true moral expertise requires not just aligned judgments but also clear and trustworthy reasoning. Here, we advance on the Moral Turing Test and find that advice from GPT-4o is rated as more moral, trustworthy, thoughtful, and correct than that of the popular New York Times column, The Ethicist. GPT models outperformed both a representative sample of Americans and a renowned ethicist in providing explanations and advice, suggesting that they have, in some respects, achieved a level of moral expertise. The present work highlights the importance of carefully programming ethical guidelines into LLMs, considering their potential to sway users' moral reasoning. More promisingly, it suggests LLMs could complement human guidance in moral decision-making.

Language: English

Citations

6

Augmenting intensive care unit nursing practice with generative AI: A formative study of diagnostic synergies using simulation‐based clinical cases
Chedva Levin, Moriya Suliman, Е. К. Наими et al.

Journal of Clinical Nursing, Journal Year: 2024, Volume and Issue: unknown

Published: Aug. 5, 2024

Background: As generative artificial intelligence (GenAI) tools continue advancing, rigorous evaluations are needed to understand their capabilities relative to experienced clinicians and nurses. The aim of this study was to objectively compare the diagnostic accuracy and response formats of ICU nurses versus various GenAI models, with a qualitative interpretation of the quantitative results. Methods: This formative study utilized four written clinical scenarios, representative of real patient cases, to simulate diagnostic challenges. The scenarios were developed by expert clinicians and underwent validation against current literature. Seventy-four ICU nurses participated in a simulation-based assessment involving the scenarios. Simultaneously, we asked ChatGPT-4 and Claude-2.0 to provide initial assessments and treatment recommendations for the same scenarios; the responses were then scored by certified experts for accuracy, completeness, and response time. Results: Nurses consistently achieved higher accuracy than the AI models across open-ended scenarios, though certain models matched or exceeded human performance on standardized cases. Reaction times also diverged substantially. Qualitative differences in response format emerged, such as concision versus verbosity. Variations across systems highlighted limits to generalizability. Conclusions: While the GenAI models demonstrated valuable skills, nurses outperformed them in domains requiring holistic judgement. Continued development to strengthen generalized decision-making abilities is warranted before autonomous integration. Response interfaces should consider leveraging the distinct strengths of humans and AI. Rigorous mixed-methods research with diverse stakeholders can help iteratively inform safe, beneficial human-GenAI partnerships centred on experience-guided care augmentation. Relevance to Clinical Practice: This mixed-methods simulation provides insights into optimizing collaborative nursing knowledge support in intensive care. The findings can guide explainable decision support tailored to critical care environments. Patient or Public Contribution: Patients and the public were not involved in the design, implementation, or analysis of the data.

Language: English

Citations

4

Meta-learning contributes to cultivation of wisdom in moral domains: Implications of recent artificial intelligence research and educational considerations
Hyemin Han

International Journal of Ethics Education, Journal Year: 2025, Volume and Issue: unknown

Published: Feb. 20, 2025

Language: English

Citations

0

Treating Differently or Equality: A Study Exploring Attitudes Towards AI Moral Advisors
Liao Yiming, T. Y. Wang

Technology in Society, Journal Year: 2025, Volume and Issue: unknown, P. 102862 - 102862

Published: March 1, 2025

Language: English

Citations

0

Robots as Moral Persons: Exploring AI Ethics in Adrian Tchaikovsky's Service Model
Kevin T. Jackson

Journal of Business Ethics, Journal Year: 2025, Volume and Issue: unknown

Published: April 2, 2025

Language: English

Citations

0

Psychomatics—A Multidisciplinary Framework for Understanding Artificial Minds
Giuseppe Riva, Fabrizia Mantovani, Brenda K. Wiederhold et al.

Cyberpsychology Behavior and Social Networking, Journal Year: 2024, Volume and Issue: unknown

Published: Aug. 29, 2024

Although large language models (LLMs) and other artificial intelligence systems demonstrate cognitive skills similar to humans, such as concept learning and language acquisition, the way they process information fundamentally differs from biological cognition. To better understand these differences, this article introduces Psychomatics, a multidisciplinary framework bridging cognitive science, linguistics, and computer science. It aims to delve deeper into the high-level functioning of LLMs, focusing specifically on how LLMs acquire, learn, remember, and use information to produce their outputs. To achieve this goal, Psychomatics will rely on a comparative methodology, starting from a theory-driven research question (is the process of language development and use different in humans and LLMs?) and drawing parallels between the two types of systems. Our analysis shows how LLMs can map and manipulate complex linguistic patterns in their training data. Moreover, LLMs can follow Grice's Cooperative Principle to provide relevant and informative responses. However, human cognition draws on multiple sources of meaning, including experiential, emotional, and imaginative facets, which transcend mere language processing and are rooted in our social and developmental trajectories. In addition, current LLMs lack physical embodiment, reducing their ability to make sense of the intricate interplay of perception, action, and cognition that shapes human understanding and expression. Ultimately, Psychomatics holds the potential to yield transformative insights into the nature of language, cognition, and intelligence, both artificial and biological. By drawing parallels between LLM and human processes, it can also inform the development of more robust and human-like AI systems.

Language: English

Citations

3

Using ChatGPT as a Lesson Planning Assistant with Preservice Secondary Mathematics Teachers

Theresa J. Gurl, Mara P. Markinson, Alice F. Artzt et al.

Digital Experiences in Mathematics Education, Journal Year: 2024, Volume and Issue: unknown

Published: Oct. 22, 2024

The purpose of this qualitative study was to examine the use of ChatGPT as a lesson-planning assistant with preservice teachers (PSTs) of secondary mathematics. The PSTs were asked to solve a mathematics problem and to use ChatGPT to assist with lesson planning in a microteaching context within a methods course. They first developed parts of their lesson plan individually and then asked ChatGPT to do the same thing, after which they reflected on the output generated by ChatGPT. They could use the output in any way they saw fit throughout the process. An analysis of the PSTs' reflective statements about ChatGPT's output revealed the importance of critical and evaluative reflection. Although the PSTs were generally accurate in their assessment of the pedagogical output, noting that the suggested lessons were teacher-centered and repetitive and indicated little knowledge of students' needs, they were less accurate about the mathematical output, often attributing incorrect solutions to a "different approach" to the problem. The findings have implications for the value of the reflective process when using ChatGPT and other chatbots for lesson planning and problem solving, and suggest that a framework for such examination may be necessary in the age of GAI.

Language: English

Citations

2

Artificial moral intelligence and computability: an Aristotelian perspective
Christos Kyriacou

AI and Ethics, Journal Year: 2024, Volume and Issue: unknown

Published: Sept. 16, 2024

Language: English

Citations

1

Turing’s Test vs the Moral Turing Test
Diane Proudfoot

Philosophy & Technology, Journal Year: 2024, Volume and Issue: 37(4)

Published: Nov. 25, 2024

Language: English

Citations

1