Face to face: Comparing ChatGPT with human performance on face matching
Robin S. S. Kramer

Perception, Journal Year: 2024, Volume and Issue: unknown

Published: Nov. 5, 2024

ChatGPT's large language model, GPT-4V, has been trained on vast numbers of image-text pairs and is therefore capable of processing visual input. This model operates very differently from current state-of-the-art neural networks designed specifically for face perception, so I chose to investigate whether ChatGPT could also be applied to this domain. With this aim, I focussed on the task of face matching, that is, deciding whether two photographs showed the same person or not. Across six different tests, I demonstrated that ChatGPT's performance was comparable with human accuracies despite its being a domain-general ‘virtual assistant’ rather than a specialised tool for face processing. This perhaps surprising result identifies a new avenue of exploration in the field, while further research should explore the boundaries of this ability, along with how its errors may relate to those made by humans.

Language: English

Comparing the Perspectives of Generative AI, Mental Health Experts, and the General Public on Schizophrenia Recovery: Case Vignette Study
Zohar Elyoseph, Inbar Levkovich

JMIR Mental Health, Journal Year: 2024, Volume and Issue: 11, P. e53043 - e53043

Published: March 18, 2024

Abstract Background: The current paradigm in mental health care focuses on clinical recovery and symptom remission. This model's efficacy is influenced by therapist trust in the patient's recovery potential and the depth of the therapeutic relationship. Schizophrenia is a chronic illness with severe symptoms where the possibility of recovery is a matter of debate. As artificial intelligence (AI) becomes integrated into the health care field, it is important to examine its ability to assess recovery from major psychiatric disorders such as schizophrenia. Objective: This study aimed to evaluate large language models (LLMs) in comparison to mental health professionals in assessing the prognosis of schizophrenia with and without professional treatment and its long-term positive and negative outcomes. Methods: Vignettes were inputted into LLM interfaces and assessed 10 times by each of 4 AI platforms: ChatGPT-3.5, ChatGPT-4, Google Bard, and Claude. A total of 80 evaluations were collected and benchmarked against existing norms to analyze what mental health professionals (general practitioners, psychiatrists, clinical psychologists, and mental health nurses) and the general public think about schizophrenia outcomes and interventions. Results: For the prognosis of schizophrenia with professional treatment, ChatGPT-3.5 was notably pessimistic, whereas ChatGPT-4, Claude, and Bard aligned with professional views but differed from the general public. All LLMs believed untreated schizophrenia would remain static or worsen without treatment. For long-term outcomes, ChatGPT-4 and Claude predicted more negative outcomes than Bard and ChatGPT-3.5. For positive outcomes, ChatGPT-3.5 and Claude were more pessimistic than Bard and ChatGPT-4. Conclusions: The finding that 3 out of the 4 LLMs aligned closely with the predictions of mental health professionals when considering the “with treatment” condition is a demonstration of the potential of this technology in providing professional clinical prognosis. The pessimistic assessment by ChatGPT-3.5 is disturbing since it may reduce the motivation of patients to start or persist with treatment. Overall, although LLMs hold promise in augmenting health care, their application necessitates rigorous validation and a harmonious blend with human expertise.

Language: English

Citations

30

The Artificial Third: A Broad View of the Effects of Introducing Generative Artificial Intelligence on Psychotherapy
Yuval Haber, Inbar Levkovich, Dorit Hadar‐Shoval

et al.

JMIR Mental Health, Journal Year: 2024, Volume and Issue: 11, P. e54781 - e54781

Published: April 18, 2024

This paper explores a significant shift in the field of mental health in general and psychotherapy in particular following generative artificial intelligence's new capabilities in processing and generating humanlike language. Following Freud, this lingo-technological development is conceptualized as the "fourth narcissistic blow" that science inflicts on humanity. We argue that this blow has a potentially dramatic influence on perceptions of human society, interrelationships, and the self. We should, accordingly, expect dramatic changes in the therapeutic act following the emergence of what we term the artificial third in the field of psychotherapy. The introduction of an artificial third marks a critical juncture, prompting us to ask important core questions that address two basic elements of critical thinking, namely, transparency and autonomy: (1) What is this new artificial presence in therapy relationships? (2) How does it reshape our perception of ourselves and our interpersonal dynamics? (3) What remains irreplaceable and human at the core of therapy? Given the ethical implications that arise from these questions, this paper proposes that the artificial third can be a valuable asset when applied with insight and ethical consideration, enhancing but not replacing the human touch in therapy.

Language: English

Citations

27

Generative AI in Industrial Revolution: A Comprehensive Research on Transformations, Challenges, and Future Directions

Xiang Yafei, Yichao Wu, Jintong Song

et al.

Journal of Knowledge Learning and Science Technology ISSN 2959-6386 (online), Journal Year: 2024, Volume and Issue: 3(2), P. 11 - 20

Published: Feb. 27, 2024

The advent of generative artificial intelligence (AI) technologies heralds a new era in industrial innovation, offering unprecedented capabilities for content creation, predictive analytics, and automation. This paper delves into the transformative potential of generative AI across key industrial sectors, emphasizing its role in catalyzing technological advancements, enhancing operational efficiencies, and fostering sustainable practices. By exploring the technical characteristics, developmental trajectory, and application scenarios of generative AI, alongside a critical examination of its limitations and ethical considerations, this study aims to provide a comprehensive understanding of how generative AI is reshaping the landscape of the automotive, manufacturing, and energy industries.

Language: English

Citations

19

Assessing the Alignment of Large Language Models With Human Values for Mental Health Integration: Cross-Sectional Study Using Schwartz’s Theory of Basic Values
Dorit Hadar‐Shoval, Kfir Asraf, Yonathan Mizrachi

et al.

JMIR Mental Health, Journal Year: 2024, Volume and Issue: 11, P. e55988 - e55988

Published: March 8, 2024

Large language models (LLMs) hold potential for mental health applications. However, their opaque alignment processes may embed biases that shape problematic perspectives. Evaluating the values embedded within LLMs that guide their decision-making has ethical importance. Schwartz's theory of basic values (STBV) provides a framework for quantifying cultural value orientations and has shown utility in examining values in mental health contexts, including cultural, diagnostic, and therapist-client dynamics.

Language: English

Citations

17

An Ethical Perspective on The Democratization of Mental Health with Generative Artificial Intelligence (Preprint)
Zohar Elyoseph, Tamar Gur, Yuval Haber

et al.

Published: March 2, 2024

Knowledge has become more open and accessible to a large audience with the "democratization of information" facilitated by technology. This paper provides an ethical perspective on utilizing Generative Artificial Intelligence (GenAI) for the democratization of mental health knowledge and practice. It explores the historical context of democratizing information, transitioning from restricted access to widespread availability due to the internet, open-source movements, and, most recently, GenAI technologies such as Large Language Models (LLMs). The paper highlights why GenAI technologies represent a new phase in the democratization movement, offering unparalleled access to highly advanced technology as well as information. In the realm of mental health, this requires delicate and nuanced ethical deliberation. Including GenAI in mental health may allow, among other things, improved accessibility to care, personalized responses, and conceptual flexibility, and could facilitate a flattening of the traditional hierarchies between care providers and patients. At the same time, it also entails significant risks and challenges that must be carefully addressed. To navigate these complexities, the paper proposes a strategic questionnaire for assessing AI-based mental health applications. This tool evaluates both the benefits and the risks, emphasizing the need for a balanced approach to GenAI integration in mental health. The paper calls for a cautious yet positive approach, advocating for the active engagement of mental health professionals in guiding GenAI development. It emphasizes the importance of ensuring that these advancements are not only technologically sound but also ethically grounded and patient centered.

Language: English

Citations

12

An Ethical Perspective on the Democratization of Mental Health With Generative AI
Zohar Elyoseph, Tamar Gur, Yuval Haber

et al.

JMIR Mental Health, Journal Year: 2024, Volume and Issue: 11, P. e58011 - e58011

Published: July 24, 2024

Knowledge has become more open and accessible to a large audience with the "democratization of information" facilitated by technology. This paper provides a sociohistorical perspective for the theme issue "Responsible Design, Integration, and Use of Generative AI in Mental Health." It evaluates the ethical considerations in using generative artificial intelligence (GenAI) for the democratization of mental health knowledge and practice. It explores the historical context of democratizing information, transitioning from restricted access to widespread availability due to the internet, open-source movements, and, most recently, GenAI technologies such as large language models. The paper highlights why these technologies represent a new phase in the democratization movement, offering unparalleled access to highly advanced technology as well as information. In the realm of mental health, this requires delicate and nuanced ethical deliberation. Including GenAI in mental health may allow, among other things, improved accessibility to care, personalized responses, and conceptual flexibility, and could facilitate a flattening of the traditional hierarchies between care providers and patients. At the same time, it also entails significant risks and challenges that must be carefully addressed. To navigate these complexities, the paper proposes a strategic questionnaire for assessing artificial intelligence-based mental health applications. This tool evaluates both the benefits and the risks, emphasizing the need for a balanced approach to GenAI integration in mental health. The paper calls for a cautious yet positive approach, advocating for the active engagement of mental health professionals in guiding GenAI development. It emphasizes the importance of ensuring that these advancements are not only technologically sound but also ethically grounded and patient-centered.

Language: English

Citations

12

Editorial: Responsible Design, Integration, and Use of Generative AI in Mental Health (Preprint)
Oren Asman, John Torous, Amir Tal

et al.

JMIR Mental Health, Journal Year: 2025, Volume and Issue: 12, P. e70439 - e70439

Published: Jan. 6, 2025

Abstract Generative artificial intelligence (GenAI) shows potential for personalized care, psychoeducation, and even crisis prediction in mental health, yet responsible use requires ethical consideration, deliberation, and perhaps governance. This is the first published theme issue focused on GenAI in mental health. It brings together evidence and insights on GenAI's capabilities, such as emotion recognition, therapy-session summarization, and risk assessment, while highlighting the sensitive nature of mental health data and the need for rigorous validation. Contributors discuss how bias, alignment with human values, transparency, and empathy must be carefully addressed to ensure ethically grounded, artificial intelligence-assisted care. By proposing conceptual frameworks, best practices, and regulatory approaches, including an ethics of care and the preservation of socially important humanistic elements, this theme issue underscores that GenAI can complement, rather than replace, the vital human role in clinical settings. To achieve this, an ongoing collaboration between researchers, clinicians, policy makers, and technologists is essential.

Language: English

Citations

2

Fusing ChatGPT and Human Decisions in Unfamiliar Face Matching
Robin S. S. Kramer

Applied Cognitive Psychology, Journal Year: 2025, Volume and Issue: 39(2)

Published: Feb. 25, 2025

ABSTRACT Unfamiliar face matching involves deciding whether two images depict the same person or two different people. Individual performance can be error-prone but is improved by aggregating (fusing) the responses of participant pairs. With advances in automated facial recognition (AFR) systems, fusing human and algorithm responses also leads to improvements over individuals working alone. In the current work, I investigated whether ChatGPT could serve as the algorithm in this fusion. Using a common face matching test, I found that the fusion of individual human responses with those provided by ChatGPT increased accuracy in comparison with both humans working alone and simulated human pairs. This pattern of results was evident when participants responded either using a rating scale (Experiment 1) or with a binary decision and an associated confidence rating (Experiment 2). Taken together, these findings demonstrate the potential utility of ChatGPT in daily identification contexts where state-of-the-art AFR systems may not be available.
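The response fusion described in this abstract can be sketched in a few lines. This is an illustration only, not the paper's actual procedure: it assumes each trial yields one human and one ChatGPT rating on the same 1-10 "same person" scale, and fuses them by averaging and thresholding at the scale midpoint.

```python
def fuse_ratings(human, chatgpt, threshold=5.5):
    """Fuse per-trial ratings from a human and ChatGPT into binary
    same/different decisions by averaging and thresholding.

    Ratings are assumed to lie on a shared 1-10 scale, where higher
    values indicate greater confidence the two faces match.
    """
    return [(h + c) / 2 > threshold for h, c in zip(human, chatgpt)]

# Hypothetical ratings for four face pairs (not data from the study).
human_ratings = [8, 2, 6, 4]
chatgpt_ratings = [7, 3, 3, 8]
print(fuse_ratings(human_ratings, chatgpt_ratings))
# → [True, False, False, True]
```

Note how the third and fourth trials show the point of fusion: a moderately confident rater can be overruled, or rescued, by the other source, which is how aggregation reduces individual error.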

Language: English

Citations

2

Evaluating and addressing demographic disparities in medical large language models: a systematic review
Mahmud Omar, Vera Sorin, Reem Agbareia

et al.

International Journal for Equity in Health, Journal Year: 2025, Volume and Issue: 24(1)

Published: Feb. 26, 2025

Abstract Background: Large language models are increasingly evaluated for use in healthcare. However, concerns about their impact on health disparities persist. This study reviews current research on demographic biases in large language models to identify prevalent bias types, assess measurement methods, and evaluate mitigation strategies. Methods: We conducted a systematic review, searching publications from January 2018 to July 2024 across five databases. We included peer-reviewed studies evaluating demographic biases in large language models, focusing on gender, race, ethnicity, age, and other factors. Study quality was assessed using the Joanna Briggs Institute Critical Appraisal Tools. Results: Our review included 24 studies. Of these, 22 (91.7%) identified biases. Gender biases were the most prevalent, reported in 15 of 16 studies (93.7%). Racial or ethnic biases were observed in 10 of 11 studies (90.9%). Only two studies found minimal or no bias in certain contexts. Mitigation strategies mainly involved prompt engineering, with varying effectiveness. However, these findings are tempered by a potential publication bias, as negative results are less frequently published. Conclusion: Biases are present across various medical domains. While bias detection is improving, effective mitigation strategies are still developing. As LLMs increasingly influence critical decisions, addressing these biases and their resultant disparities is essential for ensuring fair artificial intelligence systems. Future research should focus on a wider range of demographic factors, intersectional analyses, and non-Western cultural contexts.

Language: English

Citations

2

The effect of incorporating large language models into the teaching on critical thinking disposition: An “AI + Constructivism Learning Theory” attempt
Peng Wang, K. Yin, Mingzhu Zhang

et al.

Education and Information Technologies, Journal Year: 2025, Volume and Issue: unknown

Published: Jan. 4, 2025

Language: English

Citations

1