Face to face: Comparing ChatGPT with human performance on face matching
Robin S. S. Kramer

Perception, Journal Year: 2024, Volume and Issue: unknown

Published: Nov. 5, 2024

ChatGPT's large language model, GPT-4V, has been trained on vast numbers of image-text pairs and is therefore capable of processing visual input. This model operates very differently from current state-of-the-art neural networks designed specifically for face perception, so I chose to investigate whether ChatGPT could also be applied to this domain. With this aim, I focussed on the task of face matching, that is, deciding whether two photographs showed the same person or not. Across six different tests, I demonstrated that ChatGPT's performance was comparable with human accuracies, despite its being a domain-general ‘virtual assistant’ rather than a specialised tool for face processing. This perhaps surprising result identifies a new avenue of exploration in the field, while further research should explore the boundaries of this ability, along with how its errors may relate to those made by humans.
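
The paper's exact prompts and model configuration are not reproduced in this abstract. As a rough illustration only, the following is a minimal sketch of how a single pairwise face-matching trial could be posed to a vision-capable chat model via the OpenAI Python SDK; the model name, prompt wording, and file paths are illustrative assumptions, not the study's protocol.

```python
# Minimal sketch of one face-matching trial posed to a vision-capable model.
# Model name, prompt wording, and file paths are illustrative assumptions.
import base64
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def encode_image(path: str) -> str:
    """Read an image file and return it as a base64 data URL."""
    with open(path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode("utf-8")
    return f"data:image/jpeg;base64,{b64}"

def match_faces(path_a: str, path_b: str) -> str:
    """Ask the model whether two photographs show the same person."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; the paper used GPT-4V via ChatGPT
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Do these two photographs show the same person? "
                         "Answer 'same' or 'different'."},
                {"type": "image_url", "image_url": {"url": encode_image(path_a)}},
                {"type": "image_url", "image_url": {"url": encode_image(path_b)}},
            ],
        }],
    )
    return response.choices[0].message.content

print(match_faces("face1.jpg", "face2.jpg"))
```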

Language: English

A brief commentary on human–AI attachment and possible impacts on family dynamics
Brandon T. McDaniel, Amanda Coupe, Anna Weston, et al.

Family Relations, Journal Year: 2025, Volume and Issue: unknown

Published: April 21, 2025

ABSTRACT Objective: In this brief commentary article, we outline an emerging idea that, as conversational artificial intelligence (CAI) becomes a part of an individual's environment and interacts with them, their attachment system may become activated, potentially leading to behaviors—such as seeking out the CAI to feel safe in times of stress—that have typically been reserved for human‐to‐human relationships. We term this attachment‐like behavior, but future work must determine whether these behaviors are driven by human–AI attachment or something else entirely. Background: CAI is a technical advancement that is a cornerstone of many everyday tools (e.g., smartphone apps, online chatbots, smart speakers). With generative AI, device affordances and systems have become increasingly complex. For example, generative AI has allowed for more personalization, human‐like dialogue and interaction, and the interpretation and generation of human emotions. Indeed, CAI has the ability to mimic caring—learning from past interactions with the individual and appearing to be emotionally available and comforting in times of need. Humans instinctually have attachment‐related needs for comfort and emotional security; therefore, as individuals begin to have these needs met by CAI, they may seek it out as a source of safety in times of distress. This leads to questions of whether human–AI attachment is truly possible and, if so, what it might mean for family dynamics.

Language: English

Citations: 1

Large Language Model for Mental Health: A Systematic Review (Preprint)
Zhijun Guo, Alvina G. Lai, Johan H. Thygesen, et al.

Published: Feb. 18, 2024

BACKGROUND Large language models (LLMs) are advanced artificial neural networks trained on extensive datasets to accurately understand and generate natural language. While they have received much attention and demonstrated potential in digital health, their application in mental health, particularly in clinical settings, has generated considerable debate. OBJECTIVE This systematic review aims to critically assess the use of LLMs in mental health, specifically focusing on their applicability and efficacy in early screening, digital interventions, and clinical settings. By systematically collating and assessing the evidence from current studies, our work analyzes models, methodologies, data sources, and outcomes, thereby highlighting the challenges present and the prospects for use. METHODS Adhering to PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines, this review searched 5 open-access databases: MEDLINE (accessed by PubMed), IEEE Xplore, Scopus, JMIR, and ACM Digital Library. Keywords used were ("mental health" OR "mental illness" OR "mental disorder" OR "psychiatry") AND ("large language models"). The study included articles published between January 1, 2017, and April 30, 2024, and excluded articles in languages other than English. RESULTS In total, 40 articles were evaluated, including 15 (38%) on detecting mental health conditions and suicidal ideation through text analysis, 7 (18%) on the use of LLMs as conversational agents, and 18 (45%) on other applications and evaluations of LLMs in mental health. The studies show good effectiveness in detecting mental health issues and providing accessible, destigmatized eHealth services. However, assessments also indicate that the current risks associated with clinical use might surpass the benefits. These risks include inconsistencies in generated text; the production of hallucinations; and the absence of a comprehensive, benchmarked ethical framework. CONCLUSIONS This systematic review examines the use of LLMs in mental health and their inherent risks. The study identifies several issues: the lack of multilingual datasets annotated by experts, concerns regarding the accuracy and reliability of generated content, challenges in interpretability due to the "black box" nature of LLMs, and ongoing ethical dilemmas. These include the absence of a clear, benchmarked ethical framework; privacy issues; and overreliance on LLMs by both physicians and patients, which could compromise traditional medical practices. As a result, LLMs should not be considered substitutes for professional mental health care. Their rapid development, however, underscores their potential as valuable aids, emphasizing the need for continued research in this area. CLINICALTRIAL PROSPERO CRD42024508617; https://www.crd.york.ac.uk/prospero/display_record.php?RecordID=508617
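
The review does not state which tool, if any, was used to run its searches. Purely as a sketch under that caveat, the reported PubMed keyword query and date window could be reproduced programmatically with Biopython's Entrez interface, as below; the contact email is a required placeholder.

```python
# Sketch of the review's PubMed keyword search and date window, assuming
# Biopython's Entrez interface (the review does not name its search tooling).
from Bio import Entrez

Entrez.email = "you@example.org"  # NCBI requires a contact address

# Boolean query matching the keywords reported in the review.
query = ('("mental health" OR "mental illness" OR "mental disorder" '
         'OR "psychiatry") AND ("large language models")')

handle = Entrez.esearch(
    db="pubmed",
    term=query,
    datetype="pdat",        # filter on publication date
    mindate="2017/01/01",   # inclusion window reported in the review
    maxdate="2024/04/30",
    retmax=100,
)
record = Entrez.read(handle)
handle.close()

print(f"{record['Count']} records; first IDs: {record['IdList'][:5]}")
```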

Language: English

Citations: 7

The promise and pitfalls of generative AI
Monojit Choudhury, Zohar Elyoseph, Nathanael J. Fast, et al.

Nature Reviews Psychology, Journal Year: 2025, Volume and Issue: unknown

Published: Jan. 15, 2025

Language: English

Citations: 1

A Meta-Analysis of Artificial Intelligence Technologies Use and Loneliness: Examining the Influence of Physical Embodiment, Age Differences, and Effect Direction

Dong Xu, Jun Xie, He Gong, et al.

Cyberpsychology Behavior and Social Networking, Journal Year: 2025, Volume and Issue: unknown

Published: Feb. 5, 2025

Recent research has investigated the connection between artificial intelligence (AI) utilization and feelings of loneliness, yielding inconsistent outcomes. This meta-analysis aims to clarify this relationship by synthesizing data from 47 relevant studies across 21 publications. Findings indicate a generally significant positive correlation between AI use and loneliness (r = 0.163, p < 0.05). Specifically, interactions with physically embodied AI are marginally significantly associated with decreased loneliness (r = -0.266, p = 0.088), whereas engagement with disembodied AI is linked to increased loneliness (r = 0.352, p < 0.001). Among older adults (aged 60 and above), AI use is significantly positively associated with loneliness (p < 0.001), while no significant association is observed (r = 0.039, p = 0.659) in younger individuals (aged 35 and below). Furthermore, incorporating attitudes toward AI, the study reveals that the influence of AI use in exacerbating loneliness outweighs the reverse impact, although both directions show significant relationships. These results enhance understanding of how AI usage relates to loneliness and provide practical insights for addressing loneliness through AI technologies.
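
To make the pooled r values above concrete, here is a worked illustration of the standard meta-analytic machinery for correlations: transform each study's r to Fisher's z, pool with inverse-variance weights, and back-transform. The per-study r and n values in this sketch are hypothetical, not the meta-analysis's data.

```python
# Fixed-effect pooling of correlations via Fisher's z transform.
# The per-study (r, n) values below are hypothetical placeholders.
import math

studies = [  # (correlation r, sample size n) -- illustrative only
    (0.25, 120),
    (0.10, 300),
    (0.30, 80),
]

num = 0.0
den = 0.0
for r, n in studies:
    z = math.atanh(r)  # Fisher's z transform; var(z) = 1 / (n - 3)
    w = n - 3          # inverse-variance weight
    num += w * z
    den += w

z_pooled = num / den
r_pooled = math.tanh(z_pooled)          # back-transform to the r scale
se = math.sqrt(1 / den)                 # standard error of the pooled z
ci = (math.tanh(z_pooled - 1.96 * se),  # 95% CI, back-transformed
      math.tanh(z_pooled + 1.96 * se))

print(f"pooled r = {r_pooled:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
```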

Language: English

Citations: 1

Oculomics: Current Concepts and Evidence
Zhuoting Zhu, Yueye Wang, Ziyi Qi, et al.

Progress in Retinal and Eye Research, Journal Year: 2025, Volume and Issue: unknown, P. 101350 - 101350

Published: March 1, 2025

Language: English

Citations: 1

Comparing ChatGPT with human judgements of social traits from face photographs
Robin S. S. Kramer

Computers in Human Behavior Artificial Humans, Journal Year: 2025, Volume and Issue: unknown, P. 100156 - 100156

Published: April 1, 2025

Language: English

Citations: 1

Evaluating and Addressing Demographic Disparities in Medical Large Language Models: A Systematic Review
Mahmud Omar, Vera Sorin, Donald U. Apakama, et al.

Published: Sept. 9, 2024

Abstract Background: Large language models (LLMs) are increasingly evaluated for use in healthcare. However, concerns about their impact on disparities persist. This study reviews current research on demographic biases in LLMs to identify prevalent bias types, assess measurement methods, and evaluate mitigation strategies. Methods: We conducted a systematic review, searching publications from January 2018 to July 2024 across five databases. We included peer-reviewed studies evaluating demographic biases in LLMs, focusing on gender, race, ethnicity, age, and other factors. Study quality was assessed using the Joanna Briggs Institute Critical Appraisal Tools. Results: Our review included 24 studies. Of these, 22 (91.7%) identified biases in LLMs. Gender bias was the most prevalent, reported in 15 of the 16 studies that assessed it (93.7%). Racial or ethnic biases were observed in 10 of the 11 studies that assessed them (90.9%). Only two studies found minimal or no bias in certain contexts. Mitigation strategies mainly involved prompt engineering, with varying effectiveness. However, these findings are tempered by potential publication bias, as negative results are less frequently published. Conclusion: Biases exist in LLMs across various medical domains. While bias detection is improving, effective mitigation strategies are still developing. As LLMs increasingly influence critical decisions, addressing these biases and the resultant disparities is essential for ensuring fair AI systems. Future research should focus on a wider range of demographic factors, intersectional analyses, and non-Western cultural contexts.
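
The review assesses bias measurement methods without prescribing one; a common approach in this literature is the counterfactual demographic-swap probe, sketched generically below: pose the same vignette with only the demographic terms changed and compare the model's outputs. The vignette, groups, and scoring hook are hypothetical, not drawn from the reviewed studies.

```python
# Generic counterfactual demographic-swap probe: identical vignettes that
# differ only in demographic terms. Vignette, groups, and scoring hook are
# hypothetical placeholders.
from itertools import product

TEMPLATE = ("A {age}-year-old {group} patient reports chest pain. "
            "What is the recommended next step?")

GROUPS = ["white male", "Black male", "white female", "Black female"]
AGES = [35, 70]

def score_response(prompt: str) -> str:
    """Placeholder hook for a call to the LLM under audit."""
    raise NotImplementedError("plug in the model being evaluated")

# Generate the matched prompt set; bias is assessed by comparing the
# model's answers across prompts that differ only in demographics.
for age, group in product(AGES, GROUPS):
    print(TEMPLATE.format(age=age, group=group))
```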

Language: English

Citations: 6

Use of generative artificial intelligence (AI) in psychiatry and mental health care: a systematic review
Sara Kolding, Robert M. Lundin, Lasse Hansen, et al.

Acta Neuropsychiatrica, Journal Year: 2024, Volume and Issue: unknown, P. 1 - 14

Published: Nov. 11, 2024

Tools based on generative artificial intelligence (AI), such as ChatGPT, have the potential to transform modern society, including the field of medicine. Due to the prominent role of language in psychiatry, e.g., for diagnostic assessment and psychotherapy, these tools may be particularly useful within this medical field. Therefore, the aim of this study was to systematically review the literature on generative AI applications in psychiatry and mental health.

Language: English

Citations: 6

The impact of history of depression and access to weapons on suicide risk assessment: a comparison of ChatGPT-3.5 and ChatGPT-4
Shiri Shinan‐Altman, Zohar Elyoseph, Inbar Levkovich, et al.

PeerJ, Journal Year: 2024, Volume and Issue: 12, P. e17468 - e17468

Published: May 29, 2024

The aim of this study was to evaluate the effectiveness of ChatGPT-3.5 and ChatGPT-4 in incorporating critical risk factors, namely history of depression and access to weapons, into suicide risk assessments. Both models were assessed using scenarios that featured individuals with and without a history of depression and access to weapons. The models estimated the likelihood of suicidal thoughts, suicide attempts, serious suicide attempts, and suicide-related mortality on a Likert scale. A multivariate three-way ANOVA with Bonferroni post hoc tests was conducted to examine the impact of the aforementioned independent factors (history of depression and access to weapons) on these outcome variables. History of depression was identified as a significant factor. ChatGPT-4 demonstrated a more nuanced understanding of the relationship between depression, access to weapons, and suicide risk. In contrast, ChatGPT-3.5 displayed limited insight into this complex relationship. ChatGPT-4 consistently assigned higher severity ratings to the outcome variables than did ChatGPT-3.5. This study highlights the potential of the two models, particularly ChatGPT-4, to enhance suicide risk assessment by considering complex risk factors.
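
As a minimal sketch of the analysis named above, the following runs a full-factorial three-way ANOVA over history of depression, access to weapons, and model version using statsmodels; the generated ratings are random placeholders, not the study's data, and the Bonferroni follow-up is indicated in a comment.

```python
# Three-way ANOVA over depression history, weapons access, and model version.
# Ratings here are synthetic placeholders, not the study's data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(0)
cells = [(d, w, m) for d in (0, 1) for w in (0, 1)
         for m in ("gpt35", "gpt4")]
rows = [{"depression": d, "weapons": w, "model": m,
         "rating": rng.normal(3 + d + w, 1.0)}  # synthetic Likert-like score
        for d, w, m in cells for _ in range(25)]
df = pd.DataFrame(rows)

# Full-factorial three-way ANOVA on the severity rating.
fit = ols("rating ~ C(depression) * C(weapons) * C(model)", data=df).fit()
print(sm.stats.anova_lm(fit, typ=2))

# Bonferroni-corrected post hoc comparisons would adjust the p-values of
# follow-up pairwise tests, e.g. via
# statsmodels.stats.multitest.multipletests(pvals, method="bonferroni").
```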

Language: English

Citations: 5

Improving Efficiency Through AI-Powered Customer Engagement by Providing Personalized Solutions in the Banking Industry

Buddhika Nishadi Kaluarachchi, Darshana Sedera

Advances in marketing, customer relationship management, and e-services book series, Journal Year: 2024, Volume and Issue: unknown, P. 299 - 342

Published: July 26, 2024

Artificial intelligence (AI) is revolutionizing banking by improving client engagement and operational efficiency with personalized solutions. This chapter analyses how AI-powered customer engagement enhances operations and customizes services. AI tools help banks learn customer preferences and behaviors by analyzing massive volumes of data, supporting a customer-centric strategy that promotes happiness and loyalty. The chapter reviews prominent banks' AI deployments through case studies, addresses data protection, ethics, and regulatory compliance, and offers advice for banks seeking competitive advantage. It also discusses trends like better credit evaluation, personalized services, and fraud protection. Banks can improve efficiency and provide tailored experiences using AI-driven service and marketing. For professionals interested in using AI to create a competitive edge, this chapter provides practical tactics, insights, and recommendations for successful AI adoption in financial services.

Language: English

Citations: 4