Enhancing or impeding? Exploring the dual impact of anthropomorphism in large language models on user aggression DOI
Yipeng Xi, Aitong Ji, Weihua Yu et al.

Telematics and Informatics, Journal year: 2024, Volume 95, P. 102194 - 102194

Published: Oct. 11, 2024

Language: English

Cited by

0

Exploring people's perceptions of LLM-generated advice DOI Creative Commons
Joel Wester, S. de Jong, Henning Pohl et al.

Computers in Human Behavior: Artificial Humans, Journal year: 2024, Volume 2(2), P. 100072 - 100072

Published: June 7, 2024

When searching and browsing the web, more and more of the information we encounter is generated or mediated through large language models (LLMs). This can be when looking for a recipe, getting help on an essay, or seeking relationship advice. Yet, there is limited understanding of how individuals perceive advice provided by these LLMs. In this paper, we explore people's perception of LLM-generated advice, and what role diverse user characteristics (i.e., personality and technology readiness) play in shaping their perception. Further, as LLM-generated advice can be difficult to distinguish from human advice, we assess the perceived creepiness of such advice. To investigate this, we run an exploratory study (N = 91), where participants rate advice in different styles (generated with GPT-3.5 Turbo). Notably, our findings suggest that users who identify as agreeable tend to like the advice and find it useful. Users with higher technological insecurity are more likely to follow the advice and find it useful, and to deem it advice a friend could have given. Lastly, we see that the 'skeptical' style was rated as most unpredictable, and the 'whimsical' style as least malicious, indicating that LLM advice styles influence user perceptions. Our results also provide an overview of people's considerations regarding likelihood and receptiveness to advice when they seek it from digital assistants. Based on our results, we contribute design takeaways and outline future research directions to further inform and support applications targeting people's expectations and needs.

Language: English

Cited by

12

“As an AI language model, I cannot”: Investigating LLM Denials of User Requests DOI Creative Commons
Joel Wester, Tim Schrills, Henning Pohl et al.

Published: May 11, 2024

Users ask large language models (LLMs) to help with their homework, for lifestyle advice, or for support in making challenging decisions. Yet LLMs are often unable to fulfil these requests, either as a result of technical inabilities or policies restricting their responses. To investigate the effect of denying user requests, we evaluate participants' perceptions of different denial styles. We compare specific denial styles (baseline, factual, diverting, and opinionated) across two studies, respectively focusing on an LLM's technical limitations and on social policy restrictions. Our results indicate significant differences in users' perceptions of denials between the denial styles. The baseline denial, which provided participants with a brief denial without any motivation, was rated significantly higher on frustration and lower on usefulness, appropriateness, and relevance. In contrast, we found that participants generally appreciated the diverting denial style. We provide design recommendations for LLM denials that better meet people's expectations.

Language: English

Cited by

10

Navigating Emotions Through Art: Recommendations for Designing Art-Therapy Based Chatbots for Trauma-Impacted Youth DOI

Christine Wu, Ila Kumar, Rosalind W. Picard et al.

Published: April 23, 2025

Language: English

Cited by

0

Initiating the Global AI Dialogues: Laypeople Perspectives on the Future Role of genAI in Society from Nigeria, Germany and Japan DOI
Michel Hohendanner, Chiara Ullstein, Bukola A. Onyekwelu et al.

Published: April 24, 2025

Language: English

Cited by

0

Interrogating AI: Characterizing Emergent Playful Interactions with ChatGPT DOI

Mohammad Ronagh Nikghalb, Jinghui Cheng

Proceedings of the ACM on Human-Computer Interaction, Journal year: 2025, Volume 9(2), P. 1 - 23

Published: May 2, 2025

In an era of AI's growing capabilities and influences, recent advancements are reshaping HCI and CSCW's view of AI. Playful interactions have emerged as an important way for users to make sense of ever-changing AI technologies, yet they have remained underexamined. We target this gap by investigating playful interactions exhibited by users of a popular AI technology, ChatGPT. Through a thematic analysis of 372 user-generated posts on the ChatGPT subreddit, we found that more than half (54%) of the user discourse revolved around playful interactions. The analysis further allowed us to construct a preliminary framework to describe these interactions, categorizing them into six types: reflecting, jesting, imitating, challenging, tricking, and contriving; each included several sub-categories. This study contributes to CSCW by identifying the diverse ways users engage in playful interactions with AI. It examines how such interactions can help us understand user agency, shape human-AI relationships, and provide insights for designing AI systems.

Language: English

Cited by

0
