The Influence of Generative AI on Interpersonal Communication Dynamics
Sumeyya Akdilek, Ibrahim Akdilek, Narissra M. Punyanunt-Carter

et al.

Advances in educational technologies and instructional design book series, Journal Year: 2024, Volume and Issue: unknown, P. 167 - 190

Published: Feb. 12, 2024

In this comprehensive exploration, the interaction between generative AI and interpersonal communication is examined. The initial sections delve into the characteristics and limitations of AI-generated responses, highlighting challenges in interpreting context and non-verbal cues, and present the potential for AI-driven communication skill development. The discussion progresses to address changing dynamics in the classroom, contrasting traditional communication training with AI-augmented methods. The efficacy of group discussions and role-plays is assessed, with a central focus on whether AI augments or diminishes human connections. The final sections explore how AI is reshaping our understanding of effective communication and the necessity for educators to uphold the human element while leveraging AI's skill-enhancing capabilities. This review offers insights into the evolving landscape of communication, shedding light on its opportunities, challenges, and path forward.

Language: English

Students’ perceptions of using ChatGPT in a physics class as a virtual tutor
Lu Ding, Tong Li, Shiyan Jiang

et al.

International Journal of Educational Technology in Higher Education, Journal Year: 2023, Volume and Issue: 20(1)

Published: Dec. 22, 2023

Abstract The latest development of Generative Artificial Intelligence (GenAI), particularly ChatGPT, has drawn the attention of educational researchers and practitioners. We have witnessed many innovative uses of ChatGPT in STEM classrooms. However, studies regarding students’ perceptions of ChatGPT as a virtual tutoring tool in education are rare. The current study investigated undergraduate students using ChatGPT in a physics class as an assistant for addressing their questions. Specifically, we examined the accuracy of ChatGPT in answering physics questions, the relationship between students’ trust levels and answer accuracy, and the influence of trust on students’ use of ChatGPT. Our finding indicates that despite the inaccuracy of GenAI in question answering, most students trust its ability to provide correct answers. Trust is also associated with students’ use of GenAI. In addition, this study sheds light on students’ misconceptions toward GenAI and provides suggestions for future considerations in AI literacy teaching and research.

Language: English

Citations: 67

Green power of virtual influencer: The role of virtual influencer image, emotional appeal, and product involvement
Kan Jiang, Junyuan Zheng, Shaohua Luo

et al.

Journal of Retailing and Consumer Services, Journal Year: 2023, Volume and Issue: 77, P. 103660 - 103660

Published: Dec. 12, 2023

Language: English

Citations: 42

Understanding Public Perceptions of AI Conversational Agents: A Cross-Cultural Analysis
Zihan Liu, Han Li, Anfan Chen

et al.

Published: May 11, 2024

Conversational Agents (CAs) have increasingly been integrated into everyday life, sparking significant discussions on social media. While previous research has examined public perceptions of AI in general, there is a notable lack of work focused on CAs, with even fewer investigations into cultural variations in CA perceptions. To address this gap, this study used computational methods to analyze about one million social media posts surrounding CAs and compared people's discourses in the US and China. We find that Chinese participants tended to view CAs hedonically, perceived voice-based and physically embodied CAs as warmer and more competent, and generally expressed more positive emotions. In contrast, US participants saw CAs functionally, with an ambivalent attitude. Warm perception was a key driver of positive emotions toward CAs in both countries. We discuss practical implications for designing contextually sensitive and user-centric CAs that resonate with various users' preferences and needs.

Language: English

Citations: 18

Chatbots as Social Companions
Michael S. A. Graziano, Rose E. Guingrich

Oxford University Press eBooks, Journal Year: 2025, Volume and Issue: unknown

Published: March 19, 2025

Abstract As artificial intelligence (AI) becomes more widespread, one question that arises is how human–AI interaction might impact human–human interaction. Chatbots, for example, are increasingly used as social companions, and while much is speculated, little is known empirically about how their use impacts human relationships. A common hypothesis is that relationships with companion chatbots are detrimental to social health by harming or replacing human interaction, but this hypothesis may be too simplistic, especially considering the social needs of users and their preexisting human relationships. To understand how companion chatbots relate to social health, this study evaluates people who regularly used companion chatbots and people who did not use them. Contrary to expectations, chatbot users indicated that these relationships were beneficial to their social health, whereas non-users viewed them as harmful. Another common assumption is that people perceive conscious, humanlike AI as disturbing and threatening. Among both users and non-users, however, the results suggest the opposite: perceiving companion chatbots as conscious and humanlike correlated with more positive opinions and more pronounced benefits. Detailed accounts from users suggested that these chatbots aid social health by supplying reliable and safe interactions, without necessarily harming human relationships, and that the benefits depend on how users perceive likeness and mind in the chatbot.

Language: English

Citations: 3

Do chatbots establish “humanness” in the customer purchase journey? An investigation through explanatory sequential design
Yogesh K. Dwivedi, Janarthanan Balakrishnan, Abdullah M. Baabdullah

et al.

Psychology and Marketing, Journal Year: 2023, Volume and Issue: 40(11), P. 2244 - 2271

Published: Aug. 19, 2023

Abstract Chatbots incorporate various behavioral and psychological marketing elements to satisfy customers at different stages of their purchase journey. This research follows the foundations of the Elaboration Likelihood Model (ELM) and examines how cognitive and peripheral cues impact experiential dimensions, leading to chatbot user recommendation intentions. The study introduced warmth and competence as mediating variables in both the purchase and postpurchase stages, utilizing a robust explanatory sequential mixed‐method design. The researchers tested and validated the proposed conceptual model using a 3 × factorial design, collecting 354 responses in the purchase stage and 286 in the postpurchase stage. In the second stage, they conducted in‐depth qualitative interviews (Study 2) to gain further insights into the validity of the experimental results (Study 1). The results obtained from Study 1 revealed that “cognitive cues” and “competence” significantly influence recommendation intentions among users. On the other hand, “peripheral cues” contribute to the positive experiences encountered during the journey. The qualitative stage identified 69 thematic codes through exploratory research, providing a deeper understanding of the variables. Theoretically, this study extends the ELM by introducing new dimensions of human‐machine interactions at the heart of digital transformation. From a managerial standpoint, it emphasizes the significance of adding a “humanness” element to chatbot development to engage customers more actively.

Language: English

Citations: 34

Chatbots in frontline services and customer experience: An anthropomorphism perspective
Mai Nguyen, Lars‐Erik Casper Ferm, Sara Quach

et al.

Psychology and Marketing, Journal Year: 2023, Volume and Issue: 40(11), P. 2201 - 2225

Published: Aug. 9, 2023

Abstract This study measures the effects of chatbot anthropomorphic language on customers' perception of competence and authenticity and on customer engagement, while taking into consideration the moderating roles of humanlike appearance and brand credibility. We conducted two experimental studies to examine the conceptual framework. Study 1 tests the moderating effect of a chatbot's humanlike appearance on the relationship between chatbots' anthropomorphic language and customer engagement. Study 2 tests the moderating role of brand credibility. The findings confirm that interaction via chatbots' use of avatars and anthropomorphic language, such as emojis, in conversations with customers influences customer engagement, and that this effect is mediated by perceived authenticity. Further, the positive effect on perceived competence, and subsequently on engagement, was only significant when brand credibility was low (vs. high). The study offers insights and provides suggestions on how to devise efficient strategies for engaging customers with chatbots.

Language: English

Citations: 32

Chatbot or human? The impact of online customer service on consumers' purchase intentions
Shili Chen, Xiaolin Li, Kecheng Liu

et al.

Psychology and Marketing, Journal Year: 2023, Volume and Issue: 40(11), P. 2186 - 2200

Published: June 27, 2023

Abstract Artificial intelligence (AI) chatbots and human employees have emerged as the dominant forms of online customer service. However, existing research rarely connects the service differences between them with product type, ignoring the interactivity of the two. This study reveals the effect of matching customer service type (AI chatbot vs. human) to product type (search vs. experience) on consumers' purchase intentions through four experiments, revealing the psychological mechanism and boundary condition for the existence of this effect. It shows that (1) the match positively affects purchase intentions; (2) this effect is mediated by processing fluency and perceived service quality; and (3) it works only when demand certainty is low. These findings enrich theoretical research on online customer service and provide marketing insights for companies seeking to improve the adoption of AI chatbots and human employees.

Language: English

Citations: 27

Anthropomorphism and consumer behaviour: A SPAR‐4‐SLR protocol compliant hybrid review
Fateh Mohd Khan, Mohammad Anas, S.M. Fatah Uddin

et al.

International Journal of Consumer Studies, Journal Year: 2023, Volume and Issue: 48(1)

Published: Sept. 27, 2023

Abstract The notion of ‘anthropomorphism’ has been a subject of intrigue for transdisciplinary academics and scholars for the longest time, as the origin of this concept dates back to BCE (Before Common Era). Over the past few decades, anthropomorphism literature has been burgeoning in the marketing discipline and its subfields (branding, advertising, consumer behaviour, etc.). This relatively novel stream offers fascinating insights into consumers, their choices, and intentions. Although there have been several qualitative review‐based assessments within the field, none are informed by quantitative tools or a framework‐based approach. Our hybrid variant of systematic review fills this gap by using bibliometric techniques (performance analysis, co‐authorship analysis of countries and authors, and co‐word analysis of keywords) and the Theories‐Context‐Characteristics‐Methods (TCCM) framework to show the evolution, trends, and intellectual structure of anthropomorphism in consumer behaviour research. We depict the evolving trajectory over time with a sample of 432 peer‐reviewed journal articles and 27,671 secondary references (between 2005 and 2023) on anthropomorphism and consumer behaviour. Significant results include identifying and describing the most influential articles, journals, and countries, the different research streams, their development, and future directions. We also present six knowledge clusters delineating the field. An additional section depicting the theories employed, characteristics explored, contexts examined, and methods utilized in the domain is presented. Furthermore, we used the TCCM framework to orchestrate possible future research trajectories. By doing this, we offer practitioners a comprehension of advancements in the field and a comprehensive road map.

Language: English

Citations: 24

Expectations and beyond: The nexus of AI instrumentality and brand credibility in voice assistant retention using extended expectation‐confirmation model
Ezlika M. Ghazali, Dilip S. Mutum, Na Kai Lun

et al.

Journal of Consumer Behaviour, Journal Year: 2023, Volume and Issue: 23(2), P. 655 - 675

Published: Aug. 7, 2023

Abstract AI‐based voice assistants (AIVA) are capable of interpreting human speech and responding with useful information, aiding tasks, and controlling other devices. The usage of these AIVAs has grown significantly worldwide. Despite this growth, studies on user behavior related to continued use intention, and its effects on the long‐term commercial sustainability of brands, remain scarce. What is less understood is the potential of AI instrumentality attributes and brand credibility components in provoking shifts in post‐use intentions toward AIVAs. This study proposes a model which expands the Expectation‐Confirmation Model for the continuance of AIVAs by integrating the user's technology traits, AI instrumentality, and brand credibility. To verify the research hypotheses, the study employed partial least squares structural equation modelling, based on 281 validated survey responses. The study highlights the significance of Optimism, Innovativeness, and Discomfort in post‐adoption confirmation. Higher confirmation is strongly associated with perceived intelligence, anthropomorphism, information quality, and system quality. Anthropomorphism and information quality are key factors of expertise, while anthropomorphism and system quality are significant for trustworthiness. The study confirms that expertise and trustworthiness lead to satisfaction and continued use intention. Understanding these antecedents extends the existing literature and provides valuable insights to academics and practitioners alike. Implications for researchers and managers are discussed.

Language: English

Citations: 23

Evaluating the Experience of LGBTQ+ People Using Large Language Model Based Chatbots for Mental Health Support
Zilin Ma, Yiyang Mei, Yinru Long

et al.

Published: May 11, 2024

LGBTQ+ individuals are increasingly turning to chatbots powered by large language models (LLMs) to meet their mental health needs. However, little research has explored whether these chatbots can adequately and safely provide tailored support for this demographic. We interviewed 18 LGBTQ+ and 13 non-LGBTQ+ participants about their experiences with LLM-based chatbots. LGBTQ+ participants relied on these chatbots for mental health support, likely due to an absence of support in real life. Notably, while LLMs offer prompt support, they frequently fall short of grasping the nuances of LGBTQ-specific challenges. Although fine-tuning LLMs to address these needs can be a step in the right direction, it isn't a panacea. The deeper issue is entrenched societal discrimination. Consequently, we call on future researchers and designers to look beyond mere technical refinements and advocate for holistic strategies that confront and counteract the biases burdening the LGBTQ+ community.

Language: English

Citations: 15