Can I trust my AI friend? The role of emotions, feelings of friendship and trust for consumers' information-sharing behavior toward AI DOI Creative Commons
Corina Pelău, Dan‐Cristian Dabija, Mihaela Stanescu

et al.

Oeconomia Copernicana, Journal year: 2024, Issue: 15(2), pp. 407-433

Published: June 30, 2024

Research background: AI devices and robots play an increasingly important role in consumers' everyday life by accompanying the consumer all day long. This presence has several utilitarian and social benefits, but at the same time the optimal functioning of AI requires personal information from the consumer. Purpose of the article: Starting from the premise that people share more with friends, we have tested empirically whether emotional AI behavior can evoke emotions and feelings of friendship in the relationship between consumers and their AI devices, leading to a higher self-disclosing behavior. Methods: To validate the proposed hypotheses, three mediation models were estimated using structural equation modelling in Smart-PLS 3.3.3, based on data collected with the help of an online survey. Findings & value added: We prove that AI's emotional behavior increases trust, and together with it the feelings of friendship determine perceived control over the shared private information, thus lowering perceived threats regarding vulnerability and exposure related to sharing data. These results have implications for designing consumer-AI interactions.
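
The mediation logic described in this abstract (emotional AI behavior feeding trust, which in turn shapes information sharing) can be illustrated with a minimal regression-based sketch. The variable names and simulated data below are assumptions for illustration only; the study itself estimated three mediation models with PLS-SEM in SmartPLS 3.3.3, not the ordinary-least-squares approach shown here.

```python
# Minimal sketch of a single mediation path (X -> M -> Y) with hypothetical
# variable names; NOT the authors' SmartPLS analysis or data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300

# Simulated survey-style data (placeholders):
# emotion = perceived emotional behavior of the AI, trust = trust in the AI,
# sharing = willingness to disclose personal information.
emotion = rng.normal(size=n)
trust = 0.5 * emotion + rng.normal(scale=0.8, size=n)
sharing = 0.4 * trust + 0.1 * emotion + rng.normal(scale=0.8, size=n)
df = pd.DataFrame({"emotion": emotion, "trust": trust, "sharing": sharing})

# Path a: X -> M; paths b and c' (direct effect): M, X -> Y
a = smf.ols("trust ~ emotion", df).fit().params["emotion"]
fit_y = smf.ols("sharing ~ trust + emotion", df).fit()
b, c_prime = fit_y.params["trust"], fit_y.params["emotion"]

print(f"indirect effect (a*b) = {a * b:.3f}, direct effect (c') = {c_prime:.3f}")
```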

Language: English

Cited by

7

Digital Companionship or Psychological Risk? The Role of AI Characters in Shaping Youth Mental Health DOI
Ritesh Bhat, Suhas Kowshik, S. Suresh

et al.

Asian Journal of Psychiatry, Journal year: 2025, Issue: 104, pp. 104356-104356

Published: Jan. 1, 2025

Language: English

Cited by

1

Trust in AI-driven chatbots: A systematic review DOI
Sheryl Wei Ting Ng, Renwen Zhang

Telematics and Informatics, Journal year: 2025, Issue: 97, pp. 102240-102240

Published: Jan. 9, 2025

Language: English

Cited by

1

Determinants of the Usage of ChatGPT in the Tourism and Hospitality Industry: A Model Proposal from the Technology Acceptance Perspective DOI Open Access
Alptekin Sökmen, Hasan Evrim Arıcı, Gürkan Çalışkan

et al.

Journal of Tourism and Gastronomy Studies, Journal year: 2024, Issue: unknown

Published: March 30, 2024

ChatGPT is a generative artificial intelligence technology that is becoming more widely used. This research aimed to identify the determinants of ChatGPT usage in the tourism and hospitality industry. For this purpose, a systematic literature review was conducted and the relevant articles were subjected to thematic analysis; six determinants were identified: experience quality, satisfaction, interaction, ethics, reliability, and design features. These variables affect behavioral intention regarding the use of ChatGPT. Within the scope of the findings, a model for the use of ChatGPT in the tourism and hospitality industry is proposed from the perspective of the Technology Acceptance Model, and recommendations are made for the accommodation sector.

Language: English

Cited by

6

Exploring the Sensitivity of LLMs’ Decision-Making Capabilities: Insights from Prompt Variations and Hyperparameters DOI Creative Commons
Manikanta Loya, Divya Sinha, Richard Futrell

et al.

Published: Jan. 1, 2023

The advancement of Large Language Models (LLMs) has led to their widespread use across a broad spectrum of tasks, including decision-making. Prior studies have compared the decision-making abilities of LLMs with those of humans from a psychological perspective. However, these studies have not always properly accounted for the sensitivity of LLMs' behavior to hyperparameters and variations in the prompt. In this study, we examine LLMs' performance on the Horizon decision-making task studied by Binz and Schulz (2023), analyzing how LLMs respond to variations in prompts and hyperparameters. By experimenting with three OpenAI language models possessing different capabilities, we observe that decision-making abilities fluctuate based on the input prompts and temperature settings. Contrary to previous findings, the language models display a human-like exploration-exploitation tradeoff after simple adjustments to the prompt.
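
The kind of sensitivity analysis summarized in this abstract (varying the prompt wording and the sampling temperature, then comparing model choices) can be sketched with a small sweep over an OpenAI chat model. The model name, prompt texts, and bandit framing below are placeholders, not the materials used by Loya, Sinha and Futrell (2023).

```python
# Minimal sketch of a prompt x temperature sweep against an OpenAI chat model.
# Model name, prompt wording, and temperature grid are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT_VARIANTS = [
    "Machine F gave 52 points and machine J gave 41 points. Which machine do you choose, F or J?",
    "You are playing two slot machines. F paid 52 points, J paid 41 points. Answer with F or J.",
]
TEMPERATURES = [0.0, 0.5, 1.0]

def query(prompt: str, temperature: float, model: str = "gpt-4o-mini") -> str:
    """Send one bandit-style prompt and return the raw model reply."""
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,
        max_tokens=5,
    )
    return (resp.choices[0].message.content or "").strip()

if __name__ == "__main__":
    for prompt in PROMPT_VARIANTS:
        for t in TEMPERATURES:
            print(f"T={t:.1f} | {prompt[:35]}... -> {query(prompt, t)}")
```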

Language: English

Cited by

12