Journal of Consumer Psychology, Journal Year: 2025, Volume and Issue: unknown
Published: March 16, 2025
Language: English
Proceedings of the National Academy of Sciences, Journal Year: 2024, Volume and Issue: 121(34)
Published: Aug. 12, 2024
The social and behavioral sciences have been increasingly using automated text analysis to measure psychological constructs in text. We explore whether GPT, the large language model (LLM) underlying the AI chatbot ChatGPT, can be used as a tool for automated psychological text analysis in several languages. Across 15 datasets (n = 47,925 manually annotated tweets and news headlines), we tested whether different versions of GPT (3.5 Turbo, 4, and 4 Turbo) can accurately detect psychological constructs (sentiment, discrete emotions, offensiveness, and moral foundations) across 12 languages. We found that GPT (r = 0.59 to 0.77) performed much better than English-language dictionary analysis (r = 0.20 to 0.30) at detecting these constructs as judged by manual annotators. GPT performed nearly as well as, and sometimes better than, top-performing fine-tuned machine learning models. Moreover, GPT’s performance improved with each successive model, particularly for lesser-spoken languages, and became less expensive. Overall, GPT may be superior to many existing methods of automated text analysis, since it achieves relatively high accuracy across many languages, requires no training data, and is easy to use with simple prompts (e.g., “is this text negative?”) and little coding experience. We provide sample code and a video tutorial for analyzing text with the GPT application programming interface. We argue that GPT and other LLMs can help democratize automated text analysis by making advanced natural language processing capabilities more accessible, and may help facilitate cross-linguistic research with understudied languages.
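The abstract describes scoring text with simple prompts (e.g., “is this text negative?”) through the GPT application programming interface, and notes that the authors share their own sample code. The sketch below is not that code; it is a minimal illustration assuming the openai Python client (v1.x), an OPENAI_API_KEY environment variable, and an illustrative model name and helper function.

```python
# Minimal sketch of prompt-based sentiment annotation via the OpenAI chat API.
# Not the authors' released code; model name and helper are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def rate_negativity(text: str, model: str = "gpt-4-turbo") -> str:
    """Ask the model whether a short text is negative; return its one-word reply."""
    response = client.chat.completions.create(
        model=model,
        temperature=0,  # keep outputs stable for annotation-style tasks
        messages=[
            {"role": "system",
             "content": "You rate short texts. Answer with a single word: yes or no."},
            {"role": "user",
             "content": f"Is this text negative? Text: {text}"},
        ],
    )
    return response.choices[0].message.content.strip().lower()

if __name__ == "__main__":
    print(rate_negativity("The service was slow and the staff were rude."))
```

Running such a function over a manually annotated sample and correlating the outputs with the human labels would mirror the kind of accuracy comparison (the r values against annotators) reported in the abstract.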
Language: English
Citations: 50
Information Processing & Management, Journal Year: 2024, Volume and Issue: 61(3), P. 103665 - 103665
Published: Feb. 8, 2024
Language: English
Citations: 32
PNAS Nexus, Journal Year: 2024, Volume and Issue: 3(7)
Published: June 28, 2024
The emergence of large language models (LLMs) has sparked considerable interest in their potential application in psychological research, mainly as a model of the human psyche or as a general text-analysis tool. However, the trend of using LLMs without sufficient attention to their limitations and risks, which we rhetorically refer to as "GPTology", can be detrimental given the easy access to models such as ChatGPT. Beyond existing guidelines, we investigate the current limitations, ethical implications, and potential of LLMs specifically for psychological research, and we show their concrete impact in various empirical studies. Our results highlight the importance of recognizing global psychological diversity, caution against treating LLMs (especially in zero-shot settings) as universal solutions for text analysis, and call for transparent, open methods that address LLMs' opaque nature and support reliable, reproducible, and robust inference from AI-generated data. Acknowledging LLMs' utility for task automation, such as text annotation, and their potential to expand our understanding of human psychology, we argue for diversifying human samples and expanding psychology's methodological toolbox to promote an inclusive, generalizable science, countering homogenization and over-reliance on LLMs.
Language: English
Citations: 10
Philosophy & Technology, Journal Year: 2025, Volume and Issue: 38(1)
Published: March 1, 2025
Language: English
Citations: 1
Journal of Consumer Psychology, Journal Year: 2025, Volume and Issue: unknown
Published: March 16, 2025
Language: English
Citations: 1
Nature Human Behaviour, Journal Year: 2024, Volume and Issue: 8(9), P. 1643 - 1655
Published: Sept. 20, 2024
Language: English
Citations: 7
Nature Human Behaviour, Journal Year: 2024, Volume and Issue: unknown
Published: Oct. 11, 2024
Language: English
Citations: 7
Journal of Organizational Behavior, Journal Year: 2025, Volume and Issue: unknown
Published: Jan. 17, 2025
ABSTRACT Teams that combine human intelligence with artificial intelligence (AI) have become indispensable for solving complex tasks in various decision‐making contexts in modern organizations. However, the factors that contribute to AI convergence, where team members align their decisions with those of their AI counterparts, still remain unclear. This study integrates signaling theory and self‐determination theory to investigate how specific signals, such as signal fit, optional advice, and set congruence, affect employees' convergence with AI in human–AI teams. Based on four experimental studies in facial recognition and hiring contexts with approximately 1100 participants, the findings highlight the significant positive impact of congruent signals from both humans and AI on convergence. Moreover, providing employees with the option to solicit advice also enhances convergence; when recommendations are chosen by employees rather than forced upon them, participants are more likely to accept the advice. This research advances knowledge of human–AI teaming by (1) expanding signaling theory into the human–AI context; (2) developing a deeper understanding of convergence and its drivers in human–AI teams; (3) offering actionable insights for designing human–AI teams that optimize decision‐making in high‐stakes, uncertain environments; and (4) introducing an innovative context for human–AI teaming research.
Language: English
Citations: 0
AI & Society, Journal Year: 2025, Volume and Issue: unknown
Published: March 13, 2025
Language: English
Citations: 0
Research on Social Work Practice, Journal Year: 2025, Volume and Issue: unknown
Published: March 21, 2025
Purpose: While social work case management faces ongoing challenges with practice inefficiency, the emergence of artificial intelligence (AI) presents an innovative solution. This systematic review examines how AI is applied in case management. Method: A comprehensive search was conducted across databases for studies published after 2000. Empirical studies on AI-assisted case management were included in the review. Results: From 11,022 identified studies, eight met the inclusion criteria and were reviewed. The results indicated that the most commonly used techniques were machine learning and natural language processing. They were utilized in decision-making procedures, client identification, intervention classification, risk prevention, and service monitoring. Seven studies demonstrated effective outcomes. Discussion: Though applications remain in early development stages, this review reveals growing interest and promising potential benefits. These findings contribute to advancing social work practice on a global scale.
Language: English
Citations: 0