2021 IEEE International Conference on Big Data (Big Data), Journal Year: 2024, Volume and Issue: unknown, P. 3858 - 3864
Published: Dec. 15, 2024
Language: English
Sociological Methods & Research, Journal Year: 2025, Volume and Issue: unknown
Published: April 22, 2025
Large language models (LLMs) provide cost-effective but possibly inaccurate predictions of human behavior. Despite growing evidence that predicted and observed behavior are often not interchangeable, there is limited guidance on using LLMs to obtain valid estimates of causal effects and other parameters. We argue that LLM predictions should be treated as potentially informative observations, while human subjects serve as a gold standard in a mixed-subjects design. This paradigm preserves validity and offers more precise estimates at lower cost than experiments relying exclusively on human subjects. We demonstrate and extend prediction-powered inference (PPI), a method that combines predicted and observed observations. We define the PPI correlation as a measure of interchangeability and derive the effective sample size for PPI. We also introduce a power analysis to optimally choose between costly but accurate human subjects and less accurate but cheap LLM predictions. Mixed-subjects designs could enhance scientific productivity and reduce inequality in access to costly evidence.
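As a rough illustration of the idea behind this abstract (not the authors' implementation), the basic prediction-powered mean estimator from the PPI literature can be sketched as follows; the function names and toy data are our own, and "PPI correlation" is rendered here simply as the Pearson correlation between gold-standard observations and LLM predictions:

```python
import numpy as np

def ppi_mean(y_labeled, yhat_labeled, yhat_unlabeled):
    """Prediction-powered estimate of a population mean.

    y_labeled:      gold-standard human observations (n,)
    yhat_labeled:   LLM predictions for those same n subjects (n,)
    yhat_unlabeled: LLM predictions for N additional, unlabeled units (N,)
    """
    # Rectifier: average bias of the LLM predictions, measured on the
    # small gold-standard sample.
    rectifier = np.mean(y_labeled - yhat_labeled)
    # Cheap large-sample mean of predictions, corrected by the rectifier.
    return np.mean(yhat_unlabeled) + rectifier

def ppi_correlation(y_labeled, yhat_labeled):
    """Pearson correlation between observed and predicted behavior,
    a simple proxy for how interchangeable the two are."""
    return np.corrcoef(y_labeled, yhat_labeled)[0, 1]

# Toy usage: LLM predictions are uniformly shifted by +0.5.
y = np.array([1.0, 2.0, 3.0])          # human observations
yhat = np.array([1.5, 2.5, 3.5])       # LLM predictions for the same units
yhat_extra = np.array([2.0, 4.0])      # LLM-only predictions
print(ppi_mean(y, yhat, yhat_extra))   # 2.5 (3.0 minus the 0.5 bias)
print(ppi_correlation(y, yhat))        # 1.0 (perfectly interchangeable shape)
```

The key property is that the rectifier keeps the estimate unbiased even when the LLM is systematically wrong, while the large unlabeled sample reduces variance.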
Language: English
Citations: 0
Cognition, Journal Year: 2024, Volume and Issue: 253, P. 105936 - 105936
Published: Aug. 31, 2024
Crossmodal correspondences, the tendency for a sensory feature/attribute in one modality (either physically present or merely imagined) to be associated with a feature in another modality, have been studied extensively, revealing consistent patterns, such as sweet tastes being associated with pink colours and round shapes across languages. This research explores whether such correspondences are captured by ChatGPT, the large language model developed by OpenAI. Across twelve studies, it investigates colour/shape-taste crossmodal correspondences in ChatGPT-3.5 and -4o, focusing on associations between shapes/colours and five basic tastes in three languages (English, Japanese, Spanish). Studies 1A-F examined taste-shape associations, using prompts to assess ChatGPT's association of angular and rounded shapes with tastes. The results indicated significant, consistent associations between shape and taste, with, for example, round shapes strongly associated with sweet/umami and angular shapes with bitter/salty/sour tastes. The magnitude of shape-taste matching appears greater in ChatGPT-4o than ChatGPT-3.5, whether ChatGPT is prompted in English, Spanish, or Japanese. Studies 2A-F focused on colour-taste associations across eleven colours; the results indicated that ChatGPT-4o, but not ChatGPT-3.5, generally replicates the patterns previously observed in human participants. Specifically, ChatGPT-4o associates sweet with pink, sour with yellow, salty with white/blue, bitter with black, and umami with red. However, the magnitude/similarity of shape/colour-taste associations is more pronounced (i.e., having little variance and a larger mean difference), which does not adequately reflect the subtle nuances typically seen in human crossmodal correspondences. These findings suggest that ChatGPT captures language- and GPT version-specific variations in crossmodal correspondences, albeit with some differences when compared to previous studies involving human participants. They contribute valuable knowledge to the field, explore the possibility that generative AI resembles human perceptual systems and cognition across languages, and provide insight into the development and evolution of the correspondences these models capture.
Language: English
Citations: 3
Psychology and Marketing, Journal Year: 2024, Volume and Issue: unknown
Published: Sept. 13, 2024
Abstract The fashion industry can benefit from generative AI because the AI‐assisted design process has the potential to be more efficient and cost‐ and time‐effective. Under mind perception theory, this study examines how consumers evaluate AI's experiential and intentional abilities and respond to AI‐designed versus human‐designed products. The results of three online experiments indicate that human‐designed products are generally more favorably evaluated than AI‐designed ones. Such preference arises mainly because people ascribe a better capacity to act and plan to humans than to AI, which makes them believe humans have greater expertise. The preference for human over AI design was found regardless of one's tendency to perceive AI threats. Receptivity to AI‐designed products is increased for highly functional (vs. self‐expressive) products, but such a moderating effect of product type may not hold in intra‐product comparisons. Incorporating human elements into the design process when introducing AI‐designed products can alleviate negative responses, though the effectiveness may vary depending on the levels of those elements. This study contributes to the literature by finding a perceptual superiority of human design and suggesting ways of changing the bias toward AI design.
Language: English
Citations: 2
Annals of Tourism Research, Journal Year: 2024, Volume and Issue: 108, P. 103819 - 103819
Published: Aug. 9, 2024
• Large language models can support stimuli evaluation.
• Synthetic data suits early research such as pretests; avoid it for main studies.
• Prompt-tuning and fine-tuning improve large language models' responses.
Language: English
Citations: 1
Research Square (Research Square), Journal Year: 2024, Volume and Issue: unknown
Published: Dec. 4, 2024
Language: English
Citations: 0
Journal of Marketing Management, Journal Year: 2024, Volume and Issue: unknown, P. 1 - 20
Published: Dec. 4, 2024
This paper aims to provide a comprehensive understanding of the extent to which generative AI (GAI) tools and Large Language Models (LLMs) can design new, creative, and meaningful products in the luxury industry. To this end, the research involves three qualitative studies to understand consumers' cognitive and emotional responses towards the GAI-designed outcome. Results reveal that consumers perceived GAI-designed products to reflect and reinforce the essence and symbolic values of luxury brands, and that their perception is influenced by knowledge of GAI authorship of the product. Finally, our findings open possible scenarios based on high/low creativity employment for the product vs. the quality of manufacturing materials for the product/brand (namely, a matrix).
Language: English
Citations: 0
Published: Jan. 29, 2024
The rise of large language models (LLMs) that generate human-like text has sparked debates over their potential to replace human participants in behavioral and cognitive research. We critically evaluate this replacement perspective to appraise the fundamental utility of LLMs in psychology and social science. Through a five-dimension framework (characterization, representation, interpretation, implication, utility) we identify six fallacies that undermine the replacement perspective: (1) equating token prediction with intelligence, (2) assuming LLMs represent the average human, (3) interpreting alignment as explanation, (4) anthropomorphizing AI, (5) essentializing identities, and (6) purporting LLMs as primary tools that directly reveal the mind. Rather than replacement, the evidence and arguments are consistent with a simulation perspective, where LLMs offer a new paradigm to simulate roles and model cognitive processes. We highlight limitations and considerations about internal, external, construct, and statistical validity, providing methodological guidelines for the effective integration of LLMs into psychological research, with a focus on model selection, prompt design, and ethical considerations. This perspective reframes the role of LLMs in behavioral and cognitive science, serving as linguistic simulators that shed light on the similarities and differences between machine intelligence and human cognition and thoughts.
Language: English
Citations: 0
British Journal of Industrial Relations, Journal Year: 2024, Volume and Issue: 63(1), P. 180 - 208
Published: Aug. 14, 2024
Abstract Despite initial research about the biases and perceptions of large language models (LLMs), we lack evidence on how LLMs evaluate occupations, especially in comparison to human evaluators. In this paper, we present a systematic comparison of occupational evaluations by GPT‐4 with those from an in‐depth, high‐quality recent survey of respondents in the UK. Covering the full ISCO‐08 occupational landscape, with 580 occupations and two distinct metrics (prestige and social value), our findings indicate that GPT‐4 scores are highly correlated with human scores across all major occupation groups. At the same time, GPT‐4 substantially under‐ or overestimates the prestige and social value of many occupations, particularly for emerging digital and stigmatized or illicit occupations. Our analyses show both the potential and the risk of using LLM‐generated data for sociological research. We also discuss the policy implications of the integration of LLM tools into the world of work.
Language: English
Citations: 0
Revista Inteligência Competitiva, Journal Year: 2024, Volume and Issue: 15, P. e0469 - e0469
Published: Oct. 20, 2024
Rapid advancements in artificial intelligence (AI) have significantly transformed how individuals and organizations engage with their work, particularly in research and academia. Universities are urgently developing protocols for student use of large language models (LLMs) in coursework, while peer-reviewed journals and conferences remain divided on the necessity of reporting AI assistance in manuscript development. This paper examines diverse perspectives on LLM usage in scholarly research, ranging from concerns about contamination to recognition of its potential benefits. Building on existing literature, we explore guidelines for how competitive intelligence (CI) researchers can effectively utilize GPT models, such as ChatGPT4, Scholar GPT, and Consensus, throughout the research cycle. These models, developed by OpenAI, employ generative AI to produce new content based on user prompts, with output quality dependent on input specificity. Despite their recognized potential for literature reviews, qualitative analysis, and data analysis, their full capabilities remain underutilized. The article provides a comprehensive guide for business researchers to integrate LLMs into planning, structuring, and executing research. Specific guidance is provided focused on competitive intelligence.
Language: English
Citations: 0
2021 IEEE International Conference on Big Data (Big Data), Journal Year: 2024, Volume and Issue: unknown, P. 3858 - 3864
Published: Dec. 15, 2024
Language: English
Citations: 0