AI as a Research Proxy: Navigating the New Frontier of Social Science Inquiry through Language Models
Antonina Rafikova, А. Н. Воронин

Research Square (Research Square), Journal Year: 2024, Volume and Issue: unknown

Published: Dec. 4, 2024

Abstract As artificial intelligence transforms the landscape of social science research, large language models (LLMs) like ChatGPT present both unprecedented opportunities and challenges. This study explores their application as "surrogates," or computational substitutes for human participants, in sociological and socio-psychological research. By simulating responses to complex socio-cultural issues, we investigate how well ChatGPT can replicate attitudes toward immigration, gender stereotypes, and attitudes toward LGB parenting. We utilized a general simulation model employing detailed demographic prompts to generate synthetic participant responses, assessing their accuracy and political biases. Our findings reveal a consistent liberal bias in the outputs. The results demonstrate ChatGPT's potential to simulate diverse human behaviors while highlighting its limitations in explanatory power and its susceptibility to existing societal biases. The research underscores the necessity of critically evaluating AI-generated data in social science contexts and calls for further refinement of LLM-based methodologies.
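
As a rough illustration of the demographic-prompting approach described above, the sketch below generates synthetic survey responses through the OpenAI chat API; the model name, persona wording, and survey item are assumptions for illustration, not the study's actual protocol.

```python
# Minimal sketch of demographic prompting for synthetic survey responses.
# Model name, prompt wording, and survey item are illustrative assumptions,
# not the protocol used in the study.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

profiles = [
    {"age": 34, "gender": "female", "education": "bachelor's degree", "country": "USA"},
    {"age": 58, "gender": "male", "education": "high school education", "country": "USA"},
]

item = ("Immigrants make my country a better place to live. "
        "(1 = strongly disagree, 5 = strongly agree)")

def synthetic_response(profile: dict) -> str:
    """Ask the model to answer one survey item in the voice of a demographic profile."""
    persona = (
        f"You are a {profile['age']}-year-old {profile['gender']} from {profile['country']} "
        f"with a {profile['education']}. Answer as this person would, with a single number 1-5."
    )
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; the study reports using ChatGPT
        messages=[
            {"role": "system", "content": persona},
            {"role": "user", "content": item},
        ],
        temperature=1.0,
    )
    return reply.choices[0].message.content.strip()

for p in profiles:
    print(p, "->", synthetic_response(p))
```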

Language: English

AI-Supported Participatory Workshops: Middle-Out Engagement for Crisis Events
Martin Tomitsch, Joel Fredericks, Marius Hoggenmüller

et al.

Urban Planning, Journal Year: 2025, Volume and Issue: 10

Published: Jan. 27, 2025

Considering the lived experience of communities is key when making decisions in complex scenarios, such as preparing for and responding to crisis events. The article reports on three participatory workshops which assigned community representative roles to workshop participants. Using role-playing as a method, participants were given the task of collaborating on a decision relating to a speculative crisis scenario. Across the workshops, we collected data about simulating a middle-out engagement approach and about the role of artificial intelligence (AI) in enhancing collaboration, supporting decision-making, and representing non-human actors. The article makes contributions to planning and design in the context of the UN Sustainable Development Goals. First, it presents insights into the use of AI for collaboration and decision-making in crisis event situations. Second, it discusses approaches for bringing more-than-human considerations into design. Third, it reflects on the value of the workshop format as a way to simulate a middle-out process, whereby actors from the top and the bottom collaborate towards informed decisions in crisis scenarios. Drawing on the findings, the article critically examines the challenges and risks associated with using AI in workshops for collaborative decision-making.

Language: English

Citations: 0

Beyond WEIRD: Can synthetic survey participants substitute for humans in global policy research?

Pujen Shrestha, Dario Krpan, Fatima Koaik

et al.

Behavioral Science & Policy, Journal Year: 2025, Volume and Issue: unknown

Published: Feb. 8, 2025

Researchers are testing the feasibility of using artificial intelligence tools known as large language models to create synthetic research participants (artificial entities that respond to surveys as real humans would). Thus far, this research has largely not been designed to examine whether synthetic participants could mimic human answers to policy-relevant questions or reflect the views of people from non-WEIRD (Western, educated, industrialized, rich, and democratic) nations. Addressing these gaps in one study, we compared synthetic and human participants' responses to survey questions in three domains: sustainability, financial literacy, and female participation in the labor force. Participants were drawn from the United States as well as two nations not previously included in studies with synthetic respondents: the Kingdom of Saudi Arabia and the United Arab Emirates. We found that for all nations, synthetic participants created by GPT-4, a form of large language model, on average produced responses reasonably similar to those of their human counterparts. Nevertheless, we observed some differences between the American and the non-WEIRD participants: for the latter, correlations across the full set of responses tended to be weaker. In addition, although there was a common tendency across countries for synthetic participants to show a more positive and less negative bias (that is, to appear more progressive and financially literate relative to their human counterparts), this trend was more pronounced for the non-WEIRD participants. We discuss the main policy implications of our findings and offer practical recommendations for improving the use of synthetic participants in research.
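
To make the comparison concrete, the sketch below correlates item-level mean responses from synthetic and human samples per country; all numbers are invented placeholders, not data from the study.

```python
# Sketch of comparing synthetic and human survey responses by correlating
# item-level means per country. The numbers are placeholders, not study data.
import numpy as np
from scipy.stats import pearsonr

# Mean agreement per survey item (one value per item) for each sample.
human_means = {
    "USA": np.array([3.8, 2.1, 4.0, 3.2, 2.7]),
    "KSA": np.array([3.1, 2.9, 3.6, 3.4, 2.5]),
}
synthetic_means = {
    "USA": np.array([4.0, 2.0, 4.2, 3.3, 2.9]),
    "KSA": np.array([3.6, 3.3, 4.0, 3.8, 3.0]),
}

for country in human_means:
    r, p = pearsonr(human_means[country], synthetic_means[country])
    # A weaker r for non-US samples would mirror the pattern reported in the abstract.
    print(f"{country}: r = {r:.2f} (p = {p:.3f})")
```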

Language: English

Citations: 0

ChatGPT and academic work: new psychological phenomena
Joost de Winter, Peter A. Hancock, Yke Bauke Eisma

et al.

AI & Society, Journal Year: 2025, Volume and Issue: unknown

Published: March 17, 2025

Language: English

Citations: 0

A whole new world, a new fantastic point of view: Charting unexplored territories in consumer research with generative artificial intelligence
Kiwoong Yoo, Michael Haenlein, Kelly Hewett

et al.

Journal of the Academy of Marketing Science, Journal Year: 2025, Volume and Issue: unknown

Published: April 11, 2025

Language: English

Citations: 0

Does episodic future thinking differ when considering the future of the natural environment? An experimental test
Taciano L. Milfont, Demi Claire Cuttance-Dunne, Joanne I. Ellis

et al.

Published: June 25, 2024

The cognitive ability of mental time travel allows people to project themselves backward and forward in time. Research has shown that anticipating and envisioning possible future events, and imagining ourselves experiencing those events, has many implications for health, finance, and environmental protection. While public policies in these domains consider events that may occur decades ahead in the future, current evidence suggests we typically adopt a much shorter temporal distance when spontaneously thinking about the future. Here we examined whether this typically short temporal distance would vary if individuals were primed to think of the natural environment, in a between-subjects experiment with online respondents (212 MTurkers). We compared responses from participants randomly assigned to think about the future in general (control condition), the future of the natural environment (main experimental condition), or the future of work (contrasting condition). Results confirmed a very short-term projection into the future of between 1 and 5 years. Supporting our intuition, results showed that participants in the natural environment condition were statistically more likely to project longer into the future (10 years or more) than those in the other conditions. We discuss the findings in relation to time-scale mismatches in policy making.
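
For readers who want the shape of the analysis, the sketch below tests whether the share of long-horizon responses (10 years or more) differs across the three conditions; the counts are invented placeholders, and a chi-square test is only one reasonable way to run such a comparison.

```python
# Sketch of a between-subjects comparison: does the proportion of long-horizon
# responses (10 years or more) differ across the three priming conditions?
# Counts are invented placeholders, not the study's data.
from scipy.stats import chi2_contingency

# Rows: conditions (control, natural environment, work).
# Columns: [short-term responses (<10 years), long-term responses (>=10 years)]
counts = [
    [62, 8],   # control: the future in general
    [48, 22],  # main experimental: future of the natural environment
    [60, 10],  # contrasting: future of work
]

chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.3f}")
```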

Language: English

Citations: 0

Exploring the Potential of Large Language Models for Text-Based Personality Prediction
Maria Molchanova

Lecture notes in computer science, Journal Year: 2024, Volume and Issue: unknown, P. 291 - 301

Published: Jan. 1, 2024

Language: English

Citations: 0

Testing new versions of ChatGPT in terms of physiology and electrophysiology of hearing: improved accuracy but not consistency
W. Wiktor Jędrzejczak, Henryk Skarżyński, Krzysztof Kochanek

et al.

medRxiv (Cold Spring Harbor Laboratory), Journal Year: 2024, Volume and Issue: unknown

Published: Oct. 8, 2024

Abstract Introduction: ChatGPT has revolutionized many aspects of modern life, including scientific ones. Since its introduction, new versions have been released and advertised as having better performance. But is this true? This study aimed to assess the accuracy and consistency of six versions of ChatGPT, including 3.5, 4, 4o mini, 4o, and o1 preview. Of particular interest was the variability of responses when the same question was asked multiple times. Methods: We evaluated the six versions based on their responses to 30 single-answer, multiple-choice exam questions from a 1-year course on objective methods of testing hearing. The questions were posed 10 times to each version across two days (5 times per day). Accuracy was evaluated in terms of agreement with the response key. To evaluate consistency (repeatability) over time, percent agreement and Cohen's Kappa were calculated. Results: Overall accuracy increased with each version, starting at around 53% for 3.5 and rising to 86% for o1 preview. The greatest improvement in repeatability came with the introduction of 4o. Repeatability progressively rose with newer releases, with the exception of 4o mini. While the current top version, o1 preview, gave results similar to the faster 4o, it had significantly lower repeatability than the older version. Conclusion: Newer versions generally show improved accuracy, but not improved repeatability, which is probably the main limitation for professional applications. Users must be especially careful.
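
The accuracy and repeatability metrics described here can be approximated as in the sketch below; the answer letters are invented placeholders, and pairwise agreement between repeated runs is one of several reasonable ways to operationalize repeatability.

```python
# Sketch of scoring accuracy and repeatability for repeated multiple-choice runs.
# Answer letters below are invented placeholders, not the study's responses.
from itertools import combinations
from sklearn.metrics import cohen_kappa_score

answer_key = list("ABCDACBDAB")   # correct options for 10 example questions
runs = [                          # the same questions answered on 3 occasions
    list("ABCDACBDAA"),
    list("ABCDBCBDAB"),
    list("ABCAACBDAB"),
]

# Accuracy: agreement of each run with the answer key.
for i, run in enumerate(runs, 1):
    acc = sum(a == k for a, k in zip(run, answer_key)) / len(answer_key)
    print(f"run {i}: accuracy = {acc:.0%}")

# Repeatability: percent agreement and Cohen's Kappa between pairs of runs.
for (i, r1), (j, r2) in combinations(enumerate(runs, 1), 2):
    agree = sum(a == b for a, b in zip(r1, r2)) / len(r1)
    kappa = cohen_kappa_score(r1, r2)
    print(f"runs {i}&{j}: agreement = {agree:.0%}, kappa = {kappa:.2f}")
```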

Language: English

Citations: 0

Vector Personas: How UX Researchers Can Use AI to Bring New Dimension to Traditional Persona Development
Claire Lauer, D.W. Storey, Richard Mark Soley

et al.

Published: Oct. 20, 2024

Personas, representing user groups with specific needs and behaviors, are crucial in user experience (UX) design. High-quality personas are based on robust data; otherwise, they may cause more problems than they solve. Generative AI like ChatGPT shows potential for enhancing persona development by integrating diverse data sources. However, its reliability is questionable due to a lack of contextual understanding and real-world experience. This paper introduces the concept of "Vector Personas," a method combining traditional qualitative techniques with generative AI to develop personas. Our UX team tested this approach by comparing personas created through traditional methods with those generated with AI assistance, using structured comparison and triangulation of feedback. The study highlights both the benefits and limitations of AI in persona development. While AI can enhance efficiency and offer new insights, it cannot replace the depth provided by human research. The Vector Persona process optimizes persona development by balancing time investment against information quality, combining AI outputs with human insights to create comprehensive and accurate representations.

Language: English

Citations: 0

A Double-Edged Sword: Physics Educators' Perspectives on Utilizing ChatGPT and Its Future in Classrooms
Hyewon Jang, Hyukjoon Choi

Journal of Science Education and Technology, Journal Year: 2024, Volume and Issue: unknown

Published: Nov. 26, 2024

Language: English

Citations: 0
