Unveiling College Student Preferences: Integrating Numerical and Factor Analysis in Understanding Choices for Mathematics Majors

Fitri Ali Rahmayani,

Sulaiman Muritala Hambali,

Amin Abrishami Moghadam

et al.

Interval Indonesian Journal of Mathematical Education, Year: 2023, Issue 1(2), pp. 83-98

Published: Dec. 26, 2023

Purpose of the study: This study aims to understand the factors that influence students in choosing a mathematics major, using the factor analysis method. Methodology: Data were collected through structured interviews with 150 students at two different universities, selected using stratified random sampling techniques. Analysis was performed with Principal Component Analysis (PCA) and Varimax rotation to identify the main dimensions of student preferences. Numerical analysis helped group the variables into relevant factors based on their loading values. Main Findings: The factors influencing the choice of a mathematics major consist of 19 variables, which are grouped into 5 factors: the first is privileges and facilities, with an eigenvalue of 4.088; the second, lecture buildings and social environment, 2.431; the third, promotion, 1.743; the fourth, job prospects, 1.351; and the fifth, comfort, 1.148. Novelty/Originality of this study: These findings provide new insights for educational institutions in designing effective promotional strategies and developing curricula to increase the attractiveness of mathematics majors. The novelty lies in the application of factor analysis to map students' specific reasons, which has rarely been done before in the context of higher education.
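The workflow this abstract describes (standardizing survey items, extracting principal components, and Varimax-rotating the loadings to name factors) is standard enough to sketch. The sketch below is illustrative only: the data matrix is random, and the variable names are ours, not the study's.

```python
# Sketch of the PCA + Varimax workflow described in the abstract.
# The data matrix is synthetic; in the study it would be 150 respondents x 19 items.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-6):
    """Orthogonal Varimax rotation of a factor-loading matrix."""
    p, k = loadings.shape
    R = np.eye(k)
    var = 0.0
    for _ in range(max_iter):
        L = loadings @ R
        u, s, vt = np.linalg.svd(
            loadings.T @ (L**3 - (gamma / p) * L @ np.diag(np.sum(L**2, axis=0)))
        )
        R = u @ vt
        new_var = np.sum(s)
        if new_var - var < tol:
            break
        var = new_var
    return loadings @ R

rng = np.random.default_rng(0)
X = rng.normal(size=(150, 19))             # placeholder for the survey responses
X_std = StandardScaler().fit_transform(X)  # standardize items before PCA

pca = PCA(n_components=5).fit(X_std)       # the study retained 5 factors
loadings = pca.components_.T * np.sqrt(pca.explained_variance_)
rotated = varimax(loadings)

print("Eigenvalues:", pca.explained_variance_)
# Each item is assigned to the factor on which it loads most heavily.
print("Item -> factor:", np.argmax(np.abs(rotated), axis=1))
```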

Language: English

Dissociating language and thought in large language models
Kyle Mahowald, Anna A. Ivanova, Idan Blank

et al.

Trends in Cognitive Sciences, Year: 2024, Issue 28(6), pp. 517-540

Published: March 19, 2024

Language: English

Cited by

115

Large Language Models and the Reverse Turing Test
Terrence J. Sejnowski

Neural Computation, Year: 2023, Issue 35(3), pp. 309-342

Published: Feb. 6, 2023

Large language models (LLMs) have been transformative. They are pretrained foundational models that are self-supervised and can be adapted with fine-tuning to a wide range of natural language tasks, each of which previously would have required a separate network model. This is one step closer to the extraordinary versatility of human language. GPT-3 and, more recently, LaMDA, both of them LLMs, can carry on dialogs with humans on many topics after minimal priming with a few examples. However, there has been a wide range of reactions and debate on whether these LLMs understand what they are saying or exhibit signs of intelligence. This high variance is exhibited in three interviews with LLMs reaching wildly different conclusions. A new possibility was uncovered that could explain this divergence. What appears to be intelligence in LLMs may in fact be a mirror that reflects the intelligence of the interviewer, a remarkable twist that could be considered a reverse Turing test. If so, then by studying interviews, we may be learning more about the intelligence and beliefs of the interviewer than about the LLMs. As LLMs become more capable, they may transform the way we interact with machines and how they interact with each other. Increasingly, LLMs are being coupled with sensorimotor devices. LLMs can talk the talk, but can they walk the walk? A road map for achieving artificial general autonomy is outlined with seven major improvements inspired by brain systems, which in turn can be used to uncover new insights into brain function.

Language: English

Cited by

108

Symbols and grounding in large language models
Ellie Pavlick

Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, Year: 2023, Issue 381(2251)

Published: June 4, 2023

Large language models (LLMs) are one of the most impressive achievements of artificial intelligence in recent years. However, their relevance to the study of language more broadly remains unclear. This article considers the potential of LLMs to serve as models of language understanding in humans. While debate on this question typically centres around models' performance on challenging language understanding tasks, this article argues that the answer depends on the models' underlying competence, and thus that the focus of the debate should be on empirical work which seeks to characterize the representations and processing algorithms that underlie model behaviour. From this perspective, the article offers counterarguments to two commonly cited reasons why LLMs cannot serve as plausible models of language in humans: their lack of symbolic structure and their lack of grounding. For each, a case is made that recent empirical trends undermine the common assumptions about LLMs, and thus that it is premature to draw conclusions about LLMs' ability (or lack thereof) to offer insights on human language representation and understanding. This article is part of a discussion meeting issue 'Cognitive artificial intelligence'.

Language: English

Cited by

50

Using large language models to generate silicon samples in consumer and marketing research: Challenges, opportunities, and guidelines
Marko Sarstedt, Susanne Adler,

Lea Rau

et al.

Psychology and Marketing, Year: 2024, Issue 41(6), pp. 1254-1270

Published: Feb. 10, 2024

Should consumer researchers employ silicon samples and artificially generated data based on large language models, such as GPT, to mimic human respondents' behavior? In this paper, we review recent research that has compared result patterns from silicon and human samples, finding that the results vary considerably across different domains. Based on these results, we present specific recommendations for silicon sample use in consumer and marketing research. We argue that silicon samples hold particular promise in upstream parts of the research process, such as qualitative pretesting and pilot studies, where researchers collect external information to safeguard follow-up design choices. We also provide a critical assessment of using silicon samples in main studies. Finally, we discuss ethical issues and future research avenues.

Language: English

Cited by

19

On the creativity of large language models
Giorgio Franceschelli, Mirco Musolesi

AI & Society, Year: 2024, Issue unknown

Published: Nov. 28, 2024

Large language models (LLMs) are revolutionizing several areas of Artificial Intelligence. One of the most remarkable applications is creative writing, e.g., poetry or storytelling: the generated outputs are often of astonishing quality. However, a natural question arises: can LLMs really be considered creative? In this article, we first analyze the development of LLMs under the lens of creativity theories, investigating the key open questions and challenges. In particular, we focus our discussion on the dimensions of value, novelty, and surprise as proposed by Margaret Boden in her work. Then, we consider different classic perspectives, namely product, process, press, and person. We discuss a set of "easy" and "hard" problems in machine creativity, presenting them in relation to LLMs. Finally, we examine the societal impact of these technologies, with a particular focus on the creative industries, analyzing the opportunities offered, the challenges arising from them, and the potential associated risks, from both legal and ethical points of view.

Language: English

Cited by

17

(Ir)rationality and cognitive biases in large language models
Olivia Macmillan-Scott, Mirco Musolesi

Royal Society Open Science, Year: 2024, Issue 11(6)

Published: June 1, 2024

Do large language models (LLMs) display rational reasoning? LLMs have been shown to contain human biases due to the data they have been trained on; whether this is reflected in rational reasoning remains less clear. In this paper, we answer this question by evaluating seven language models using tasks from the cognitive psychology literature. We find that, like humans, LLMs display irrationality in these tasks. However, the way this irrationality is displayed does not reflect that shown by humans. When incorrect answers are given by LLMs to these tasks, they are often incorrect in ways that differ from human-like biases. On top of this, the LLMs reveal an additional layer of irrationality in the significant inconsistency of their responses. Aside from the experimental results, this paper seeks to make a methodological contribution by showing how we can assess and compare the capabilities of different types of models, in this case with respect to rational reasoning.

Language: English

Cited by

9

Language models and psychological sciences
Giuseppe Sartori, Graziella Orrù

Frontiers in Psychology, Year: 2023, Issue 14

Published: Oct. 20, 2023

Large language models (LLMs) are demonstrating impressive performance on many reasoning and problem-solving tasks from cognitive psychology. When tested, their accuracy is often on par with that of average neurotypical adults, challenging long-standing critiques of associative models. Here we analyse recent findings at the intersection of LLMs and cognitive science. We discuss how modern LLMs resurrect associationist principles, with abilities like long-distance associations enabling complex reasoning. While limitations remain in areas like causal cognition and planning, phenomena like emergence suggest room for growth. Providing examples and increasing the dimensions of the network are methods that further improve LLM abilities, mirroring facilitation effects in human cognition. Analysis of the models' errors provides insight into their biases. Overall, we argue that LLMs represent a promising development for cognitive modelling, enabling new explorations of the mechanisms underlying intelligence from an associationist point of view. Carefully evaluating LLMs with the tools of cognitive psychology will help us understand the building blocks of the mind.

Language: English

Cited by

13

Large language models are able to downplay their cognitive abilities to fit the persona they simulate
Jiří Milička, Anna Marklová,

Klára VanSlambrouck

et al.

PLoS ONE, Year: 2024, Issue 19(3), Article e0298522

Published: March 13, 2024

This study explores the capabilities of large language models to replicate the behavior of individuals with underdeveloped cognitive and language skills. Specifically, we investigate whether these models can simulate child-like language development while solving false-belief tasks, namely, change-of-location and unexpected-content tasks. GPT-3.5-turbo and GPT-4 by OpenAI were prompted to simulate children (N = 1296) aged one to six years. The simulation was instantiated through three types of prompts: plain zero-shot, chain-of-thoughts, and primed-by-corpus. We evaluated the correctness of responses to assess the models' capacity to mimic the skills of the simulated children. Both models displayed a pattern of increasing correctness in their responses with rising simulated age and complexity. That is in correspondence with the gradual enhancement of linguistic abilities during child development, which is described in a vast body of research literature on child development. GPT-4 generally exhibited a closer alignment with the developmental curve observed in 'real' children. However, it displayed hyper-accuracy under certain conditions, notably in the primed-by-corpus prompt type. Task type and the choice of language model influenced the developmental patterns, while temperature and the gender of the simulated parent did not consistently impact the results. We conducted analyses of linguistic complexity, examining utterance length and Kolmogorov complexity. These analyses revealed a gradual increase in linguistic complexity corresponding to the age of the simulated children, regardless of other variables. These findings show that the models are capable of downplaying their abilities to achieve faithful simulations of the prompted personas.
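Since Kolmogorov complexity is uncomputable, analyses like the one mentioned here typically approximate it with the compressed size of the text. A minimal sketch of that proxy, together with a simple utterance-length measure, is given below; the example utterances are invented for illustration, not taken from the paper.

```python
# Kolmogorov complexity is uncomputable; a standard proxy is the size of an
# utterance after lossless compression. Example utterances are invented here.
import zlib

def utterance_length(text: str) -> int:
    """Length in tokens, approximated by whitespace splitting."""
    return len(text.split())

def kolmogorov_proxy(text: str) -> int:
    """Compressed size in bytes as an upper-bound estimate of complexity."""
    return len(zlib.compress(text.encode("utf-8"), level=9))

simulated = {
    2: "ball gone",
    4: "I think the ball is in the basket",
    6: "Sally will look in the basket because she did not see Anne move the ball",
}
for age, utterance in simulated.items():
    print(age, utterance_length(utterance), kolmogorov_proxy(utterance))
```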

Language: English

Cited by

5

Attention heads of large language models

Zifan Zheng,

Yezhaohui Wang,

Yuxin Huang

et al.

Patterns, Year: 2025, Issue 6(2), Article 101176

Published: Feb. 1, 2025

Large language models (LLMs) have demonstrated performance approaching human levels in tasks such as long-text comprehension and mathematical reasoning, but they remain black-box systems. Understanding the reasoning bottlenecks of LLMs remains a critical challenge, as these limitations are deeply tied to their internal architecture. Attention heads play a pivotal role in LLM reasoning and share similarities with human brain functions. In this review, we explore the roles and mechanisms of attention heads to help demystify the internal reasoning processes of LLMs. We first introduce a four-stage framework inspired by the human thought process. Using this framework, we review existing research to identify and categorize the functions of specific attention heads. Additionally, we analyze the experimental methodologies used to discover these special heads and further summarize relevant evaluation methods and benchmarks. Finally, we discuss the limitations of current research and propose several potential future directions.
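Head-level analyses of the kind this review surveys start from per-head attention maps, which standard tooling exposes directly. A minimal sketch using the Hugging Face transformers library follows; the choice of GPT-2 and the crude "previous-token head" probe are our illustrative assumptions, not the review's methodology.

```python
# Extract per-head attention maps from a small transformer to inspect
# individual heads. GPT-2 is used only as a convenient open model.
import torch
from transformers import GPT2Tokenizer, GPT2Model

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2", output_attentions=True)
model.eval()

inputs = tokenizer("The cat sat on the mat", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions: one tensor per layer, each of shape
# (batch, num_heads, seq_len, seq_len)
attn = torch.stack(outputs.attentions)  # (layers, batch, heads, seq, seq)
print(attn.shape)

# A crude probe for "previous-token heads": average weight each head puts
# on the immediately preceding position.
prev_tok = attn[:, 0].diagonal(offset=-1, dim1=-2, dim2=-1).mean(-1)
layer, head = divmod(int(prev_tok.argmax()), prev_tok.shape[1])
print(f"Strongest previous-token head: layer {layer}, head {head}")
```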

Language: English

Cited by

0

Multimodal Large Language Model Passes Specialty Board Examination and Surpasses Human Test-Taker Scores: A Comparative Analysis Examining the Stepwise Impact of Model Prompting Strategies on Performance
Jamil S. Samaan, Samuel Margolis, Nitin Srinivasan

et al.

medRxiv (Cold Spring Harbor Laboratory), Year: 2024, Issue unknown

Published: July 29, 2024

Background: Large language models (LLMs) have shown promise in answering medical licensing examination-style questions. However, there is limited research on the performance of multimodal LLMs on subspecialty medical examinations. Our study benchmarks the performance of a multimodal LLM enhanced by model prompting strategies on gastroenterology subspecialty examination-style questions and examines how these strategies incrementally improve overall performance. Methods: We used the 2022 American College of Gastroenterology (ACG) self-assessment examination (N=300). This test is typically completed by fellows and established gastroenterologists preparing for the board examination. We employed a sequential implementation of model prompting strategies: prompt engineering, retrieval augmented generation (RAG), five-shot learning, and an LLM-powered answer validation revision model (AVRM). GPT-4 and Gemini Pro were tested. Results: Implementing all strategies improved GPT-4's overall score from 60.3% to 80.7% and Gemini Pro's from 48.0% to 54.3%. GPT-4's score surpassed both the 70% passing threshold and the 75% average human test-taker score, unlike Gemini Pro. Stratification of questions by difficulty showed that the accuracy of both LLMs mirrored that of human examinees, demonstrating higher accuracy as the share of correct human responses increased. The addition of the AVRM to the prompt, RAG, and 5-shot strategies increased accuracy by 4.4%. The incremental addition of strategies improved accuracy on both non-image (57.2% to 80.4%) and image-based (63.0% to 80.9%) questions for GPT-4, but not for Gemini Pro. Conclusions: The results underscore the value of prompting strategies in improving LLM performance on subspecialty-level exam questions. We also present a novel LLM-powered reviewer model in the context of medicine, which further improved performance when combined with other strategies. The findings highlight the potential future role of LLMs, particularly with multiple prompting strategies, in clinical decision support systems for healthcare providers.
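The stacked strategies named here (prompt engineering, RAG context, few-shot examples, and an LLM-powered answer validation and revision pass) compose naturally into a pipeline. The following schematic sketch shows one way to wire them together; `llm`, `retrieve`, and the prompt wording are placeholders, not the authors' implementation.

```python
# Schematic of a stacked prompting pipeline like the one the abstract describes.
# `llm` and `retrieve` stand in for a real model client and a retrieval index;
# the exam content is not reproduced here.
from typing import Callable, List

def build_prompt(question: str,
                 retrieve: Callable[[str], List[str]],
                 few_shot: List[str]) -> str:
    instructions = "You are a board-certified gastroenterologist. Answer with one option."
    context = "\n".join(retrieve(question))  # RAG: retrieved reference text
    examples = "\n\n".join(few_shot)         # five worked example questions
    return f"{instructions}\n\n{context}\n\n{examples}\n\nQuestion: {question}\nAnswer:"

def answer_with_validation(question: str,
                           llm: Callable[[str], str],
                           retrieve: Callable[[str], List[str]],
                           few_shot: List[str]) -> str:
    draft = llm(build_prompt(question, retrieve, few_shot))
    # AVRM-style step: a second LLM call reviews and may revise the draft answer.
    review_prompt = (f"Question: {question}\nProposed answer: {draft}\n"
                     "Check the answer against the question and revise it if wrong. "
                     "Reply with the final answer only.")
    return llm(review_prompt)
```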

Language: English

Cited by

2