Biolinguistics end-of-year notice 2023
Kleanthes K. Grohmann, Maria Kambanaros, Evelina Leivada

et al.

Biolinguistics, Year: 2023, Issue 17

Published: Dec. 22, 2023

Biolinguistics End-of-Year Notice 2023
Authors: Kleanthes K. Grohmann (Department of English Studies, University of Cyprus, Nicosia, Cyprus); Maria Kambanaros (Rehabilitation Sciences, Cyprus University of Technology, Limassol, Cyprus); Evelina Leivada (Catalan Institution for Research and Advanced Studies (ICREA), Barcelona, Spain; Philology, Universitat Autònoma de Barcelona, Spain); Bridget Samuels (Center for Craniofacial Molecular Biology, University of Southern California, Los Angeles, CA, USA); Patrick C. Trettenbrein (Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Experimental Sign Language Laboratory (SignLab), University of Göttingen, Germany)
Published: 22 December 2023, https://doi.org/10.5964/bioling.13537
Issue: Vol. 17 (2023); Section: Forum
How to cite: Grohmann, K., Kambanaros, M., Leivada, E., Samuels, B., & Trettenbrein, P. (2023). Biolinguistics end-of-year notice 2023. Biolinguistics, 17, e13537.
This work is licensed under a Creative Commons Attribution (CC BY) 4.0 International License.

Language: English

Prompting is not a substitute for probability measurements in large language models

Jennifer Hu, Roger Levy

Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, Year: 2023, Issue unknown

Published: Jan. 1, 2023

Prompting is now a dominant method for evaluating the linguistic knowledge of large language models (LLMs). While other methods directly read out models' probability distributions over strings, prompting requires models to access this internal information by processing linguistic input, thereby implicitly testing a new type of emergent ability: metalinguistic judgment. In this study, we compare metalinguistic prompting and direct probability measurements as ways of measuring models' linguistic knowledge. Broadly, we find that LLMs' metalinguistic judgments are inferior sources of evidence compared to quantities directly derived from representations. Furthermore, consistency gets worse as the prompt query diverges from direct measurements of next-word probabilities. Our findings suggest that negative results relying on metalinguistic prompts cannot be taken as conclusive evidence that an LLM lacks a particular linguistic generalization. They also highlight the value that is lost with the move to closed APIs where access to probability distributions is limited.
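The contrast the abstract draws can be illustrated with a minimal sketch: the "direct measurement" route scores a sentence by summing conditional log-probabilities read out of the model, rather than asking the model a metalinguistic question about the sentence. The bigram table and its probabilities below are invented placeholders, not the paper's data; a real evaluation would read the probabilities from an open-weight model.

```python
import math

# Toy stand-in for a language model's next-word distribution,
# P(word | previous word). The numbers are illustrative assumptions.
BIGRAM_PROBS = {
    ("the", "keys"): 0.10,
    ("keys", "are"): 0.60,  # grammatical agreement: plural subject
    ("keys", "is"): 0.05,   # agreement violation: much less probable
    ("are", "here"): 0.30,
    ("is", "here"): 0.30,
}

def sentence_logprob(tokens):
    """Direct probability readout: sum of conditional log-probabilities."""
    total = 0.0
    for prev, cur in zip(tokens, tokens[1:]):
        p = BIGRAM_PROBS.get((prev, cur), 1e-6)  # floor for unseen bigrams
        total += math.log(p)
    return total

# Minimal pair for subject-verb agreement.
good = ["the", "keys", "are", "here"]
bad = ["the", "keys", "is", "here"]

# The direct-measurement verdict: the grammatical variant scores higher.
print(sentence_logprob(good) > sentence_logprob(bad))  # True
```

With closed APIs that expose only generated text, this readout is unavailable, which is the loss the authors point to.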

Language: English

Cited by

20

LLM Cognitive Judgements Differ from Human
Sotiris Lamprinidis

Опубликована: Янв. 1, 2024

Large Language Models (LLMs) have lately been in the spotlight of researchers, businesses, and consumers alike. While the linguistic capabilities of such models have been studied extensively, there is growing interest in investigating them as cognitive subjects. In the present work, I examine GPT-3 and ChatGPT on a limited-data inductive reasoning task from the cognitive science literature. The results suggest that these models' cognitive judgements are not human-like.

Language: English

Cited by

3

Large language models are better than theoretical linguists at theoretical linguistics
Ben Ambridge, Liam P. Blything

Theoretical Linguistics, Year: 2024, Issue 50(1-2), pp. 33-48

Published: June 1, 2024

Abstract Large language models are better than theoretical linguists at theoretical linguistics, at least in the domain of verb argument structure: explaining why (for example) we can say both The ball rolled and Someone rolled the ball, but only The man laughed and not *Someone laughed the man. Verbal accounts of this phenomenon either do not make precise quantitative predictions at all, or do so only with the help of ancillary assumptions and by-hand data processing. Large language models, on the other hand (taking text-davinci-002 as an example), predict human acceptability ratings for these types of sentences with correlations of around r = 0.9, and themselves constitute theories of language acquisition and representation; theories that instantiate exemplar-, input- and construction-based approaches, though only very loosely. Indeed, large language models succeed where verbal (i.e., non-computational) linguistic theories fail, precisely because the latter insist, in the service of intuitive interpretability, on simple yet empirically inadequate (over)generalizations.

Language: English

Cited by

3

The Limitations of Large Language Models for Understanding Human Language and Cognition
Christine Cuskley, Rebecca Woods, Molly Flaherty

et al.

Open Mind, Year: 2024, Issue 8, pp. 1058-1083

Published: Jan. 1, 2024

Researchers have recently argued that the capabilities of Large Language Models (LLMs) can provide new insights into longstanding debates about the role of learning and/or innateness in the development and evolution of human language. Here, we argue on two grounds that LLMs alone can tell us very little about human language and cognition in terms of acquisition and evolution. First, any similarities between human language and the output of LLMs are purely functional. Borrowing the "four questions" framework from ethology, …

Language: English

Cited by

3

A comparative investigation of compositional syntax and semantics in DALL·E and young children
Elliot Murphy, Jill de Villiers, Sofia Lucero Morales

et al.

Social Sciences & Humanities Open, Year: 2025, Issue 11, Article 101332

Published: Jan. 1, 2025

Language: English

Cited by

0

Acquiring constraints on filler-gap dependencies from structural collocations: Assessing a computational learning model of island-insensitivity in Norwegian
Anastasia Kobzeva, Dave Kush

Language Acquisition, Year: 2025, Issue unknown, pp. 1-44

Published: March 13, 2025

Language: English

Cited by

0

Derivational morphology reveals analogical generalization in large language models
Valentin Hofmann, Leonie Weissweiler, David R. Mortensen

et al.

Proceedings of the National Academy of Sciences, Year: 2025, Issue 122(19)

Published: May 9, 2025

What mechanisms underlie linguistic generalization in large language models (LLMs)? This question has attracted considerable attention, with most studies analyzing the extent to which the language skills of LLMs resemble rules. As yet, it is not known whether linguistic generalization in LLMs could equally well be explained as the result of analogy. A key shortcoming of prior research is its focus on regular linguistic phenomena, for which rule-based and analogical approaches make the same predictions. Here, we instead examine derivational morphology, specifically English adjective nominalization, which displays notable variability. We introduce a method for investigating linguistic generalization in LLMs: Focusing on GPT-J, we fit cognitive models that instantiate rule-based and analogical learning to the LLM training data and compare their predictions on a set of nonce adjectives with those of the LLM, allowing us to draw direct conclusions regarding the underlying mechanisms. As expected, both models explain the predictions of GPT-J equally well for adjectives with regular nominalization patterns. However, for adjectives with variable nominalization patterns, the analogical model provides a much better match. Furthermore, GPT-J's behavior is sensitive to individual word frequencies, even of regular forms, consistent with an analogical account but not a rule-based one. These findings refute the hypothesis that GPT-J's linguistic generalization involves rules, suggesting analogy as the underlying mechanism. Overall, our study suggests that analogical processes play a bigger role in the linguistic generalization of LLMs than previously thought.
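The contrast the abstract turns on (rule application vs. analogy to stored exemplars) can be sketched with a toy analogical model of adjective nominalization. The mini-lexicon, the similarity measure, and the nonce words below are invented for illustration; they are not the paper's cognitive models or training data.

```python
# Toy analogical model of English adjective nominalization (-ity vs. -ness):
# pick the suffix of the stored adjective most similar to the nonce form,
# where similarity is simply the number of shared final characters.
LEXICON = {
    "curious": "-ity",   # curiosity
    "generous": "-ity",  # generosity
    "happy": "-ness",    # happiness
    "silly": "-ness",    # silliness
}

def shared_suffix_len(a: str, b: str) -> int:
    """Similarity proxy: count of final characters the two words share."""
    n = 0
    while n < min(len(a), len(b)) and a[-1 - n] == b[-1 - n]:
        n += 1
    return n

def predict_suffix(nonce: str) -> str:
    """Nominalize a nonce adjective by analogy to its closest known neighbor."""
    nearest = max(LEXICON, key=lambda w: shared_suffix_len(w, nonce))
    return LEXICON[nearest]

print(predict_suffix("drilly"))    # -ness, by analogy with "silly"
print(predict_suffix("tendrous"))  # -ity, by analogy with "generous"
```

Unlike a categorical rule, this exemplar lookup is sensitive to which specific words are stored (and, in richer versions, to their frequencies), which is the behavioral signature the study tests GPT-J for.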

Language: English

Cited by

0

Large Language Models, Parrots, and Children
Marietta Kesting

Cultural inquiry, Year: 2025, Issue unknown, pp. 217-237

Published: Jan. 1, 2025

This essay discusses some aspects of large language models (LLMs) in 2023 that model human speech and text. Analogies between such modelling in current AI applications and the learning processes of children appear in discussions of human versus machine intelligence and creativity. The anthropomorphizing perspective employed in these debates is the legacy of the 'Turing Test', but also of the notion that animals, formerly colonized people, and machines supposedly only imitate 'correct' language.

Language: English

Cited by

0

Large Language Models and theoretical linguistics
Danny Fox, Roni Katzir

Theoretical Linguistics, Year: 2024, Issue 50(1-2), pp. 71-76

Published: June 1, 2024

Abstract Some recent publications have made the suggestion that Large Language Models are not just successful engineering tools but also good theories of human linguistic cognition. This note reviews methodological and empirical reasons to reject this suggestion out of hand.

Language: English

Cited by

2

Creative minds like ours? Large Language Models and the creative aspect of language use
Vincent J. Carchidi

Biolinguistics, Year: 2024, Issue 18

Published: Oct. 29, 2024

Descartes famously constructed a language test to determine the existence of other minds. The test made critical observations about how humans use language that purportedly distinguishes them from animals and machines. These observations were carried into the generative (and later biolinguistic) enterprise under what Chomsky, in his Cartesian Linguistics, terms the "creative aspect of language use" (CALU). CALU refers to the stimulus-free, unbounded, yet appropriate use of language, a tripartite depiction whose function in biolinguistics is to highlight a species-specific form of intellectual freedom. This paper argues that CALU provides a set of facts that have significant downstream effects on explanatory theory-construction. These include the internalist orientation of linguistics, the invocation of the competence-performance distinction, and the postulation of a language faculty that makes CALU possible but does not explain it. The paper contrasts the biolinguistic approach with the recent wave of enthusiasm for Transformer-based Large Language Models (LLMs) as tools, models, or theories of human language, arguing that such uses neglect these fundamental insights to their detriment. It argues that, in the absence of the replication, identification, or accounting of CALU, LLMs do not match the explanatory depth of the biolinguistic framework, thereby limiting their theoretical usefulness.

Language: English

Cited by

1