Grasp-Anything: Large-scale Grasp Dataset from Foundation Models

An Dinh Vuong, Minh‐Ngoc Vu, Huy Quoc Le, et al.

Published: May 13, 2024

Language: English

Cited by: 5

What is it like to be a bot? The world according to GPT-4

Dan Lloyd

Frontiers in Psychology, Journal Year: 2024, Volume: 15

Published: Aug. 7, 2024

The recent explosion of Large Language Models (LLMs) has provoked lively debate about "emergent" properties of the models, including intelligence, insight, creativity, and meaning. These debates are rocky for two main reasons: the emergent properties sought are not well-defined, and the grounds for their dismissal often rest on a fallacious appeal to extraneous factors, like the LLM training regime, or on assumptions about processes within the model. The latter issue is a particular roadblock for LLMs because their internal operations are largely unknown; they are colossal black boxes. In this paper, I try to cut through these problems by, first, identifying one salient feature shared by systems we regard as intelligent/conscious/sentient/etc., namely, responsiveness to environmental conditions that may be near or far in space and time. They engage with subjective worlds ("s-worlds") which may or may not conform to the actual environment. Observers can infer s-worlds from behavior alone, enabling hypotheses about perception and cognition that do not require evidence about the operations in question. This reconstruction offers a framework for comparing cognition across species, affording new leverage on the possible sentience of LLMs. Here, I examine one prominent LLM, OpenAI's GPT-4. Inquiry into the emergence of a complex subjective world is facilitated with philosophical phenomenology and cognitive ethology, examining the pattern of errors made by GPT-4 and proposing their origin in the absence of an analogue of human temporal awareness. This deficit suggests that GPT-4 ultimately lacks the capacity to construct a stable perceptual world; the temporal vacuum undermines any consistent, continuously updated model of its environment. Accordingly, none of GPT-4's statements are epistemically secure. Because the anthropomorphic illusion is so strong, I conclude by suggesting that GPT-4 works with its users to create improvised fiction.

Language: English

Cited by: 5

Theory Is All You Need: AI, Human Cognition, and Causal Reasoning
Teppo Felin, Matthias Holweg

Strategy Science, Journal Year: 2024, Volume: unknown

Published: Dec. 3, 2024

Scholars argue that artificial intelligence (AI) can generate genuine novelty and new knowledge and that, in turn, AI and computational models of cognition will replace human decision making under uncertainty. We disagree. AI's data-based prediction is different from theory-based causal logic and reasoning. We highlight problems with the decades-old analogy between computers and minds as input–output devices, using large language models as an example. Human cognition is better conceptualized as a form of theory-based causal reasoning rather than an emphasis on information processing and prediction. AI uses a probability-based approach to knowledge that is largely backward looking and imitative, whereas theory-based causal reasoning is forward-looking and capable of generating genuine novelty. We introduce the idea of data–belief asymmetries to highlight the difference between AI and human cognition, using the example of heavier-than-air flight to illustrate our arguments. Theory-based causal reasoning provides a cognitive mechanism for humans to intervene in the world and engage in directed experimentation to generate new data. Throughout the article, we discuss the implications of our argument for understanding the origins of novelty, knowledge, and decision making under uncertainty.

Language: English

Cited by: 5

Engineering material failure analysis report generation based on QWen and Llama2

Sijie Chang, Meng Wan, Jiaxiang Wang, et al.

Results in Engineering, Journal Year: 2025, Volume: unknown, Pages: 104532

Published: March 1, 2025

Language: English

Cited by: 0

“Personhood and AI: Why large language models don’t understand us”
Jacob Browning

AI & Society, Journal Year: 2023, Volume: 39(5), Pages: 2499-2506

Published: July 12, 2023

Language: English

Cited by: 12