Expanding horizons in reinforcement learning for curious exploration and creative planning DOI Open Access
Dale Zhou, Aaron M. Bornstein

Published: Nov. 15, 2023

Curiosity and creativity are expressions of the trade-off between leveraging that with which we are familiar and seeking out novelty. Through the computational lens of reinforcement learning, we describe how formulating the value of information seeking and generation via their complementary effects on planning horizons formally captures a range of solutions to striking this balance.

Language: English
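The horizon-dependent value of information described in this abstract can be illustrated with a minimal Monte-Carlo sketch. All parameters here (a known-value arm, a Gaussian prior over an uncertain arm, a noise-free observation) are illustrative assumptions, not the paper's model: the point is only that the expected gain from exploring grows with the remaining planning horizon, because what is learned early can be exploited on every later step.

```python
import numpy as np

def value_of_information(known_mean, prior_mean, prior_sd, horizon,
                         n_sim=100_000, seed=0):
    """Expected gain from sampling an uncertain arm once and then exploiting
    the better arm for the remaining horizon, versus always choosing the
    known arm. Assumes (for simplicity) that one sample reveals the
    uncertain arm's true mean exactly."""
    rng = np.random.default_rng(seed)
    # True mean of the uncertain arm, drawn from the prior.
    true_means = rng.normal(prior_mean, prior_sd, n_sim)
    # Explore once, then exploit the better arm for the remaining steps.
    explore_return = true_means + (horizon - 1) * np.maximum(true_means, known_mean)
    # Never explore: take the known arm on every step.
    exploit_return = horizon * known_mean
    return (explore_return - exploit_return).mean()

for h in (1, 2, 5, 20):
    print(h, round(value_of_information(0.5, 0.5, 0.3, horizon=h), 3))
```

With a one-step horizon, exploring has no expected value here (the prior mean equals the known arm's value); as the horizon lengthens, the same uncertainty becomes increasingly worth resolving.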

Causal and Chronological Relationships Predict Memory Organization for Nonlinear Narratives DOI
James W. Antony, Angelo Lozano, Pahul Dhoat, et al.

Journal of Cognitive Neuroscience, Journal Year: 2024, Volume and Issue: 36(11), P. 2368 - 2385

Published: Jan. 1, 2024

Abstract: While recounting an experience, one can employ multiple strategies to transition from one part to the next. For instance, if an event was learned out of linear order, one can recall events according to the time they were learned (temporal), semantically similar events (semantic), events occurring nearby in the underlying story (chronological), or events produced by the current event (causal). To disentangle the importance of these factors, we had participants watch a nonlinear narrative, Memento, under different task instructions and presentation orders. For each scene of the film, we also separately computed semantic and causal networks. We then contrasted evidence for temporal, semantic, chronological, and causal strategies during recall. Critically, there was stronger evidence for chronological and causal than for temporal strategies. Moreover, these strategies outperformed temporal ones even when participants were asked to recall the film in the order it was presented, underscoring the fundamental nature of causal structure in scaffolding understanding and organizing memory. Nevertheless, temporal order still marginally predicted recall transitions, suggesting it operates as a weak signal in the presence of more salient forms of structure. In addition, causal network properties predicted scene memorability, including the role of a scene's incoming causes and its outgoing effects. In summary, these findings highlight the importance of accounting for complex, causal networks of knowledge in building models of memory.

Language: English

Citations: 3
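The recall-strategy contrast in the abstract above can be sketched as a toy scoring rule over recall transitions. The scene labels, position annotations, and nearest-neighbor scoring here are illustrative stand-ins, not the paper's actual analysis: for each pair of consecutively recalled scenes, we ask whether the transition is shorter under the presentation order (temporal) or the story order (chronological).

```python
def strategy_evidence(recall_order, presentation_pos, story_pos):
    """Score each recall transition by which ordering makes the jump smaller:
    the order scenes were presented (temporal) vs. their order in the
    underlying story (chronological)."""
    scores = {"temporal": 0, "chronological": 0}
    for a, b in zip(recall_order, recall_order[1:]):
        # Distance between consecutively recalled scenes under each ordering;
        # a smaller distance means the transition follows that structure.
        d_temp = abs(presentation_pos[a] - presentation_pos[b])
        d_chron = abs(story_pos[a] - story_pos[b])
        if d_temp < d_chron:
            scores["temporal"] += 1
        elif d_chron < d_temp:
            scores["chronological"] += 1
    return scores

# Toy example: 5 scenes presented out of story order, as in a nonlinear film.
presentation = {s: i for i, s in enumerate(["E", "C", "A", "D", "B"])}
story = {s: i for i, s in enumerate(["A", "B", "C", "D", "E"])}
# A participant who recalls in story order produces chronological transitions.
print(strategy_evidence(["A", "B", "C", "D", "E"], presentation, story))
```

When presentation and story order are decorrelated like this, the two strategies make distinguishable predictions about which scene follows which at recall.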

Temporal dynamics of model-based control reveal arbitration between multiple task representations DOI Open Access
Jungsun Yoo, Aaron M. Bornstein

Published: June 15, 2024

Predominant frameworks categorize decisions dichotomously (e.g., "goal-directed" vs. "habitual"; "model-based" vs. "model-free"). However, extensive work has shown that many human behaviors exhibit features of both systems, such as those that require foresight (a goal-directed feature) but are not sensitive to environmental perturbations during action execution (a rigidity characteristic of habits). Here, we introduce and explain a new subdivision of behaviors, linked to the format in which the decision-maker represents task contingencies in memory. We test this distinction by employing a novel variant of the standard two-stage decision task, which allows us to behaviorally capture within- and across-trial dynamics of planning. We jointly fit choices and response times with a computational model that revealed how people select among multiple task representations when planning in environments of differing state-space complexity. In particular, we examined how reliance on these representations changed as a function of experience, within-subject, and of complexity, across subjects (total n = 426). We show that the complexity of the environment and experience with a given contingency structure inform the kinds of representations people use to make decisions: at early stages of learning, people start with a "conjunctive" representation (combining co-occurring first-stage states) in simpler environments, whereas a "separated" representation (splitting states according to their second-step outcomes) is preferred in more complex environments. With experience, this pattern is reversed. Finally, this shift is governed by a change in how people approach optimizing reward rate: initially, they focus on minimizing uncertainty, and once this has reached asymptote, they transition to prioritizing efficiency. Taken together, these results show that people not only arbitrate between different modes of control, but also between different types of task representations for efficient planning.

Language: English

Citations: 1
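The conjunctive-versus-separated distinction above can be sketched as two ways of keying a learned value table. The state encoding, the transition label (`planet_A`), and the simple delta-rule update below are illustrative assumptions, not the paper's fitted model: the sketch only shows how the same experience is stored under coarser (conjunctive) or outcome-split (separated) state representations.

```python
from collections import defaultdict

def make_agent(rep, alpha=0.1):
    """Value-learning sketch over two candidate task representations.
    'conjunctive' keys values on the pair of co-occurring first-stage
    options; 'separated' keys them on the chosen option's second-stage
    destination."""
    Q = defaultdict(float)

    def key(pair, choice, destination):
        if rep == "conjunctive":
            return (pair, choice)       # combine co-occurring options
        return (destination, choice)    # split by second-step outcome

    def update(pair, choice, destination, reward):
        # Simple delta-rule update toward the observed reward.
        k = key(pair, choice, destination)
        Q[k] += alpha * (reward - Q[k])
        return Q[k]

    return Q, update

# Same trial, stored under each representation.
Q_conj, update_conj = make_agent("conjunctive")
Q_sep, update_sep = make_agent("separated")
update_conj(("s1", "s2"), "s1", "planet_A", 1.0)
update_sep(("s1", "s2"), "s1", "planet_A", 1.0)
print(dict(Q_conj), dict(Q_sep))
```

Because the two key functions partition experience differently, they generalize differently: the conjunctive table must relearn values whenever first-stage pairings change, while the separated table transfers across pairings that share a second-step outcome.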

Expanding horizons in reinforcement learning for curious exploration and creative planning DOI
Dale Zhou, Aaron M. Bornstein

Behavioral and Brain Sciences, Journal Year: 2024, Volume and Issue: 47

Published: Jan. 1, 2024

Curiosity and creativity are expressions of the trade-off between leveraging that with which we are familiar and seeking out novelty. Through the computational lens of reinforcement learning, we describe how formulating the value of information seeking and generation via their complementary effects on planning horizons formally captures a range of solutions to striking this balance.

Language: English

Citations: 1
