Stable maintenance of multiple representational formats in human visual short-term memory
Jing Liu, Hui Zhang, Tao Yu

et al.

Proceedings of the National Academy of Sciences, Year: 2020, Volume: 117(51), pp. 32329–32339

Published: Dec. 7, 2020

Significance: Visual short-term memory (VSTM) is the ability to actively maintain visual information for a short period of time. Classical models posit that VSTM is achieved via persistent firing of neurons in the prefrontal cortex. Leveraging the unique spatiotemporal resolution of intracranial EEG recordings and the analytical power of deep neural networks in uncovering neural codes, our results suggest that the brain first dynamically extracts multiple representational formats, including a higher-order format and an abstract semantic format. Both formats are stably maintained across an extended delay, coupled to the phases of hippocampal low-frequency activity. These findings suggest that human VSTM is highly dynamic and involves rich, multifaceted representations, which contributes to a mechanistic understanding of VSTM.

Language: English

Incorporating Learnable Membrane Time Constant to Enhance Learning of Spiking Neural Networks
Wei Fang, Zhaofei Yu, Yanqi Chen

et al.

2021 IEEE/CVF International Conference on Computer Vision (ICCV), Year: 2021, Volume: unknown, pp. 2641–2651

Published: Oct. 1, 2021

Spiking Neural Networks (SNNs) have attracted enormous research interest due to their temporal information processing capability, low power consumption, and high biological plausibility. However, the formulation of efficient, high-performance learning algorithms for SNNs is still challenging. Most existing methods learn weights only and require manual tuning of the membrane-related parameters that determine the dynamics of a single spiking neuron. These parameters are typically chosen to be the same for all neurons, which limits the diversity of neurons and thus the expressiveness of the resulting SNNs. In this paper, we take inspiration from the observation that membrane-related parameters differ across brain regions, and propose a training algorithm capable of learning not only the synaptic weights but also the membrane time constants of SNNs. We show that incorporating learnable membrane time constants can make the network less sensitive to initial values and speed up learning. In addition, we reevaluate the pooling methods in SNNs and find that max-pooling will not lead to significant information loss, and has the advantages of low computation cost and binary compatibility. We evaluate the proposed method on image classification tasks on both traditional static datasets (MNIST, Fashion-MNIST, CIFAR-10) and neuromorphic datasets (N-MNIST, CIFAR10-DVS, DVS128 Gesture). The experiment results show that the proposed method outperforms the state-of-the-art accuracy on nearly all datasets, using fewer time-steps. Our codes are available at https://github.com/fangwei123456/Parametric-Leaky-Integrate-and-Fire-Spiking-Neuron.
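The learnable membrane time constant described above can be sketched in a few lines. The following is an illustrative reimplementation of a parametric leaky integrate-and-fire neuron, not the authors' code: the inverse time constant 1/τ is parameterized as sigmoid(w), so that it stays in (0, 1) and the scalar w can be optimized by gradient descent alongside the synaptic weights.

```python
import math

class ParametricLIFNeuron:
    """Leaky integrate-and-fire neuron with a learnable membrane time
    constant, in the spirit of the parametric LIF model (illustrative
    sketch under the paper's formulation, not the authors' code)."""

    def __init__(self, w=0.0, v_threshold=1.0, v_reset=0.0):
        self.w = w                      # trainable scalar, shared by a layer
        self.v_threshold = v_threshold
        self.v_reset = v_reset
        self.v = v_reset                # membrane potential state

    def inv_tau(self):
        # sigmoid keeps 1/tau in (0, 1), i.e. tau > 1
        return 1.0 / (1.0 + math.exp(-self.w))

    def step(self, x):
        """Advance one time-step with input current x; return a 0/1 spike."""
        # leaky integration toward the input
        self.v = self.v + self.inv_tau() * (x - (self.v - self.v_reset))
        if self.v >= self.v_threshold:
            self.v = self.v_reset       # hard reset after a spike
            return 1
        return 0
```

In training, the non-differentiable spike is handled with a surrogate gradient, so the gradient flows back to both the synaptic weights and w, letting each layer's time constant adapt to the data.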

Language: English

Cited by

346

Artificial Neural Networks for Neuroscientists: A Primer
Guangyu Robert Yang, Xiao‐Jing Wang

Neuron, Year: 2020, Volume: 107(6), pp. 1048–1070

Published: Sep. 1, 2020

Language: English

Cited by

299

Spine dynamics in the brain, mental disorders and artificial neural networks
Haruo Kasai, Noam Ziv, Hitoshi Okazaki

et al.

Nature Reviews Neuroscience, Year: 2021, Volume: 22(7), pp. 407–422

Published: May 28, 2021

Language: English

Cited by

147

The neuroconnectionist research programme
Adrien Doerig, Rowan P. Sommers, Katja Seeliger

et al.

Nature Reviews Neuroscience, Year: 2023, Volume: 24(7), pp. 431–450

Published: May 30, 2023

Language: English

Cited by

136

Meta-learning in natural and artificial intelligence
Jane X. Wang

Current Opinion in Behavioral Sciences, Year: 2021, Volume: 38, pp. 90–95

Published: Jan. 25, 2021

Language: English

Cited by

123

Taking stock of value in the orbitofrontal cortex

Eric B. Knudsen, Joni D. Wallis

Nature Reviews Neuroscience, Year: 2022, Volume: 23(7), pp. 428–438

Published: April 25, 2022

Language: English

Cited by

93

Predictive maps in rats and humans for spatial navigation
William de Cothi, Nils Nyberg, Eva‐Maria Griesbauer

et al.

Current Biology, Year: 2022, Volume: 32(17), pp. 3676–3689.e5

Published: July 20, 2022

- Tested humans, rats, and RL agents on a novel modular maze
- Humans and rats were remarkably similar in their choice of trajectories
- Both species were most similar to agents utilizing the successor representation (SR)
- Both species also displayed features of model-based planning in early trials
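The successor representation named in these highlights is defined by a simple temporal-difference update. The sketch below is a generic SR learner on a tiny state space, not the paper's agent; the matrix M[s, s'] estimates the discounted expected number of future visits to s' from s, and values follow as V = M @ R for a reward vector R.

```python
import numpy as np

def update_sr(M, s, s_next, alpha=0.1, gamma=0.95):
    """One temporal-difference update of the successor representation
    after observing the transition s -> s_next (generic formulation)."""
    onehot = np.zeros(M.shape[0])
    onehot[s] = 1.0
    # TD target: immediate occupancy plus discounted SR of the next state
    M[s] += alpha * (onehot + gamma * M[s_next] - M[s])
    return M

# a 3-state chain 0 -> 1 -> 2 (2 absorbing), learned from repeated traversals
M = np.zeros((3, 3))
for _ in range(500):
    for s, s_next in [(0, 1), (1, 2), (2, 2)]:
        M = update_sr(M, s, s_next)
```

After learning, row M[0] encodes that state 0 is followed by discounted visits to 1 and, heavily, to the absorbing state 2, which is what lets SR agents re-plan quickly when rewards (but not transitions) change.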

Language: English

Cited by

72

Brain-inspired learning in artificial neural networks: A review
Samuel Schmidgall, Rojin Ziaei, Jascha Achterberg

et al.

APL Machine Learning, Year: 2024, Volume: 2(2)

Published: May 9, 2024

Artificial neural networks (ANNs) have emerged as an essential tool in machine learning, achieving remarkable success across diverse domains, including image and speech generation, game playing, and robotics. However, there exist fundamental differences between ANNs' operating mechanisms and those of the biological brain, particularly concerning learning processes. This paper presents a comprehensive review of current brain-inspired learning representations in artificial neural networks. We investigate the integration of more biologically plausible mechanisms, such as synaptic plasticity, to improve these networks' capabilities. Moreover, we delve into the potential advantages and challenges accompanying this approach. In this review, we pinpoint promising avenues for future research in this rapidly advancing field, which could bring us closer to understanding the essence of intelligence.
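As one concrete instance of the biologically plausible plasticity mechanisms such reviews survey, a local Hebbian update with a stabilizing decay term (Oja's rule) fits in a few lines. This is a textbook illustration, not a method proposed in the review itself: the weight grows with correlated pre/post activity and the decay keeps its norm bounded, using only locally available signals.

```python
import numpy as np

def oja_update(w, x, lr=0.02):
    """One step of Oja's rule, a stabilized Hebbian update: the Hebbian
    term y * x strengthens co-active connections, while the decay term
    y^2 * w bounds the weight norm. Purely local: only the pre-synaptic
    input x and the post-synaptic output y are needed."""
    y = float(w @ x)                  # post-synaptic activation
    return w + lr * y * (x - y * w)   # Hebb + normalizing decay

rng = np.random.default_rng(0)
# inputs whose variance is concentrated along the direction (1, 1)
X = rng.normal(size=(2000, 2)) @ np.array([[1.0, 0.9], [0.9, 1.0]])
w = np.array([1.0, 0.0])
for x in X:
    w = oja_update(w, x)
# w converges toward the first principal component, with roughly unit norm
```

Unlike backpropagation, no global error signal is required, which is exactly the kind of gap between ANN training and cortical learning that the review examines.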

Language: English

Cited by

41

Deep learning for steganalysis of diverse data types: A review of methods, taxonomy, challenges and future directions
Hamza Kheddar, Mustapha Hemis, Yassine Himeur

et al.

Neurocomputing, Year: 2024, Volume: 581, pp. 127528–127528

Published: March 6, 2024

Language: English

Cited by

21

Language writ large: LLMs, ChatGPT, meaning, and understanding
Stevan Harnad

Frontiers in Artificial Intelligence, Year: 2025, Volume: 7

Published: Feb. 12, 2025

Apart from what (little) OpenAI may be concealing from us, we all know (roughly) how Large Language Models (LLMs) such as ChatGPT work (their vast text databases, statistics, vector representations, huge number of parameters, next-word training, etc.). However, none of us can say (hand on heart) that we are not surprised by what ChatGPT has proved to be able to do with these resources. This has even driven some to conclude that ChatGPT actually understands. It is not true that it understands; but it is also true that we do not understand how it can do what it can do. I will suggest some hunches about benign "biases": convergent constraints that emerge at LLM scale and may be helping ChatGPT do so much better than we would have expected. These biases are inherent in the nature of language itself, at LLM scale, and they are closely linked to what ChatGPT lacks, which is direct sensorimotor grounding to connect its words to their referents and its propositions to their meanings. These convergent biases are related to (1) the parasitism of indirect verbal grounding on direct sensorimotor grounding, (2) the circularity of verbal definition, (3) the "mirroring" of language production and comprehension, (4) iconicity in propositions at LLM scale, (5) computational counterparts of human "categorical perception" in category learning by neural nets, and perhaps (6) a conjecture by Chomsky about the laws of thought. The exposition takes the form of a dialogue with ChatGPT-4.

Language: English

Cited by

3