Beyond neurons: computer vision methods for analysis of morphologically complex astrocytes
Tabish A. Syed, Mohammed Youssef, Alexandra L. Schober

et al.

Frontiers in Computer Science, Journal Year: 2024, Volume and Issue: 6

Published: Sept. 25, 2024

The study of the geometric organization of biological tissues has a rich history in the literature. However, analysis of the geometry and architecture of individual cells within tissues has traditionally relied upon manual or indirect measures of shape. Such rudimentary measures are largely the result of challenges associated with acquiring high-resolution images of cellular components, as well as a lack of computational approaches to analyze large volumes of high-resolution data. This is especially true for brain tissue, which is composed of a complex array of cells. Here we review computational tools that have been applied to unravel the nanoarchitecture of astrocytes, a type of brain cell increasingly being shown to be essential for brain function. Astrocytes are among the most structurally and functionally diverse cells in the mammalian body and are partners of neurons. Light microscopy does not allow adequate resolution of astrocyte morphology; however, large-scale serial electron microscopy data, which provides nanometer-resolution 3D models, is enabling visualization of the fine, convoluted structure of astrocytes. Application of computer vision methods to the resulting nanoscale models is helping reveal organizing principles, but a complete understanding of astrocyte structure and its functional implications will require further adaptation of existing tools, as well as development of new approaches.

Language: English

Driving and suppressing the human language network using large language models
Greta Tuckute, Aalok Sathe, Shashank Srikant

et al.

Nature Human Behaviour, Journal Year: 2024, Volume and Issue: 8(3), P. 544 - 561

Published: Jan. 3, 2024

Language: English

Citations

34

Transformers in Material Science: Roles, Challenges, and Future Scope

Nitin Rane

SSRN Electronic Journal, Journal Year: 2023, Volume and Issue: unknown

Published: Jan. 1, 2023

This study explores the diverse applications, challenges, and future prospects of employing vision transformers in various material science domains, including biomaterials, ceramic materials, composite materials, energy materials, magnetic materials, electronics and photonic materials, materials synthesis, polymers, and nanomaterials. In the realm of biomaterials, their application has significantly improved our understanding of biological interactions, leading to the development of innovative medical implants and drug delivery systems. In ceramics, these models have revolutionized design and production processes, ensuring higher durability and efficiency. Likewise, in composites they have enabled the creation of lightweight yet robust structures, transforming industries from aerospace to automotive. Energy research has greatly benefited from transformers, facilitating the discovery of novel materials for energy storage and conversion. Additionally, magnetic materials research has been transformed by their ability to analyze intricate patterns, aiding advanced data storage technologies. In electronics and photonics, they have accelerated the evolution of compact, high-performance devices. Integrating vision transformers poses challenges, including managing vast datasets, ensuring model interpretability, and addressing ethical concerns related to privacy and bias. As these models continue to advance, their application to nanomaterials is anticipated to yield groundbreaking discoveries. The study highlights the way forward, underscoring the importance of collaborative efforts between computer scientists and materials researchers to unlock the full potential of vision transformers in reshaping the landscape of material science.

Language: English

Citations

27

Language in Brains, Minds, and Machines
Greta Tuckute, Nancy Kanwisher, Evelina Fedorenko

et al.

Annual Review of Neuroscience, Journal Year: 2024, Volume and Issue: 47(1), P. 277 - 301

Published: April 26, 2024

It has long been argued that only humans could produce and understand language. But now, for the first time, artificial language models (LMs) achieve this feat. Here we survey the new purchase LMs are providing on the question of how language is implemented in the brain. We discuss why, a priori, LMs might be expected to share similarities with the human language system. We then summarize evidence that LMs represent linguistic information similarly enough to humans to enable relatively accurate brain encoding and decoding during language processing. Finally, we examine which LM properties (their architecture, task performance, or training) are critical for capturing human neural responses to language, and review studies using LMs as in silico model organisms for testing hypotheses about language. These ongoing investigations bring us closer to understanding the representations and processes that underlie our ability to comprehend sentences and express thoughts in language.

Language: English

Citations

13

Building transformers from neurons and astrocytes
Leo Kozachkov, Ksenia V. Kastanenka, Dmitry Krotov

et al.

Proceedings of the National Academy of Sciences, Journal Year: 2023, Volume and Issue: 120(34)

Published: Aug. 14, 2023

Glial cells account for between 50% and 90% of all human brain cells and serve a variety of important developmental, structural, and metabolic functions. Recent experimental efforts suggest that astrocytes, a type of glial cell, are also directly involved in core cognitive processes such as learning and memory. While it is well established that astrocytes and neurons are connected to one another in feedback loops across many timescales and spatial scales, there is a gap in understanding the computational role of neuron-astrocyte interactions. To help bridge this gap, we draw on recent advances in AI and astrocyte imaging technology. In particular, we show that neuron-astrocyte networks can naturally perform the core computation of a Transformer, a particularly successful type of AI architecture. In doing so, we provide a concrete, normative, and experimentally testable account of neuron-astrocyte communication. Because Transformers are so successful across a wide variety of task domains, such as language, vision, and audition, our analysis may help explain the ubiquity, flexibility, and power of the brain's neuron-astrocyte networks.
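The core Transformer computation that the abstract refers to is scaled dot-product self-attention. A minimal NumPy sketch of that computation (all dimensions and weights are illustrative stand-ins, not values from the paper's neuron-astrocyte mapping):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a token matrix X (n_tokens x d)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # pairwise token similarities
    return softmax(scores) @ V               # attention-weighted mixture of values

rng = np.random.default_rng(0)
n, d = 4, 8                                   # hypothetical sizes
X = rng.normal(size=(n, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out = attention(X, Wq, Wk, Wv)                # shape (4, 8)
```

The paper's claim is that the nonlinear mixing performed by the softmax-weighted sum can be realized by astrocyte-mediated feedback, not that the brain computes these matrices literally.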

Language: English

Citations

20

Driving and suppressing the human language network using large language models
Greta Tuckute, Aalok Sathe, Shashank Srikant

et al.

bioRxiv (Cold Spring Harbor Laboratory), Journal Year: 2023, Volume and Issue: unknown

Published: April 16, 2023

Transformer models such as GPT generate human-like language and are highly predictive of human brain responses to language. Here, using fMRI-measured brain responses to 1,000 diverse sentences, we first show that a GPT-based encoding model can predict the magnitude of the brain response associated with each sentence. Then, we use the model to identify new sentences that are predicted to drive or suppress responses in the human language network. We show that these model-selected novel sentences indeed strongly drive and suppress the activity of human language areas in new individuals. A systematic analysis reveals that surprisal and well-formedness of linguistic input are key determinants of response strength in the language network. These results establish the ability of neural network models to not only mimic human language but also noninvasively control neural activity in higher-level cortical areas, like the language network.
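An encoding model of the kind described here regresses measured brain responses onto language-model features of each sentence, then ranks candidate sentences by predicted response. A minimal sketch using closed-form ridge regression on simulated data (the dimensions, the simulated responses, and the feature matrix are all hypothetical stand-ins for the study's actual pipeline):

```python
import numpy as np

def ridge_fit(X, y, alpha=1.0):
    # Closed-form ridge regression: w = (X^T X + alpha*I)^{-1} X^T y
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ y)

rng = np.random.default_rng(1)
n_sentences, d = 200, 32                       # hypothetical sizes
X = rng.normal(size=(n_sentences, d))          # stand-in for LM sentence embeddings
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n_sentences)  # simulated response magnitudes

w = ridge_fit(X, y, alpha=1.0)
pred = X @ w                                   # predicted response per sentence
r = np.corrcoef(pred, y)[0, 1]                 # fit quality on training data

# Rank candidates by predicted response to select "drive" / "suppress" sets.
drive_idx = np.argsort(pred)[-5:]
suppress_idx = np.argsort(pred)[:5]
```

In the study, the selected sentences were then presented to new participants to test whether the predicted drive/suppress effects held out of sample.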

Language: English

Citations

14

Direct training high-performance deep spiking neural networks: a review of theories and methods
Chenlin Zhou, Han Zhang, Liutao Yu

et al.

Frontiers in Neuroscience, Journal Year: 2024, Volume and Issue: 18

Published: July 31, 2024

Spiking neural networks (SNNs) offer a promising energy-efficient alternative to artificial neural networks (ANNs), by virtue of their high biological plausibility, rich spatial-temporal dynamics, and event-driven computation. Direct training algorithms based on the surrogate gradient method provide sufficient flexibility to design novel SNN architectures and explore the spatial-temporal dynamics of SNNs. According to previous studies, the performance of models is highly dependent on their sizes. Recently, directly trained deep SNNs have achieved great progress on both neuromorphic datasets and large-scale static datasets. Notably, transformer-based SNNs show performance comparable with their ANN counterparts. In this paper, we provide a new perspective to summarize the theories and methods for directly training deep SNNs in a systematic and comprehensive way, including theory fundamentals, spiking neuron models, advanced SNN models and residual architectures, software frameworks and neuromorphic hardware, applications, and future trends.
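The surrogate gradient method that these direct-training approaches rely on replaces the derivative of the non-differentiable spike (a Heaviside step) with a smooth surrogate during backpropagation. A minimal sketch of a leaky integrate-and-fire neuron and a sigmoid-based surrogate (the time constant, threshold, and steepness values are illustrative, not taken from the review):

```python
import numpy as np

def lif_forward(inputs, tau=2.0, v_th=1.0):
    """Leaky integrate-and-fire neuron over T timesteps (hard reset on spike)."""
    v, spikes = 0.0, []
    for x in inputs:
        v = v + (x - v) / tau      # leaky membrane integration
        s = float(v >= v_th)       # Heaviside spike: non-differentiable
        spikes.append(s)
        v = v * (1.0 - s)          # reset the membrane after a spike
    return np.array(spikes)

def surrogate_grad(v, v_th=1.0, k=4.0):
    # Sigmoid surrogate for dS/dV, used in the backward pass instead of the
    # zero-almost-everywhere derivative of the Heaviside step.
    sig = 1.0 / (1.0 + np.exp(-k * (v - v_th)))
    return k * sig * (1.0 - sig)

spikes = lif_forward([0.5, 0.9, 1.2, 0.1, 2.0])  # spikes only on the last input
```

The forward pass keeps the binary spike; only the backward pass substitutes `surrogate_grad`, which is what lets standard gradient descent train deep SNNs end to end.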

Language: English

Citations

6

SGLFormer: Spiking Global-Local-Fusion Transformer with high performance
Han Zhang, Chenlin Zhou, Liutao Yu

et al.

Frontiers in Neuroscience, Journal Year: 2024, Volume and Issue: 18

Published: March 12, 2024

Spiking Neural Networks (SNNs), inspired by brain science, offer low energy consumption and high biological plausibility with their event-driven nature. However, current SNNs still suffer from insufficient performance.

Language: English

Citations

5

Barcode activity in a recurrent network model of the hippocampus enables efficient memory binding
Ching Fang, Jack Lindsey, Larry Abbott

et al.

Published: Jan. 9, 2025

Forming an episodic memory requires binding together the disparate elements that co-occur in a single experience. One model of this process is that neurons representing different components of a memory bind to an “index”, a subset of neurons unique to that memory. Evidence for this model has recently been found in chickadees, which use hippocampal memory to store and recall the locations of cached food. Chickadee hippocampus produces sparse, high-dimensional patterns (“barcodes”) that uniquely specify each caching event. Unexpectedly, the same neurons that participate in barcodes also exhibit conventional place tuning. It is unknown how barcode activity is generated and what role it plays in memory formation and retrieval. It is also unclear how a memory index (e.g. barcodes) could function in the same neural population that represents memory content (e.g. place). Here, we design a biologically plausible model that generates barcodes and uses them to bind experiential content. Our model generates barcodes from experiential inputs through chaotic dynamics in a recurrent network and uses Hebbian plasticity to store barcodes as attractor states. The model matches experimental observations that memory indices (barcodes) and content signals (place tuning) are randomly intermixed in the activity of single neurons. We demonstrate that barcodes reduce interference between correlated experiences. We also show that place tuning plays a role complementary to barcodes, enabling flexible, contextually appropriate memory retrieval. Finally, our model is compatible with previous models of the hippocampus as generating a predictive map. Distinct predictive and indexing functions of the network are achieved via adjustment of global gain. Our results suggest how the hippocampus may resolve fundamental tensions between memory specificity (pattern separation) and flexible recall (pattern completion) in general memory systems.
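The Hebbian storage of barcodes as attractor states can be illustrated with a classic Hopfield-style network: outer-product learning writes each pattern into the recurrent weights, and iterating the dynamics from a corrupted cue falls back into the stored attractor. A minimal sketch (dense random +/-1 patterns stand in for the paper's sparse barcodes, and the sizes and update rule are illustrative, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(2)
N, P = 200, 5                                  # neurons, stored "barcode" patterns
patterns = rng.choice([-1, 1], size=(P, N)).astype(float)

# Hebbian outer-product storage (classic Hopfield rule), no self-connections.
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0.0)

def recall(cue, steps=10):
    """Iterate the recurrent dynamics from a noisy cue toward an attractor."""
    s = cue.copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1.0                        # break ties deterministically
    return s

# Corrupt 10% of one barcode and recover it via attractor dynamics.
cue = patterns[0].copy()
flip = rng.choice(N, size=N // 10, replace=False)
cue[flip] *= -1
recovered = recall(cue)
overlap = (recovered @ patterns[0]) / N        # 1.0 means perfect recovery
```

In the paper, the same recurrent network additionally generates the barcodes through chaotic dynamics and mixes them with place-tuned content; this sketch shows only the storage-and-completion step.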

Language: English

Citations

0

Neural learning rules from associative networks theory
Daniele Lotito

Neurocomputing, Journal Year: 2025, Volume and Issue: unknown, P. 129865 - 129865

Published: March 1, 2025

Language: English

Citations

0