Trends in Cognitive Sciences, Journal Year: 2021, Volume and Issue: 26(1), P. 81 - 96
Published: Nov. 16, 2021
Language: English
Nature Reviews Neuroscience, Journal Year: 2020, Volume and Issue: 22(1), P. 55 - 67
Published: Nov. 16, 2020
Language: English
Citations: 343
Frontiers in Computational Neuroscience, Journal Year: 2020, Volume and Issue: 14
Published: April 16, 2020
Attention is the important ability to flexibly control limited computational resources. It has been studied in conjunction with many other topics in neuroscience and psychology, including awareness, vigilance, saliency, executive control, and learning. It has also recently been applied in several domains in machine learning. The relationship between the study of biological attention and its use as a tool to enhance artificial neural networks is not always clear. This review starts by providing an overview of how attention is conceptualized in the neuroscience and psychology literature. It then covers several use cases of attention in machine learning, indicating their biological counterparts where they exist. Finally, ways in which artificial attention can be further inspired by biology for the production of complex and integrative systems are explored.
Language: English
Citations: 246
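For readers coming from the machine-learning side, a minimal NumPy sketch of scaled dot-product attention, the mechanism most widely used in artificial networks, may help anchor the review's terminology. This is an illustrative implementation under that assumption, not code from the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Weight the values V by the similarity of queries Q to keys K.

    Q: (n_queries, d), K: (n_keys, d), V: (n_keys, d_v).
    """
    scores = Q @ K.T / np.sqrt(Q.shape[-1])  # (n_queries, n_keys)
    weights = softmax(scores, axis=-1)       # each row sums to 1: a limited "resource"
    return weights @ V                       # (n_queries, d_v)

# Toy usage: 3 queries attend over 5 key/value pairs.
rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(3, 8)), rng.normal(size=(5, 8)), rng.normal(size=(5, 4))
print(scaled_dot_product_attention(Q, K, V).shape)  # (3, 4)
```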
Nature Reviews Neuroscience, Journal Year: 2023, Volume and Issue: 24(7), P. 431 - 450
Published: May 30, 2023
Language: English
Citations: 133
Nature Communications, Journal Year: 2021, Volume and Issue: 12(1)
Published: Nov. 9, 2021
Abstract In order to better understand how the brain perceives faces, it is important to know what objective drives learning in the ventral visual stream. To answer this question, we model neural responses to faces in the macaque inferotemporal (IT) cortex with a deep self-supervised generative model, β-VAE, which disentangles sensory data into interpretable latent factors, such as gender or age. Our results demonstrate a strong correspondence between the factors discovered by β-VAE and those coded by single IT neurons, beyond that found for the baselines, including the handcrafted state-of-the-art model of face perception, the Active Appearance Model, and deep classifiers. Moreover, β-VAE was able to reconstruct novel face images using signals from just a handful of cells. Together our results imply that optimising the disentangling objective leads to representations that closely resemble those in IT at the single-unit level. This points at disentangling as a plausible learning objective for the visual brain.
Language: English
Citations: 115
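As a rough companion to the abstract above, here is a minimal PyTorch sketch of the β-VAE training objective: the standard VAE loss with the KL term scaled by β. The encoder/decoder architecture and the β used in the study are not reproduced; β=4.0 and all shapes below are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def beta_vae_loss(x, x_recon, mu, log_var, beta=4.0):
    """beta-VAE objective: reconstruction error plus a KL term scaled by beta.

    Setting beta > 1 pushes the Gaussian posterior toward the isotropic
    prior, which is what encourages disentangled latent factors.
    """
    recon = F.mse_loss(x_recon, x, reduction="sum")
    # KL( N(mu, sigma^2) || N(0, I) ), summed over latent dimensions.
    kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())
    return recon + beta * kl

# Toy usage: a perfect reconstruction with a posterior equal to the prior.
x = torch.rand(8, 3, 64, 64)                     # hypothetical image batch
mu, log_var = torch.zeros(8, 10), torch.zeros(8, 10)
print(beta_vae_loss(x, x.clone(), mu, log_var))  # tensor(0.)
```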
Circulation, Journal Year: 2021, Volume and Issue: 143(13), P. 1274 - 1286
Published: Feb. 1, 2021
Heart rate-corrected QT interval (QTc) prolongation, whether secondary to drugs, genetics including congenital long QT syndrome, and/or systemic diseases such as SARS-CoV-2-mediated coronavirus disease 2019 (COVID-19), can predispose to ventricular arrhythmias and sudden cardiac death. Currently, QTc assessment and monitoring relies largely on 12-lead electrocardiography. As such, we sought to train and validate an artificial intelligence (AI)-enabled 12-lead ECG algorithm to determine the QTc, and then prospectively test this algorithm on tracings acquired from a mobile ECG (mECG) device in a population enriched for repolarization abnormalities. Using >1.6 million ECGs from 538,200 patients, a deep neural network (DNN) was derived (patients for training, n = 250,767; patients for testing, n = 107,920) and validated (n = 179,513 patients) to predict the QTc using cardiologist-overread QTc values as the "gold standard". The ability of the DNN to detect clinically relevant QTc prolongation (eg, QTc ≥500 ms) was tested prospectively in 686 patients with genetic heart disease (50% with long QT syndrome), with QTc values obtained from both a prototype mECG device equivalent to the commercially available AliveCor KardiaMobile 6L and a standard 12-lead ECG. In the validation sample, strong agreement was observed between human over-read and DNN-predicted QTc values (-1.76±23.14 ms). Similarly, within the prospective, disease-enriched dataset, the difference between DNN-predicted QTc values and those annotated by a QT expert (-0.45±24.73 ms) or a commercial core laboratory (10.52±25.64 ms) was nominal. When applied to mECG tracings, the DNN's ability to detect a QTc value ≥500 ms yielded an area under the curve, sensitivity, and specificity of 0.97, 80.0%, and 94.4%, respectively. Using smartphone-enabled electrodes, an AI-enabled DNN can accurately predict the QTc of a standard 12-lead ECG. QTc estimation by an AI-enabled mobile ECG device may provide a cost-effective means of screening for QT prolongation in long QT syndrome and in a variety of clinical settings where 12-lead electrocardiography is not accessible or cost-effective.
Language: English
Citations: 106
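The study's DNN estimates the QTc directly from ECG voltage traces; for orientation only, the sketch below shows the Bazett formula, one widely used rate correction of the raw QT interval. The paper's gold standard is the cardiologist-overread QTc, and which correction formula underlies it is not stated here, so treat this as background rather than the study's method.

```python
import math

def qtc_bazett(qt_ms: float, rr_s: float) -> float:
    """Heart rate-corrected QT interval (Bazett): QTc = QT / sqrt(RR).

    qt_ms: measured QT interval in milliseconds.
    rr_s: RR interval in seconds (60 / heart rate in bpm).
    """
    return qt_ms / math.sqrt(rr_s)

# Example: QT = 440 ms at 75 bpm (RR = 0.8 s) gives QTc of about 492 ms,
# below the clinically relevant >=500 ms threshold used in the study.
print(round(qtc_bazett(440, 60 / 75)))  # 492
```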
Neural Computation, Journal Year: 2022, Volume and Issue: 34(6), P. 1329 - 1368
Published: May 9, 2022
Abstract Backpropagation of error (backprop) is a powerful algorithm for training machine learning architectures through end-to-end differentiation. Recently it has been shown that backprop in multilayer perceptrons (MLPs) can be approximated using predictive coding, a biologically plausible process theory of cortical computation that relies solely on local and Hebbian updates. The power of backprop, however, lies not in its instantiation in MLPs but in the concept of automatic differentiation, which allows for the optimization of any differentiable program expressed as a computation graph. Here, we demonstrate that predictive coding converges asymptotically (and in practice, rapidly) to exact backprop gradients on arbitrary computation graphs using only local learning rules. We apply this result to develop a straightforward strategy to translate core machine learning architectures into their predictive coding equivalents. We construct predictive coding convolutional neural networks, recurrent neural networks, and the more complex long short-term memory, which include a nonlayer-like branching internal graph structure and multiplicative interactions. Our models perform equivalently to backprop on challenging machine learning benchmarks while using only local and (mostly) Hebbian plasticity. Our method raises the potential that standard machine learning algorithms could in principle be directly implemented in neural circuitry and may also contribute to the development of completely distributed neuromorphic architectures.
Language: English
Citations: 70
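The following toy NumPy sketch conveys the flavor of predictive coding training for a single pair of weight layers: activities are relaxed using purely local error signals, and weights are then updated from local products of errors and activities. It follows the generic scheme from the predictive coding literature rather than the paper's codebase, and all sizes, rates, and the single training pair are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
f, df = np.tanh, lambda z: 1.0 - np.tanh(z) ** 2

# Toy two-weight-layer network x0 -> x1 -> x2, trained by predictive coding.
W1 = rng.normal(scale=0.1, size=(16, 4))
W2 = rng.normal(scale=0.1, size=(2, 16))
x0, y = rng.normal(size=4), np.array([1.0, -1.0])  # one fixed training pair

for step in range(200):
    x1 = W1 @ f(x0)          # start inference from the feedforward pass
    x2 = y.copy()            # clamp the output layer to the target
    for _ in range(50):      # relax hidden activity with local updates only
        e1 = x1 - W1 @ f(x0)             # error between x1 and its prediction
        e2 = x2 - W2 @ f(x1)             # error at the clamped output
        x1 += 0.1 * (-e1 + df(x1) * (W2.T @ e2))
    # Hebbian-style weight updates: products of local errors and activities.
    W1 += 0.01 * np.outer(e1, f(x0))
    W2 += 0.01 * np.outer(e2, f(x1))

print(f"output error after training: {np.sum(e2**2):.5f}")
```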
Neuron, Journal Year: 2023, Volume and Issue: 111(9), P. 1504 - 1516.e9
Published: March 9, 2023
Human understanding of the world can change rapidly when new information comes to light, such as when a plot twist occurs in a work of fiction. This flexible "knowledge assembly" requires few-shot reorganization of neural codes for relations among objects and events. However, existing computational theories are largely silent about how this could occur. Here, participants learned the transitive ordering of novel objects within two distinct contexts before exposure to new knowledge that revealed how they were linked. Blood-oxygen-level-dependent (BOLD) signals in dorsal frontoparietal cortical areas showed that neural codes were dramatically rearranged on the neural manifold after minimal exposure to linking information. We then adapt online stochastic gradient descent to permit similar rapid knowledge assembly in a neural network model.
Language: English
Citations: 43
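Since the modeling work rests on an adapted form of online stochastic gradient descent, a bare-bones sketch of plain online SGD (one example arrives, one immediate update) is given below for reference. The paper's specific adaptation is not reproduced; the linear model and all constants are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Online SGD on a linear model: examples arrive one at a time and each
# triggers an immediate parameter update, with no stored batch.
w = np.zeros(3)
w_true = np.array([2.0, -1.0, 0.5])  # hypothetical target weights

for step in range(1000):
    x = rng.normal(size=3)           # a single new observation
    y = w_true @ x                   # its (noise-free) label
    grad = (w @ x - y) * x           # gradient of 0.5 * squared error
    w -= 0.05 * grad                 # update before the next example arrives

print(np.round(w, 2))                # ~ [ 2.  -1.   0.5]
```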
Proceedings of the National Academy of Sciences, Journal Year: 2020, Volume and Issue: 117(43), P. 26562 - 26571
Published: Oct. 13, 2020
Does the human mind resemble the machines that can behave like it? Biologically inspired machine-learning systems approach "human-level" accuracy in an astounding variety of domains, and even predict human brain activity, raising the exciting possibility that such systems represent the world as we do. However, even seemingly intelligent machines fail in strange, "unhumanlike" ways, threatening their status as models of our minds. How can we know when human–machine behavioral differences reflect deep disparities in their underlying capacities, vs. when such failures are only superficial or peripheral? This article draws on a foundational insight from cognitive science, the distinction between performance and competence, to encourage "species-fair" comparisons between humans and machines. The performance/competence distinction urges us to consider whether the failure of a system to behave as ideally hypothesized, or the failure of one creature to behave like another, arises not because that system lacks the relevant knowledge or internal capacities ("competence") but instead because of constraints on demonstrating that knowledge ("performance"). I argue that this distinction has been neglected by research comparing human and machine behavior, and that it should be essential to any such comparison. Focusing on the domain of image classification, I identify three factors contributing to the species-fairness of human–machine comparisons, extracted from recent work that equates such constraints. Species-fair comparisons level the playing field between natural and artificial intelligence, so that we can separate the more superficial differences from those that may be more enduring.
Language: English
Citations: 125
Journal of Cognitive Neuroscience, Journal Year: 2021, Volume and Issue: unknown, P. 1 - 21
Published: July 17, 2021
Deep neural networks (DNNs) trained on object recognition provide the best current models of high-level visual cortex. What remains unclear is how strongly experimental choices, such as network architecture, training, and fitting to brain data, contribute to the observed similarities. Here, we compare a diverse set of nine DNN architectures on their ability to explain the representational geometry of 62 object images in human inferior temporal cortex (hIT), as measured with fMRI. We compare untrained networks to their task-trained counterparts and assess the effect of cross-validated fitting to hIT, by taking a weighted combination of the principal components of features within each layer and, subsequently, a weighted combination of layers. For each combination of training and fitting, we test all models for their correlation with the hIT representational dissimilarity matrix, using independent images and subjects. Trained models outperform untrained models (accounting for 57% more explainable variance), suggesting that structured visual features are important for explaining hIT. Model fitting further improves the alignment of DNN and hIT representations (by 124%), suggesting that the relative prevalence of different features in hIT does not readily emerge from the Imagenet object-recognition task used to train the networks. The same models can also explain the disparate representations in primary visual cortex (V1), where stronger weights are given to earlier layers. In each region, all models achieved equivalently high performance once fitted. The models' shared properties (deep feedforward hierarchies of spatially restricted nonlinear filters) seem more important than their differences when modeling human visual representations.
Language: English
Citations: 79
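A stripped-down sketch of the core comparison in studies like this one, correlating a model layer's representational dissimilarity matrix (RDM) with a brain RDM, is shown below on synthetic data. The paper's cross-validated weighting of principal components and layers is omitted, and all array shapes and the simulated "hIT" data are illustrative assumptions.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(features):
    """Representational dissimilarity matrix, as a condensed vector.

    features: (n_images, n_units) activation matrix for one representation.
    Returns the upper triangle of the pairwise dissimilarity matrix,
    using correlation distance (1 - Pearson r) between image patterns.
    """
    return pdist(features, metric="correlation")

# Toy comparison: how well does a model layer's geometry match "hIT"?
rng = np.random.default_rng(0)
model_feats = rng.normal(size=(62, 512))  # e.g. 62 images x 512 model units
hit_feats = model_feats[:, :100] + rng.normal(scale=2.0, size=(62, 100))

rho, _ = spearmanr(rdm(model_feats), rdm(hit_feats))
print(f"model-to-brain RDM correlation: {rho:.2f}")
```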
Nature Human Behaviour, Journal Year: 2021, Volume and Issue: 5(10), P. 1402 - 1417
Published: May 6, 2021
Abstract Reflectance, lighting and geometry combine in complex ways to create images. How do we disentangle these to perceive individual properties, such as surface glossiness? We suggest that brains disentangle properties by learning to model statistical structure in proximal images. To test this hypothesis, we trained unsupervised generative neural networks on renderings of glossy surfaces and compared their representations with human gloss judgements. The networks spontaneously cluster images according to distal properties such as reflectance and illumination, despite receiving no explicit information about these properties. Intriguingly, the resulting representations also predict the specific patterns of 'successes' and 'errors' in human perception. Linearly decoding specular reflectance from the model's internal code predicts human gloss perception better than ground truth, supervised networks or control models, and it predicts, on an image-by-image basis, illusions of gloss perception caused by interactions between material, shape and lighting. Unsupervised learning may underlie many perceptual dimensions in vision and beyond.
Language: English
Citations: 60
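As a schematic of the linear read-out step described in the abstract above, the sketch below decodes a scalar property from a model's latent codes with ordinary linear regression on synthetic data. The unsupervised generative model itself and the comparison with human judgements are not reproduced; the latent dimensionality, noise level, and train/test split are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
latents = rng.normal(size=(500, 10))  # hypothetical latent code per image
# A scene property (e.g. specular reflectance) entangled in the code, plus noise.
gloss = latents @ rng.normal(size=10) + rng.normal(scale=0.5, size=500)

# Fit a linear read-out on 400 images, evaluate on the held-out 100.
reader = LinearRegression().fit(latents[:400], gloss[:400])
pred = reader.predict(latents[400:])
r = np.corrcoef(pred, gloss[400:])[0, 1]
print(f"held-out decoding correlation: {r:.2f}")
```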