Combining meta reinforcement learning with neural plasticity mechanisms for improved AI performance
Liu Liu,

Zhifei Xu

PLoS ONE, Journal year: 2025, Volume: 20(5), pp. e0320777 - e0320777

Published: May 15, 2025

This research explores the potential of combining Meta Reinforcement Learning (MRL) with Spike-Timing-Dependent Plasticity (STDP) to enhance the performance and adaptability of AI agents in Atari game settings. Our methodology leverages MRL to swiftly adjust agent strategies across a range of games, while STDP fine-tunes synaptic weights based on neuronal spike timings, which in turn improves learning efficiency and decision-making under changing conditions. A series of experiments was conducted on standard games to compare the hybrid MRL-STDP model against baseline models using traditional reinforcement learning techniques such as Q-learning and Deep Q-Networks. Various metrics, including learning speed, adaptability, and cross-game generalization, were evaluated. The results show that the approach significantly accelerates the agent’s ability to reach competitive performance levels, with a 40% boost in learning speed and a 35% increase in adaptability over conventional models.
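The abstract describes STDP adjusting synaptic weights from the relative timing of pre- and postsynaptic spikes. A minimal sketch of a pair-based STDP update rule is shown below; the function name, time constants, and learning-rate values are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012,
                tau_plus=20.0, tau_minus=20.0, w_min=0.0, w_max=1.0):
    """Pair-based STDP: potentiate the synapse when the presynaptic
    spike precedes the postsynaptic spike, depress it otherwise."""
    dt = t_post - t_pre  # spike-timing difference in ms
    if dt > 0:           # pre before post -> long-term potentiation
        dw = a_plus * np.exp(-dt / tau_plus)
    else:                # post before (or with) pre -> long-term depression
        dw = -a_minus * np.exp(dt / tau_minus)
    return float(np.clip(w + dw, w_min, w_max))
```

For example, `stdp_update(0.5, t_pre=10.0, t_post=15.0)` strengthens the weight because the presynaptic spike arrives 5 ms before the postsynaptic one, while reversing the spike order weakens it; the clip keeps weights in a bounded range, as is typical in such rules.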

Language: English

Cited

0

A novel diffusion model with Shapley value analysis for anomaly detection and identification of wind turbine
Qingtao Yao,

Bohua Chen,

Aijun Hu

et al.

Expert Systems with Applications, Journal year: 2025, Volume: 284, pp. 127925 - 127925

Published: April 28, 2025

Language: English

Cited

0
