Combining meta reinforcement learning with neural plasticity mechanisms for improved AI performance
Liu Liu, Zhifei Xu

PLoS ONE, Journal Year: 2025, Volume and Issue: 20(5), P. e0320777 - e0320777

Published: May 15, 2025

This research explores the potential of combining Meta Reinforcement Learning (MRL) with Spike-Timing-Dependent Plasticity (STDP) to enhance the performance and adaptability of AI agents in Atari game settings. Our methodology leverages MRL to swiftly adjust agent strategies across a range of games, while STDP fine-tunes synaptic weights based on neuronal spike timings, which in turn improves learning efficiency and decision-making under changing conditions. A series of experiments was conducted on standard games to compare the hybrid MRL-STDP model against baseline models using traditional reinforcement learning techniques such as Q-learning and Deep Q-Networks. Various metrics, including learning speed, adaptability, and cross-game generalization, were evaluated. The results show that the approach significantly accelerates the agents' ability to reach competitive performance levels, with a 40% boost and a 35% increase over conventional models on the evaluated metrics.

Language: English
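
The abstract's key mechanism, STDP adjusting synaptic weights from the relative timing of pre- and post-synaptic spikes, can be illustrated with a short sketch. The Python snippet below is a minimal, generic pair-based STDP rule, not the authors' implementation; the parameter names and values (a_plus, a_minus, tau_plus, tau_minus) and the weight-clipping range are illustrative assumptions.

import numpy as np

def stdp_update(w, t_pre, t_post,
                a_plus=0.01, a_minus=0.012,
                tau_plus=20.0, tau_minus=20.0,
                w_min=0.0, w_max=1.0):
    """Pair-based STDP: adjust one synaptic weight from a pre/post spike pair.

    A pre-synaptic spike followed by a post-synaptic spike (dt > 0) potentiates
    the synapse; the reverse order depresses it, with magnitude decaying
    exponentially in the timing difference.
    """
    dt = t_post - t_pre                         # spike-timing difference (ms)
    if dt > 0:
        dw = a_plus * np.exp(-dt / tau_plus)    # potentiation
    else:
        dw = -a_minus * np.exp(dt / tau_minus)  # depression
    return float(np.clip(w + dw, w_min, w_max))

# Example: a pre-spike at 10 ms followed by a post-spike at 15 ms strengthens the weight.
print(stdp_update(0.5, t_pre=10.0, t_post=15.0))  # ~0.5078

In the hybrid setup described in the abstract, such a local timing-based rule would run alongside the meta-learned policy updates rather than replace them; the exact coupling between the two is not specified here.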

A novel diffusion model with Shapley value analysis for anomaly detection and identification of wind turbine
Qingtao Yao, Bohua Chen, Aijun Hu, et al.

Expert Systems with Applications, Journal Year: 2025, Volume and Issue: 284, P. 127925 - 127925

Published: April 28, 2025

Language: English

Citations: 0
