Prognostic and power scheduling technique for EV using optimized adaptive deep belief network DOI

M. Hemalatha,

R. Rengaraj

Journal of Energy Storage, Year: 2024, Volume 108, pp. 114943-114943

Published: Dec. 20, 2024

Language: English

Enhanced Electric Vehicle charging strategy through Graph Convolutional Networks integrated with deep reinforcement learning DOI
Shilpa Ghode, Mayuri Digalwar

International Journal of Information Technology, Year: 2025, Volume unknown

Published: Feb. 6, 2025

Language: English

Cited by

1

Research on optimization of intelligent control systems based on deep reinforcement learning DOI Creative Commons
Tao Huang

Published: Jan. 15, 2025

This study constructs an intelligent control system based on a Deep Q-Network (DQN) to improve control efficiency in complex dynamic environments. By employing methods such as input-state preprocessing, action design, and reward-function optimization, the system achieves rapid convergence and high-precision control. Results from multiple simulation experiments show that the proposed DQN model outperforms the traditional Q-learning algorithm in terms of average cumulative reward, control accuracy, and energy consumption, demonstrating significant performance advantages. The study indicates that the system possesses good adaptability in practical applications, providing important groundwork for future research.
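The paper's implementation is not reproduced in this listing. As a minimal sketch of the ingredients the abstract names (input-state preprocessing, discrete action design, a shaped reward, epsilon-greedy exploration, and experience replay), the code below trains a linear Q-function, standing in for the deep network, on a toy two-state task; the environment, reward, and all hyperparameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

N_STATES, N_ACTIONS = 2, 2

def preprocess(state):
    """Input-state preprocessing: one-hot encode the discrete state."""
    x = np.zeros(N_STATES)
    x[state] = 1.0
    return x

def reward(state, action):
    """Toy reward design: +1 when the action matches the state, else 0."""
    return 1.0 if action == state else 0.0

# Linear Q-function (stand-in for the deep network): one weight row per action.
W = np.zeros((N_ACTIONS, N_STATES))
replay = []                        # experience replay buffer of (s, a, r) tuples
eps, lr = 0.2, 0.1                 # epsilon-greedy exploration rate, learning rate
# Episodes here are a single step, so the TD target is just the immediate
# reward; a separate target network would be a no-op and is omitted.

for _ in range(2000):
    s = int(rng.integers(N_STATES))
    x = preprocess(s)
    a = int(rng.integers(N_ACTIONS)) if rng.random() < eps else int(np.argmax(W @ x))
    replay.append((s, a, reward(s, a)))
    # Train on a random minibatch sampled from replay.
    for i in rng.integers(len(replay), size=8):
        bs, ba, br = replay[i]
        bx = preprocess(bs)
        td_error = br - (W @ bx)[ba]   # one-step terminal target = reward
        W[ba] += lr * td_error * bx

greedy = [int(np.argmax(W @ preprocess(s))) for s in range(N_STATES)]
```

After training, the greedy policy picks the matching action in each state, which is the optimum for this toy reward.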

Language: English

Cited by

0

Stable energy management for highway electric vehicle charging based on reinforcement learning DOI

Hongbin Xie,

Song Ge,

Zhuoran Shi

et al.

Applied Energy, Year: 2025, Volume 389, pp. 125541-125541

Published: March 19, 2025

Language: English

Cited by

0

Prefrontal meta-control incorporating mental simulation enhances the adaptivity of reinforcement learning agents in dynamic environments DOI Creative Commons
J.-H. Kim, Jee Hang Lee

Frontiers in Computational Neuroscience, Year: 2025, Volume 19

Published: March 27, 2025

Introduction: Recent advances in computational neuroscience highlight the significance of prefrontal cortical meta-control mechanisms in facilitating flexible and adaptive human behavior. In addition, hippocampal function, particularly mental simulation capacity, proves essential to this process. Rooted in these neuroscientific insights, we present Meta-Dyna, a novel neuroscience-inspired reinforcement learning architecture that demonstrates rapid adaptation to environmental dynamics whilst managing variable goal states and state-transition uncertainties. Methods: This architectural framework implements model-based simulation integrated with experience replay, which in turn optimizes task performance from limited experiences. We evaluated this approach through comprehensive experimental simulations across three distinct paradigms: a two-stage Markov decision task, which frequently serves in decision-making research; the stochastic GridWorldLoCA, an established benchmark suite for model-based learning; and an Atari Pong variant incorporating multiple goals under uncertainty. Results: Experimental results demonstrate Meta-Dyna's superior performance compared with baseline algorithms across several metrics: average reward, choice optimality, and number of trials to success. Discussion: These findings advance our understanding of prefrontal meta-control, contributing to the development of brain-inspired reinforcement learning agents capable of flexible, goal-directed behavior within dynamic environments.

Language: English

Cited by

0

Framework design and empirical analysis of an intelligent scheduling system for high-altitude photovoltaic power generation based on hybrid optimization of the Coati Optimization Algorithm and the Black-winged Kite Algorithm (COA-BKA) DOI Creative Commons

Heng Hu,

Xiaoming Xiong, Shuang Wang

и другие.

Deleted Journal, Year: 2025, Volume 28(1)

Published: April 25, 2025

Language: English

Cited by

0
