A Constraint Enforcement Deep Reinforcement Learning Framework for Optimal Energy Storage Systems Dispatch DOI
Shengren Hou, Edgar Mauricio Salazar Duque, Peter Pálenský

et al.

Published: Jan. 1, 2023

The optimal dispatch of energy storage systems (ESSs) presents formidable challenges due to the uncertainty introduced by fluctuations in dynamic prices, demand consumption, and renewable-based generation. By exploiting the generalization capabilities of deep neural networks (DNNs), deep reinforcement learning (DRL) algorithms can learn good-quality control models that adaptively respond to distribution networks' stochastic nature. However, current standard DRL algorithms are limited in constraint satisfaction and are unable to guarantee feasible actions. To address this issue, we propose a framework that effectively handles continuous action spaces while strictly enforcing the environment's and action space's operational constraints during online operation. Firstly, the proposed framework trains an action-value function modeled using DNNs. Subsequently, this action-value function is formulated as a mixed-integer programming (MIP) formulation, enabling consideration of the environment's constraints. Comprehensive numerical simulations show the superior performance of the MIP-DRL framework, which delivers high-quality decisions satisfying all constraints when compared with the state-of-the-art solution obtained with a perfect forecast of the stochastic variables.
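The core mechanism described above, turning a trained action-value network into a constrained optimization problem, can be illustrated with a small sketch. The snippet below is a minimal, hypothetical example rather than the authors' implementation: a toy Q-network with one ReLU hidden layer and random placeholder weights is encoded as a mixed-integer program using PuLP's big-M formulation, and the greedy action is chosen while a state-of-charge constraint is enforced exactly. All weights, bounds, and constraint values are illustrative assumptions.

```python
import numpy as np
import pulp

# Hypothetical trained Q-network Q(s, a): inputs = [state features, action],
# one hidden ReLU layer. Random weights stand in for a network learned by the DRL step.
rng = np.random.default_rng(0)
n_state, n_hidden = 3, 8
W1 = rng.normal(size=(n_hidden, n_state + 1))   # last input column is the action
b1 = rng.normal(size=n_hidden)
W2 = rng.normal(size=n_hidden)
b2 = 0.0

state = np.array([0.45, 30.0, 12.0])   # e.g. [SoC, price, net demand] (illustrative)
soc, p_max, soc_min, soc_max, eta, dt = 0.45, 1.0, 0.1, 0.9, 0.95, 0.25
M = 100.0                              # big-M constant (assumed loose activation bound)

prob = pulp.LpProblem("q_value_mip", pulp.LpMaximize)

# Decision variable: ESS charge/discharge power, bounded by the rated power
a = pulp.LpVariable("action", lowBound=-p_max, upBound=p_max)

# Encode each hidden ReLU unit h = max(z, 0) with a binary indicator (big-M)
h_vars = []
for i in range(n_hidden):
    z_i = (pulp.lpSum(float(W1[i, j]) * float(state[j]) for j in range(n_state))
           + float(W1[i, -1]) * a + float(b1[i]))
    h_i = pulp.LpVariable(f"h_{i}", lowBound=0)
    d_i = pulp.LpVariable(f"d_{i}", cat=pulp.LpBinary)
    prob += h_i >= z_i
    prob += h_i <= z_i + M * (1 - d_i)
    prob += h_i <= M * d_i
    h_vars.append(h_i)

# Objective: maximize the predicted Q-value over feasible actions
prob += pulp.lpSum(float(W2[i]) * h_vars[i] for i in range(n_hidden)) + b2

# Hard operational constraint: next-step state of charge stays within physical limits
prob += soc + eta * a * dt >= soc_min
prob += soc + eta * a * dt <= soc_max

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("constrained greedy action:", pulp.value(a))
```

The big-M encoding is a standard way to express ReLU activations with binary indicators; in a full dispatch problem the bound M would be tightened per neuron and the complete set of network operational constraints would be added alongside the SoC limits.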

Language: English

Safe reinforcement learning based optimal low-carbon scheduling strategy for multi-energy system DOI
Jiang Fu, Jie Chen, Jieqi Rong

et al.

Sustainable Energy, Grids and Networks, Journal Year: 2024, Volume and Issue: 39, P. 101454 - 101454

Published: June 20, 2024

Language: English

Citations: 1

Why Reinforcement Learning in Energy Systems Needs Explanations DOI
Hallah Shahid Butt, Benjamin Schäfer

Published: April 20, 2024

Language: English

Citations: 0

Optimal Energy Scheduling of a Microgrid based on Offline-to-Online Deep Reinforcement Learning DOI

Aihui Yang, Zhiyao Lin, Ke Lin

et al.

Published: June 21, 2024

Language: English

Citations: 0

Integrating machine learning and operations research methods for scheduling problems: a bibliometric analysis and literature review DOI Open Access

Ayoub Ouhadi, Zakaria Yahouni, Maria Di Mascolo

et al.

IFAC-PapersOnLine, Journal Year: 2024, Volume and Issue: 58(19), P. 946 - 951

Published: Jan. 1, 2024

Language: English

Citations: 0

Multi-objective two-stage robust optimization of wind/PV/thermal power system based on meta multi-agent reinforcement learning DOI Creative Commons
Dengao Li, Zhuokai Zhang, Ding Feng

et al.

International Journal of Electrical Power & Energy Systems, Journal Year: 2024, Volume and Issue: 162, P. 110273 - 110273

Published: Oct. 1, 2024

Language: English

Citations: 0

Research progress and prospects of machine learning applications in renewable energy: a comprehensive bibliometric-based review DOI
Xuping Wang, Yong Shen, Chang Su

et al.

International Journal of Environmental Science and Technology, Journal Year: 2024, Volume and Issue: unknown

Published: Nov. 19, 2024

Language: English

Citations: 0

Optimization-Based Grid Energy Management System with BESS and EV Charging Load for Peak Shaving DOI
Shreyas Kulkarni, Sheetal Kumar K R, Akhash Vellandurai

et al.

2021 IEEE International Conference on Big Data (Big Data), Journal Year: 2024, Volume and Issue: unknown, P. 5176 - 5183

Published: Dec. 15, 2024

Language: English

Citations: 0

Optimizing intelligent startup strategy of power system using PPO algorithm DOI
Yan Sun, Yin Chao Wu, Yan Wu

et al.

Intelligent Decision Technologies, Journal Year: 2024, Volume and Issue: 18(4), P. 3091 - 3104

Published: Nov. 1, 2024

This article aimed to use the proximal policy optimization (PPO) algorithm to address the limitations of power system startup strategies, enhance their adaptability, coping ability, and overall robustness under variable grid demand and integrated renewable energy, and optimize the constraints in the start-up strategy. Firstly, this study constructed a dynamic model of the power system, including key components such as generators, transformers, and transmission lines; secondly, it designed PPO interfaces that allow the algorithm to interact with the model; afterward, the state variables were determined, and a reward function was designed to evaluate the efficiency and stability of the system. Next, the policy was adjusted, trained, and iterated multiple times in a simulation environment to guide it towards learning the optimal strategy. Finally, an effective evaluation can be conducted. The research results showed that, after optimization by the algorithm, reaching a stable frequency only took about 23 seconds, and the recovery time was reduced by 33.3% under a sudden load increase. The PPO algorithm can thus be used to significantly optimize intelligent power system startup strategies.
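As a rough illustration of the workflow described in this abstract (environment model, PPO interface, reward design, iterative training in simulation), the sketch below uses a toy gymnasium environment and the off-the-shelf PPO implementation from stable-baselines3. The dynamics, reward shaping, and constants are hypothetical stand-ins, not the article's actual power system startup model.

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import PPO

class StartupEnv(gym.Env):
    """Toy startup environment: drive grid frequency deviation back to zero while load is picked up."""

    def __init__(self):
        super().__init__()
        # Observation: [frequency deviation (Hz), connected load fraction]
        self.observation_space = spaces.Box(low=-5.0, high=5.0, shape=(2,), dtype=np.float32)
        # Action: normalized generator power adjustment in [-1, 1]
        self.action_space = spaces.Box(low=-1.0, high=1.0, shape=(1,), dtype=np.float32)

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self.freq_dev = float(self.np_random.uniform(-2.0, -0.5))  # start below nominal frequency
        self.load = 0.0
        self.t = 0
        return self._obs(), {}

    def step(self, action):
        # Simplified dynamics: power injection reduces the deviation,
        # while the load being picked up pulls the frequency down again.
        self.load = min(1.0, self.load + 0.02)
        self.freq_dev += 0.1 * float(action[0]) - 0.03 * self.load
        self.freq_dev = float(np.clip(self.freq_dev, -5.0, 5.0))
        self.t += 1
        # Reward: penalize frequency deviation and large control effort
        reward = -abs(self.freq_dev) - 0.01 * float(action[0]) ** 2
        terminated = bool(abs(self.freq_dev) < 0.05 and self.load >= 1.0)
        truncated = self.t >= 200
        return self._obs(), reward, terminated, truncated, {}

    def _obs(self):
        return np.array([self.freq_dev, self.load], dtype=np.float32)

env = StartupEnv()
model = PPO("MlpPolicy", env, verbose=0)
model.learn(total_timesteps=10_000)  # iterate policy updates in the simulation environment
```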

Language: English

Citations: 0

Real-Time Power Optimal Schedule Method for Energy Internet Based on LSTM Encoding DOI

Jiaxin Wang, Siya Xu, Xuesong Qiu

et al.

2022 IEEE International Symposium on Broadband Multimedia Systems and Broadcasting (BMSB), Journal Year: 2024, Volume and Issue: unknown, P. 1 - 6

Published: June 19, 2024

Language: English

Citations: 0

MicroPPO: Safe Power Flow Management in Decentralized Micro-Grids with Proximal Policy Optimization DOI

Daniel Ebi, Edouard Fouché, Marco Heyden

et al.

2022 IEEE 9th International Conference on Data Science and Advanced Analytics (DSAA), Journal Year: 2024, Volume and Issue: unknown, P. 1 - 10

Published: Oct. 6, 2024

Language: English

Citations: 0