A Constraint Enforcement Deep Reinforcement Learning Framework for Optimal Energy Storage Systems Dispatch
Shengren Hou, Edgar Mauricio Salazar Duque, Peter Pálenský

et al.

Published: Jan. 1, 2023

The optimal dispatch of energy storage systems (ESSs) presents formidable challenges due to the uncertainty introduced by fluctuations in dynamic prices, demand consumption, and renewable-based generation. By exploiting the generalization capabilities of deep neural networks (DNNs), deep reinforcement learning (DRL) algorithms can learn good-quality control models that adaptively respond to distribution networks' stochastic nature. However, current standard DRL algorithms are limited in their constraint satisfaction capabilities and are unable to guarantee feasible actions. To address this issue, we propose a framework that effectively handles continuous action spaces while strictly enforcing the environment's operational constraints in the action space during online operation. Firstly, the proposed framework trains an action-value function modeled using DNNs. Subsequently, this trained action-value function is formulated as a mixed-integer programming (MIP) problem, enabling the consideration of the environment's operational constraints. Comprehensive numerical simulations show the superior performance of the MIP-DRL framework, delivering high-quality dispatch decisions when compared with a state-of-the-art solution obtained with a perfect forecast of the stochastic variables.
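A minimal sketch of the first stage described in this abstract, assuming a DQN/DDPG-style temporal-difference update for a continuous-input action-value network. The network sizes, state variables, hyperparameters, and the placeholder replay batch are illustrative assumptions, not the paper's actual configuration.

```python
# Sketch only: training a DNN action-value function Q(s, a) for ESS
# dispatch with a temporal-difference update. All dimensions and
# hyperparameters are illustrative placeholders.
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    """Q(s, a) for a continuous ESS action (charge/discharge power)."""
    def __init__(self, state_dim=4, action_dim=1, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

q_net = QNetwork()
target_net = QNetwork()
target_net.load_state_dict(q_net.state_dict())
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
gamma = 0.99

def td_update(batch):
    """One temporal-difference step on a replay batch
    (state, action, reward, next_state, next_action)."""
    s, a, r, s_next, a_next = batch
    with torch.no_grad():
        target = r + gamma * target_net(s_next, a_next).squeeze(-1)
    loss = nn.functional.mse_loss(q_net(s, a).squeeze(-1), target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Random placeholder transitions, standing in for replay-buffer samples:
batch = (torch.randn(32, 4), torch.rand(32, 1), torch.randn(32),
         torch.randn(32, 4), torch.rand(32, 1))
print(td_update(batch))
```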

Language: English

Real-Time Energy Management in Smart Homes Through Deep Reinforcement Learning
Jamal Aldahmashi, Xiandong Ma

IEEE Access, Journal Year: 2024, Volume and Issue: 12, P. 43155 - 43172

Published: Jan. 1, 2024

In light of the growing prevalence of distributed energy resources, energy storage systems (ESs), and electric vehicles (EVs) at the residential scale, home energy management (HEM) systems have become instrumental in amplifying economic advantages for consumers. These systems traditionally prioritize curtailing active power consumption, often at the expense of overlooking reactive power. A significant imbalance between active and reactive power can detrimentally impact the power factor at the home-to-grid interface. This research presents an innovative strategy designed to optimize the performance of HEM systems, ensuring they not only meet financial and operational goals but also enhance the power factor. The approach involves the strategic operation of flexible loads, meticulous control of the thermostatic load in line with user preferences, and precise determination of power values for both the ES and the EV. This optimizes cost savings and augments the power factor. Recognizing the uncertainties in consumer behaviors, renewable generation, and external temperature fluctuations, our model employs a Markov decision process depiction. Moreover, the study advances a model-free system grounded in deep reinforcement learning, thereby offering notable proficiency in handling the multifaceted nature of smart home settings and real-time optimal scheduling. Comprehensive assessments using real-world datasets validate the approach. Notably, the proposed methodology elevates the power factor from 0.44 to 0.9 and achieves a 31.5% reduction in electricity bills, while upholding consumer satisfaction.
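To make the active/reactive power trade-off concrete, here is a hedged sketch of a reward that balances electricity cost against power-factor quality at the home-to-grid interface. The weights, time step, and reward shape are illustrative assumptions and do not reproduce the paper's actual formulation.

```python
# Illustrative only: power factor from active/reactive power, and a
# reward trading off energy cost against power-factor quality.
import math

def power_factor(p_kw: float, q_kvar: float) -> float:
    """Power factor = |P| / sqrt(P^2 + Q^2); equals 1.0 when Q = 0."""
    s_kva = math.hypot(p_kw, q_kvar)
    return 1.0 if s_kva == 0 else abs(p_kw) / s_kva

def step_reward(p_kw, q_kvar, price_per_kwh, dt_h=0.25, pf_weight=0.5):
    """Negative cost of imported energy plus a bonus for a high power factor."""
    energy_cost = max(p_kw, 0.0) * dt_h * price_per_kwh  # pay only for imports
    return -energy_cost + pf_weight * power_factor(p_kw, q_kvar)

# Example: 3 kW import with 6 kvar of uncompensated reactive power
print(round(power_factor(3.0, 6.0), 2))   # ~0.45, close to the 0.44 cited above
print(step_reward(3.0, 6.0, price_per_kwh=0.30))
```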

Language: English

Citations

9

A Comprehensive Review of Artificial Intelligence Approaches for Smart Grid Integration and Optimization
Malik Ali Judge, Vincenzo Franzitta, Domenico Curto

et al.

Energy Conversion and Management X, Journal Year: 2024, Volume and Issue: 24, P. 100724 - 100724

Published: Oct. 1, 2024

Language: English

Citations

5

Optimal AC Power Flow with Energy Storage and Renewable Energy: An Efficient RL Algorithm Capable of Handling Equality Constraints
Mingde Liu, Jianquan Zhu, Mingbo Liu

et al.

International Journal of Computational Intelligence Systems, Journal Year: 2025, Volume and Issue: 18(1)

Published: Feb. 10, 2025

Language: English

Citations

0

Dynamic Dual-Strategy Update of Offline-to-Online Reinforcement Learning for Optimal Energy System Scheduling
Zhiyao Lin, Aihui Yang, Li Liu

et al.

Lecture Notes in Electrical Engineering, Journal Year: 2025, Volume and Issue: unknown, P. 35 - 43

Published: Jan. 1, 2025

Language: English

Citations

0

AI Technologies and Their Applications in Small-Scale Electric Power Systems
Arqum Shahid, Freddy Plaum, Tarmo Korõtko

et al.

IEEE Access, Journal Year: 2024, Volume and Issue: 12, P. 109984 - 110001

Published: Jan. 1, 2024

As the landscape of electric power systems is transforming towards decentralization, small-scale power systems have garnered increased attention. Meanwhile, the proliferation of artificial intelligence (AI) technologies has provided new opportunities for system management. Thus, this review paper examines AI technology applications and their range of uses in small-scale electrical power systems. First, a brief overview of the evolution and importance of AI integration is given. The background section explains the principles of small-scale power systems, including stand-alone and grid-interactive microgrids, hybrid systems, and virtual power plants. A thorough analysis is conducted on the effects of AI on aspects such as energy consumption, demand response, grid management, operation, generation, and storage. Based on this foundation, AI Acceleration Performance Indicators (AAPIs) are developed to establish a standardized framework for evaluating and comparing different studies. The AAPI considers binary scoring of five quantitative Key Performance Indicators (KPIs), while qualitative KPIs are examined through a three-tiered scale: established, evolved, and emerging.
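A hedged sketch of the scoring idea described in this abstract: binary scores for five quantitative KPIs plus a three-tiered maturity label for qualitative KPIs. The KPI names below are hypothetical placeholders, not the indicators actually defined in the review.

```python
# Sketch only: binary scoring of five quantitative KPIs and a
# three-tiered maturity label for qualitative KPIs. KPI names are
# placeholders, not the review's actual indicators.
from dataclasses import dataclass

MATURITY_TIERS = ("established", "evolved", "emerging")

@dataclass
class StudyScore:
    quantitative: dict[str, bool]   # exactly five binary KPIs
    qualitative: dict[str, str]     # KPI -> one of MATURITY_TIERS

    def aapi_score(self) -> int:
        """Sum of binary KPI scores (0-5)."""
        assert len(self.quantitative) == 5, "AAPI expects five quantitative KPIs"
        return sum(self.quantitative.values())

example = StudyScore(
    quantitative={"reports_accuracy": True, "open_dataset": False,
                  "real_time_capable": True, "hardware_validated": False,
                  "compares_baselines": True},
    qualitative={"grid_management": "evolved", "energy_storage": "emerging"},
)
print(example.aapi_score())  # 3
```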

Language: English

Citations

3

A Reinforcement Learning Approach to Augment Conventional PID Control in Nuclear Power Plant Transient Operation
Aidan Rigby, Michael J. Wagner, Daniel Mikkelson

et al.

Nuclear Technology, Journal Year: 2025, Volume and Issue: unknown, P. 1 - 19

Published: May 7, 2025

Language: English

Citations

0

Size optimization of standalone wind-photovoltaics-diesel-battery systems by Harris hawks optimization (HHO): Case study of a wharf located in Bushehr, Iran
Kamyar Fakhfour, Fathollah Pourfayaz

International Journal of Electrical Power & Energy Systems, Journal Year: 2024, Volume and Issue: 163, P. 110353 - 110353

Published: Nov. 6, 2024

Language: English

Citations

2

A Constraint Enforcement Deep Reinforcement Learning Framework for Optimal Energy Storage Systems Dispatch
Shengren Hou, Edgar Mauricio Salazar Duque, Peter Pálenský

et al.

Published: Jan. 1, 2024

The optimal dispatch of energy storage systems (ESSs) presents formidable challenges due to the uncertainty introduced by fluctuations in dynamic prices, demand consumption, and renewable-based generation. By exploiting the generalization capabilities of deep neural networks (DNNs), deep reinforcement learning (DRL) algorithms can learn good-quality control models that adaptively respond to distribution networks' stochastic nature. However, a significant limitation of current DRL algorithms is their constraint satisfaction capability, particularly in providing feasible actions during online operation. This aspect is critically important because DRL policies typically follow a process of offline training, where they are developed and tested in a simulated environment before being executed online; their ability to adhere to operational constraints is therefore paramount for practical applicability and the system's reliability. To address this issue, we propose a framework that effectively handles continuous action spaces while strictly enforcing the environment's operational constraints in the action space during online operation. Firstly, the proposed framework trains an action-value function modeled using DNNs. Subsequently, this trained action-value function is formulated as a mixed-integer programming (MIP) problem, enabling the consideration of the environment's constraints. Comprehensive numerical simulations show the superior performance of the MIP-DRL framework, delivering high-quality dispatch decisions when compared with a state-of-the-art solution obtained with a perfect forecast of the stochastic variables.
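A hedged sketch of the second stage described here: encoding a trained ReLU action-value network as a MIP so that a Q-maximizing action can be selected while operational constraints are enforced exactly. It assumes a single hidden layer and a standard big-M ReLU encoding; the random weights stand in for a trained network, and PuLP/CBC is used purely for illustration rather than as the solver or formulation used in the paper.

```python
# Sketch only: big-M MIP encoding of a one-hidden-layer ReLU Q-network,
# maximizing Q(state, a) subject to ESS power and SoC limits.
import numpy as np
import pulp

rng = np.random.default_rng(0)
state = np.array([0.6, 0.2, 0.8])        # e.g. SoC, price, net load (illustrative)
W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)  # hidden layer on [state; action]
w2, b2 = rng.normal(size=8), 0.0                      # linear output head -> scalar Q
P_MAX, M = 1.0, 50.0                     # power limit (p.u.) and big-M bound

prob = pulp.LpProblem("q_max_dispatch", pulp.LpMaximize)
a = pulp.LpVariable("action", lowBound=-P_MAX, upBound=P_MAX)  # charge(+)/discharge(-)
h = [pulp.LpVariable(f"h{i}", lowBound=0) for i in range(8)]   # ReLU activations
d = [pulp.LpVariable(f"d{i}", cat=pulp.LpBinary) for i in range(8)]

prob += pulp.lpSum(float(w2[i]) * h[i] for i in range(8)) + b2  # objective: Q(state, a)

for i in range(8):
    # pre-activation z_i = W1[i, :3] @ state + W1[i, 3] * a + b1[i]
    z_i = float(W1[i, :3] @ state + b1[i]) + float(W1[i, 3]) * a
    prob += h[i] >= z_i                    # h_i = max(z_i, 0) via big-M
    prob += h[i] <= z_i + M * (1 - d[i])
    prob += h[i] <= M * d[i]

# Example operational constraint enforced exactly: next-step SoC in [0.1, 0.9]
soc, dt, capacity = float(state[0]), 0.25, 1.0
prob += soc + a * dt / capacity <= 0.9
prob += soc + a * dt / capacity >= 0.1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("feasible action:", pulp.value(a), "Q estimate:", pulp.value(prob.objective))
```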

Language: English

Citations

1

Microgrid economic dispatch using Information-Enhanced Deep Reinforcement Learning with consideration of control periods
Wenzhao Liu, Z. Mao

Electric Power Systems Research, Journal Year: 2024, Volume and Issue: 239, P. 111244 - 111244

Published: Nov. 21, 2024

Language: English

Citations

1

RL-ADN: A high-performance Deep Reinforcement Learning environment for optimal Energy Storage Systems dispatch in active distribution networks
Shengren Hou, Shuyi Gao, Weijie Xia

et al.

Energy and AI, Journal Year: 2024, Volume and Issue: unknown, P. 100457 - 100457

Published: Dec. 1, 2024

Language: English

Citations

1