The optimal dispatch of energy storage systems (ESSs) presents formidable challenges due to the uncertainty introduced by fluctuations in dynamic prices, demand consumption, and renewable-based generation. By exploiting the generalization capabilities of deep neural networks (DNNs), deep reinforcement learning (DRL) algorithms can learn good-quality control models that adaptively respond to distribution networks' stochastic nature.
However, a significant limitation of current standard DRL algorithms is their constraint satisfaction capability; in particular, they are unable to guarantee feasible actions during online operation. This is critically important because DRL policies are typically developed and tested offline in a simulated environment before being executed online, so the ability to adhere to operational constraints is paramount for practical applicability and the system's reliability.
To address this issue, we propose a framework that effectively handles continuous action spaces while strictly enforcing the environment's action-space operational constraints during online operation. Firstly, the proposed framework trains an action-value function modeled using DNNs. Subsequently, this action-value function is formulated as a mixed-integer programming (MIP) problem, enabling consideration of the environment's operational constraints. Comprehensive numerical simulations show the superior performance of the proposed MIP-DRL framework, delivering high-quality decisions when compared with a state-of-the-art solution obtained with a perfect forecast of the uncertain variables.
IEEE Access, Journal Year: 2024, Volume and Issue: 12, P. 43155 - 43172, Published: Jan. 1, 2024
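The core MIP-DRL step described above, training a DNN action-value function and then re-expressing it as a mixed-integer program so that hard operational limits can be imposed when the action is selected, can be illustrated with a small sketch. The snippet below encodes a toy one-hidden-layer ReLU Q-network using the standard big-M formulation and maximizes Q(s, a) over a continuous ESS power action subject to assumed power and state-of-charge bounds; the weights, limits, and the choice of PuLP with CBC are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch of the MIP-DRL step: encode a (toy) trained one-hidden-layer ReLU
# Q-network as a mixed-integer program and pick the action that maximizes Q(s, a)
# subject to hard operational limits. Weights, bounds, and limits are placeholders.
import random
import pulp

random.seed(0)
N_STATE, N_HIDDEN = 3, 8
state = [random.gauss(0, 1) for _ in range(N_STATE)]                 # observed state s
W1 = [[random.gauss(0, 1) for _ in range(N_STATE + 1)] for _ in range(N_HIDDEN)]
b1 = [random.gauss(0, 1) for _ in range(N_HIDDEN)]
W2 = [random.gauss(0, 1) for _ in range(N_HIDDEN)]                   # output layer -> Q
b2 = random.gauss(0, 1)
M = 100.0                                                            # big-M constant

prob = pulp.LpProblem("q_max_over_feasible_actions", pulp.LpMaximize)
a = pulp.LpVariable("ess_power_kw", lowBound=-5.0, upBound=5.0)      # charge/discharge power limit

hidden = []
for j in range(N_HIDDEN):
    # pre-activation z_j = w_j . [s, a] + b_j (the state part is a plain constant)
    z = sum(W1[j][i] * state[i] for i in range(N_STATE)) + W1[j][N_STATE] * a + b1[j]
    h = pulp.LpVariable(f"h_{j}", lowBound=0)
    d = pulp.LpVariable(f"d_{j}", cat="Binary")
    # big-M encoding of the ReLU h_j = max(0, z_j)
    prob += h >= z
    prob += h <= z + M * (1 - d)
    prob += h <= M * d
    hidden.append(h)

prob += pulp.lpSum(W2[j] * hidden[j] for j in range(N_HIDDEN)) + b2  # objective: Q(s, a)

# Example operational constraint: keep the post-dispatch state of charge within bounds.
soc, dt, capacity = 0.5, 0.25, 10.0                                  # assumed values
prob += soc + a * (dt / capacity) <= 0.9
prob += soc + a * (dt / capacity) >= 0.1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("feasible dispatch action [kW]:", pulp.value(a))
```

At deployment time, an optimization of this kind would take the place of the greedy argmax used by standard value-based DRL, so every dispatched action respects the stated operational constraints.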
In light of the growing prevalence of distributed energy resources, energy storage systems (ESs), and electric vehicles (EVs) at the residential scale, home energy management (HEM) systems have become instrumental in amplifying economic advantages for consumers. These systems traditionally prioritize curtailing active power consumption, often at the expense of overlooking reactive power. A significant imbalance between active and reactive power can detrimentally impact the power factor at the home-to-grid interface. This research presents an innovative strategy designed to optimize the performance of HEM systems, ensuring they not only meet financial and operational goals but also enhance the power factor. The approach involves the strategic operation of flexible loads, meticulous control of thermostatic loads in line with user preferences, and precise determination of power values for both the ES and EV. This approach optimizes cost savings and augments the power factor. Recognizing the uncertainties in consumer behaviors, renewable generations, and external temperature fluctuations, our model employs a Markov decision process for its depiction. Moreover, it advances a model-free system grounded in deep reinforcement learning, thereby offering notable proficiency in handling the multifaceted nature of smart home settings for real-time optimal scheduling. Comprehensive assessments using real-world datasets validate the approach. Notably, the proposed methodology can elevate the power factor from 0.44 to 0.9 and achieve a 31.5% reduction in electricity bills, while upholding consumer satisfaction.
IEEE Access, Journal Year: 2024, Volume and Issue: 12, P. 109984 - 110001, Published: Jan. 1, 2024
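The reported power-factor improvement comes down to simple reactive-power arithmetic: for a given active demand, raising the power factor from 0.44 to 0.9 determines how much reactive power the ES and EV inverters must supply. A minimal sketch of that calculation follows; the 3 kW household load is an assumed figure, not a value from the paper.

```python
# Minimal sketch of the power-factor arithmetic behind the reported improvement:
# the reactive power the ES/EV inverters must supply so the grid-side power factor
# rises from 0.44 to 0.9. The 3 kW household active demand is an assumed figure.
import math

def reactive_power(p_kw: float, power_factor: float) -> float:
    """Reactive power (kvar) accompanying p_kw of active power at the given power factor."""
    return p_kw * math.tan(math.acos(power_factor))

p_load = 3.0                               # assumed active household demand in kW
q_load = reactive_power(p_load, 0.44)      # reactive demand at the uncompensated power factor
q_allowed = reactive_power(p_load, 0.90)   # reactive power consistent with the target factor
q_support = q_load - q_allowed             # reactive power the ES/EV inverters must inject

print(f"uncompensated reactive demand: {q_load:.2f} kvar")
print(f"allowed at power factor 0.9:   {q_allowed:.2f} kvar")
print(f"required inverter support:     {q_support:.2f} kvar")
```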
As the landscape of electric power systems is transforming towards decentralization, small-scale energy systems have garnered increased attention.
Meanwhile, the proliferation of artificial intelligence (AI) technologies has provided new opportunities for system management. Thus, this review paper examines AI technology applications and their range of uses in electrical systems. First, a brief overview of the evolution and importance of AI integration is given. The background section explains the principles of such systems, including stand-alone and grid-interactive microgrids, hybrid systems, and virtual power plants. A thorough analysis is conducted on the effects of AI on aspects such as energy consumption, demand response, grid management, operation, generation, and storage.
Based on this foundation, AI Acceleration Performance Indicators (AAPIs) are developed to establish a standardized framework for evaluating and comparing different studies. The AAPI considers binary scoring of five quantitative Key Performance Indicators (KPIs) and qualitative KPIs examined through a three-tiered scale – established, evolved, and emerging.
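The abstract does not spell out how AAPI scores are aggregated, but one plausible reading, binary points for the five quantitative KPIs plus a three-tier maturity rating for the qualitative ones, can be sketched as follows. The KPI names and the simple additive scoring below are assumptions for illustration only.

```python
# Illustrative sketch only: one plausible way to tally an AAPI-style score, combining
# binary points for quantitative KPIs with a three-tier maturity rating for qualitative
# KPIs (established / evolved / emerging). KPI names and weights are assumptions.
from dataclasses import dataclass

TIER_POINTS = {"emerging": 1, "evolved": 2, "established": 3}

@dataclass
class StudyAssessment:
    quantitative: dict          # KPI name -> bool (criterion satisfied or not)
    qualitative: dict           # KPI name -> "established" | "evolved" | "emerging"

    def aapi_score(self) -> int:
        binary_points = sum(1 for ok in self.quantitative.values() if ok)
        tier_points = sum(TIER_POINTS[tier] for tier in self.qualitative.values())
        return binary_points + tier_points

study = StudyAssessment(
    quantitative={
        "energy consumption impact reported": True,
        "demand response evaluated": True,
        "grid management addressed": False,
        "generation forecasting included": True,
        "storage dispatch considered": True,
    },
    qualitative={"validation approach": "evolved", "data realism": "established"},
)
print("AAPI score:", study.aapi_score())
```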