The balance scheduling of Printed Circuit Board (PCB) assembly lines plays a crucial role in enhancing production efficiency. Traditional methods rely on fixed heuristic rules, which lack flexibility and adaptability to changing demands.
To address this issue, this paper proposes a PCB assembly line balance scheduling method based on a Deep Q-Network (DQN). A model of the line is constructed using the FlexSim simulation tool, and the optimal scheduling strategy is learned through the DQN algorithm. A comparative analysis is conducted against traditional heuristic rules. Experimental results indicate that the DQN-based method achieves substantial improvements. For Instance 1, the proposed approach achieved a total completion time (S) of 2.521 × 10⁵, compared with the best heuristic rule result of 2.541 × 10⁵. Similarly, for Instances 2 and 3, the completion times were 2.549 × 10⁵ and 2.522 × 10⁵, respectively, outperforming all heuristic rules evaluated. This study provides a novel intelligent scheduling approach for PCB assembly lines.
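The abstract names DQN and FlexSim but gives no implementation detail. Below is a minimal, self-contained DQN sketch in the spirit of the described method: epsilon-greedy action selection, an experience replay buffer, and a periodically synced target network. `ToyLineEnv`, its state/reward design, and all hyperparameters are illustrative assumptions standing in for the FlexSim model, not the authors' setup.

```python
# Minimal DQN sketch for dispatching jobs to PCB line stations.
# ToyLineEnv is a hypothetical stand-in for the FlexSim simulation:
# state = station workloads + jobs remaining, action = target station,
# reward = negative growth of the makespan (max station load).
import random
from collections import deque
import torch
import torch.nn as nn

class ToyLineEnv:
    def __init__(self, n_stations=4, n_jobs=30):
        self.n_stations, self.n_jobs = n_stations, n_jobs
    def reset(self):
        self.loads = [0.0] * self.n_stations
        self.remaining = [random.uniform(1, 5) for _ in range(self.n_jobs)]
        return self._state()
    def _state(self):
        return torch.tensor(self.loads + [len(self.remaining)], dtype=torch.float32)
    def step(self, action):
        job_time = self.remaining.pop()
        before = max(self.loads)
        self.loads[action] += job_time
        reward = before - max(self.loads)   # penalize makespan growth
        return self._state(), reward, not self.remaining

class QNet(nn.Module):
    def __init__(self, n_in, n_actions):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(n_in, 64), nn.ReLU(), nn.Linear(64, n_actions))
    def forward(self, x):
        return self.f(x)

env = ToyLineEnv()
n_in, n_actions = env.n_stations + 1, env.n_stations
q, q_target = QNet(n_in, n_actions), QNet(n_in, n_actions)
q_target.load_state_dict(q.state_dict())
opt = torch.optim.Adam(q.parameters(), lr=1e-3)
buf, gamma, eps = deque(maxlen=10_000), 0.99, 0.2

for episode in range(200):
    s, done = env.reset(), False
    while not done:
        # epsilon-greedy exploration
        a = random.randrange(n_actions) if random.random() < eps else q(s).argmax().item()
        s2, r, done = env.step(a)
        buf.append((s, a, r, s2, done))
        s = s2
        if len(buf) >= 64:
            batch = random.sample(buf, 64)
            S  = torch.stack([b[0] for b in batch])
            A  = torch.tensor([b[1] for b in batch])
            R  = torch.tensor([b[2] for b in batch], dtype=torch.float32)
            S2 = torch.stack([b[3] for b in batch])
            D  = torch.tensor([float(b[4]) for b in batch])
            with torch.no_grad():
                target = R + gamma * (1 - D) * q_target(S2).max(1).values
            pred = q(S).gather(1, A.unsqueeze(1)).squeeze(1)
            loss = nn.functional.mse_loss(pred, target)
            opt.zero_grad(); loss.backward(); opt.step()
    if episode % 10 == 0:               # periodic target-network sync
        q_target.load_state_dict(q.state_dict())
```

In a FlexSim-backed version, `step()` would forward the chosen dispatch to the simulator and read the resulting completion-time change back as the reward.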
Applied Sciences, Journal Year: 2024, Volume and Issue: 14(12), P. 5208 - 5208
Published: June 14, 2024
One of the goals of developing and implementing Industry 4.0 solutions is to significantly increase the level of flexibility and autonomy of production systems. The intent is to provide the possibility of self-reconfiguration of production systems and to create more efficient and adaptive manufacturing processes. Achieving such capabilities requires the comprehensive integration of digital technologies with real production processes, moving towards the creation of so-called Cyber–Physical Production Systems (CPPSs). Their architecture is based on physical and cybernetic elements, with a digital twin as the central element of the “cyber” layer. However, for the responses obtained from the cyber layer to allow a quick reaction to changes in the environment of the system, its virtual counterpart must be supplemented with advanced analytical modules. This paper proposes a method of creating a system of discrete simulation models integrated with deep reinforcement learning (DRL) techniques for CPPSs. Here, the DRL agent communicates with the simulation model to find the optimal strategy for allocating resources. The Asynchronous Advantage Actor–Critic (A3C) and Proximal Policy Optimization (PPO) algorithms were selected for this research.
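As a rough illustration of the agent–simulation coupling this abstract describes, the sketch below wraps a hypothetical resource-allocation simulation in a gym-style interface and trains PPO, one of the two algorithms named, on it. `ResourceAllocEnv`, its dynamics, and the use of gymnasium/stable-baselines3 are assumptions for illustration, not the paper's implementation; in a real CPPS, `step()` would exchange messages with the discrete-event simulator at each decision point.

```python
# Sketch: DRL agent communicating with a (stand-in) discrete simulation
# through a gym-style interface. Assign each arriving task to one of
# n_resources machines; the reward keeps the bottleneck queue short.
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import PPO

class ResourceAllocEnv(gym.Env):
    """Hypothetical stand-in for the discrete simulation model."""
    def __init__(self, n_resources=3, horizon=50):
        super().__init__()
        self.n_resources, self.horizon = n_resources, horizon
        self.action_space = spaces.Discrete(n_resources)
        self.observation_space = spaces.Box(0.0, np.inf, (n_resources,), np.float32)
    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.t = 0
        self.queues = np.zeros(self.n_resources, dtype=np.float32)
        return self.queues.copy(), {}
    def step(self, action):
        self.queues[action] += self.np_random.uniform(1.0, 3.0)  # new task duration
        self.queues = np.maximum(self.queues - 1.0, 0.0)         # one tick of processing
        self.t += 1
        reward = -float(self.queues.max())   # penalize the bottleneck queue
        done = self.t >= self.horizon
        return self.queues.copy(), reward, done, False, {}

env = ResourceAllocEnv()
model = PPO("MlpPolicy", env, verbose=0)
model.learn(total_timesteps=20_000)
```

The same wrapper pattern applies to A3C; the agent only ever sees the observation/reward interface, so the simulator behind it can be swapped without retraining code changes.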
IET Intelligent Transport Systems, Journal Year: 2025, Volume and Issue: 19(1)
Published: Jan. 1, 2025
ABSTRACT
Automated guided vehicles (AGVs) serve as pivotal equipment for horizontal transportation in automated container terminals (ACTs), necessitating the optimization of AGV scheduling.
The dynamic nature of port operations introduces uncertainties in energy consumption, while battery constraints pose significant operational challenges. However, limited research has integrated charging and discharging behaviors into scheduling operations. This study innovatively proposes an AGV scheduling model that incorporates a resilient adaptive charging strategy, adjusting the balance between vehicle charging and task completion and enabling AGVs to complete a fixed set of tasks in the shortest time. Differing from most existing studies, which are primarily based on OR-type algorithms, this study develops a reinforcement learning-based method. Finally, a series of numerical experiments, set in a real large-scale terminal in the Pearl River Delta (PRD) region of Southern China, are conducted to verify the effectiveness and efficiency of the algorithm. Some beneficial management insights for practitioners are obtained through sensitivity analysis. Notably, a paramount observation is that AGV efficacy does not necessarily correlate positively with AGV number. Instead, it follows a “U-shaped” curve trend, indicating an optimal range beyond which performance diminishes.
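To make the charging-versus-task trade-off concrete, here is a small sketch of how an adaptive charging threshold and a battery-feasibility action mask might enter an RL dispatcher's decision step. Every name, the threshold formula, and the masking rule are hypothetical illustrations in the spirit of the abstract, not the paper's model.

```python
# Illustrative decision step: choose "take next task" vs. "go charge" for
# an idle AGV. The adaptive threshold tolerates a lower state of charge
# when many tasks are pending (favoring completion over charging).
from dataclasses import dataclass

@dataclass
class AGV:
    soc: float   # state of charge, 0..1
    pos: int     # current yard/berth node index

def charge_threshold(pending_tasks: int, base: float = 0.5) -> float:
    """Hypothetical resilient-adaptive rule: heavier load lowers the bar."""
    return max(0.2, base - 0.01 * pending_tasks)

def action_mask(agv: AGV, pending_tasks: int, n_chargers_free: int) -> dict:
    """Which actions are feasible for this AGV right now."""
    must_charge = agv.soc < charge_threshold(pending_tasks)
    return {
        "take_task": not must_charge and pending_tasks > 0,
        "charge": n_chargers_free > 0,
    }

# A trained policy would pick among the unmasked actions; here we only
# show the feasibility logic for one AGV under moderate load.
print(action_mask(AGV(soc=0.35, pos=2), pending_tasks=20, n_chargers_free=1))
```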