Dynamic scheduling for multi-objective flexible job shop via deep reinforcement learning
Erdong Yuan, Liejun Wang, Shiji Song et al.
Applied Soft Computing, Journal Year: 2025, Volume and Issue: 171, P. 112787
Published: Jan. 25, 2025
Language: English

HGNP: A PCA-based heterogeneous graph neural network for a family distributed flexible job shop
Jiake Li, Junqing Li, Ying Xu et al.
Computers & Industrial Engineering, Journal Year: 2025, Volume and Issue: unknown, P. 110855
Published: Jan. 1, 2025
Language: English
Citations: 6

Dynamic scheduling for flexible job shop with insufficient transportation resources via graph neural network and deep reinforcement learning
Min Zhang, Liang Wang, Fusheng Qiu et al.
Computers & Industrial Engineering, Journal Year: 2023, Volume and Issue: 186, P. 109718
Published: Oct. 31, 2023
Language: English
Citations: 41

Review on ensemble meta-heuristics and reinforcement learning for manufacturing scheduling problems
Yaping Fu, Yifeng Wang, Kaizhou Gao et al.
Computers & Electrical Engineering, Journal Year: 2024, Volume and Issue: 120, P. 109780
Published: Oct. 18, 2024
Language: English
Citations: 15

A Double Deep Q-Network framework for a flexible job shop scheduling problem with dynamic job arrivals and urgent job insertions
Shaojun Lu, Yongqi Wang, Min Kong et al.
Engineering Applications of Artificial Intelligence, Journal Year: 2024, Volume and Issue: 133, P. 108487
Published: April 26, 2024
Language: English
Citations: 13

Two-stage double deep Q-network algorithm considering external non-dominant set for multi-objective dynamic flexible job shop scheduling problems
Lei Yue, Kai Peng, Linshan Ding et al.
Swarm and Evolutionary Computation, Journal Year: 2024, Volume and Issue: 90, P. 101660
Published: July 18, 2024
Language: English
Citations: 12

Collaborative dynamic scheduling in a self-organizing manufacturing system using multi-agent reinforcement learning
Yong Gui, Zequn Zhang, Dunbing Tang et al.
Advanced Engineering Informatics, Journal Year: 2024, Volume and Issue: 62, P. 102646
Published: June 26, 2024
Language: English
Citations: 10

Deep reinforcement learning for machine scheduling: Methodology, the state-of-the-art, and future directions
Maziyar Khadivi, Todd Charter, Marjan Yaghoubi et al.
Computers & Industrial Engineering, Journal Year: 2025, Volume and Issue: unknown, P. 110856
Published: Jan. 1, 2025
Language: English
Citations: 1

An end-to-end decentralised scheduling framework based on deep reinforcement learning for dynamic distributed heterogeneous flowshop scheduling
Haoran Li, Liang Gao, Qingsong Fan et al.
International Journal of Production Research, Journal Year: 2025, Volume and Issue: unknown, P. 1 - 21
Published: Jan. 23, 2025
Language: English
Citations: 1

Exploring multi-agent reinforcement learning for unrelated parallel machine scheduling
M. Zampella, Urtzi Otamendi, Xabier Belaunzaran et al.
The Journal of Supercomputing, Journal Year: 2025, Volume and Issue: 81(4)
Published: Feb. 19, 2025
Language: English
Citations: 1

An efficient and adaptive design of reinforcement learning environment to solve job shop scheduling problem with soft actor-critic algorithm
Jinghua Si, Xinyu Li, Liang Gao et al.
International Journal of Production Research, Journal Year: 2024, Volume and Issue: unknown, P. 1 - 16
Published: March 30, 2024

Shop scheduling is deeply involved in manufacturing. To improve scheduling efficiency and fit dynamic scenarios, many Deep Reinforcement Learning (DRL) methods have been studied to solve problems such as the job shop and flow shop. However, most studies focus on applying the latest algorithms while ignoring that the environment plays an important role in agent learning. In this paper, we design an effective, robust and size-agnostic environment for shop scheduling. The proposed environment uses centralised training with decentralised execution (CTDE) to implement a multi-agent architecture. Together with the observation space design, environmental information irrelevant to the current decision is eliminated as much as possible. The action space design enlarges the action space of the agents, which performs better than the traditional way. Finally, the Soft Actor-Critic (SAC) algorithm is adapted for learning within this environment. Comparisons with dispatching rules, other reinforcement learning algorithms, and the relevant literature demonstrate the superiority of the results obtained in this study.

Language: English
Citations: 8
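The CTDE multi-agent SAC design in the abstract above is specific to that paper. As a rough, self-contained illustration of the kind of scheduling environment such agents interact with, the sketch below implements a toy job-shop dispatching environment and a shortest-processing-time (SPT) baseline policy. The `JobShopEnv` class, the toy instance, and the negative-duration reward are all invented for illustration and do not follow the paper's environment design.

```python
class JobShopEnv:
    """Toy job-shop dispatching environment (illustrative sketch only).

    Each job is a list of (machine, duration) operations that must run
    in order. The agent's action is choosing which ready job to
    dispatch next; machines process one operation at a time.
    """

    def __init__(self, jobs):
        self.jobs = jobs
        self.reset()

    def reset(self):
        self.next_op = [0] * len(self.jobs)      # next operation index per job
        self.job_ready = [0.0] * len(self.jobs)  # finish time of each job's last op
        self.mach_ready = {}                     # time each machine becomes free
        self.makespan = 0.0
        return self._ready_jobs()

    def _ready_jobs(self):
        return [j for j, k in enumerate(self.next_op) if k < len(self.jobs[j])]

    def step(self, job):
        machine, dur = self.jobs[job][self.next_op[job]]
        start = max(self.job_ready[job], self.mach_ready.get(machine, 0.0))
        end = start + dur
        self.job_ready[job] = end
        self.mach_ready[machine] = end
        self.next_op[job] += 1
        self.makespan = max(self.makespan, end)
        ready = self._ready_jobs()
        # Negative processing time as a simple dense reward (an assumption,
        # not the paper's reward design).
        return ready, -dur, not ready


def rollout(env, policy):
    """Run one episode under the given dispatching policy; return makespan."""
    ready = env.reset()
    done = False
    while not done:
        ready, _, done = env.step(policy(env, ready))
    return env.makespan


def spt(env, ready):
    """Shortest-processing-time rule: dispatch the ready job whose next op is shortest."""
    return min(ready, key=lambda j: env.jobs[j][env.next_op[j]][1])


# Invented 3-job, 2-machine instance: job = [(machine, duration), ...]
jobs = [[(0, 3), (1, 2)], [(1, 4), (0, 1)], [(0, 2), (1, 3)]]
print(rollout(JobShopEnv(jobs), spt))  # makespan of the SPT schedule
```

A learning agent (e.g. SAC with a discrete-action adaptation) would replace the fixed `spt` rule with a trained policy mapping observations of `job_ready`/`mach_ready` state to dispatching actions.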