Dynamic scheduling for multi-objective flexible job shop via deep reinforcement learning
Erdong Yuan, Liejun Wang, Shiji Song, et al.
Applied Soft Computing, 2025, Vol. 171, pp. 112787–112787
Published: Jan. 25, 2025
Language: English

HGNP: A PCA-based heterogeneous graph neural network for a family distributed flexible job shop
Jiake Li, Junqing Li, Ying Xu, et al.
Computers & Industrial Engineering, 2025, Vol. unknown, pp. 110855–110855
Published: Jan. 1, 2025
Language: English
Cited by: 6

Dynamic scheduling for flexible job shop with insufficient transportation resources via graph neural network and deep reinforcement learning
Min Zhang, Liang Wang, Fusheng Qiu, et al.
Computers & Industrial Engineering, 2023, Vol. 186, pp. 109718–109718
Published: Oct. 31, 2023
Language: English
Cited by: 41

Review on ensemble meta-heuristics and reinforcement learning for manufacturing scheduling problems
Yaping Fu, Yifeng Wang, Kaizhou Gao, et al.
Computers & Electrical Engineering, 2024, Vol. 120, pp. 109780–109780
Published: Oct. 18, 2024
Language: English
Cited by: 15

A Double Deep Q-Network framework for a flexible job shop scheduling problem with dynamic job arrivals and urgent job insertions
Shaojun Lu, Yongqi Wang, Min Kong, et al.
Engineering Applications of Artificial Intelligence, 2024, Vol. 133, pp. 108487–108487
Published: April 26, 2024
Language: English
Cited by: 13

Two-stage double deep Q-network algorithm considering external non-dominant set for multi-objective dynamic flexible job shop scheduling problems
Lei Yue, Kai Peng, Linshan Ding, et al.
Swarm and Evolutionary Computation, 2024, Vol. 90, pp. 101660–101660
Published: July 18, 2024
Language: English
Cited by: 12

Collaborative dynamic scheduling in a self-organizing manufacturing system using multi-agent reinforcement learning
Yong Gui, Zequn Zhang, Dunbing Tang, et al.
Advanced Engineering Informatics, 2024, Vol. 62, pp. 102646–102646
Published: June 26, 2024
Language: English
Cited by: 10

Deep reinforcement learning for machine scheduling: Methodology, the state-of-the-art, and future directions
Maziyar Khadivi, Todd Charter, Marjan Yaghoubi, et al.
Computers & Industrial Engineering, 2025, Vol. unknown, pp. 110856–110856
Published: Jan. 1, 2025
Language: English
Cited by: 1

An end-to-end decentralised scheduling framework based on deep reinforcement learning for dynamic distributed heterogeneous flowshop scheduling
Haoran Li, Liang Gao, Qingsong Fan, et al.
International Journal of Production Research, 2025, Vol. unknown, pp. 1–21
Published: Jan. 23, 2025
Language: English
Cited by: 1

Exploring multi-agent reinforcement learning for unrelated parallel machine scheduling
M. Zampella, Urtzi Otamendi, Xabier Belaunzaran, et al.
The Journal of Supercomputing, 2025, Vol. 81(4)
Published: Feb. 19, 2025
Language: English
Cited by: 1

An efficient and adaptive design of reinforcement learning environment to solve job shop scheduling problem with soft actor-critic algorithm
Jinghua Si, Xinyu Li, Liang Gao, et al.
International Journal of Production Research, 2024, Vol. unknown, pp. 1–16
Published: March 30, 2024

Abstract: Shop scheduling is deeply involved in manufacturing. To improve scheduling efficiency and fit dynamic scenarios, many deep reinforcement learning (DRL) methods have been studied to solve shop scheduling problems such as the job shop and the flow shop. However, most studies focus on using the latest algorithms while ignoring that the environment plays an important role in agent learning. In this paper, we design an effective, robust, and size-agnostic environment for shop scheduling. The proposed environment uses centralised training with decentralised execution (CTDE) to implement a multi-agent architecture. Together with the observation space design, environmental information irrelevant to the current decision is eliminated as much as possible. The action space design enlarges the action space of the agents, which performs better than the traditional way. Finally, the Soft Actor-Critic (SAC) algorithm is adapted for learning within this environment. Comparisons with dispatching rules, other reinforcement learning algorithms, and relevant literature demonstrate the superiority of the results obtained in this study.

Language: English
Cited by: 8
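
The abstract above describes a CTDE-style multi-agent design with size-agnostic, decision-relevant observations. The following Python sketch illustrates that general idea only, under loudly stated assumptions: it is a toy job-shop environment, not the paper's actual code, and every name in it (JobShopEnv, local_obs, softmax_policy) is hypothetical; a simple softmax preference stands in for a trained SAC actor.

# Minimal sketch (all names hypothetical, not the paper's API): a toy job-shop
# environment with per-machine local observations, illustrating the CTDE idea
# of giving each agent only decision-relevant, size-independent features.
import random
import math

class JobShopEnv:
    """Toy job shop: `jobs` is a list of operation lists, each operation a
    (machine_id, processing_time) pair. One decision = pick the next queued
    operation for one machine."""
    def __init__(self, jobs, n_machines):
        self.jobs = jobs
        self.n_machines = n_machines
        self.next_op = [0] * len(jobs)          # index of next operation per job
        self.machine_free_at = [0.0] * n_machines
        self.job_free_at = [0.0] * len(jobs)

    def ready_ops(self, machine):
        """Jobs whose next operation routes to `machine` (irrelevant state is
        simply never exposed to the agent)."""
        return [j for j, k in enumerate(self.next_op)
                if k < len(self.jobs[j]) and self.jobs[j][k][0] == machine]

    def local_obs(self, machine, job):
        """Size-agnostic local features for one candidate job on one machine."""
        k = self.next_op[job]
        proc = self.jobs[job][k][1]
        remaining = sum(t for _, t in self.jobs[job][k:])
        return [proc, remaining, self.job_free_at[job]]

    def step(self, machine, job):
        """Schedule the job's next operation; respect machine and job precedence."""
        k = self.next_op[job]
        _, proc = self.jobs[job][k]
        start = max(self.machine_free_at[machine], self.job_free_at[job])
        finish = start + proc
        self.machine_free_at[machine] = finish
        self.job_free_at[job] = finish
        self.next_op[job] += 1
        return finish

def softmax_policy(env, machine, candidates, temperature=1.0):
    """Stand-in for a trained SAC actor: a stochastic preference for short
    operations on jobs with much remaining work."""
    scores = []
    for j in candidates:
        proc, remaining, _ = env.local_obs(machine, j)
        scores.append((-proc + 0.1 * remaining) / temperature)
    z = max(scores)
    weights = [math.exp(s - z) for s in scores]
    return random.choices(candidates, weights=weights)[0]

def run_episode(env):
    """Decentralised execution: each machine agent picks from its own queue
    until all operations are scheduled; returns the makespan."""
    makespan = 0.0
    while any(k < len(ops) for k, ops in zip(env.next_op, env.jobs)):
        for m in range(env.n_machines):
            cands = env.ready_ops(m)
            if cands:
                j = softmax_policy(env, m, cands)
                makespan = max(makespan, env.step(m, j))
    return makespan

if __name__ == "__main__":
    random.seed(0)
    jobs = [[(0, 3), (1, 2)], [(1, 4), (0, 1)], [(0, 2), (1, 3)]]
    print("makespan:", run_episode(JobShopEnv(jobs, n_machines=2)))

The point of the sketch is the observation design: each decision sees only a fixed-length feature vector for the operations currently routable to one machine, so the same policy can be applied regardless of instance size, which is what "size-agnostic" means in the abstract.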