Applied Soft Computing, Journal Year: 2025, Volume and Issue: 171, P. 112787 - 112787
Published: Jan. 25, 2025
Language: English
Computers & Industrial Engineering, Journal Year: 2025, Volume and Issue: unknown, P. 110855 - 110855
Published: Jan. 1, 2025
Language: English
Citations: 6
Computers & Industrial Engineering, Journal Year: 2023, Volume and Issue: 186, P. 109718 - 109718
Published: Oct. 31, 2023
Language: English
Citations: 41
Computers & Electrical Engineering, Journal Year: 2024, Volume and Issue: 120, P. 109780 - 109780
Published: Oct. 18, 2024
Language: English
Citations: 15
Engineering Applications of Artificial Intelligence, Journal Year: 2024, Volume and Issue: 133, P. 108487 - 108487
Published: April 26, 2024
Language: English
Citations: 13
Swarm and Evolutionary Computation, Journal Year: 2024, Volume and Issue: 90, P. 101660 - 101660
Published: July 18, 2024
Language: English
Citations: 12
Advanced Engineering Informatics, Journal Year: 2024, Volume and Issue: 62, P. 102646 - 102646
Published: June 26, 2024
Language: English
Citations: 10
Computers & Industrial Engineering, Journal Year: 2025, Volume and Issue: unknown, P. 110856 - 110856
Published: Jan. 1, 2025
Language: English
Citations: 1
International Journal of Production Research, Journal Year: 2025, Volume and Issue: unknown, P. 1 - 21
Published: Jan. 23, 2025
Language: English
Citations: 1
The Journal of Supercomputing, Journal Year: 2025, Volume and Issue: 81(4)
Published: Feb. 19, 2025
Language: English
Citations: 1
International Journal of Production Research, Journal Year: 2024, Volume and Issue: unknown, P. 1 - 16
Published: March 30, 2024
Shop scheduling is deeply involved in manufacturing. To improve efficiency and fit dynamic scenarios, many Deep Reinforcement Learning (DRL) methods have been studied to solve problems such as job shop and flow shop scheduling. However, most studies focus on using the latest algorithms while ignoring that the environment plays an important role in agent learning. In this paper, we design an effective, robust and size-agnostic environment for shop scheduling. The proposed environment uses centralised training with decentralised execution (CTDE) to implement a multi-agent architecture. Together with the observation space design, environmental information irrelevant to the current decision is eliminated as much as possible. The action space design enlarges the choices available to the agents, which performs better than the traditional way. Finally, the Soft Actor-Critic (SAC) algorithm is adapted for learning within the environment. By comparing with dispatching rules, other reinforcement learning algorithms, and relevant literature, the superiority of the results obtained in this study is demonstrated.
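The multi-agent environment idea in the abstract — each machine acts as an agent whose local observation contains only decision-relevant jobs, while a centralised view is reserved for training — can be illustrated with a minimal sketch. This is not the paper's implementation: `ToyJobShopEnv`, its job encoding, and the SPT dispatch loop are all hypothetical stand-ins, and the actual SAC learner is omitted.

```python
class ToyJobShopEnv:
    """Hypothetical multi-agent job-shop sketch (not the paper's environment).

    Each machine is one agent. Mirroring the CTDE idea, an agent's local
    observation lists only the jobs whose next operation runs on that
    machine, so information irrelevant to its current decision is dropped;
    a centralised observation (for a training-time critic) sees everything.
    """

    def __init__(self, n_machines=2, jobs=None):
        self.n_machines = n_machines
        # Each job is a list of (machine, processing_time) operations.
        self.jobs = jobs or [[(0, 3), (1, 2)], [(1, 4), (0, 1)]]
        self.reset()

    def reset(self):
        self.next_op = [0] * len(self.jobs)           # next op index per job
        self.machine_free_at = [0] * self.n_machines  # machine availability
        self.job_free_at = [0] * len(self.jobs)       # job availability
        return self.local_obs()

    def local_obs(self):
        """Per-agent view: only jobs whose next op needs this machine."""
        obs = [[] for _ in range(self.n_machines)]
        for j, ops in enumerate(self.jobs):
            k = self.next_op[j]
            if k < len(ops):
                m, p = ops[k]
                obs[m].append((j, p))  # (job id, processing time)
        return obs

    def central_obs(self):
        """Centralised view for the critic during training."""
        return {"local": self.local_obs(),
                "machine_free_at": list(self.machine_free_at)}

    def step(self, actions):
        """actions[m] = job chosen by machine m, or None to stay idle."""
        for m, j in enumerate(actions):
            if j is None:
                continue
            _, p = self.jobs[j][self.next_op[j]]
            start = max(self.machine_free_at[m], self.job_free_at[j])
            self.machine_free_at[m] = self.job_free_at[j] = start + p
            self.next_op[j] += 1
        done = all(self.next_op[j] >= len(ops)
                   for j, ops in enumerate(self.jobs))
        # Negative makespan as a shared reward signal.
        return self.local_obs(), -max(self.job_free_at), done


if __name__ == "__main__":
    env = ToyJobShopEnv()
    obs, done, reward = env.reset(), False, 0
    while not done:
        # Shortest-processing-time dispatch on each agent's local view,
        # standing in for the learned decentralised policies.
        actions = [min(c, key=lambda jp: jp[1])[0] if c else None
                   for c in obs]
        obs, reward, done = env.step(actions)
    print(-reward)  # makespan of the toy two-job instance: 6
```

In a full CTDE setup the SPT loop would be replaced by per-agent SAC actors fed with `local_obs()`, while the critic consumes `central_obs()` during training only, which is what makes the design size-agnostic at execution time.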
Language: English
Citations: 8