An Adaptive Linear Programming Algorithm with Parameter Learning
Lin Guo, Anand Balu Nellippallil, Warren Smith

et al.

Algorithms, Year: 2024, No. 17(2), pp. 88-88

Published: Feb. 19, 2024

When dealing with engineering design problems, designers often encounter nonlinear and nonconvex features, multiple objectives, coupled decision making, and sub-systems with various levels of fidelity. To realize the design with limited computational resources, problems with the features above need to be linearized and then solved using solution algorithms for linear programming. The adaptive linear programming (ALP) algorithm is an extension of Sequential Linear Programming in which a compromise decision support problem (cDSP) is iteratively linearized and solved, with satisficing solutions returned. The reduced move coefficient (RMC) is used to define how far away from the boundary the next linearization is performed; currently, it is determined based on a heuristic. The choice of the RMC significantly affects the efficacy of the linearization process and, hence, the rapidity of finding the solution. In this paper, we propose a rule-based parameter-learning procedure to vary the RMC at each iteration, thereby increasing the speed of determining the ultimate solution. To demonstrate the ALP algorithm with parameter learning (ALPPL), we use an industry-inspired problem, namely, the integrated hot-rolling process chain for the production of a steel rod. Using the proposed ALPPL, we can incorporate domain expertise to identify the most relevant criteria for evaluating the performance of the algorithm, quantify these criteria as evaluation indices, and tune the RMC to return solutions that fall into the desired range of each index. Compared with the old ALP, which uses a golden section search to update the RMC, the ALPPL improves the algorithm by identifying RMC values with better performance without adding computational complexity. The insensitive region of the RMC is explored less with the ALPPL: the ALPPL explores it only twice, whereas the old ALP explores it four times throughout the iterations. With the ALPPL, we have a more comprehensive definition of performance: given multiple scenarios, the evaluation indices (EIs) include the statistics of deviations, the numbers of binding (active) constraints and bounds, the numbers of accumulated binding constraints, and the number of iterations. The desired range of each evaluation index (DEI) is also learned during the iterations. The RMC value that brings the most EIs into the DEI is returned as the best RMC, which ensures a balance between the accuracy and the robustness of the solutions. For our test problem, the hot-rolling process chain, the ALPPL returns the best RMC in twelve iterations when considering only the deviation index, and in fourteen iterations when considering all the EIs. The complexity of both algorithms is O(n²). The parameter-learning steps can be customized to improve the parameter determination of other algorithms.
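To make the loop described in the abstract concrete, below is a minimal Python sketch of rule-based RMC learning. It is not the authors' implementation: the helper `evaluate` (assumed to wrap the cDSP linearization and LP solve and to return EIs including a "deviation" entry), the interquartile rule used to learn the DEI, and the 0.7/1.1 update factors are all illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple


@dataclass
class Iterate:
    rmc: float                # reduced move coefficient used in this iteration
    eis: Dict[str, float]     # evaluation indices (EIs) scored after solving the LP


def alppl(evaluate: Callable[[float], Dict[str, float]],
          rmc0: float = 0.5,
          max_iter: int = 14) -> Iterate:
    """Rule-based parameter learning for the RMC (illustrative sketch).

    Each iteration linearizes the cDSP at a point set by the current RMC,
    solves the resulting LP, and scores the linearization with EIs; the
    `evaluate` callback stands in for that linearize-and-solve step.  The
    desired range of each EI (DEI) is learned from the running history, and
    the iterate whose EIs fall inside the most desired ranges is returned.
    """
    history: List[Iterate] = []
    dei: Dict[str, Tuple[float, float]] = {}
    rmc = rmc0
    for _ in range(max_iter):
        eis = evaluate(rmc)   # assumed to include a "deviation" index
        history.append(Iterate(rmc=rmc, eis=eis))

        # Learn the DEI as the middle band of each EI observed so far
        # (an assumption standing in for the paper's learned ranges).
        for key in eis:
            vals = sorted(it.eis[key] for it in history)
            dei[key] = (vals[len(vals) // 4], vals[(3 * len(vals)) // 4])

        # Rule-based RMC update: retreat from the boundary when the deviation
        # index leaves its desired range, otherwise probe a larger move.
        low, high = dei["deviation"]
        if low <= eis["deviation"] <= high:
            rmc = min(1.0, rmc * 1.1)
        else:
            rmc *= 0.7

    # Best RMC = the iterate with the most EIs inside their desired ranges.
    return max(history, key=lambda it: sum(
        low <= it.eis[k] <= high for k, (low, high) in dei.items()))
```

In practice, `evaluate` would wrap the linearization and LP solve for the hot-rolling problem, and any rule set consistent with the EIs named in the abstract could replace the simple update shown here.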

Language: English

Assessment of hydrogen vehicle fuel economy using MRAC based on deep learning
Jaesu Han, Yi Sun, Sangseok Yu

et al.

Scientific Reports, Year: 2025, No. 15(1)

Published: April 16, 2025

Language: English

Cited: 0
