Computational Cost Reduction in Multi-Objective Feature Selection Using Permutational-Based Differential Evolution
Jesús-Arnulfo Barradas-Palmeros, Efrén Mezura-Montes, Rafael Rivera-López et al.

Mathematical and Computational Applications, Journal Year: 2024, Volume and Issue: 29(4), P. 56 - 56

Published: July 13, 2024

Feature selection is a preprocessing step in machine learning that aims to reduce dimensionality and improve performance. Feature selection approaches are often classified, according to how subsets of features are evaluated, into filter, wrapper, and embedded approaches. The high performance of wrapper approaches is associated with the disadvantage of high computational cost. Cost-reduction mechanisms have been proposed in the literature, with which competitive performance is achieved more efficiently. This work applies simple and effective resource-saving mechanisms, namely fixed and incremental sampling fraction strategies with a memory to avoid repeated evaluations, to multi-objective permutational-based differential evolution for feature selection. The selected approach is an extension of the DE-FSPM algorithm with the selection mechanism of the GDE3 algorithm. The results showed considerable resource savings, especially in the number of evaluations required for the search process. Nonetheless, it was also detected that the algorithm's performance diminished. Therefore, the results reported in the literature on the effectiveness of these cost-reduction mechanisms in single-objective feature selection were only partially sustained.
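The two resource-saving mechanisms described above, a fixed sampling fraction for instance-level data reduction and a memory that avoids re-evaluating repeated feature subsets, can be sketched as follows. This is a minimal illustration under assumed interfaces; the function names and the toy evaluation are hypothetical, not the DE-FSPM implementation:

```python
import random

def make_cached_evaluator(evaluate, sample_fraction=0.5, seed=0):
    """Wrap a subset-evaluation function with (a) a memory that skips
    re-evaluation of already-seen feature subsets and (b) a fixed
    sampling fraction that scores subsets on only part of the data."""
    memory = {}
    rng = random.Random(seed)

    def cached(subset, data):
        key = frozenset(subset)           # order-insensitive subset identity
        if key in memory:                 # repeated subset: reuse stored score
            return memory[key]
        n = max(1, int(len(data) * sample_fraction))
        sample = rng.sample(data, n)      # fixed-fraction instance sampling
        score = evaluate(subset, sample)
        memory[key] = score
        return score

    return cached

# Toy usage: the "fitness" is a stand-in, not a real classifier accuracy.
data = list(range(100))
calls = []

def toy_eval(subset, sample):
    calls.append(subset)
    return len(subset) + len(sample) * 0.01

ev = make_cached_evaluator(toy_eval, sample_fraction=0.3)
a = ev((1, 2, 3), data)
b = ev((3, 2, 1), data)   # same subset in a different order: served from memory
```

An incremental variant would grow `sample_fraction` over generations instead of keeping it fixed, trading early-search speed for late-search evaluation fidelity.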

Language: English

Improving global soil moisture prediction through cluster-averaged sampling strategy
Qingliang Li, Qiyun Xiao, Cheng Zhang et al.

Geoderma, Journal Year: 2024, Volume and Issue: 449, P. 116999 - 116999

Published: Aug. 13, 2024

Understanding and predicting global soil moisture (SM) is crucial for water resource management and agricultural production. While deep learning (DL) methods have shown strong performance in SM prediction, imbalances in training samples with different characteristics pose a significant challenge. We propose that improving the diversity and balance of batch samples during gradient descent can help address this issue. To test this hypothesis, we developed a Cluster-Averaged Sampling (CAS) strategy utilizing unsupervised clustering techniques. This approach involves training the model on evenly sampled data from different clusters, ensuring both sample diversity and numerical consistency within each cluster. It prevents the model from overemphasizing specific characteristics, leading to more balanced feature learning. Experiments using the LandBench1.0 dataset with five seeds for 1-day lead-time predictions reveal that CAS outperforms several Long Short-Term Memory (LSTM)-based models that do not employ the strategy. The median Coefficient of Determination (R2) improved by 2.36% to 4.31%, while the Kling-Gupta Efficiency (KGE) improved by 1.95% to 3.16%. In high-latitude areas, R2 improvements exceeded 40% in some regions. To further validate the method under realistic conditions, we tested it with Soil Moisture Active Passive Level 3 (SMAP-L3) satellite data for 1- to 3-day lead-time predictions, confirming its efficacy. This study substantiates the hypothesis and introduces a novel method for enhancing the generalization of DL models.
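The batching idea behind CAS, drawing an equal number of samples from every cluster so no cluster's characteristics dominate a gradient step, can be sketched as follows. This assumes cluster labels were already produced by an unsupervised method such as k-means; the names and the equal-per-cluster scheme are illustrative, not the authors' exact implementation:

```python
import random
from collections import defaultdict

def cluster_averaged_batches(samples, labels, batch_size, seed=0):
    """Yield training batches that draw an (approximately) equal number
    of samples from each cluster, balancing the characteristics seen in
    every gradient-descent step."""
    rng = random.Random(seed)
    clusters = defaultdict(list)
    for s, c in zip(samples, labels):
        clusters[c].append(s)
    per_cluster = max(1, batch_size // len(clusters))
    # Shuffled working copies so each epoch's batches vary.
    pools = {c: rng.sample(v, len(v)) for c, v in clusters.items()}
    while all(len(p) >= per_cluster for p in pools.values()):
        batch = []
        for c in pools:
            for _ in range(per_cluster):
                batch.append(pools[c].pop())
        yield batch

# Toy usage: 12 samples spread evenly over 3 clusters, batches of 6.
batches = list(cluster_averaged_batches(list(range(12)),
                                        [i % 3 for i in range(12)], 6))
```

Compared with uniform random batching, this sacrifices some randomness for a guarantee that every batch represents all clusters equally.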

Language: English

Citations: 2

A clustering-based archive handling method and multi-objective optimization of the optimal power flow problem
Mustafa Akbel, Hamdi Tolga Kahraman, Serhat Duman et al.

Applied Intelligence, Journal Year: 2024, Volume and Issue: 54(22), P. 11603 - 11648

Published: Aug. 29, 2024

Language: English

Citations: 2

Fundamental Tradeoffs Between Exploration and Exploitation Search Mechanisms
Abdul Hanif Abdul Halim, Swagatam Das, Idris Ismail et al.

Emergence, complexity and computation, Journal Year: 2024, Volume and Issue: unknown, P. 101 - 199

Published: Jan. 1, 2024

Language: English

Citations: 2

Algorithm Initialization: Categories and Assessment
Abdul Halim, Swagatam Das, Idris Ismail et al.

Emergence, complexity and computation, Journal Year: 2024, Volume and Issue: unknown, P. 1 - 100

Published: Jan. 1, 2024

Language: English

Citations: 1

AdaBoost-inspired co-evolution differential evolution for reconfigurable flexible job shop scheduling considering order splitting
Lixin Cheng, Shujun Yu, Qiuhua Tang et al.

Journal of Manufacturing Systems, Journal Year: 2024, Volume and Issue: 77, P. 1009 - 1026

Published: Nov. 16, 2024

Language: English

Citations: 1

An adaptive dual-strategy constrained optimization-based coevolutionary optimizer for high-dimensional feature selection
Tao Li, Shun-xi Zhang, Qiang Yang et al.

Computers & Electrical Engineering, Journal Year: 2024, Volume and Issue: 118, P. 109362 - 109362

Published: June 14, 2024

Language: English

Citations: 0
