Explaining Optimisation of Offshore Wind Farms Using Metaheuristics
Mathew J. Walter, Pawel L. Manikowski, Matthew J. Craven, et al.

Published: Jan. 1, 2024

Explainable artificial intelligence (XAI) has become a significant approach for increasing trust in techniques used by the machine learning community. Similarly, given the importance of the applications of metaheuristics, often the optimisation of critical national infrastructure such as power generation facilities, it is important that the trustworthiness of the tools optimising these problems is assured, and the use of XAI within the metaheuristic domain is one way of achieving this. This chapter considers the application of a tool previously demonstrated on the knapsack problem to offshore wind farm layouts and discusses the extent to which it is able to explain the processes that identify optimal designs.
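The chapter's tool is not reproduced here, but the general idea it builds on, explaining a metaheuristic's behaviour with an interpretable model, can be illustrated with a minimal sketch. Everything below is a hypothetical stand-in: a (1+1) evolution strategy optimises a toy turbine-spacing objective (not a real wake or energy model), and a shallow decision tree fitted to the search trajectory indicates which layout variables drove fitness.

# Minimal, hypothetical sketch (not the chapter's tool): explain a
# metaheuristic's wind farm layout search by fitting an interpretable
# surrogate to the evaluated layouts.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
N_TURBINES, FIELD = 8, 1000.0  # eight turbines on a 1 km square site

def fitness(layout):
    """Toy objective: reward well-spread layouts (less wake interference)."""
    pts = layout.reshape(-1, 2)
    d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return d.min(axis=1).mean()  # mean nearest-neighbour distance

# (1+1) evolution strategy; log every evaluated layout for the explanation.
x = rng.uniform(0, FIELD, 2 * N_TURBINES)
fx = fitness(x)
X_hist, y_hist = [x.copy()], [fx]
for _ in range(500):
    child = np.clip(x + rng.normal(0.0, 50.0, x.shape), 0, FIELD)
    fc = fitness(child)
    X_hist.append(child.copy()); y_hist.append(fc)
    if fc >= fx:
        x, fx = child, fc

# Explanation step: an interpretable surrogate over the search trajectory
# shows which layout variables most influenced fitness during the run.
surrogate = DecisionTreeRegressor(max_depth=3).fit(X_hist, y_hist)
for i in np.argsort(surrogate.feature_importances_)[::-1][:3]:
    print(f"turbine {i // 2} {'xy'[i % 2]}: importance "
          f"{surrogate.feature_importances_[i]:.2f}")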

Language: English

A Survey on Evolutionary Computation for Computer Vision and Image Analysis: Past, Present, and Future Trends
Ying Bi, Bing Xue, Pablo Mesejo, et al.

IEEE Transactions on Evolutionary Computation, Journal Year: 2022, Volume and Issue: 27(1), P. 5 - 25

Published: Nov. 9, 2022

Computer vision (CV) is a big and important field in artificial intelligence covering a wide range of applications. Image analysis is a major task in CV, aiming to extract, analyze and understand the visual content of images. However, image-related tasks are very challenging due to many factors, e.g., high variations across images, high dimensionality, domain expertise requirements, and image distortions. Evolutionary computation (EC) approaches have been widely used for image analysis with significant achievement. However, there is no comprehensive survey of existing EC approaches to image analysis. To fill this gap, this article provides a comprehensive survey covering all essential EC approaches to image analysis tasks, including edge detection, image segmentation, image feature analysis, image classification, object detection, and others. This survey aims to provide a better understanding of evolutionary computer vision (ECV) by discussing the contributions of different approaches and exploring how and why EC is used for CV and image analysis. The applications, challenges, issues, and trends associated with this research field are also discussed and summarized to provide further guidelines and opportunities for future research.

Language: English

Citations: 53

Human attention-guided explainable artificial intelligence for computer vision models (Creative Commons)
Guoyang Liu, Jindi Zhang, Antoni B. Chan, et al.

Neural Networks, Journal Year: 2024, Volume and Issue: 177, P. 106392 - 106392

Published: May 15, 2024

Explainable artificial intelligence (XAI) has been increasingly investigated to enhance the transparency of black-box models, promoting better user understanding and trust. Developing an XAI method that is faithful to the model and plausible to users is both a necessity and a challenge. This work examines whether embedding human attention knowledge into saliency-based XAI methods for computer vision models could enhance their plausibility and faithfulness. Two novel XAI methods for object detection models, namely FullGrad-CAM and FullGrad-CAM++, were first developed to generate object-specific explanations by extending the current gradient-based XAI methods for image classification models. Using human attention as an objective measure, these methods achieve higher explanation plausibility. Interestingly, all XAI methods, when applied to object detection models, generally produce saliency maps that are less faithful to the model than those obtained from the same XAI methods on the image classification task. Accordingly, human attention-guided XAI (HAG-XAI) was proposed to learn how to best combine explanatory information using trainable activation functions and smoothing kernels to maximize the similarity between the explanation saliency map and the human attention map. The methods were evaluated on the widely used BDD-100K, MS-COCO, and ImageNet datasets and compared with typical perturbation-based XAI methods. Results suggest that HAG-XAI enhanced explanation plausibility and user trust at the expense of faithfulness for image classification models, while for object detection models it enhanced plausibility, faithfulness, and user trust simultaneously and outperformed existing state-of-the-art XAI methods.
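The paper's exact formulation is not reproduced here; the following is a hedged sketch of the HAG-XAI mechanism as described above, with assumed map shapes, a Pearson-correlation objective, and a simple parameterisation of the trainable activation and smoothing kernel.

# Hedged sketch of the HAG-XAI idea: learn a smoothing kernel and an
# activation that transform a model saliency map to better match human
# attention. Parameterisation and loss are illustrative assumptions.
import torch
import torch.nn.functional as F

class HAGTransform(torch.nn.Module):
    def __init__(self, ksize=15):
        super().__init__()
        # learnable smoothing kernel, normalised to a distribution in forward()
        self.kernel = torch.nn.Parameter(torch.randn(1, 1, ksize, ksize) * 0.01)
        self.alpha = torch.nn.Parameter(torch.tensor(1.0))  # activation sharpness

    def forward(self, saliency):  # saliency: (B, 1, H, W)
        s = torch.sigmoid(self.alpha * saliency)  # trainable activation
        k = F.softmax(self.kernel.flatten(), dim=0).view_as(self.kernel)
        return F.conv2d(s, k, padding=self.kernel.shape[-1] // 2)  # smoothing

def similarity(a, b):
    # Pearson correlation between two maps; higher means more alike.
    a, b = a.flatten(1), b.flatten(1)
    a = a - a.mean(dim=1, keepdim=True)
    b = b - b.mean(dim=1, keepdim=True)
    return (a * b).sum(dim=1) / (a.norm(dim=1) * b.norm(dim=1) + 1e-8)

# Fit on pairs of (model saliency, human attention); random stand-ins here.
model_saliency = torch.rand(16, 1, 64, 64)
human_attention = torch.rand(16, 1, 64, 64)
hag = HAGTransform()
opt = torch.optim.Adam(hag.parameters(), lr=1e-2)
for _ in range(200):
    loss = -similarity(hag(model_saliency), human_attention).mean()
    opt.zero_grad(); loss.backward(); opt.step()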

Language: English

Citations: 7

An Analysis of the Ingredients for Learning Interpretable Symbolic Regression Models with Human-in-the-loop and Genetic Programming (Open Access)
Giorgia Nadizar, Luigi Rovito, Andrea De Lorenzo, et al.

ACM Transactions on Evolutionary Learning and Optimization, Journal Year: 2024, Volume and Issue: 4(1), P. 1 - 30

Published: Jan. 30, 2024

Interpretability is a critical aspect to ensure the fair and responsible use of machine learning (ML) in high-stakes applications. Genetic programming (GP) has been used to obtain interpretable ML models because it operates at the level of functional building blocks: if these blocks are interpretable, there is a chance that their composition (i.e., the entire ML model) is also interpretable. However, the degree to which a model is interpretable depends on the observer. Motivated by this, we study a recently-introduced human-in-the-loop system that allows the user to steer GP's generation process towards their preferences, which shall be online-learned by an artificial neural network (ANN). We focus on the generation of ML models as analytical functions (i.e., symbolic regression), as this is a key problem in interpretable ML, and propose a two-fold contribution. First, we devise more general representations for the ANN to learn upon, to enable the application of the system to a wider range of problems. Second, we delve into a deeper analysis of the system's components. To this end, we propose an incremental experimental evaluation, aimed at (1) studying the effectiveness by which the ANN can capture the perceived interpretability of simulated users, (2) investigating how the GP outcome is affected across different feedback profiles, and (3) determining whether human participants would prefer models that were generated with or without their involvement. Our results pose clarity on the pros and cons of using a human-in-the-loop approach to discover interpretable ML models with GP.
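As a purely illustrative sketch of the idea described above (the feature representation, rating scale, and blending scheme are all assumptions, not the paper's system): a small neural network is updated online from user ratings of candidate expressions, and its predicted interpretability is blended into the fitness used to steer GP selection.

# Illustrative sketch (not the paper's system): an ANN online-learns a
# user's notion of interpretability and steers expression selection.
import numpy as np
from sklearn.neural_network import MLPRegressor

def features(expr):
    # Crude representation of a symbolic expression (a string here):
    # length, parenthesis nesting depth, and count of "hard" operators.
    depth = max_depth = 0
    for c in expr:
        depth += (c == "(") - (c == ")")
        max_depth = max(max_depth, depth)
    hard = sum(expr.count(op) for op in ("exp", "log", "sin", "cos"))
    return [len(expr), max_depth, hard]

preference_net = MLPRegressor(hidden_layer_sizes=(8,), random_state=0)

def record_feedback(expr, user_score):
    # Called whenever the user rates an expression in [0, 1];
    # one partial_fit step keeps the learning online.
    preference_net.partial_fit([features(expr)], [user_score])

def steered_fitness(expr, accuracy, w=0.5):
    # Blend task accuracy with the learned, perceived interpretability.
    try:
        interp = float(preference_net.predict([features(expr)])[0])
    except Exception:  # no feedback collected yet: use accuracy alone
        return accuracy
    return (1 - w) * accuracy + w * interp

# Example interaction: the user dislikes a nested expression, likes a flat one.
record_feedback("exp(sin(x)*log(x+1))", 0.1)
record_feedback("x*x + 2*x", 0.9)
print(steered_fitness("x*x + 2*x", accuracy=0.8))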

Language: English

Citations: 6

GP for Continuous Control: Teacher or Learner? The Case of Simulated Modular Soft Robots
Eric Medvet, Giorgia Nadizar

Genetic and evolutionary computation, Journal Year: 2024, Volume and Issue: unknown, P. 203 - 224

Published: Jan. 1, 2024

Language: English

Citations: 5

A population-based approach for multi-agent interpretable reinforcement learning

Marco Crespi, Andrea Ferigo, Leonardo Lucio Custode, et al.

Applied Soft Computing, Journal Year: 2023, Volume and Issue: 147, P. 110758 - 110758

Published: Aug. 18, 2023

Language: English

Citations: 10

Evolutionary Approaches to Explainable Machine Learning
Ryan Zhou, Ting Hu

Genetic and evolutionary computation, Journal Year: 2023, Volume and Issue: unknown, P. 487 - 506

Published: Nov. 1, 2023

Language: English

Citations: 10

How to Measure Explainability and Interpretability of Machine Learning Results

Elisabeth Mayrhuber, Bogdan Burlacu, Stephan Winkler, et al.

Genetic and evolutionary computation, Journal Year: 2025, Volume and Issue: unknown, P. 357 - 374

Published: Jan. 1, 2025

Language: English

Citations: 0

A Comparison of Clustering Approaches for Metaheuristic Behaviour Data
Helena Stegherr, Michael Heider, Jörg Hähner, et al.

Studies in computational intelligence, Journal Year: 2025, Volume and Issue: unknown, P. 305 - 322

Published: Jan. 1, 2025

Language: English

Citations: 0

A Coach-Based Quality-Diversity Approach for Multi-agent Interpretable Reinforcement Learning

Erik Nielsen, Andrea Ferigo, Giovanni Iacca, et al.

Lecture notes in computer science, Journal Year: 2025, Volume and Issue: unknown, P. 402 - 418

Published: Jan. 1, 2025

Language: English

Citations: 0

Evolutionary Computation for Explainable Deep Learning
Ryan Zhou, Ting Hu

Natural computing series, Journal Year: 2025, Volume and Issue: unknown, P. 67 - 92

Published: Jan. 1, 2025

Language: English

Citations: 0