
Aerospace, Journal Year: 2025, Volume and Issue: 12(3), P. 223 - 223
Published: March 9, 2025
Artificial intelligence (AI) has demonstrated success across various industries; however, its adoption in aviation remains limited due to concerns regarding the interpretability of AI models, which often function as black-box systems with opaque decision-making processes. Given the safety-critical nature of aviation, the lack of transparency in AI-generated predictions poses significant challenges for industry stakeholders. This study investigates the classification performance of multiple supervised machine learning models and employs SHapley Additive exPlanations (SHAP) to provide global model explanations, identifying the key features that influence decision boundaries. To address class imbalance in the Australian Transport Safety Bureau (ATSB) dataset, a Variational Autoencoder (VAE) is also employed for data augmentation. A comparative evaluation of four algorithms is conducted on a three-class classification task: a Support Vector Machine (SVM), Logistic Regression (LR), a Random Forest (RF), and a deep neural network (DNN) comprising five hidden layers. The results demonstrate competitive accuracy, precision, recall, and F1-score metrics, highlighting the effectiveness of explainable techniques in enhancing transparency and fostering trust in AI-driven safety applications.
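The comparative setup described in the abstract can be sketched with scikit-learn. This is a minimal illustration only: it uses a synthetic imbalanced three-class dataset in place of the ATSB data, an `MLPClassifier` with five hidden layers as a stand-in for the paper's DNN, and omits the VAE augmentation and SHAP analysis entirely.

```python
# Hedged sketch of a four-model comparison on an imbalanced three-class task.
# Synthetic data replaces the ATSB dataset; hyperparameters are illustrative.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import f1_score

# Imbalanced three-class data standing in for the real occurrence records.
X, y = make_classification(n_samples=1500, n_features=20, n_informative=10,
                           n_classes=3, weights=[0.7, 0.2, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

models = {
    "SVM": SVC(),
    "LR": LogisticRegression(max_iter=1000),
    "RF": RandomForestClassifier(random_state=0),
    # Stand-in for the paper's five-hidden-layer deep neural network.
    "DNN": MLPClassifier(hidden_layer_sizes=(64,) * 5, max_iter=500,
                         random_state=0),
}

scores = {}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    # Macro-averaged F1 weights all three classes equally, which is the
    # relevant view under class imbalance.
    scores[name] = f1_score(y_te, model.predict(X_te), average="macro")

for name, score in scores.items():
    print(f"{name}: macro F1 = {score:.3f}")
```

In the actual study, global explanations would then be obtained by passing the fitted models to the `shap` library (e.g., `shap.TreeExplainer` for the Random Forest) to rank feature contributions across the whole test set.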
Language: English