Comparison of K-Nearest Neighbor (KNN) and Linear Discriminant Analysis (LDA) Algorithms for Mature Ajwa Date Fruit Classification
Risna Risna, Fadila Amanda, Shofwatul Uyun et al.

International Conference on Information Science and Technology Innovation (ICoSTEC), Journal Year: 2023, Volume and Issue: 2(1), P. 11 - 16

Published: March 5, 2023

Currently, many applications of artificial intelligence in various fields of life, especially those involving image data, require digital processing. One frequently encountered example of image use is the processing of fruit ripeness. Dates are in great demand among the people of Indonesia, and one of the most popular varieties is the Ajwa date. The authors were interested in extending previous research on identifying the ripeness of dates, which used RGB color with the HSI method. Therefore, the authors apply different methods, namely K-Nearest Neighbor (K-NN) and Linear Discriminant Analysis (LDA), classifying by applying a statistical feature algorithm. This study aims to develop a classification model for the maturity level of dates and is expected to provide better results than the previous work. The tests show that KNN can produce higher accuracy than LDA: the Euclidean distance with k = 1 achieved 100%, and the Manhattan distance with k = 2 likewise achieved 100%, although a minimum of 53.33% occurred at k = 9, while LDA reached 93.33%.
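The distance metrics behind these results can be illustrated with a minimal K-NN sketch (NumPy only; the mean-RGB feature values below are hypothetical, not the paper's data):

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=1, metric="euclidean"):
    """Classify sample x by majority vote among its k nearest neighbours."""
    if metric == "euclidean":
        d = np.sqrt(((X_train - x) ** 2).sum(axis=1))
    else:  # "manhattan"
        d = np.abs(X_train - x).sum(axis=1)
    nearest = y_train[np.argsort(d)[:k]]
    labels, counts = np.unique(nearest, return_counts=True)
    return labels[np.argmax(counts)]

# Hypothetical mean-RGB features: ripe (1) vs. unripe (0) Ajwa dates
X = np.array([[40, 20, 15], [45, 25, 18], [120, 80, 60], [130, 90, 70]], float)
y = np.array([1, 1, 0, 0])

print(knn_predict(X, y, np.array([42, 22, 16]), k=1))                      # Euclidean, k = 1
print(knn_predict(X, y, np.array([125, 85, 65]), k=2, metric="manhattan"))  # Manhattan, k = 2
```

The two metrics differ only in how distance is accumulated per channel; with well-separated color clusters, both assign the query to the visually nearest class.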

Language: English

Elephants and algorithms: a review of the current and future role of AI in elephant monitoring
Leandra Brickson, Libby Zhang, Fritz Vollrath et al.

Journal of The Royal Society Interface, Journal Year: 2023, Volume and Issue: 20(208)

Published: Nov. 1, 2023

Artificial intelligence (AI) and machine learning (ML) present revolutionary opportunities to enhance our understanding of animal behaviour and conservation strategies. Using elephants, a crucial species in Africa and Asia's protected areas, as a focal point, we delve into the role of AI and ML in their conservation. Given the increasing amounts of data gathered from a variety of sensors like cameras, microphones, geophones, drones, and satellites, the challenge lies in managing and interpreting this vast data. New techniques offer solutions to streamline this process, helping us extract vital information that might otherwise be overlooked. This paper focuses on the different AI-driven monitoring methods and their potential for improving elephant conservation. Collaborative efforts between AI experts and ecological researchers are essential for leveraging these innovative technologies for enhanced wildlife conservation, setting a precedent for numerous other species.

Language: English

Citations: 19

WhisPrompt: Audio classification of Chinese opera genres by transferring time-series features
Bin Shi, Hao Wang, Jingwen Qiu et al.

Expert Systems with Applications, Journal Year: 2025, Volume and Issue: unknown, P. 127301 - 127301

Published: March 1, 2025

Language: English

Citations: 0

Music genre classification with parallel convolutional neural networks and capuchin search algorithm
Yuxin Zhang, Teng Li

Scientific Reports, Journal Year: 2025, Volume and Issue: 15(1)

Published: March 20, 2025

With the primary objective of creating playlists that suggest songs, interest in music genre categorization has grown thanks to high-tech multimedia tools. To develop a strong classifier that can quickly classify unlabeled music and enhance consumers' experiences with media players and files, machine learning and deep learning ideas are required. This study presents a unique method that blends convolutional neural network (CNN) models as an ensemble system to detect musical genres. The method makes use of discrete wavelet transform (DWT), mel frequency cepstral coefficient (MFCC), and short-time Fourier transform (STFT) characteristics to provide a comprehensive framework for expressing the stylistic qualities of music. To do this, each model's hyperparameters are generated using the capuchin search algorithm (CapSA). Preprocessing the original signals, feature description utilizing DWT, MFCC, and STFT signal matrices, CNN model optimization to extract features, and identification based on the combined features make up the four main components of the technique. By integrating many processing techniques and models, this work advances the field of music genre classification and provides possible insights into blending diverse features for improved accuracy. The GTZAN and Extended-Ballroom datasets were the two used in the studies. Average accuracies of 96.07% and 96.20% on the two databases, respectively, show how well our suggested strategy performs when compared with earlier, comparable methods.
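Of the three feature front-ends the paper combines (DWT, MFCC, STFT), the STFT is the simplest to sketch; a minimal example with `scipy.signal.stft` on a synthetic tone, not the paper's pipeline or data:

```python
import numpy as np
from scipy.signal import stft

# Hypothetical 1-second test tone at 440 Hz, 22050 Hz sample rate
sr = 22050
t = np.arange(sr) / sr
x = np.sin(2 * np.pi * 440 * t)

# Magnitude STFT: one column of spectral features per analysis frame,
# which is the kind of 2-D "signal matrix" a CNN front-end consumes
f, frames, Z = stft(x, fs=sr, nperseg=1024, noverlap=512)
S = np.abs(Z)

# The frequency bin with the most energy should sit near 440 Hz
peak_hz = f[S.mean(axis=1).argmax()]
print(S.shape, round(peak_hz, 1))
```

DWT and MFCC features would be built analogously (e.g. with PyWavelets or librosa) and stacked into parallel input matrices for the ensemble.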

Language: English

Citations: 0

Infant cry classification using an efficient graph structure and attention-based model
Xuesong Qiao, Siwen Jiao, Han Li et al.

Kuwait Journal of Science, Journal Year: 2024, Volume and Issue: 51(3), P. 100221 - 100221

Published: March 26, 2024

Crying serves as the primary means through which infants communicate, presenting a significant challenge for new parents in understanding its underlying causes. This study aims to classify infant cries to ascertain the reasons behind their distress. In this paper, an efficient graph structure based on multi-dimensional hybrid features is proposed. Firstly, the cries are processed to extract various speech features, such as the spectrogram, mel-scaled spectrogram, MFCC, and others. These features are then combined across multiple dimensions to better utilize the information in the cries. Additionally, in order to exploit the graph structure, a local-to-global convolutional neural network (AlgNet) combining graph networks and attention mechanisms is proposed. The experimental results demonstrate that the use of hybrid features improved accuracy by an average of 8.01% compared with standalone features, and the AlgNet model achieved an improvement of 5.62% over traditional deep learning models. Experiments were conducted on the Dunstan baby language, Donate-a-cry, and baby cry datasets, with accuracy rates of 87.78%, 93.83%, and 93.14%, respectively.

Language: English

Citations: 3

Windy events detection in big bioacoustics datasets using a pre-trained Convolutional Neural Network
Francesca Terranova, Lorenzo Betti, Valeria Ferrario et al.

The Science of The Total Environment, Journal Year: 2024, Volume and Issue: 949, P. 174868 - 174868

Published: July 20, 2024

Passive Acoustic Monitoring (PAM), which involves using autonomous recording units for studying wildlife behaviour and distribution, often requires handling big acoustic datasets collected over extended periods. While these data offer invaluable insights about wildlife, their analysis can present challenges in dealing with geophonic sources. A major issue in the detection of target sounds is wind-induced noise. This can lead to false positive detections, i.e., energy peaks due to wind gusts misclassified as biological sounds, or false negatives, where noise masks the presence of target sounds. Recordings dominated by wind noise make estimates of vocal activity unreliable, thus compromising the analysis and, subsequently, the interpretation of results. Our work introduces a straightforward approach for detecting recordings affected by windy events using a pre-trained convolutional neural network, which facilitates identifying wind-compromised data. We consider this dataset pre-processing crucial for ensuring the reliable use of PAM data. We implemented this preprocessing leveraging YAMNet, a deep learning model for sound classification tasks. We evaluated YAMNet as-is for its ability to detect wind-induced noise and tested its performance in a Transfer Learning scenario using our annotated data from the Stony Point Penguin Colony in South Africa. The as-is model achieved a precision of 0.71 and a recall of 0.66; those metrics strongly improved after training on our dataset, reaching a precision of 0.91 and a recall of 0.92, a relative increment of >28 %. Our study demonstrates a promising application of pre-trained models in the bioacoustics and ecoacoustics fields, addressing the need for wind-noise-free acoustic data. We released open-access code that, combined with the efficiency and peak performance of the model, can be used on standard laptops by a broad user base.
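The paper's detector builds on YAMNet (a TensorFlow Hub model). As a much simpler stand-in for the underlying idea, not the authors' method, wind-dominated recordings can be flagged by the fraction of spectral power sitting at low frequencies, since wind noise concentrates below a few hundred Hz. A hypothetical heuristic on synthetic clips:

```python
import numpy as np
from scipy.signal import welch

def wind_ratio(x, sr, cutoff=300.0):
    """Fraction of spectral power below `cutoff` Hz (wind noise is low-frequency)."""
    f, pxx = welch(x, fs=sr, nperseg=2048)
    return pxx[f < cutoff].sum() / pxx.sum()

sr = 8000
t = np.arange(sr) / sr
rng = np.random.default_rng(0)
# Hypothetical clips: a 2 kHz "call" vs. low-frequency rumble standing in for wind
call = np.sin(2 * np.pi * 2000 * t)
wind = np.sin(2 * np.pi * 50 * t) + 0.1 * rng.standard_normal(sr)

print(wind_ratio(call, sr), wind_ratio(wind, sr))
```

A learned classifier such as YAMNet generalizes far beyond this single-band heuristic, but the ratio illustrates why wind-compromised segments are separable at all.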

Language: English

Citations: 1

Emotion Classification Algorithm for Audiovisual Scenes Based on Low-Frequency Signals
Peiyuan Jin, Zhiwei Si, Haibin Wan et al.

Applied Sciences, Journal Year: 2023, Volume and Issue: 13(12), P. 7122 - 7122

Published: June 14, 2023

Since informatization and digitization came into everyday life, audio signal emotion classification has been widely studied and discussed as a hot issue in many application fields. With the continuous development of artificial intelligence, in addition to the speech and music technology used in production, its applications are also becoming more abundant. Current research on audiovisual scene emotion classification mainly focuses on frame-by-frame processing of video images to achieve emotion discrimination. However, those methods suffer from high algorithmic complexity and computing cost, making it difficult to meet the engineering needs of real-time online automatic classification. Therefore, this paper proposes an algorithm for the detection of effective movie shock scenes, suitable for such applications, by exploring how low-frequency sound effects shape the perception of known emotions. Based on a database of movie clips in 5.1 format, feature parameters are extracted and a dichotomous classification is performed between shock and other types of emotions. As low-frequency sounds (LFS) enhance the sense of shock, a monaural method for detecting emotional impact using the subwoofer (SW) channel is proposed; the model trained on SW features achieved a maximum accuracy of 87% on the test set with a convolutional neural network (CNN) model. To expand the scope of the above algorithm, low-pass filtering (with a cutoff frequency of 120 Hz) was applied, achieving 91.5% accuracy with the CNN model.
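The extension beyond the dedicated subwoofer channel relies on a 120 Hz low-pass filter to isolate the LFS content; a minimal sketch of such a filter with `scipy.signal.butter` (the signal below is synthetic, and the filter order is an assumption):

```python
import numpy as np
from scipy.signal import butter, sosfilt

def lowpass_120hz(x, sr, order=4):
    """Keep only content below ~120 Hz, approximating a subwoofer channel."""
    sos = butter(order, 120.0, btype="low", fs=sr, output="sos")
    return sosfilt(sos, x)

sr = 48000
t = np.arange(sr) / sr
# Hypothetical mix: 60 Hz rumble (kept) + 1 kHz tone (removed)
x = np.sin(2 * np.pi * 60 * t) + np.sin(2 * np.pi * 1000 * t)
y = lowpass_120hz(x, sr)

# After filtering, the RMS is close to that of the 60 Hz component alone
print(round(np.sqrt(np.mean(y ** 2)), 2))
```

The filtered signal would then be fed to the same feature extraction and CNN stages as the true SW channel.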

Language: English

Citations: 3

Knowing a fellow by their bellow: acoustic individuality in the bellows of the American alligator
Thomas Rejsenhus Jensen, Andrey Anikin, Mathias Osvath et al.

Animal Behaviour, Journal Year: 2023, Volume and Issue: 207, P. 157 - 167

Published: Dec. 2, 2023

Identity cues in animal calls are essential for conspecific vocal individual recognition. Some acoustically active species show reliable identity cues in their vocalizations mainly because of variation in anatomy and life history. Long, strenuous-to-produce calls may be particularly effective at showing and sustaining such cues, as they reveal anatomical differences in sound production. It is largely unknown whether reptiles possess acoustic individuality, despite some groups being highly vocal. We analysed 814 bellows from 47 American alligators, Alligator mississippiensis, extracting spectral characteristics and manually corrected contours of the fundamental frequency. Recognition was up to 66% correct with a supervised classifier (random forest) and 61% with unsupervised clustering (chance = 2.1%), indicating that alligators have highly distinct bellows. Alligators were distinguished primarily based on the call spectrum, fundamental frequency contour, and amplitude modulation, which also provided information about the animal's size. Neither manual supervision of the analyses nor training on labelled data was necessary to achieve reasonable accuracy, which has promising potential for identification of individuals via passive acoustic monitoring for research and conservation purposes. Additionally, our results highlight the importance of studying the utilization of acoustic individuality in the social lives of crocodylians.
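The paper's supervised classifier is a random forest; as a self-contained stand-in, a nearest-centroid classifier over simulated per-call features illustrates the individual-recognition setup against the 1/47 ≈ 2.1% chance level (all data below are synthetic, not the study's measurements):

```python
import numpy as np

def nearest_centroid(X_train, y_train, X_test):
    """Assign each test call to the individual whose mean feature vector is closest."""
    ids = np.unique(y_train)
    centroids = np.stack([X_train[y_train == i].mean(axis=0) for i in ids])
    d = np.linalg.norm(X_test[:, None, :] - centroids[None, :, :], axis=2)
    return ids[d.argmin(axis=1)]

rng = np.random.default_rng(1)
n_ids, calls_per_id, n_feat = 47, 10, 5
# Hypothetical per-individual feature offsets (e.g. F0-contour statistics)
means = rng.normal(0, 3, (n_ids, n_feat))
y = np.repeat(np.arange(n_ids), calls_per_id)
X = means[y] + rng.normal(0, 1, (n_ids * calls_per_id, n_feat))

# Resubstitution accuracy only, for illustration; a real evaluation would hold calls out
pred = nearest_centroid(X, y, X)
acc = (pred == y).mean()
print(f"chance = {1 / 47:.1%}, accuracy = {acc:.1%}")
```

The point is the comparison structure: any accuracy well above the 2.1% chance level indicates individually distinct calls, regardless of the classifier used.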

Language: English

Citations: 3

Using autonomous recording units for vocal individuality: insights from Barred Owl identification
S. Tseng, Dexter P. Hodder, Ken A. Otter et al.

Avian Conservation and Ecology, Journal Year: 2024, Volume and Issue: 19(1)

Published: Jan. 1, 2024

Recent advances in acoustic recording equipment enable autonomous monitoring with extended spatial and temporal scales, which may allow for the censusing of species with individually distinct vocalizations, such as owls. We assessed the potential for identifying individual Barred Owls (Strix varia) through detections of their vocalizations using passive acoustic monitoring. We placed autonomous recording units throughout the John Prince Research Forest (54°27' N, 124°10' W, 700 m ASL) and surrounding area in northern British Columbia, Canada, from February to April 2021. The study area was 357 km2, with a minimum of 2 km between the 66 stations. During this period, we collected 454 Barred Owl calls, specifically the two-phrase hoot, from 10 stations that were of sufficient quality for spectrographic analysis. From each call, we measured 30 features: 12 temporal and 18 frequency features. Using forward stepwise discriminant function analysis, the model correctly categorized 83.2% of calls to their true location based on 5-fold cross validation, and showed substantial agreement between the station a call was classified to originate from and where it was actually recorded. The most important features enabling discrimination were call length, the interval between the 4th and 5th note, the interval between the 6th and 7th note, and the duration of the 8th note. Our results suggest passive acoustic monitoring can be used not only to detect presence/absence, but also, for species with individually distinct features, for population censusing.

Language: English

Citations: 0

Investigating hunting in a protected area in Southeast Asia using passive acoustic monitoring with mobile smartphones and deep learning
Thinh Tien Vu, Dai Viet Phan, Thai Son Le et al.

Ecological Indicators, Journal Year: 2024, Volume and Issue: 167, P. 112501 - 112501

Published: Oct. 1, 2024

Language: English

Citations: 0

Ensemble deep learning and anomaly detection framework for automatic audio classification: Insights into deer vocalizations
Salem Ibrahim Salem, Susumu Shirayama, Satoshi Shimazaki et al.

Ecological Informatics, Journal Year: 2024, Volume and Issue: 84, P. 102883 - 102883

Published: Nov. 8, 2024

Language: English

Citations: 0