Discovering the hidden personality of lambs: Harnessing the power of Deep Convolutional Neural Networks (DCNNs) to predict temperament from facial images
Cihan Çakmakçı, Danielle Rodrigues Magalhães, Vitor Ramos Pacor

et al.

Applied Animal Behaviour Science, Journal Year: 2023, Volume and Issue: 267, P. 106060 - 106060

Published: Sept. 14, 2023

Language: English

The quest to develop automated systems for monitoring animal behavior
Janice M. Siegford, Juan P. Steibel, Junjie Han

et al.

Applied Animal Behaviour Science, Journal Year: 2023, Volume and Issue: 265, P. 106000 - 106000

Published: July 17, 2023

Automated behavior analysis (ABA) strategies are being researched at a rapid rate to detect an array of behaviors across a range of species. There is growing optimism that soon ethologists will not have to manually decode hours (and hours) of animal videos, but will instead have computers process them for us. However, before we assume ABA is ready for practical use, it is important to take a realistic look at exactly what has been developed, the expertise used to develop it, and the context in which these studies occur. Once we understand common pitfalls occurring during development and identify limitations, we can construct robust tools to achieve automated (ultimately even continuous, real-time) collection of behavioral data, allowing more detailed or longer-term studies on larger numbers of animals than ever before. A model is only as good as the data it was trained on. A key starting point is having accurately annotated data for model training and assessment, yet most developers have no formal training in ethology. Often no formal ethogram is developed, and descriptions of target behaviors in publications are limited or inaccurate. In addition, models are also frequently developed using small datasets, which lack sufficient variability in animal morphometrics, activities, camera viewpoints, and environmental features to be generalizable. Thus, a model often needs to be further validated to perform satisfactorily on different populations, under other conditions, or for other research purposes. Multidisciplinary teams of researchers, including ethologists as well as computer scientists and engineers, are needed to help address these problems when applying computer vision to measure behavior. Reference datasets for behavior detection should be generated and shared that include image annotations, baseline analyses, and benchmarking. Also critical are standards for creating such reference datasets and best practices for validating results from automated tools to ensure they perform as intended. At present, only a handful of publicly available datasets and tools exist. As we work to realize the promise of ABA (and subsequent precision livestock farming technologies) for monitoring behavior, a clear understanding of best practices, access to accurately annotated datasets, and networking among disciplines will increase our chances of success.

Language: English

Citations: 26

Automated Detection of Cat Facial Landmarks
George Martvel, Ilan Shimshoni, Anna Zamansky

et al.

International Journal of Computer Vision, Journal Year: 2024, Volume and Issue: 132(8), P. 3103 - 3118

Published: March 5, 2024

Language: English

Citations: 14

Deep learning in multiple animal tracking: A survey
Yeqiang Liu, Weiran Li, Xue Liu

et al.

Computers and Electronics in Agriculture, Journal Year: 2024, Volume and Issue: 224, P. 109161 - 109161

Published: June 25, 2024

Language: English

Citations: 9

Explainable automated pain recognition in cats
Marcelo Feighelstein, Lea Henze, Sebastian Meller

et al.

Scientific Reports, Journal Year: 2023, Volume and Issue: 13(1)

Published: June 2, 2023

Abstract Manual tools for pain assessment from facial expressions have been suggested and validated for several animal species. However, facial expression analysis performed by humans is prone to subjectivity and bias, and in many cases also requires special expertise and training. This has led to an increasing body of work on automated pain recognition, which has been addressed for several species, including cats. Even for experts, cats are a notoriously challenging species for pain assessment. A previous study compared two approaches to automated 'pain'/'no pain' classification from cat facial images: a deep learning approach, and an approach based on manually annotated geometric landmarks, reaching comparable accuracy results. However, that study used a very homogeneous dataset, and thus further research on the generalizability of pain recognition to more realistic settings is required. This study addresses the question of whether AI models can classify 'pain'/'no pain' in a more realistic (multi-breed, multi-sex) setting, using a heterogeneous and potentially 'noisy' dataset of 84 client-owned cats. The cats were a convenience sample presented to the Department of Small Animal Medicine and Surgery of the University of Veterinary Medicine Hannover and included individuals of different breeds, ages, and sex, with varying medical conditions/medical histories. The cats were scored by veterinary experts using the Glasgow composite measure pain scale in combination with the well-documented and comprehensive clinical history of those patients; this scoring was then used for training the two approaches. We show that in this context the landmark-based approach performs better, reaching accuracy above 77% in pain detection, as opposed to only 65% reached by the deep learning approach. Furthermore, we investigated the explainability of such machine recognition in terms of identifying facial features that are important for the machine, revealing that the region of the nose and mouth seems more important for classification, while the ears are less important, with these findings being consistent across the techniques studied here.
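The landmark-based approach the abstract contrasts with deep learning can be sketched roughly as below. This is a minimal illustration, not the study's method: the distance features, the nearest-centroid rule, and the toy data are all stand-ins (the study used its own landmark scheme and classifier, which are not reproduced here).

```python
import numpy as np

def landmark_features(landmarks):
    """Pairwise distances between 2-D facial landmarks (upper triangle).

    A stand-in feature set; the study's actual geometric features and
    landmark scheme are not reproduced here.
    """
    diffs = landmarks[:, None, :] - landmarks[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    iu = np.triu_indices(len(landmarks), k=1)
    return dists[iu]

# Toy training data: 'pain' faces drawn tighter, 'no pain' faces wider,
# purely so the two classes have separable distance features.
rng = np.random.default_rng(0)
pain_feats = [landmark_features(rng.normal(0, 1.0, (5, 2))) for _ in range(30)]
nopain_feats = [landmark_features(rng.normal(0, 3.0, (5, 2))) for _ in range(30)]
centroids = {"pain": np.mean(pain_feats, axis=0),
             "no pain": np.mean(nopain_feats, axis=0)}

def classify(landmarks):
    """Nearest-centroid 'pain'/'no pain' decision over distance features."""
    f = landmark_features(landmarks)
    return min(centroids, key=lambda c: np.linalg.norm(f - centroids[c]))
```

The point of the sketch is the pipeline shape: explicit geometric features computed from annotated landmarks, then a simple classifier, as opposed to an end-to-end network over raw pixels.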

Language: English

Citations: 21

Explainable automated recognition of emotional states from canine facial expressions: the case of positive anticipation and frustration

Tali Boneh-Shitrit, Marcelo Feighelstein, Annika Bremhorst

et al.

Scientific Reports, Journal Year: 2022, Volume and Issue: 12(1)

Published: Dec. 30, 2022

Abstract In animal research, automation of affective state recognition has so far mainly addressed pain in a few species. Emotional states remain uncharted territories, especially in dogs, due to the complexity of their facial morphology and expressions. This study contributes to filling this gap in two aspects. First, it is the first to address dog emotional states using a dataset obtained in a controlled experimental setting, including videos from (n = 29) Labrador Retrievers assumed to be in two experimentally induced emotional states: negative (frustration) and positive (anticipation). The dogs' facial expressions were measured using the Dogs Facial Action Coding System (DogFACS). Two different approaches are compared in relation to our aim: (1) a DogFACS-based approach with a two-step pipeline consisting of (i) a DogFACS variable detector and (ii) a positive/negative state Decision Tree classifier; and (2) an approach using deep learning techniques with no intermediate representation. The approaches reach accuracy of above 71% and 89%, respectively, with the deep learning approach performing better. Secondly, the study also addresses the explainability of AI models in the context of emotion in animals. The DogFACS-based approach provides decision trees, a mathematical representation which reflects previous findings by human experts of certain facial expressions (DogFACS variables) being correlates of specific emotional states. The deep learning approach offers a different, visual form of explainability in the form of heatmaps reflecting regions of focus of the network's attention, which in some cases show clear relation to the nature of particular DogFACS variables. These heatmaps may hold the key to novel insights on the sensitivity of the network to nuanced pixel patterns reflecting information invisible to the human eye.
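The two-step pipeline described above (detector, then decision tree over DogFACS variables) has roughly this shape. This is a schematic sketch only: the detector is stubbed out, and the tree's variable names and thresholds are made up for illustration, not the classifier learned in the study.

```python
# Stage 1: DogFACS variable detector. In the study this is a learned
# model over video; here it is stubbed to read precomputed activations.
def detect_dogfacs(frame):
    return frame["dogfacs"]

# Stage 2: positive/negative Decision Tree over DogFACS variables.
# Variable names and split thresholds below are illustrative only.
def classify_state(variables):
    if variables.get("ears_adductor", 0.0) > 0.5:       # hypothetical split
        return "positive"
    if variables.get("inner_brow_raiser", 0.0) > 0.5:   # hypothetical split
        return "negative"
    return "negative"                                   # default leaf

def pipeline(frame):
    """Two-step classification: explicit intermediate DogFACS
    representation, then an interpretable tree over it."""
    return classify_state(detect_dogfacs(frame))
```

The design trade-off the abstract reports follows from this structure: the intermediate DogFACS layer makes each decision inspectable, while the end-to-end network skips it and gains accuracy at the cost of needing post-hoc heatmap explanations.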

Language: English

Citations: 28

Automated recognition of emotional states of horses from facial expressions
Marcelo Feighelstein, Claire Riccie-Bonot, Hana Hasan

et al.

PLoS ONE, Journal Year: 2024, Volume and Issue: 19(7), P. e0302893 - e0302893

Published: July 15, 2024

Animal affective computing is an emerging field, which has so far mainly focused on pain, while other emotional states remain uncharted territories, especially in horses. This study is the first to develop AI models to automatically recognize horse emotional states from facial expressions using data collected in a controlled experiment. We explore two types of pipelines: a deep learning one, which takes as input video footage, and a machine learning one, which takes as input EquiFACS annotations. The former outperforms the latter, with 76% accuracy in separating between the four states: baseline, positive anticipation, disappointment and frustration. Anticipation and frustration were difficult to separate, with only 61% accuracy.

Language: English

Citations: 4

From facial expressions to algorithms: a narrative review of animal pain recognition technologies
Ludovica Chiavaccini, Anjali Gupta, Guido Chiavaccini

et al.

Frontiers in Veterinary Science, Journal Year: 2024, Volume and Issue: 11

Published: July 17, 2024

Facial expressions are essential for communication and emotional expression across species. Despite the improvements brought by tools like the Horse Grimace Scale (HGS) in pain recognition in horses, their reliance on human identification of characteristic traits presents drawbacks such as subjectivity, training requirements, costs, and potential bias. Despite these challenges, the development of facial expression pain scales for animals has been making strides. To address the remaining limitations, Automated Pain Recognition (APR) powered by Artificial Intelligence (AI) offers a promising advancement. Notably, computer vision and machine learning have revolutionized our approach to identifying and addressing pain in non-verbal patients, including animals, with profound implications for both veterinary medicine and animal welfare. By leveraging the capabilities of AI algorithms, we can construct sophisticated models capable of analyzing diverse data inputs, encompassing not only facial expressions but also body language, vocalizations, and physiological signals, to provide precise and objective evaluations of an animal's pain levels. While the advancement of APR holds great promise for improving animal welfare by enabling better pain management, it brings forth the need to overcome existing challenges, ensure ethical practices, and develop robust ground truth measures. This narrative review aimed to provide a comprehensive overview, tracing the journey from the initial development of facial pain scales to the recent application, evolution, and limitations of APR, thereby contributing to the understanding of this rapidly evolving field.

Language: English

Citations: 4

Comparison between AI and human expert performance in acute pain assessment in sheep
Marcelo Feighelstein, Stélio Pacca Loureiro Luna, Nuno Silva

et al.

Scientific Reports, Journal Year: 2025, Volume and Issue: 15(1)

Published: Jan. 3, 2025

Language: English

Citations: 0

Identifying Novel Emotions and Wellbeing of Horses from Videos Through Unsupervised Learning

Aarya Bhave, Emily Kieson, Alina Hafner

et al.

Sensors, Journal Year: 2025, Volume and Issue: 25(3), P. 859 - 859

Published: Jan. 31, 2025

This research applies unsupervised learning to a large original dataset of horses in the wild to identify previously unidentified horse emotions. We construct a novel, high-quality, diverse dataset of 3929 images consisting of five breeds of horses worldwide at different geographical locations. We base our analysis on the seven Panksepp emotions of mammals — "Exploring", "Sadness", "Playing", "Rage", "Fear", "Affectionate" and "Lust" — along with one additional emotion, "Pain", which has been shown to be highly relevant for horses. We apply the contrastive framework MoCo (Momentum Contrast for Unsupervised Visual Representation Learning) to predict these emotions using unsupervised learning. We significantly modify the framework by building a custom downstream classifier network that connects to a frozen CNN encoder pretrained with MoCo. Our method allows the model to learn similarities and differences within image groups on its own, without labels. The clusters thus formed are indicative of deeper nuances and complexities within a horse's mood, and can possibly hint towards the existence of novel and complex equine emotions.
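The "frozen pretrained encoder plus downstream classifier" design the abstract describes can be illustrated schematically. Both components below are substitutions, not the paper's architecture: a fixed random ReLU projection stands in for the MoCo-pretrained CNN backbone, and a nearest-centroid rule stands in for the custom classifier network.

```python
import numpy as np

rng = np.random.default_rng(7)

# Frozen "encoder": a fixed random ReLU projection standing in for the
# MoCo-pretrained CNN backbone; its weights are never updated downstream.
W_FROZEN = rng.normal(size=(64, 16))

def embed(x):
    return np.maximum(np.atleast_2d(x) @ W_FROZEN, 0.0)

# Downstream classifier trained only on frozen embeddings (nearest
# centroid here; the paper trains a small network instead).
def fit_centroids(X, y):
    return {c: embed(X[y == c]).mean(axis=0) for c in set(y)}

def predict(centroids, x):
    z = embed(x)[0]
    return min(centroids, key=lambda c: np.linalg.norm(z - centroids[c]))

# Two toy "emotion" clusters in input space; labels are illustrative.
a, b = np.full(64, 2.0), np.full(64, -2.0)
X = np.vstack([a + rng.normal(0, 0.1, (20, 64)),
               b + rng.normal(0, 0.1, (20, 64))])
y = np.array(["playing"] * 20 + ["fear"] * 20)
cents = fit_centroids(X, y)
```

The key property illustrated is that only the small downstream head adapts to the task, while the representation is held fixed from contrastive pretraining.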

Language: English

Citations: 0

A segment-based framework for explainability in animal affective computing

Tali Boneh-Shitrit, Lauren Finka, Daniel S. Mills

et al.

Scientific Reports, Journal Year: 2025, Volume and Issue: 15(1)

Published: April 21, 2025

Recent developments in animal motion tracking and pose recognition have revolutionized the study of animal behavior. More recent efforts extend beyond behavior towards affect recognition using facial and body language analysis, with far-reaching applications in animal welfare and health. Deep learning models are most commonly used in this context. However, their "black box" nature poses a significant challenge to explainability, which is vital for building trust and encouraging adoption among researchers. Despite its importance, the field of explainability quantification remains under-explored. Saliency maps are widely used methods in which each pixel is assigned a significance level indicating its relevance to the neural network's decision. Although these methods are frequently used in research, they are predominantly applied qualitatively, with limited work on quantitatively analyzing them or identifying the most suitable method for a specific task. In this paper, we propose a framework aimed at enhancing explainability in animal affective computing. Assuming the availability of a classifier of affective state and the ability to generate saliency maps, our approach focuses on evaluating and comparing visual explanations by emphasizing the importance of meaningful semantic parts captured as segments, which are thought to be closely linked to behavioral indicators of affective states. Furthermore, it introduces a quantitative scoring mechanism to assess how well the saliency maps generated for a given model align with predefined semantic regions. This scoring system allows for systematic, measurable comparisons of different pipelines in terms of explainability within a task. Such a metric can serve as a quality indicator when developing classifiers for tasks with known biologically relevant segments, and can help researchers evaluate whether models attend to expected regions while also exploring new potential indicators. We evaluated the framework on three datasets focused on cat and horse pain and dog emotions. Across all datasets, the analysis consistently revealed that the eye area is an important feature for the classifiers. These results highlight the potential of explainability frameworks such as the one suggested here to uncover insights into what the machines 'see'.
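One simple instance of the kind of quantitative alignment score described — the fraction of saliency mass falling inside a predefined semantic segment — can be computed directly. The paper's exact metric may differ, and the masks below are toy arrays, not real facial segments.

```python
import numpy as np

def segment_alignment(saliency, mask):
    """Fraction of total saliency mass inside a semantic segment mask.

    `saliency` is a non-negative H x W map; `mask` is a binary H x W
    array marking one segment (e.g. the eye region). This is one
    plausible form of the score, not necessarily the paper's.
    """
    total = saliency.sum()
    return float((saliency * mask).sum() / total) if total > 0 else 0.0

# Toy example: saliency concentrated inside a hypothetical "eye" segment.
saliency = np.zeros((8, 8))
saliency[2:4, 2:4] = 1.0          # network attends to a 2x2 patch
eye_mask = np.zeros((8, 8))
eye_mask[1:5, 1:5] = 1.0          # hypothetical eye segment
ear_mask = np.zeros((8, 8))
ear_mask[6:8, 6:8] = 1.0          # hypothetical ear segment
```

A score near 1 for the eye mask and near 0 for the ear mask would indicate, in this scheme, that the model's attention concentrates on the biologically expected region.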

Language: English

Citations: 0