EmoMBTI-Net: introducing and leveraging a novel emoji dataset for personality profiling with large language models
Akshi Kumar, Dipika Jain

Social Network Analysis and Mining, Journal Year: 2024, Volume and Issue: 14(1)

Published: Dec. 10, 2024

Abstract Emojis, integral to digital communication, often encapsulate complex emotional layers that enhance text beyond mere words. This research leverages the expressive power of emojis to predict Myers-Briggs Type Indicator (MBTI) personalities, diverging from conventional text-based approaches. We developed a unique dataset, EmoMBTI, by mapping emojis to specific MBTI traits using diverse posts scraped from Reddit. This dataset enabled the integration of Natural Language Processing (NLP) techniques tailored for emoji analysis. Large Language Models (LLMs) such as FlanT5, BART, and PEGASUS were trained to generate contextual linkages between emojis and text, further correlating these with MBTI personalities. Following the creation of this dataset, the LLMs were applied to understand the context conveyed by emojis and were subsequently fine-tuned. Additionally, transformer models such as RoBERTa, DeBERTa, and BART were specifically fine-tuned to predict personalities based on the emoji mappings in posts. Our methodology significantly enhances the capability of personality assessments, with the best model achieving an impressive accuracy of 0.875 in predicting MBTI types, which notably exceeds the performances of RoBERTa and DeBERTa at 0.82 and 0.84, respectively. By leveraging the nuanced communication potential of emojis, this approach not only advances personality profiling but also deepens insights into online behaviour, highlighting the substantial impact of emotive icons in online interactions.
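The core idea above, associating emojis in labelled posts with MBTI types, can be illustrated with a minimal co-occurrence sketch. This is not the paper's LLM pipeline; the corpus, the emoji heuristic, and the scoring rule are all hypothetical stand-ins for the EmoMBTI mapping:

```python
from collections import Counter, defaultdict

# Toy labelled corpus (hypothetical): (post, MBTI type) pairs standing in
# for the Reddit posts used to build the emoji-to-MBTI mapping.
POSTS = [
    ("Loved the party last night 🎉😄", "ENFP"),
    ("Quiet evening with a book 📚☕", "INTJ"),
    ("Spreadsheets all day 📊📈", "ISTJ"),
    ("Let's go hiking this weekend! 🏔️🎉", "ENFP"),
]

def is_emoji(ch: str) -> bool:
    # Crude heuristic for this sketch: treat high-codepoint symbols as emojis.
    return ord(ch) > 0x2600

def build_emoji_mbti_map(posts):
    """Count how often each emoji co-occurs with each MBTI type."""
    mapping = defaultdict(Counter)
    for text, mbti in posts:
        for ch in text:
            if is_emoji(ch):
                mapping[ch][mbti] += 1
    return mapping

def predict_mbti(text, mapping):
    """Score a post by summing the MBTI co-occurrence counts of its emojis."""
    score = Counter()
    for ch in text:
        if is_emoji(ch):
            score.update(mapping[ch])
    return score.most_common(1)[0][0] if score else None

mapping = build_emoji_mbti_map(POSTS)
print(predict_mbti("Big celebration today 🎉", mapping))  # 🎉 co-occurs most with ENFP
```

The paper replaces this frequency counting with fine-tuned LLMs that model the context around each emoji, but the input/output contract is the same: posts with emojis in, an MBTI type out.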

Language: English

Adaptive information fusion network for multi-modal personality recognition
Yongtang Bao, Xiang Qi Liu, Yue Qi et al.

Computer Animation and Virtual Worlds, Journal Year: 2024, Volume and Issue: 35(3)

Published: May 1, 2024

Abstract Personality recognition is of great significance in deepening the understanding of social relations. While personality recognition methods have made significant strides in recent years, the challenge of heterogeneity between modalities during feature fusion still needs to be solved. This paper introduces an adaptive multi-modal information fusion network (AMIF-Net) capable of concurrently processing video, audio, and text data. First, utilizing the AMIF-Net encoder, we process the extracted audio and video features separately, effectively capturing long-term data relationships. Then, adding adaptive elements to the fusion network can alleviate the problem of heterogeneity between modes. Lastly, we concatenate the audio-video and text features and feed them into a regression network to obtain Big Five personality trait scores. Furthermore, we introduce a novel loss function to address training inaccuracies, taking advantage of its unique property of exhibiting a peak at the critical mean. Our tests on the ChaLearn First Impressions V2 dataset show partial performance surpassing state-of-the-art networks.
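The final stage described above, concatenating per-modality features and regressing Big Five scores, can be sketched in a few lines. The feature dimensions and the untrained linear head are illustrative assumptions, not the AMIF-Net architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-modality feature vectors (dimensions are illustrative).
audio_feat = rng.standard_normal(128)
video_feat = rng.standard_normal(256)
text_feat = rng.standard_normal(64)

# Late fusion by concatenation, as in the final stage described above.
fused = np.concatenate([audio_feat, video_feat, text_feat])

# Untrained linear regression head mapping the fused vector to the five
# trait scores (Openness, Conscientiousness, Extraversion, Agreeableness,
# Neuroticism); a sigmoid keeps each score in (0, 1) as on ChaLearn.
W = rng.standard_normal((5, fused.size)) * 0.01
b = np.zeros(5)
big_five = 1.0 / (1.0 + np.exp(-(W @ fused + b)))

print(big_five.shape)  # (5,)
```

The paper's contribution lies in what happens before this step, adaptive encoding that reduces cross-modal heterogeneity, but every such pipeline bottoms out in a fused vector and a small regression head like this one.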

Language: English

Citations: 5

SVFAP: Self-supervised Video Facial Affect Perceiver
Licai Sun, Zheng Lian, Kexin Wang et al.

IEEE Transactions on Affective Computing, Journal Year: 2024, Volume and Issue: 16(1), P. 405 - 422

Published: Aug. 5, 2024

Language: English

Citations: 5

Bimodal Self-Esteem Recognition: A Multi-Scenario Approach Based on Psychology
Xinlei Zang, Juan Yang

Published: Jan. 1, 2025

Language: English

Citations: 0

Emotion-Assisted multi-modal Personality Recognition using adversarial Contrastive learning
Yongtang Bao, Yang Wang, Yutong Qi et al.

Knowledge-Based Systems, Journal Year: 2025, Volume and Issue: unknown, P. 113504 - 113504

Published: April 1, 2025

Language: English

Citations: 0

A multimodal personality prediction framework based on adaptive graph transformer network and multi-task learning
Rongquan Wang, Xi-Le Zhao, Xianyu Xu et al.

Computer Graphics Forum, Journal Year: 2025, Volume and Issue: unknown

Published: April 14, 2025

Abstract Multimodal personality analysis targets accurately detecting personality traits by incorporating related multimodal information. However, existing methods focus on unimodal features while overlooking the bimodal associations crucial for this interdisciplinary task. Therefore, we propose a personality prediction framework based on an adaptive graph transformer network and multi-task learning. Firstly, we utilize pre-trained models to learn specific representations from different modalities. Here, we employ the pre-trained models' encoders as backbones of modality-specific extraction to mine unimodal features. Specifically, we introduce a novel personality-related adaptive graph transformer network. This network effectively learns higher-order temporal dependencies in relational graphs and emphasizes more significant features. Furthermore, a channel attention residual fusion module is used to obtain fused features, and a joint learning regression head predicts the scores of personality traits. We design a loss function to enhance the robustness and accuracy of personality prediction. Experimental results on two benchmark datasets demonstrate the effectiveness of our framework, which outperforms state-of-the-art methods. The code is available at https://github.com/RongquanWang/PPF-AGTNMTL .
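The "channel attention residual fusion" step named above can be illustrated with a generic squeeze-and-excitation-style gate plus a residual connection. This is a stand-in sketch under that assumption, not the module from the released code; weights are random and dimensions are illustrative:

```python
import numpy as np

def channel_attention_residual_fusion(x, reduction=4, rng=None):
    """Squeeze-and-excitation-style channel attention with a residual
    connection -- a generic stand-in for a channel attention residual
    fusion module. x has shape (time_steps, channels)."""
    rng = rng or np.random.default_rng(0)
    t, c = x.shape
    squeeze = x.mean(axis=0)                        # global average pool over time
    w1 = rng.standard_normal((c, c // reduction)) * 0.1
    w2 = rng.standard_normal((c // reduction, c)) * 0.1
    hidden = np.maximum(squeeze @ w1, 0)            # bottleneck + ReLU
    gate = 1.0 / (1.0 + np.exp(-(hidden @ w2)))     # sigmoid channel gates
    return x + x * gate                             # residual: input + re-weighted input

fused = channel_attention_residual_fusion(
    np.random.default_rng(1).standard_normal((10, 8))
)
print(fused.shape)  # (10, 8)
```

The residual term preserves the original features even when the learned gates are uninformative, which is the usual motivation for pairing attention with a skip connection.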

Language: English

Citations: 0

Machine and deep learning for personality traits detection: a comprehensive survey and open research challenges
Anam Naz, Hikmat Ullah Khan, Amal Bukhari et al.

Artificial Intelligence Review, Journal Year: 2025, Volume and Issue: 58(8)

Published: May 9, 2025

Language: English

Citations: 0

Unsupervised Multimodal Learning for Dependency-Free Personality Recognition
Sina Ghassemi, Tianyi Zhang, Ward van Breda et al.

IEEE Transactions on Affective Computing, Journal Year: 2023, Volume and Issue: 15(3), P. 1053 - 1066

Published: Sept. 22, 2023

Recent advances in AI-based learning models have significantly increased the accuracy of Automatic Personality Recognition (APR). However, these methods either require training data from the same subject or meta-information about the data set to learn personality-related features (i.e., they are subject-dependent). The variance of feature extraction for different subjects compromises the possibility of designing a dependency-free system for APR. To address this problem, we present an unsupervised multimodal framework to infer personality traits from audio, visual, and verbal modalities. Our method both extracts handcrafted features and transfers deep-learning-based embeddings from other tasks (e.g., emotion recognition) to recognize personality traits. Since the representations are extracted locally in the time domain, temporal aggregation is used to aggregate them over the temporal dimension. We evaluate our method on the ChaLearn dataset, the most widely referenced dataset for APR, using a new split of the dataset. The results show that the proposed modules do not require annotations but still outperform state-of-the-art baseline methods. We also show that addressing the problem of subject-dependency in the original dataset with a newly proposed split (training, validation, and testing) can benefit the community by providing a more accurate way to validate the subject-generalizability of APR algorithms.
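The temporal aggregation step mentioned above, turning locally extracted frame-level features into one clip-level vector, is commonly realized with statistics pooling. A minimal sketch, assuming mean and standard-deviation pooling (one common annotation-free choice, not necessarily the paper's exact operator):

```python
import numpy as np

def temporal_aggregation(frames):
    """Aggregate locally extracted frame-level features over the time
    axis with mean and standard-deviation pooling, producing a single
    fixed-length clip-level descriptor."""
    frames = np.asarray(frames)  # shape: (time_steps, feature_dim)
    return np.concatenate([frames.mean(axis=0), frames.std(axis=0)])

clip = np.random.default_rng(0).standard_normal((50, 16))  # 50 frames, 16-dim features
video_level = temporal_aggregation(clip)
print(video_level.shape)  # (32,)
```

Because the output length is independent of the number of frames, clips of any duration map to vectors of the same size, which is what lets downstream models compare different subjects without per-subject training.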

Language: English

Citations: 5

EmoMBTI-Net: Introducing and Leveraging a Novel Emoji Dataset for Personality Profiling with Large Language Models
Akshi Kumar, Dipika Jain

Research Square (Research Square), Journal Year: 2024, Volume and Issue: unknown

Published: Dec. 4, 2024

Abstract Emojis, integral to digital communication, often encapsulate complex emotional layers that enhance text beyond mere words. This research leverages the expressive power of emojis to predict Myers-Briggs Type Indicator (MBTI) personalities, diverging from conventional text-based approaches. We developed a unique dataset, EmoMBTI, by mapping emojis to specific MBTI traits using diverse posts scraped from Reddit. This dataset enabled the integration of Natural Language Processing (NLP) techniques tailored for emoji analysis. Large Language Models (LLMs) such as FlanT5, BART, and PEGASUS were trained to generate contextual linkages between emojis and text, further correlating these with MBTI personalities. Following the creation of this dataset, the LLMs were applied to understand the context conveyed by emojis and were subsequently fine-tuned. Additionally, transformer models such as RoBERTa, DeBERTa, and BART were specifically fine-tuned to predict personalities based on the emoji mappings in posts. Our methodology significantly enhances the capability of personality assessments, with the best model achieving an impressive accuracy of 0.875 in predicting MBTI types, which notably exceeds the performances of RoBERTa and DeBERTa at 0.82 and 0.84, respectively. By leveraging the nuanced communication potential of emojis, this approach not only advances personality profiling but also deepens insights into online behaviour, highlighting the substantial impact of emotive icons in online interactions.

Language: English

Citations: 0
