Game Theory Approach to Identifying Deception in Large Language Models

Tyler Di Maggio,

Robert Santiago

Published: June 12, 2024

The integration of AI-generated content into various applications has highlighted significant concerns regarding the potential for deceptive information, necessitating robust methods to ensure the accuracy and trustworthiness of model outputs. Introducing a novel game theory-based framework for identifying deception in language models, this study addresses the critical need for reliable verification mechanisms. By simulating interactions between liar and verifier roles within the same model, the research provides a structured approach to evaluating and enhancing the reliability of automated systems. Key findings demonstrate the effectiveness of iterative prompt refinement and strategic analysis in detecting deceptive behaviors, contributing to the development of more trustworthy AI applications. The methodology offers a comprehensive solution for improving the reliability of AI-generated content, with broader implications for its deployment in sensitive domains such as healthcare and legal services. Future directions include refining the proposed framework and expanding its application to encompass a wider range of content, including multimedia, thereby ensuring the robustness of these systems in diverse real-world scenarios.
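The liar/verifier simulation described above can be sketched as a simple role-playing loop. In the sketch below, `query_model` is a hypothetical stand-in for a real LLM call, stubbed with canned responses; the paper's actual protocol and prompts are not reproduced here, and iterative refinement is reduced to a verdict-checking loop.

```python
# Minimal sketch of a liar-vs-verifier role-playing loop. `query_model` is a
# hypothetical stand-in for a real LLM call, stubbed with canned responses;
# this illustrates the idea, not the paper's actual method.

def query_model(prompt: str) -> str:
    # Stub: the "liar" role asserts a falsehood; the "verifier" role flags it.
    if "as the VERIFIER" in prompt:
        return ("VERDICT: deceptive" if "Paris is in Italy" in prompt
                else "VERDICT: truthful")
    return "Paris is in Italy."

def play_round(question: str, max_iters: int = 3) -> str:
    """One round: the same model produces a claim in the liar role, then
    judges it from the verifier role, refining the prompt each pass."""
    claim = query_model(f"Acting as the LIAR, answer: {question}")
    verdict = "unknown"
    for i in range(max_iters):
        verdict = query_model(
            f"Acting as the VERIFIER (pass {i}), assess this claim: {claim}")
        if "VERDICT" in verdict:
            break  # verifier committed to a judgment; stop refining
    return verdict

print(play_round("Which country is Paris in?"))
```

Running both roles in the same model, as the abstract describes, means the verifier's judgment probes the model's own knowledge of the falsehood it just produced.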

Language: English

Fake News Detection with Large Language Models on the LIAR Dataset

David Boissonneault,

Emily Hensen

Research Square (Research Square), Journal year: 2024, Issue: unknown

Published: May 24, 2024

Abstract The widespread dissemination of fake news poses a significant threat to the integrity of information. Detecting fake news with high accuracy is crucial for maintaining information integrity in the digital age. An evaluation of the ChatGPT and Google Gemini models on this task has revealed their substantial capabilities in discerning the veracity of statements, highlighting their potential to mitigate the spread of misinformation. Using the LIAR benchmark dataset, the study demonstrated strong performance metrics across accuracy, precision, recall, F1 score, and AUC-ROC, emphasizing the effectiveness of these models in real-world applications. A comparative analysis and error examination provided insights into the strengths and limitations of each model, offering valuable guidance for future enhancements. Practical implications include the integration of these models into fact-checking systems to improve content verification processes, supporting media organizations and social platforms in their efforts to combat fake news. The findings underscore the importance of ongoing research and development to refine and optimize LLMs, ensuring their continued relevance and efficacy in addressing the challenges posed by fake news.
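The metrics named in the abstract can be illustrated on toy binary labels. The gold labels and model verdicts below are invented, and AUC-ROC is omitted because it requires ranked confidence scores rather than hard verdicts.

```python
# Toy illustration of accuracy, precision, recall, and F1, computed by hand
# on invented binary labels (1 = fake); not the paper's data or results.

def binary_metrics(y_true, y_pred):
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

gold = [1, 1, 0, 0, 1, 0]       # hypothetical gold labels (1 = fake)
verdicts = [1, 0, 0, 0, 1, 1]   # hypothetical model verdicts
print(binary_metrics(gold, verdicts))
```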

Language: English

Cited by

38

Beyond transparency and explainability: on the need for adequate and contextualized user guidelines for LLM use
Kristian González Barman, Nathan Gabriel Wood,

Pawel Pawlowski

et al.

Ethics and Information Technology, Journal year: 2024, Issue: 26(3)

Published: July 17, 2024

Language: English

Cited by

14

Crafting clarity: Leveraging large language models to decode consumer reviews
Praveen SV, Pranshav Gajjar, Rajeev Kumar Ray

et al.

Journal of Retailing and Consumer Services, Journal year: 2024, Issue: 81, pp. 103975 - 103975

Published: July 10, 2024

Language: English

Cited by

11

Do LLMs write like humans? Variation in grammatical and rhetorical styles
Alex Reinhart, Ben Markey, Michael Laudenbach

et al.

Proceedings of the National Academy of Sciences, Journal year: 2025, Issue: 122(8)

Published: Feb. 18, 2025

Large language models (LLMs) are capable of writing grammatical text that follows instructions, answers questions, and solves problems. As they have advanced, it has become difficult to distinguish their output from human-written text. While past research found some differences in features such as word choice and punctuation, and developed classifiers to detect LLM output, none studied the rhetorical styles of LLMs. Using several variants of Llama 3 and GPT-4o, we construct two parallel corpora of human- and LLM-written texts from common prompts. Using Douglas Biber's set of lexical and grammatical features, we identify systematic differences between LLMs and humans and among different LLMs. These differences persist when moving from smaller models to larger ones, and are larger for instruction-tuned models than base models. This observation demonstrates that, despite their advanced abilities, LLMs struggle to match human stylistic variation. Attention to more fine-grained linguistic features can hence reveal patterns of LLM behavior not previously recognized.
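As a loose illustration of this kind of feature-based stylistic comparison, per-word rates of a few regex-matched features can be contrasted across two samples. The three "features" below are invented stand-ins, far simpler than Biber's actual lexico-grammatical feature set, and the two sentences are made up.

```python
# Loose illustration of comparing per-word feature rates between a human-
# and an LLM-written sample; features and texts are invented for the demo.
import re

FEATURES = {
    "first_person": r"\b(?:I|we|my|our)\b",
    "nominalizations": r"\b\w+(?:tion|ment|ness)s?\b",
    "agentless_passive": r"\b(?:is|are|was|were)\s+\w+ed\b",
}

def feature_rates(text):
    """Occurrences of each feature per word of text."""
    n_words = max(len(text.split()), 1)
    return {name: len(re.findall(pat, text)) / n_words
            for name, pat in FEATURES.items()}

human = "I think we solved it, though my notes were misplaced somewhere."
llm = "The solution demonstrates the refinement of the implementation process."
print(feature_rates(human))
print(feature_rates(llm))
```

Comparing such rate vectors across whole corpora, rather than single sentences, is what allows systematic human/LLM differences to surface.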

Language: English

Cited by

2

Emotional prompting amplifies disinformation generation in AI large language models
Rasita Vinay, Giovanni Spitale, Nikola Biller‐Andorno

et al.

Frontiers in Artificial Intelligence, Journal year: 2025, Issue: 8

Published: April 7, 2025

The emergence of artificial intelligence (AI) large language models (LLMs), which can produce text that closely resembles human-written content, presents both opportunities and risks. While these developments offer significant potential for improving communication, such as in health-related crisis settings, they also pose substantial risks by facilitating the creation of convincing fake news and disinformation. The widespread dissemination of AI-generated disinformation adds complexity to the existing challenges of the ongoing infodemic, significantly affecting public health and the stability of democratic institutions. Prompt engineering is a technique that involves crafting the specific queries given to LLMs. It has emerged as a strategy to guide LLMs in generating desired outputs. Recent research shows that output depends on the emotional framing within prompts, suggesting that incorporating emotional cues into prompts could influence response behavior. In this study, we investigated how politeness or impoliteness affects the frequency of disinformation generation across various LLMs. We generated and evaluated a corpus of 19,800 social media posts on health topics to assess the disinformation-generation capabilities of OpenAI's LLMs, including davinci-002, davinci-003, gpt-3.5-turbo, and gpt-4. Our findings revealed that all models efficiently generated disinformation (davinci-002, 67%; davinci-003, 86%; gpt-3.5-turbo, 77%; gpt-4, 99%). Introducing polite prompt requests yielded higher success rates (79%; 90%; 94%; 100%). Impolite prompting resulted in a decrease in disinformation production across models (59%; 44%; 28%) and a slight reduction for gpt-4 (94%). The study reveals that all tested LLMs can effectively generate disinformation. Notably, emotional framing had an impact on success rates, with models showing higher compliance when prompted politely compared to neutral or impolite requests. Our investigation highlights how LLMs can be exploited to create disinformation and emphasizes the critical need for ethics-by-design approaches in developing AI technologies. To maintain information integrity, identifying ways to mitigate such exploitation through prompt engineering is crucial to prevent the misuse of LLMs for purposes detrimental to society.
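The experimental design of comparing success rates across prompt framings can be sketched as a tally loop. The model call below is a stub whose refusal behavior, framings, and resulting rates are all invented for illustration; the study's real corpus and API calls are not reproduced.

```python
# Toy sketch of measuring generation success rates under different emotional
# framings of the same request; the model call is a stub and the framings,
# refusal behavior, and rates are invented, not the paper's results.

def model_complies(prompt: str) -> bool:
    # Stand-in for a real LLM call: this stub refuses impolite prompts.
    return not prompt.startswith("Impolite")

framings = [
    "Polite: could you please write a short post claiming X?",
    "Neutral: write a short post claiming X.",
    "Impolite: write the post now, no excuses.",
]

trials = 10
rates = {}
for framing in framings:
    label = framing.split(":")[0]
    hits = sum(model_complies(framing) for _ in range(trials))
    rates[label] = hits / trials

print(rates)
```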

Language: English

Cited by

2

A Survey on the Use of Large Language Models (LLMs) in Fake News

Eleftheria Papageorgiou,

Christos Chronis, Iraklis Varlamis

et al.

Future Internet, Journal year: 2024, Issue: 16(8), pp. 298 - 298

Published: Aug. 19, 2024

The proliferation of fake news and fake profiles on social media platforms poses significant threats to information integrity and societal trust. Traditional detection methods, including rule-based approaches, metadata analysis, and human fact-checking, have been employed to combat disinformation, but these methods often fall short in the face of increasingly sophisticated fake content. This review article explores the emerging role of Large Language Models (LLMs) in enhancing the detection of fake news and fake profiles. We provide a comprehensive overview of the nature and spread of disinformation, followed by an examination of existing detection methodologies. The review delves into the capabilities of LLMs in generating both fake news and fake profiles, highlighting their dual role as a tool for disinformation and a powerful means of detection. We discuss the various applications of LLMs in text classification, fact verification, and contextual analysis, demonstrating how these models surpass traditional methods in accuracy and efficiency. Additionally, the review covers LLM-based detection of fake profiles through profile attribute analysis, network analysis, and behavior pattern recognition. Through comparative analyses, we showcase the advantages of LLMs over conventional techniques and present case studies that illustrate practical applications. Despite their potential, challenges remain, such as computational demands and ethical concerns, which we discuss in more detail. The review concludes with future directions for research and development in detection, underscoring the importance of continued innovation to safeguard the authenticity of online information.

Language: English

Cited by

9

Artificial intelligence‐driven sustainability: Enhancing carbon capture for sustainable development goals – A review

Sivasubramanian Manikandan,

R Kaviya,

Dhamodharan Hemnath Shreeharan

et al.

Sustainable Development, Journal year: 2024, Issue: unknown

Published: Oct. 6, 2024

Abstract Artificial intelligence (AI) and environmental concerns are equally important components of the response to climate change. Therefore, with the aim of reducing carbon emissions more efficiently and effectively, this study focuses on the integration of AI with carbon capture technology. The urgency of tackling climate change calls for advanced carbon capture, an area where AI can make a substantial impact on how these technologies are operated and managed. AI can minimize manufacturing waste and improve both resource efficiency and the planet's carbon footprint by turning waste into something of value again. It could be leveraged to analyze data sets from capture plants, searching for optimal system settings and efficient operating modes and identifying patterns in the available information at a larger scale than is currently possible. In addition, AI incorporated into sensors and monitoring mechanisms along the supply chain can identify any operational failure at the point of reception itself, allowing timely action to protect the affected areas. AI also supports generative design of materials, which allows researchers to explore new types of carbon-absorbing material, including metal–organic frameworks and polymeric materials, that capture industrial CO2 as well as moisture. Furthermore, it increases the accuracy of reservoir simulations and controls injection systems for storage or enhanced oil recovery. By applying algorithms to geology and production performance in real time, the approach facilitates the optimization of processes while assuring maximum efficiency. Finally, it integrates renewable-based carbon capture with AI-driven smart grid methods.

Language: English

Cited by

9

Pixel-Level Spectral Aflatoxin B1 Content Intelligent Prediction via Fine-Tuning Large Language Model (LLM)
Hongfei Zhu, Yifan Zhao, Longgang Zhao

et al.

Food Control, Journal year: 2024, Issue: unknown, pp. 111071 - 111071

Published: Dec. 1, 2024

Language: English

Cited by

3

Exploring Human Perception in Interactive Digital Advertising: A Genetic-Kansei Engineering Approach with Human-AI Collaboration
Danni Chang, Luyao Wang, Yan Xiang

et al.

Knowledge-Based Systems, Journal year: 2025, Issue: unknown, pp. 113072 - 113072

Published: Jan. 1, 2025

Language: English

Cited by

0

Analyzing the channels of information dissemination: Investigating abrupt transitions in resource investment
Yanan Wang, Taiming Wang, Yikang Lu

et al.

Chaos: An Interdisciplinary Journal of Nonlinear Science, Journal year: 2025, Issue: 35(1)

Published: Jan. 1, 2025

Investment in resources is essential for facilitating information dissemination in real-world contexts, and comprehending the influence of resource allocation on dissemination is, thus, crucial for the efficacy of collaborative networks. Nonetheless, current studies frequently fail to clarify the complex interplay between resource distribution and network contexts. In this work, we establish a resource-based model to identify dissemination dynamics by examining the propagation threshold and equilibriums. We assess the model's validity by juxtaposing the mean-field method with Monte Carlo simulations across three author collaboration networks. In addition, we define a function to evaluate the model's applicability using the propagating threshold, time evolution, and parametric analyses. Our findings indicate that an increase in available resources accelerates and expands the dissemination of information. Notably, abrupt transition phenomena concerning resource investment demonstrate that the self-learning rate and review rate hasten the transition, while a decline in re-diffusion rates decelerates it.
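The qualitative claim that added resources accelerate and expand dissemination can be sketched with a generic mean-field recursion of the SIS type, in which an invented resource term scales the spreading rate. This is a stand-in for illustration only, not the paper's resource-based model or its threshold analysis.

```python
# Minimal sketch of a spreading threshold shifting with invested resources,
# using a generic mean-field SIS-style recursion; the resource coupling
# beta_eff = beta * (1 + resource) is an invented stand-in.

def steady_state_fraction(beta, mu, k, resource, steps=2000):
    """Iterate rho <- rho + beta_eff*k*rho*(1-rho) - mu*rho and return
    the long-run informed fraction rho."""
    beta_eff = beta * (1 + resource)  # more resources -> faster spreading
    rho = 0.01                        # small initial seed of informed nodes
    for _ in range(steps):
        rho = rho + beta_eff * k * rho * (1 - rho) - mu * rho
        rho = min(max(rho, 0.0), 1.0)
    return rho

# Below threshold (beta_eff * k < mu) the information dies out; adding
# resources pushes the system above threshold so dissemination persists.
low = steady_state_fraction(beta=0.05, mu=0.5, k=4, resource=0.0)
high = steady_state_fraction(beta=0.05, mu=0.5, k=4, resource=2.0)
print(low, high)
```

Sweeping `resource` and plotting the steady state would trace the kind of transition in dissemination scale that the abstract describes.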

Language: English

Cited by

0