Key Algorithms for Keyphrase Generation: Instruction-Based LLMs for Russian Scientific Keyphrases
Anna Glazkova, Dmitry Morozov, Timur Garipov

et al.

Lecture Notes in Computer Science, Journal year: 2025, Issue: unknown, Pages: 107-119

Published: Jan. 1, 2025

Language: English

A comprehensive review of machine learning and its application to dairy products
Paulina Freire, Diego Alencar Freire, Carmen C. Licón

et al.

Critical Reviews in Food Science and Nutrition, Journal year: 2024, Issue: unknown, Pages: 1-16

Published: Feb. 13, 2024

Machine learning (ML) technology is a powerful tool in food science and engineering, offering numerous advantages, from recognizing patterns and predicting outcomes to customizing and adjusting to individual needs. Its further development can enable researchers and industries to significantly enhance the efficiency of dairy processing while providing valuable insights into the field. This paper presents an overview of the role of machine learning in the dairy industry and its potential to improve dairy processing. We performed a systematic search for articles published between January 2003 and 2023 related to dairy products and highlighted the algorithms used. 48 studies are discussed to assist readers in identifying the best methods that could be applied in their field, as well as relevant ideas and future research directions. Moreover, a step-by-step guide to the ML process, including the classification of different algorithms, is provided. The review focuses on state-of-the-art applications of ML in milk transformation and other dairy products, but it also offers perspectives and conclusions. The study serves as a guide for individuals interested in learning about or getting involved with ML.

Language: English

Cited by

13

Gemini AI vs. ChatGPT: A comprehensive examination alongside ophthalmology residents in medical knowledge
Daniel Bahir, Omri Zur, Leah Attal

et al.

Graefe's Archive for Clinical and Experimental Ophthalmology, Journal year: 2024, Issue: unknown

Published: Sep. 15, 2024

Language: English

Cited by

11

A Scoping Review of ChatGPT Research in Accounting and Finance

Mengming Dong, Theophanis C. Stratopoulos, Victor Xiaoqi Wang

et al.

SSRN Electronic Journal, Journal year: 2024, Issue: unknown

Published: Jan. 1, 2024

This paper provides a review of recent publications and working papers on ChatGPT and related Large Language Models (LLMs) in accounting and finance. The aim is to understand the current state of research in these two areas and identify potential opportunities for future inquiry. We identify three common themes from earlier studies. The first theme focuses on applications of LLMs in various fields; the second utilizes LLMs as a new research tool by leveraging their capabilities such as classification, summarization, and text generation; the third investigates the implications of LLM adoption for accounting and finance professionals, as well as for organizations and sectors. While these studies provide valuable insights, they leave many important questions unanswered or only partially addressed. We propose venues for further exploration and provide technical guidance for researchers seeking to employ LLMs in their research.

Language: English

Cited by

8

Integrating human expertise & automated methods for a dynamic and multi-parametric evaluation of large language models’ feasibility in clinical decision-making

Elena Sblendorio, Vincenzo Dentamaro, Alessio Lo Cascio

et al.

International Journal of Medical Informatics, Journal year: 2024, Issue: 188, Pages: 105501-105501

Published: May 26, 2024

Recent enhancements in Large Language Models (LLMs) such as ChatGPT have exponentially increased user adoption. These models are accessible on mobile devices and support multimodal interactions, including conversations, code generation, and patient image uploads, broadening their utility in providing healthcare professionals with real-time support for clinical decision-making. Nevertheless, many authors have highlighted serious risks that may arise from the adoption of LLMs, principally related to safety, alignment, and ethical guidelines. To address these challenges, we introduce a novel methodological approach designed to assess the specific feasibility of adopting LLMs within a clinical area, with a focus on nursing, evaluating their performance and thereby directing the choice of model. Emphasizing the LLMs' adherence to scientific advancements, this approach prioritizes care personalization, according to the "Organization for Economic Co-operation and Development" (OECD) frameworks for responsible AI. Moreover, its dynamic nature allows it to adapt to future evolutions of LLMs. Through the integration of advanced multidisciplinary knowledge, including Nursing Informatics, aided by a prospective literature review, seven key domains and evaluation items were identified as follows: (1) State of the Art Alignment & Safety; (2) Focus, Accuracy, and Management of Prompt Ambiguity; (3) Data Integrity, Security, Ethics, and Sustainability, in accordance with the OECD Recommendations for Responsible AI; (4) Temporal Variability of Responses (Consistency); (5) Adaptation to standardized terminology and Classifications for professionals; (6) General Capabilities: Post User Feedback Self-Evolution Capability and Organization of Chapters; (7) Ability to Drive the Evolution of Healthcare. Nine state-of-the-art LLMs were evaluated using this methodology in oncology nursing decision-making, producing preliminary results. Gemini Advanced, Anthropic Claude 3, and ChatGPT 4 achieved the minimum score in the Safety domain for the classification "recommended", being also endorsed across all domains. LLAMA 70B and ChatGPT 3.5 were classified as "usable with high caution." Other models were unusable in this domain. The identification of a recommended LLM, combined with its critical, prudent, and integrative use, can support clinical decision-making processes.

Language: English

Cited by

8

Navigating the Web of Disinformation and Misinformation: Large Language Models as Double-Edged Swords

Siddhant Bikram Shah, Surendrabikram Thapa, Ashish Acharya

et al.

IEEE Access, Journal year: 2024, Issue: unknown, Pages: 1-1

Published: Jan. 1, 2024

This paper explores the dual role of Large Language Models (LLMs) in the context of online misinformation and disinformation. In today's digital landscape, where the internet and social media facilitate the rapid dissemination of information, discerning between accurate content and falsified information presents a formidable challenge. Misinformation, often arising unintentionally, and disinformation, crafted deliberately, are at the forefront of this challenge. LLMs such as OpenAI's GPT-4, equipped with advanced language generation abilities, present a double-edged sword in this scenario. While they hold promise for combating misinformation by fact-checking and detecting LLM-generated text, their ability to generate realistic, contextually relevant text also poses risks for creating and propagating misinformation. Further, LLMs are plagued by many problems such as biases, knowledge cutoffs, and hallucinations, which may further perpetuate misinformation. The paper outlines historical developments in misinformation detection and how misinformation affects information consumption, especially among youth, and introduces LLM applications in various domains. It then critically analyzes the potential of LLMs to counter disinformation on sensitive topics such as healthcare, COVID-19, and political agendas. It discusses mitigation strategies, ethical considerations, and regulatory measures, summarizing previous methods and proposing future research directions toward leveraging the benefits of LLMs while minimizing their misuse risks. The paper concludes by acknowledging that LLMs are powerful tools with significant implications for both combating and spreading misinformation in the digital age.

Language: English

Cited by

8

Comparing patient education tools for chronic pain medications: Artificial intelligence chatbot versus traditional patient information leaflets
Prakash Gondode, Sakshi Duggal, Neha Garg

et al.

Indian Journal of Anaesthesia, Journal year: 2024, Issue: 68(7), Pages: 631-636

Published: June 6, 2024

Background and Aims: Artificial intelligence (AI) chatbots like Conversational Generative Pre-trained Transformer (ChatGPT) have recently created much buzz, especially regarding patient education. Such well-informed patients understand and adhere to the management plan and get involved in shared decision making. The accuracy and understandability of the generated educational material are prime concerns. Thus, we compared ChatGPT with traditional patient information leaflets (PILs) about chronic pain medications. Methods: Patients' frequently asked questions were generated from PILs available on the official websites of the British Pain Society (BPS) and the Faculty of Pain Medicine. Eight blinded annexures were prepared for evaluation, consisting of BPS PILs and AI-generated materials structured in a similar manner by ChatGPT. The authors performed a comparative analysis to assess the materials' readability, emotional tone, accuracy, actionability, and understandability. Readability was measured using the Flesch Reading Ease (FRE), Gunning Fog Index (GFI), and Flesch-Kincaid Grade Level (FKGL). Sentiment analysis determined the emotional tone. An expert panel evaluated accuracy and completeness. Actionability and understandability were assessed with the Patient Education Materials Assessment Tool. Results: Traditional PILs generally exhibited higher readability (P values < 0.05), with [mean (standard deviation)] FRE [62.25 (1.6) versus 48 (3.7)], GFI [11.85 (0.9) versus 13.65 (0.7)], and FKGL [8.33 (0.5) versus 10.23 (0.5)], but varied emotional tones, often negative, compared with more positive sentiments in ChatGPT-generated texts. Accuracy and completeness did not significantly differ between the two. Actionability and understandability scores were comparable. Conclusion: While AI chatbots offer efficient information delivery, ensuring accuracy and patient-centeredness remains crucial. It is imperative to balance innovation with evidence-based practice.
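
For context on the readability indices named above, the following is a minimal Python sketch of the standard Flesch Reading Ease (FRE), Gunning Fog Index (GFI), and Flesch-Kincaid Grade Level (FKGL) formulas. The sentence and syllable counters are crude heuristics added here for illustration, not the validated tools the study used, so the scores will only approximate published values.

import re

def count_syllables(word):
    # Rough heuristic: count vowel groups, with a minimum of one per word.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability(text):
    words = re.findall(r"[A-Za-z']+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()] or [text]
    n_words, n_sents = len(words), len(sentences)
    n_syll = sum(count_syllables(w) for w in words)
    n_complex = sum(1 for w in words if count_syllables(w) >= 3)  # "complex" words for GFI
    fre = 206.835 - 1.015 * (n_words / n_sents) - 84.6 * (n_syll / n_words)
    fkgl = 0.39 * (n_words / n_sents) + 11.8 * (n_syll / n_words) - 15.59
    gfi = 0.4 * ((n_words / n_sents) + 100.0 * (n_complex / n_words))
    return {"FRE": round(fre, 2), "GFI": round(gfi, 2), "FKGL": round(fkgl, 2)}

print(readability("Take one tablet twice a day with food. Do not exceed the stated dose."))

Higher FRE indicates easier text, while lower GFI and FKGL indicate a lower school-grade level, which is how the leaflets' readability advantage over the ChatGPT output shows up in the reported means.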

Language: English

Cited by

8

Beyond transparency and explainability: on the need for adequate and contextualized user guidelines for LLM use
Kristian González Barman, Nathan Gabriel Wood, Pawel Pawlowski

et al.

Ethics and Information Technology, Journal year: 2024, Issue: 26(3)

Published: July 17, 2024

Language: English

Cited by

8

A survey on Deep Learning in Edge-Cloud Collaboration: Model partitioning, privacy preservation, and prospects
Xichen Zhang, Roozbeh Razavi-Far, Haruna Isah

et al.

Knowledge-Based Systems, Journal year: 2025, Issue: unknown, Pages: 112965-112965

Published: Jan. 1, 2025

Language: English

Cited by

1

Prompt the problem – investigating the mathematics educational quality of AI-supported problem solving by comparing prompt techniques
Sebastian Schorcht, Nils Buchholtz, Lukas Baumanns

et al.

Frontiers in Education, Journal year: 2024, Issue: 9

Published: May 9, 2024

The use of and research on the large language model (LLM) Generative Pretrained Transformer (GPT) is growing steadily, especially in mathematics education. As students and teachers worldwide increasingly use this AI for teaching and learning mathematics, the question of the quality of the generated output becomes important. Consequently, this study evaluates AI-supported mathematical problem solving with different GPT versions when the LLM is subjected to different prompt techniques. To assess the educational quality (content related and process related) of the LLM's output, we facilitated four prompt techniques and investigated their effects in validations (N = 1,080) using three problem-based tasks. Subsequently, human raters scored the output. The results showed that content-related quality was not significantly affected by the various prompt techniques across GPT versions. However, certain prompt techniques, in particular Chain-of-Thought and Ask-me-Anything, notably improved process-related quality.
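
As a purely illustrative aside (the study's actual prompts are not reproduced here), the Chain-of-Thought technique mainly differs from a plain prompt by requesting explicit intermediate reasoning before the final answer. A hypothetical Python sketch of the two prompt variants for one made-up task:

# Hypothetical task and wording, for illustration only.
task = "A rectangle has a perimeter of 36 cm and its length is twice its width. Find its area."

plain_prompt = f"Solve the following problem and state the answer: {task}"

# Chain-of-Thought variant: ask for explicit intermediate steps before the answer.
cot_prompt = (
    f"Solve the following problem: {task}\n"
    "Think step by step: define the variables, set up the equations, "
    "solve them, and only then state the final answer."
)

print(plain_prompt)
print(cot_prompt)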

Language: English

Cited by

7

Large Language Models for Intraoperative Decision Support in Plastic Surgery: A Comparison between ChatGPT-4 and Gemini
Cesar A. Gomez-Cabello, Sahar Borna, Sophia M. Pressman

et al.

Medicina, Journal year: 2024, Issue: 60(6), Pages: 957-957

Published: June 8, 2024

Background and Objectives: Large language models (LLMs) are emerging as valuable tools in plastic surgery, potentially reducing surgeons' cognitive loads and improving patients' outcomes. This study aimed to assess and compare the current state of the two most common and readily available LLMs, Open AI's ChatGPT-4 and Google's Gemini Pro (1.0 Pro), in providing intraoperative decision support in plastic and reconstructive surgery procedures. Materials and Methods: We presented each LLM with 32 independent scenarios spanning 5 procedures and utilized a 5-point and a 3-point Likert scale for medical accuracy and relevance, respectively. We determined the readability of the responses using the Flesch-Kincaid Grade Level (FKGL) and Flesch Reading Ease (FRE) score. Additionally, we measured the models' response time. We compared the performance using the Mann-Whitney U test and Student's t-test. Results: ChatGPT-4 significantly outperformed Gemini in providing accurate (3.59 ± 0.84 vs. 3.13 ± 0.83, p-value = 0.022) and relevant (2.28 ± 0.77 vs. 1.88, p-value = 0.032) responses. Alternatively, Gemini provided more concise and readable responses, with an average FKGL (12.80 ± 1.56) lower than ChatGPT-4's (15.00 ± 1.89) (p < 0.0001). However, there was no difference in the FRE scores (p = 0.174). Moreover, Gemini's response time was faster (8.15 ± 1.42 s) than ChatGPT-4's (13.70 ± 2.87 s). Conclusions: Although both LLMs demonstrated potential as intraoperative decision-support tools, their inconsistency across different procedures underscores the need for further training and optimization to ensure their reliability for decision support.
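
To illustrate the kind of statistical comparison reported above, here is a minimal Python sketch of a two-sided Mann-Whitney U test on ordinal Likert ratings, using SciPy and made-up scores rather than the study's data:

from scipy.stats import mannwhitneyu

# Hypothetical 5-point Likert accuracy ratings for the same scenarios under two models.
chatgpt4_scores = [4, 5, 3, 4, 4, 3, 5, 4, 2, 4]
gemini_scores = [3, 4, 3, 3, 2, 4, 3, 3, 2, 4]

# Likert ratings are ordinal, so a non-parametric rank-based test is appropriate.
stat, p_value = mannwhitneyu(chatgpt4_scores, gemini_scores, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.3f}")

A p-value below 0.05 in such a test corresponds to the significant accuracy and relevance differences the study reports between ChatGPT-4 and Gemini.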

Language: English

Cited by

7