
JMIR Infodemiology, Journal Year: 2024, Volume and Issue: unknown
Published: July 18, 2024
Language: English
British Journal of Educational Technology, Journal Year: 2025, Volume and Issue: unknown
Published: Jan. 15, 2025
While offering the potential to support learning interactions, emerging AI applications like Large Language Models (LLMs) come with ethical concerns. Grounding technology design in human values can address ethical concerns and ensure adoption. To this end, we apply Value-Sensitive Design (VSD)—involving empirical, conceptual and technical investigations—to centre the development and evaluation of LLM-based chatbots within a high school environmental science curriculum. Representing multiple perspectives and areas of expertise, the chatbots help students refine their causal models of climate change's impact on local marine ecosystems, communities and individuals. We first perform an empirical investigation, leveraging participatory design to explore the values that motivate educators to engage with the chatbots. Then, we conceptualize the values that emerge from this investigation by grounding them in research on design values, human-AI interactions and education. Findings illuminate considerations for students' identity development, well-being, human–chatbot relationships and sustainability. We further map the values onto design principles and illustrate how these principles guide chatbot development and evaluation. Our work demonstrates how to conduct contextual, value-sensitive inquiries into emergent technologies in educational settings.
Practitioner notes
What is already known about this topic
- Generative artificial intelligence (GenAI) applications can not only support learning, but also raise concerns such as transparency, trust and accountability.
- Value-sensitive design (VSD) presents a systematic approach to centring human values in technology design.
What this paper adds
- Applies VSD in education to identify values central to supporting learning.
- Conducts value-sensitive investigations across several stages of GenAI development: conceptualization, development and evaluation.
Implications for practice and/or policy
- Identity, well-being, human–AI relationships and sustainability are key values for designing chatbots that support learning.
- Using stakeholders' values to generate design principles and evaluation metrics can promote adoption and engagement.
Language: English
Citations: 1
Future Internet, Journal Year: 2024, Volume and Issue: 16(8), P. 298 - 298
Published: Aug. 19, 2024
The proliferation of fake news and fake profiles on social media platforms poses significant threats to information integrity and societal trust. Traditional detection methods, including rule-based approaches, metadata analysis and human fact-checking, have been employed to combat disinformation, but these methods often fall short in the face of increasingly sophisticated content. This review article explores the emerging role of Large Language Models (LLMs) in enhancing the detection of fake news and fake profiles. We provide a comprehensive overview of the nature and spread of fake news, followed by an examination of existing detection methodologies. The review delves into the capabilities of LLMs in generating both fake news and fake profiles, highlighting their dual role as a tool for disinformation and a powerful means of detection. We discuss various applications of LLMs in text classification, verification and contextual analysis, demonstrating how these models surpass traditional methods in accuracy and efficiency. Additionally, the review covers LLM-based detection of fake profiles through profile attribute analysis and network behavior pattern recognition. Through comparative analysis, we showcase the advantages of LLMs over conventional techniques and present case studies that illustrate practical applications. Despite their potential, challenges remain, such as computational demands and ethical concerns, which we discuss in more detail. The review concludes with future directions for research and development in LLM-based detection, underscoring the importance of continued innovation to safeguard the authenticity of online information.
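To make the text-classification angle of this abstract concrete, the following is a minimal, hedged sketch (not drawn from the reviewed article) of framing fake-news screening as zero-shot classification with a pretrained NLI model; the model name, example headline and candidate labels are illustrative assumptions.

```python
# Illustrative sketch only: zero-shot fake-news screening with an off-the-shelf
# NLI model. The model name, headline and labels are assumptions for the example,
# not details taken from the reviewed article.
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",
    model="facebook/bart-large-mnli",  # assumed publicly available NLI model
)

headline = "Scientists confirm chocolate cures all known diseases"  # made-up input
candidate_labels = ["reliable news", "fake news", "satire"]

result = classifier(headline, candidate_labels=candidate_labels)
for label, score in zip(result["labels"], result["scores"]):
    print(f"{label}: {score:.3f}")  # labels are returned sorted by descending score
```

As the abstract notes, text classification of this kind is usually combined with verification, contextual analysis and profile- or network-level signals rather than used in isolation.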
Language: English
Citations: 9
Nutrients, Journal Year: 2025, Volume and Issue: 17(4), P. 607 - 607
Published: Feb. 7, 2025
Background/Objectives: Advances in artificial intelligence now allow the combined use of large language and vision models; however, there has been limited evaluation of their potential for dietary assessment. This study aimed to evaluate the accuracy of ChatGPT-4 in estimating the nutritional content of commonly consumed meals using meal photographs derived from national survey data. Methods: Meal photographs (n = 114) were uploaded to ChatGPT, and it was asked to identify the foods in each meal and estimate their weight and nutrient content for 16 nutrients, for comparison with known values using precision, paired t-tests, the Wilcoxon signed rank test, percentage difference and Spearman correlation (rs). Seven dietitians also estimated the energy, protein and carbohydrate content of thirty-eight meals for comparison using intraclass correlation coefficients (ICC). Results: Compared with the actual meals, ChatGPT showed good precision (93.0%) in correctly identifying the foods in the photographs. There was agreement for estimated weight (p = 0.221) for small meals but poor agreement for medium meals (p < 0.001). Estimates differed significantly from known values for 10 nutrients (p < 0.05). Percentage difference was >10% for 13 nutrients, with ChatGPT underestimating 11 nutrients. Correlations were adequate or better for all nutrients, with rs ranging from 0.29 to 0.83. When comparing ChatGPT with the dietitians, ICC ranged from 0.31 to 0.67 across nutrients. Conclusions: ChatGPT performed well at identifying foods, estimating weights for some portion sizes and ranking meals according to nutrient content, but poorly at estimating other portion sizes and providing accurate estimates of nutrient content.
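As a point of reference for the comparison statistics listed in this abstract, here is a minimal sketch, assuming synthetic example values rather than the study's data, of how a paired t-test, the Wilcoxon signed rank test, percentage difference and Spearman correlation can be computed for estimated versus known nutrient values.

```python
# Illustrative sketch with made-up numbers; not the study's data or analysis code.
import numpy as np
from scipy import stats

actual = np.array([520.0, 310.0, 645.0, 480.0, 700.0, 415.0])      # known energy values (kcal)
estimated = np.array([495.0, 350.0, 600.0, 510.0, 660.0, 380.0])   # hypothetical ChatGPT estimates

t_stat, t_p = stats.ttest_rel(estimated, actual)                 # paired t-test
w_stat, w_p = stats.wilcoxon(estimated, actual)                  # Wilcoxon signed rank test
rs, rs_p = stats.spearmanr(estimated, actual)                    # Spearman correlation (rs)
pct_diff = 100.0 * (estimated - actual).mean() / actual.mean()   # mean percentage difference

print(f"paired t-test p = {t_p:.3f}")
print(f"Wilcoxon p = {w_p:.3f}")
print(f"Spearman rs = {rs:.2f} (p = {rs_p:.3f})")
print(f"mean percentage difference = {pct_diff:.1f}%")
```

The same pattern extends to each of the 16 nutrients; the intraclass correlation coefficient used for the dietitian comparison would typically come from a dedicated package (for example, pingouin's intraclass_corr).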
Language: English
Citations: 0
Social Network Analysis and Mining, Journal Year: 2025, Volume and Issue: 15(1)
Published: March 9, 2025
Language: English
Citations: 0
Research Square (Research Square), Journal Year: 2025, Volume and Issue: unknown
Published: March 31, 2025
Language: English
Citations: 0
Published: July 18, 2024
Language: English
Citations: 0
Lecture Notes in Computer Science, Journal Year: 2024, Volume and Issue: unknown, P. 391 - 405
Published: Nov. 26, 2024
Language: English
Citations: 0
JMIR Infodemiology, Journal Year: 2024, Volume and Issue: unknown
Published: July 18, 2024
Language: English
Citations: 0