Use of Chatbots to Support the Inclusion of People With Autism Spectrum Disorder DOI
Ayşe Tuna

Advances in educational technologies and instructional design book series, Journal year: 2024, Issue: unknown, pp. 421-435

Published: June 3, 2024

It is known that individuals with autism spectrum disorder enjoy interacting with technological tools and being engaged by them, because these interactions happen in an environment that feels trustworthy and safe for them. Therefore, this chapter presents how chatbots can help children with autism spectrum disorder and reviews the advantages and disadvantages of chatbots. In addition, the challenges that limit their use and various ethical problems raised by chatbots are reviewed. Finally, future research directions in this domain are presented.

Language: English

AI Chatbots and Cognitive Control: Enhancing Executive Functions Through Chatbot Interactions: A Systematic Review DOI Creative Commons
Pantelis Pergantis, Victoria Bamicha, Charalampos Skianis

et al.

Brain Sciences, Journal year: 2025, Issue: 15(1), pp. 47-47

Published: Jan. 6, 2025

Background/Objectives: The evolution of digital technology enhances and broadens a person's intellectual growth. Research points out that implementing innovative applications of the digital world improves human social, cognitive, and metacognitive behavior. Artificial intelligence chatbots are yet another human-made construct. These forms of software simulate conversation, understand and process user input, and provide personalized responses. Executive function includes a set of higher mental processes necessary for formulating, planning, and achieving a goal. The present study aims to investigate the reinforcement of executive functions through artificial intelligence chatbots, outlining their potentials, limitations, and future research suggestions. Specifically, it examined three questions: the use of conversational agents in executive functioning training, their impact on executive-cognitive skills, and the duration of any improvements. Methods: The assessment of the existing literature was implemented using the systematic review method, according to the PRISMA 2020 Principles. The avalanche search method was employed to conduct the search in the following databases: Scopus, Web of Science, and PubMed, complemented by Google Scholar. The review included studies from 2021 onward that were experimental, observational, or mixed methods. It included AI-based conversationalists that support executive functions, addressing factors such as anxiety, stress, depression, memory, attention, cognitive load, and behavioral changes. In addition, the review included both general populations and populations with specific neurological conditions, all peer-reviewed, written in English, and with full-text access. However, it excluded studies published before 2021, reviews, non-AI-based conversationalists, studies not targeting a range of executive skills and abilities, and studies without open access. The criteria aligned with the objectives, ensuring a focus on AI conversational agents and executive function. The initial collection totaled n = 115 articles; however, the eligibility requirements led to a final selection of 10 studies. Results: The findings suggested positive effects of conversational agents in enhancing and improving executive skills. However, several limitations were identified, making it still difficult to generalize and reproduce these effects.
Conclusions: An AI chatbot can be an assistant in learning, expanding and contributing to the metacognitive, cognitive, and social development of the individual. Nevertheless, its use in executive function training is at a primary stage. The findings highlighted the need for a unified framework of reference for future studies, better research designs, diverse populations, larger sample sizes of participants, and longitudinal studies to observe the long-term effects of use.

Language: English

Cited by

3

Exploring the impact of integrating AI tools in higher education using the Zone of Proximal Development DOI
Lianyu Cai, Msafiri Mgambi Msambwa, Daniel Kangwa

et al.

Education and Information Technologies, Journal year: 2024, Issue: unknown

Published: Oct. 22, 2024

Language: English

Cited by

9

Mapping Tomorrow’s Teaching and Learning Spaces: A Systematic Review on GenAI in Higher Education DOI Creative Commons
Tanja Tillmanns, Alfredo Salomão Filho, Susmita Rudra

et al.

Trends in Higher Education, Journal year: 2025, Issue: 4(1), pp. 2-2

Published: Jan. 8, 2025

This collective systematic literature review is part of an Erasmus+ project, “TaLAI: Teaching and Learning with AI in Higher Education”. The review investigates the current state of Generative Artificial Intelligence (GenAI) in higher education, aiming to inform curriculum design and further developments within digital education. Employing a descriptive, textual narrative synthesis approach, the study analysed the literature across four thematic areas: learning objectives, teaching and learning activities, curriculum development, and institutional support for ethical and responsible GenAI use. 93 peer-reviewed articles were selected from eight databases using a keyword-based search strategy, a collaborative coding process involving multiple researchers, in vivo coding, and transparent documentation. The findings provide an overview of recommendations for integrating GenAI into teaching and learning, contributing to the development of effective AI-enhanced learning environments. The review reveals a consensus on the importance of incorporating GenAI responsibly. Common themes like mentorship, personalised learning, creativity, emotional intelligence, and higher-order thinking highlight the persistent need to align human-centred educational practices with the capabilities of GenAI technologies.

Language: English

Cited by

1

The Impact of AI on the Personal and Collaborative Learning Environments in Higher Education DOI Open Access
Msafiri Mgambi Msambwa, Zhang Wen, Daniel Kangwa

et al.

European Journal of Education, Journal year: 2025, Issue: 60(1)

Published: Jan. 7, 2025

ABSTRACT Artificial intelligence (AI) has developed extensively, impacting different sectors of society, including higher education, and has attracted the attention of various educational stakeholders, leading to a growing number of research studies on its integration into education. Hence, this systematic literature review examines the impact of integrating AI tools in higher education on students' personal and collaborative learning environments. Analysis of 148 articles published between 2021 and 2024 indicates that AI tools improve personalised learning, assessments, communication, engagement, scaffolding, performance, and motivation. Additionally, they promote a collaborative learning environment by providing peer-learning opportunities, enhanced learner-content interaction, and cooperative support. Effective integration requires strategies such as skills development, ethical use, academic integrity, and instructional content design. Acknowledged limitations include ethical considerations, particularly privacy and bias, which require ongoing attention. Thus, it is recommended to create a good balance between AI-mediated and human interaction in learning environments, a key area for future exploration.

Language: English

Cited by

1

A Social Perspective on AI in the Higher Education System: A Semisystematic Literature Review DOI Open Access

Budur Turki Alshahrani, Salvatore F. Pileggi, Faezeh Karimi

et al.

Electronics, Journal year: 2024, Issue: 13(8), pp. 1572-1572

Published: April 19, 2024

The application of Artificial Intelligence in Education (AIED) is experiencing widespread interest among students, educators, researchers, and policymakers. AIED is expected, among other things, to enhance learning environments in the higher education system. However, in line with general trends, there are also increasing concerns about possible negative collateral effects. The consequent social impact cannot currently be assessed in depth. Balancing benefits and considerations according to a socio-technical approach is essential for harnessing the true power of AI in a responsible and trustworthy context. This study proposes a semi-systematic literature review of the available knowledge on the adoption of artificial intelligence (AI) in higher education. It presents a stakeholder-centric analysis to explore multiple perspectives, including pedagogical, managerial, technological, governmental, external, and social ones. The main goal is to identify and discuss the major gaps and challenges in context, looking at the existing body of knowledge and momentum. It argues that AIED should encompass ethical and social dimensions if these are to be properly addressed. The study highlights a not-always-explicit social perspective. Additionally, it reveals a significant lack of empirical and systematic evaluation of the added value and of institutional readiness. Because of the broad scope and the intense ongoing debate on the topic, an exhaustive identification of the current body of knowledge is probably unrealistic, so the study aims mainly to identify mainstream trends through the most recent contributions.

Language: English

Cited by

5

AI for chemistry teaching: responsible AI and ethical considerations DOI Creative Commons
Ron Blonder, Yael Feldman-Maggor

Chemistry Teacher International, Journal year: 2024, Issue: unknown

Published: Oct. 15, 2024

Abstract This paper discusses the ethical considerations surrounding generative artificial intelligence (GenAI) in chemistry education, aiming to guide teachers toward responsible AI integration. GenAI, driven by advanced models like Large Language Models, has shown substantial potential in generating educational content. However, this technology's rapid rise has brought forth concerns regarding its general and educational use that require careful attention from educators. The UNESCO framework on GenAI in education provides a comprehensive overview of the controversies around such considerations, emphasizing human agency, inclusion, equity, and cultural diversity. Ethical issues include digital poverty, lack of national regulatory adaptation, use of content without consent, unexplainable models used to generate outputs, AI-generated content polluting the internet, lack of understanding of the real world, reduced diversity of opinions, further marginalization of already marginalized voices, and deep fakes. The paper delves into these eight controversies, presenting relevant examples to stress the need to evaluate GenAI critically. It emphasizes the importance of relating GenAI use to teachers' pedagogical knowledge and argues that its usage must integrate pedagogical insights to prevent the propagation of biases and inaccuracies. The conclusion stresses the necessity of teacher training to effectively and ethically employ GenAI practices.

Language: English

Cited by

5

Why we need to be careful with LLMs in medicine DOI Creative Commons
Jean‐Christophe Bélisle‐Pipon

Frontiers in Medicine, Journal year: 2024, Issue: 11

Published: Dec. 4, 2024

Large language models (LLMs), the core of many generative AI (genAI) tools, are gaining attention for their potential applications in healthcare. These applications are wide-ranging, including tasks such as assisting with diagnostic processes, streamlining patient communication, and providing decision support to healthcare professionals. Their ability to process and generate large volumes of text makes them promising tools for managing medical documentation and enhancing the efficiency of clinical workflows (Harrer, 2023). LLMs offer a distinct advantage in that they are relatively straightforward to use, particularly since the introduction of ChatGPT-3.5, and exhibit notable alignment with human communication patterns, facilitating more natural interactions (Ayers et al., 2023) and acceptance of LLMs' conclusions (Shekar, 2024). LLMs operate by predicting the next word in a sequence based on statistical correlations identified in training datasets (Patil, 2021; Schubert). However, while these models are effective at producing text that appears coherent and contextually appropriate, they do so without genuine understanding of meaning or context. This limitation is significant in healthcare, where accuracy is critical. Unlike human cognition, which is driven by a complex array of goals and behaviors, LLMs are narrowly focused on text generation. This focus can lead to the production of plausible-sounding but inaccurate information, a phenomenon referred to as "AI hallucination" (OpenAI). In high-stakes environments like prediction, triaging, diagnosis, monitoring, and care, such inaccuracies can have serious consequences. While numerous articles across various Frontiers journals discuss LLMs, few treat hallucinations as a central issue. For example, Jin et al. (2023) in Frontiers in Medicine note that while ChatGPT holds tremendous potential for ophthalmology, addressing the challenges of hallucination and misinformation is paramount. Similarly, Giorgino et al. in Frontiers in Surgery emphasize that the responsible use of this tool requires an awareness of its limitations and biases, foremost among them the dangerous concept of hallucination. Beyond the medical realm, Williams (2024) in Frontiers in Education observes that LLMs gained widespread attention around 2022, coinciding with the rise of ChatGPT.
Users noticed that chatbots often generated random falsehoods in their responses, seemingly indifferent to relevance and accuracy. Williams continues by stressing that the term "hallucination" has been criticized for its anthropomorphic connotations, since it likens human perception and behavior to the workings of the models. Despite such critical discussions, these analyses remain sparse compared to articles praising LLMs in medicine, highlighting the need for greater critical engagement with these technologies. This imbalance underscores the importance of mitigating the risks posed by the models. Building on this concern, Hicks, Humphries, and Slater (2024) challenge conventional thinking in their paper "ChatGPT is Bullshit." They assert that the text produced by LLMs should not simply be labeled "hallucinations" but rather "bullshit," a term drawn from philosopher Harry Frankfurt's (2009) work. According to this perspective, "bullshit" reflects a disregard for accuracy, which poses a distinct problem for genAI. By reconceptualizing LLM errors as "bullshitting" instead of "hallucinating," the present paper aims to provide a clearer perspective on the risks these systems pose in healthcare applications. It explores practical solutions, such as layered LLM architectures and improved XAI methods, and emphasizes the urgency of implementing tailored oversight mechanisms to counterbalance the political and industry push for deregulation in sensitive domains such as medicine. LLMs are trained on vast datasets. While they produce human-like text, they don't inherently understand or verify what they generate, acting as "prop-oriented make-believe tools" (Mallory). Their errors are not technical glitches that can be resolved with better data or refined algorithms; they stem from the models' fundamental nature: they do not evaluate evidence or reason in any meaningful sense. The distinction between statistical processing and reasoning is often lost, leading to misconceptions when LLMs are portrayed or perceived as capable of cognition. They can produce accurate and relevant outputs from statistical correlations, but without comprehension. As Bender et al. (2021) famously argued, they reproduce sequences learned from training data and function as "stochastic parrots." Human reasoning, in contrast, involves deeper cognitive processes of understanding, thinking, and interpretation. While some, such as Downes (2024), challenge this view, suggesting that LLMs produce sensible answers by leveraging higher-level structural information inherent in their design, the fact remains that they are fundamentally agnostic to empirical reality. Recognizing this is crucial: predictions made by these models, no matter how convincing, should not be equated with the deliberate, evidence-based judgments of a human mind.
When these systems make mistakes, it is not because something is malfunctioning in a way that can be fixed with tweaked algorithms; there is nothing in them that arbitrates truth in the first place. As Hicks and colleagues point out, the models are not trying to communicate something they believe or perceive; their inaccuracy is not due to misperception or hallucination, since they are not trying to convey information at all. They are bullshitting. This indifference to truth is especially concerning in healthcare, where interpretability, liability, and accuracy are paramount. Consider the implications of using such a system for advice or to assist in diagnosing patients: if its nature is misunderstood, it carries serious risks. Trusting potentially flawed outputs could lead to misdiagnoses and improper treatments, with grave consequences for patient care. As Harrer (2023) put it, health buyers should beware: this is experimental technology not yet ready for primetime. Recognizing LLM errors as "bullshit" rather than "hallucinations" calls for a cautious and skeptical approach, according to Hicks and colleagues. Titus convincingly argues that attributing semantic understanding to these systems is not warranted, and that doing so raises social and ethical concerns related to anthropomorphizing them and over-trusting their ability to give meaningful and truthful responses. In the health sector, this implies that medical professionals should be wary about using LLMs and avoid treating them as standalone sources of advice (Cohen). Instead, LLMs should serve as supplementary tools, and their outputs should be rigorously validated by experts before being applied or used in any clinical setting. The stakes in medicine are significant. If LLMs are indifferent to truth, there is a heightened responsibility for developers and users to ensure they do not cause harm, not only by improving the models but also by clearly communicating their limitations to users. As Hicks et al. note, calling chatbot errors "hallucinations" feeds overblown hype about the models' abilities among technology cheerleaders and unnecessary consternation among the general public; it also suggests solutions that might not work and can lead to misguided efforts among specialists. Given this, expert validation is needed both in design and prior to deployment (Bélisle-Pipon, 2021; Cohen, 2023). Ensuring trustworthiness requires shared responsibility: creating transparent systems and critically assessing their outputs (Amann, 2020; Díaz-Rodríguez, 2023; Siala & Wang, 2022; Smith, 2021). Medical professionals must be trained to recognize that AI-generated content may sound convincing but is not always reliable. Developers should prioritize interfaces that highlight uncertainty and encourage evaluation of outputs. Disclaimers and confidence scores can help users assess the reliability of the information provided (Gallifant). This is basically what the Notice and Explanation section of the White House's Blueprint for an AI Bill of Rights (2022) requires: users should know that an automated system is being used and understand how it contributes to outcomes that affect them.
Disclosure is not enough in itself, however, and can even be conducive to problems by shifting the burden onto users. Such disclosure must be accessible and understandable; it must not reproduce consumer products' Terms and Conditions, which are ridiculously long and which nobody reads (Solove, 2024). Employing multiple layers of models to mitigate individual errors does not solve the previously raised issues either. Work is currently underway in this area (Farquhar). It usually entails enabling one model to cross-validate another, to identify and correct inaccuracies and thereby reduce their incidence, with different models assigned specialized roles such as fact-checking and contextual validation to enhance robustness (Springer). This methodology introduces complexity, along with the risk of error propagation and problems associated with coordination. Furthermore, this strategy, which Verspoor likens to "fighting fire with fire," may incrementally improve outputs but fails to address the foundational issue: the models' lack of true understanding. An over-reliance on such mechanisms risks diminishing returns, as added complexity and novel failure modes negate the anticipated benefits of enhanced accuracy. Additionally, the approach risks fostering overdependence on the models themselves (Levinstein & Herrmann, 2024), undermining the role of human expertise in tasks requiring nuanced decision-making. LLMs can still make valuable contributions to practice if used wisely: for administrative tasks, documentation, and preliminary research on topics. They can even be useful in defending patients' interests in insurance claims (Rosenbluth), provided they are designed with safeguards to prevent harm. One way to preserve their utility without relying solely on them is to implement verification against reliable databases (not just web scraping). Even with the concerns about "bullshit," connecting a model to a trusted database provides cross-referenced sources. Such a system would incorporate a mechanism for arbitrating claims against evidence, further ensuring a certain level of trustworthiness. This integration must be implemented carefully to avoid introducing new forms of error or inadvertently embedding values inconsistent with the context in which the system is deployed. Explainable AI (XAI) can increase the transparency of decision-making, including for LLMs. Techniques such as post-hoc explanations of what a model generates exist in other fields, but they have a limitation: they depend on the model itself (Titus). Moreover, techniques for tracing outputs back to underlying data fail to expose the models' epistemic flaw, their inability to weigh evidence.
Such explanations, therefore, reflect statistical patterns rather than reasoning. Regulatory frameworks, such as the European Union's AI Regulation (2024) and the US Blueprint (The White House, 2022), establish standards for transparency, safety, and accountability, but they must be adapted to meet and overcome the specific challenges of LLM-assisted decision-making. Experts argue for refining models and developing new paradigms, such as neurosymbolic AI, which combines neural networks with logical reasoning to fill these gaps. Neurosymbolic AI offers an alternative, integrating the adaptability of neural networks with the precision of symbolic logic to enable more robust reasoning (Hamilton, 2024; Wan), with the key advantage of offering interpretability. As Vivek Wadhwa suggests, LLMs may be nearing a developmental ceiling of diminishing investment returns; regulators and investors should explore advancing approaches that drive the next generation of innovation, ensuring increased trustworthy reasoning. For all its promise, though, neurosymbolic AI is no panacea. It faces challenges of scalability and of handling real-world data (Marra), and its reliance on logical structures may not fully capture the nuances of the probabilistic and ambiguous cases common in medicine. Thus, it represents an incremental advance; oversight, multidisciplinary collaboration, and continued innovation remain essential for AI's role in healthcare. A deep, critical examination of LLMs is crucial for safety and integrity in several ways. Their fluent proficiency conceals a troubling reality: their responses are not necessarily grounded in verified facts or consistent logic. In a field where accurate decision-making is paramount, relying on systems with such flaws presents risks. At their core, LLMs predict text from training data. This mechanism, though powerful for generating fluent language, is indifferent to truth. Their goal is the most statistically likely response, not the most appropriate one, and they are already infiltrating clinical workflows. As others underscore, responsible implementation and continuous monitoring are needed to harness the benefits while minimizing the risks. A further concern is reproducibility. Unlike traditional software systems, where identical inputs yield the same outputs, LLMs can answer the same question differently on different occasions. This unpredictability undermines the reliability needed in clinical settings, where consistency is essential to delivering safe care. Medicine, as a discipline, cannot afford to embrace "epistemic insouciance," an indifference to the validity of knowledge. This is problematic given that in clinical cases what matters is whether a claim is anchored in factual reality, not whether it merely sounds plausible. Using "hallucination" to describe factually incorrect statements trivializes the severity of the problem: in medicine, an evidence-based discipline since the 1990s, this flaw means that adoption of unreliable tools can compromise the integrity of care. The standard disclaimer of ChatGPT, warning that it "can make mistakes. Check important info," is insufficient in clinical settings.
One commentator points out that, in defence of OpenAI, the company never advertised ChatGPT as a medical advisor but rather as a crowdsourced refinement experiment; it has acknowledged the need for mitigation in genAI and has sparked growing caution amid internet-level hype. For the health sector the implications are significant: users (especially healthcare professionals) rarely have the time to verify every piece of output in high-pressure settings where the stakes are high and the margin for error is slim. Entrusting fact-checking to users without giving them the resources and assurances to do it exposes the field to what is arguably ethics dumping, offloading responsibility downstream (Victor); casual use, particularly where errors can have life-threatening consequences, reflects complacency. Transparency is not a luxury but a necessity. Healthcare decision-making demands systems that can explain why they arrived at their conclusions. Explainability is key to building trust and to making informed decisions based on output. Without it, clinicians are left with "black boxes" that offer no accountability or justification, an untenable situation in clinical decision-making. These concerns are amplified in the current political climate, particularly in the United States. The incoming Trump administration is expected to push the removal of "unnecessary" regulations to accelerate AI development (Chalfant), and lobbying by influential tech organizations such as BSA | The Software Alliance, which represents companies including OpenAI and Microsoft, advocates policies that reduce regulatory constraints and promote adoption. The group acknowledges the importance of international governance standards while pushing for removing barriers and deprioritizing safeguards (such as government-imposed oversight mechanisms). President-elect Trump's plans to undo measures of the previous administration, including the risk management framework meant to foster accountability, signal a shift toward deregulation (Verma & Vynck), perhaps even a regulation winter. Such a move would weaken the safeguards for deploying LLMs in high-stakes fields like healthcare. Given this context, vigilance is needed around these systems. Developers, policymakers, and healthcare institutions must collaborate to uphold standards of responsible deployment, regardless of the regulatory environment. Without such efforts, deregulation could exacerbate the models' tendency to produce misleading outputs. Trustworthy AI cannot be treated as a secondary consideration when outcomes and lives are directly at stake. Reframing "hallucinations" as "bullshit" may be seen as a harmless matter of terminology, but it is more than that: it is a reframing of how these systems operate, not of small, occasional mistakes. Policymakers, providers, and developers must recognize that the stakes are high and that, without rigorous safeguards, unreliable outputs will erode the quality of care.

Language: English

Cited by

5

From Chalkboards to Chatbots: Revolutionizing Education with AI-Driven Learning Innovations DOI Creative Commons

Dinda Febrianti Putri, Zohaib Hassan Sain

Educative Jurnal Ilmiah Pendidikan, Journal year: 2025, Issue: 3(1), pp. 1-10

Published: Jan. 8, 2025

Education is undergoing a major paradigm shift with the emergence of artificial intelligence (AI) as a driver for transforming teaching methods. This research explores the role of AI in education, particularly the transition from traditional methods such as whiteboards to more interactive learning systems based on chatbots. Using a qualitative approach, the study collects data through interviews with educators, students, and educational observers, and through analysis of related literature. The findings show that applying AI increases student engagement and creates a personalized, flexible, and adaptive learning experience. One significant result is that AI allows students to learn at their own pace and style, while teachers can focus on developing an in-depth curriculum. The study provides new insights into the potential of AI to improve the quality of and expand access to education in the digital era, inspiring the adoption of technology in various educational contexts.

Language: English

Cited by

0

The Integration of Artificial Intelligence (AI) in Educational Setting DOI
Müyesser Ceylan, Juma Yusuf Mnzile

Advances in finance, accounting, and economics book series, Journal year: 2025, Issue: unknown, pp. 395-414

Published: Jan. 14, 2025

The incorporation of digital technology into the educational sector marks a remarkable change in how teaching and learning systems operate today. The present study aims to scrutinize how AI tools, like virtual education platforms and smart tutoring systems, significantly influence students' understanding and attainment. The findings point out that artificial intelligence (AI) tools could lead to a noteworthy revolution in schooling by supporting students' transformation journeys, thereby enriching their professions and overall academic realization. In conclusion, it is essential to be familiar with aspects such as ethical concerns, pedagogical approaches, and technological restrictions when incorporating AI into educational environments, as illustrated in this study.

Language: English

Cited by

0

Progettare e valutare con il supporto dell’intelligenza artificiale: elementi per un approccio critico all’uso dei chatbot DOI Creative Commons
Massimo Marcuccio, Maria Elena Tassinari, Vanessa Lo Turco

et al.

Journal of Educational Cultural and Psychological Studies (ECPS Journal), Journal year: 2025, Issue: 30

Published: Jan. 15, 2025

DESIGNING AND ASSESSING WITH THE SUPPORT OF ARTIFICIAL INTELLIGENCE: ELEMENTS FOR A CRITICAL APPROACH TO THE USE OF CHATBOTS. Abstract: This paper explores the critical integration of artificial intelligence (AI) in education, specifically focusing on the use of chatbots in training design and learning assessment, aiming to uncover both potential and challenges in educational contexts. Through two exploratory empirical studies, one centered on the use of ChatGPT in training design and the other on its application in school assessments, the analysis examines the perceptions of teachers and students. The findings reveal that chatbots such as ChatGPT can significantly reduce the workload of future training designers, improve access to resources, and provide timely feedback. However, concerns emerge regarding technological dependency and superficial learning, with ethical and pedagogical implications that warrant a critical examination of the effectiveness of AI tools. The paper concludes by proposing strategies for AI's thoughtful integration into education, promoting a balance between technology and reflective pedagogical practice.

Language: English

Cited by

0