Advancing Life Cycle Assessment of Sustainable Green Hydrogen Production Using Domain-Specific Fine-Tuning by Large Language Models Augmentation

Yajing Chen, Urs Liebau, Shreyas Mysore Guruprasad et al.

Machine Learning and Knowledge Extraction, Journal Year: 2024, Volume and Issue: 6(4), P. 2494 - 2514

Published: Nov. 4, 2024

Assessing the sustainable development of green hydrogen and its potential environmental impacts using Life Cycle Assessment (LCA) is crucial. Challenges in LCA, such as missing data, are often addressed with machine learning (ML) methods such as artificial neural networks. However, to find an ML solution, researchers need to read extensive literature or consult experts. This research demonstrates how customised LLMs, trained with domain-specific papers, can help overcome these challenges. Starting small, the authors consolidated papers focused on the LCA of proton exchange membrane water electrolysis, which produces hydrogen, and on ML applications in LCA. These papers were uploaded to OpenAI to create a LlamaIndex, enabling future queries. Using the LangChain framework, users can query the customised model (GPT-3.5-turbo) and receive tailored responses. The results demonstrate that LLMs can assist by providing suitable solutions to address data inaccuracies and gaps. The ability to quickly query the LLM and receive an integrated response drawn from the relevant sources is an improvement over manually retrieving and reading individual papers. This shows that leveraging fine-tuned LLMs can empower researchers to conduct LCAs more efficiently and effectively.

Language: English

Artificial intelligence and the future of evaluation education: Possibilities and prototypes

Zach Tilton, John M. LaVelle, Tian Ford et al.

New Directions for Evaluation, Journal Year: 2023, Volume and Issue: 2023(178-179), P. 97 - 109

Published: June 1, 2023

Abstract Advancements in Artificial Intelligence (AI) signal a paradigmatic shift with the potential to transform many aspects of society, including evaluation education, with implications for subsequent practice. This article explores AI in evaluator training and education. Specifically, it discusses key issues in evaluation education, including equitable language access, navigating program and social science theory, understanding theorists and their philosophies, and case studies and simulations. The article then considers how chatbots might address these issues and documents efforts to prototype three use cases: a guidance counselor, a teaching assistant, and a mentor chatbot for young and emerging evaluators or anyone who wants one. It concludes with ruminations on additional research activities and topics, such as how best to integrate AI literacy training into existing programs and how to make strategic linkages among practitioners and educators.

Language: English

Citations: 5

Disrupting evaluation? Emerging technologies and their implications for the evaluation industry
Steffen Bohni Nielsen

New Directions for Evaluation, Journal Year: 2023, Volume and Issue: 2023(178-179), P. 47 - 57

Published: June 1, 2023

Abstract This article surveys different emerging technologies (ET), in particular artificial intelligence, and their burgeoning application in the evaluation industry. Evidence suggests that evaluators have been relatively slow in adopting ET in their practice. However, more recent data suggest that adoption is increasing. The article then analyzes if, and how, ET will affect the evaluation industry. It finds that program evaluation is one of several competing forms of knowledge production informing decision-making, particularly in the government and not-for-profit sectors. Therefore, the industry faces a number of challenges stemming from ET. In this article, it is argued that evaluators must, albeit critically, embrace ET. Most likely, ET will complement evaluation practice and, in some instances, displace human tasks.

Language: English

Citations: 4

Evaluation criteria for artificial intelligence
Bianca Montrosse‐Moorhead

New Directions for Evaluation, Journal Year: 2023, Volume and Issue: 2023(178-179), P. 123 - 134

Published: June 1, 2023

Abstract Criteria identify and define the aspects on which what we evaluate is judged, and they play a central role in evaluation practice. While work on the use of AI in evaluation is burgeoning, at the time of writing a set of criteria to consider when evaluating AI has not been proposed. As a first step in this direction, Teasdale's Domains Framework was used as a lens through which to critically read the articles included in this special issue. This resulted in the identification of eight criteria domains for AI in evaluation. Three of these relate to the conceptualization and implementation of AI. Five are focused on outcomes, specifically those stemming from the use of AI. More work is needed to further deliberate on possible criteria.

Language: English

Citations: 4

Finding a safe zone in the highlands: Exploring evaluator competencies in the world of AI
Sarah Mason

New Directions for Evaluation, Journal Year: 2023, Volume and Issue: 2023(178-179), P. 11 - 22

Published: June 1, 2023

Abstract Since the public launch of ChatGPT in November 2022, disciplines across the globe have grappled with questions about how emerging artificial intelligence will impact their fields. In this article I explore a set of foundational concepts in artificial intelligence (AI), then apply them to the field of evaluation broadly, and to the American Evaluation Association's evaluator competencies more specifically. Given recent developments in narrow AI, I present two potential frameworks for considering which competencies are most likely to be impacted, and potentially replaced, by AI tools. Building on Moravec's Landscape of Human Competencies and Lee's Risk of Replacement Matrix, I create an exploratory Landscape of Evaluator Competencies and an Evaluation-Specific Risk of Replacement Matrix to help conceptualize how AI may contribute to the long-term sustainability of the field. Overall, I argue that the interpersonal, contextually responsive aspects of evaluation work, in contrast to the technical, program management, or methodological aspects of the field, may be the least impacted or replaced by AI. As such, these are the competencies we should continue to emphasize, both in our day-to-day operations and in training new evaluators. This article is intended as a starting point for discussions throughout the remainder of this issue.

Language: English

Citations: 4
