Exploring the role of large language models in radiation emergency response
Anirudh Chandra, Abinash Chakraborty

Journal of Radiological Protection, Journal Year: 2024, Volume and Issue: 44(1), P. 011510 - 011510

Published: Feb. 7, 2024

Abstract In recent times, the field of artificial intelligence (AI) has been transformed by the introduction of large language models (LLMs). These models, popularized by OpenAI’s GPT-3, have demonstrated the emergent capabilities of AI in comprehending and producing text resembling human language, which has helped them transform several industries. But their role is yet to be explored in the nuclear industry, specifically in managing radiation emergencies. The present work explores LLMs’ contextual awareness, natural language interaction, and their capacity to comprehend diverse queries in a radiation emergency response setting. In this study we identify different user types and their specific LLM use-cases. Their possible interactions with ChatGPT, a popular LLM, are also simulated and preliminary results are presented. Drawing on insights gained from this exercise, the study addresses concerns of reliability and misinformation, and advocates for expert-guided, domain-specific LLMs trained on safety protocols and historical data. This work aims to guide emergency management practitioners and decision-makers in effectively incorporating LLMs into their decision support framework.

Language: English
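
The interaction simulation described in the abstract above can be pictured with a short sketch: a persona-framed query is sent to a chat-completion endpoint and the reply is recorded for review. This is a minimal illustration assuming the OpenAI Python SDK; the personas, prompts, model name, and temperature setting are hypothetical and are not the authors' actual experimental protocol.

```python
# Minimal sketch of simulating user-type interactions with an LLM
# (hypothetical personas and prompts; not the paper's actual protocol).
from openai import OpenAI  # assumes the OpenAI Python SDK is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical user types and example queries for a radiation emergency drill.
USER_QUERIES = {
    "first_responder": "A sealed Cs-137 source was found at a scrap yard. "
                       "What immediate cordon distance should I consider?",
    "member_of_public": "Authorities reported a radiation leak nearby. "
                        "Should I stay indoors?",
}

def simulate_interaction(user_type: str, query: str, model: str = "gpt-4o") -> str:
    """Send one persona-framed query and return the model's reply."""
    response = client.chat.completions.create(
        model=model,  # hypothetical model choice
        messages=[
            {"role": "system",
             "content": f"You are assisting a {user_type} during a radiation "
                        "emergency. Answer cautiously and remind the user to "
                        "follow official protocols."},
            {"role": "user", "content": query},
        ],
        temperature=0.2,  # keep answers conservative and repeatable
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    for user_type, query in USER_QUERIES.items():
        print(f"--- {user_type} ---")
        print(simulate_interaction(user_type, query))
```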

Revolutionizing radiology with GPT-based models: Current applications, future possibilities and limitations of ChatGPT
Augustin Lecler, Loïc Duron, Philippe Soyer, et al.

Diagnostic and Interventional Imaging, Journal Year: 2023, Volume and Issue: 104(6), P. 269 - 274

Published: March 18, 2023

Language: English

Citations

280

Reflection on whether Chat GPT should be banned by academia from the perspective of education and teaching
Hao Yu

Frontiers in Psychology, Journal Year: 2023, Volume and Issue: 14

Published: June 1, 2023

Opinion article, Front. Psychol., Sec. Educational Psychology, Volume 14, 01 June 2023. https://doi.org/10.3389/fpsyg.2023.1181712

Language: English

Citations

222

Catalyzing next-generation Artificial Intelligence through NeuroAI
Anthony M. Zador, G. Sean Escola, Blake A. Richards, et al.

Nature Communications, Journal Year: 2023, Volume and Issue: 14(1)

Published: March 22, 2023

Abstract Neuroscience has long been an essential driver of progress in artificial intelligence (AI). We propose that to accelerate progress in AI, we must invest in fundamental research in NeuroAI. A core component of this is the embodied Turing test, which challenges AI animal models to interact with the sensorimotor world at skill levels akin to those of their living counterparts. The test shifts the focus away from those capabilities, like game playing and language, that are especially well-developed or uniquely human, towards those capabilities, inherited over 500 million years of evolution, that are shared with all animals. Building models that can pass the embodied Turing test will provide a roadmap for the next generation of AI.

Language: English

Citations

168

The debate over understanding in AI’s large language models
Melanie Mitchell, David C. Krakauer

Proceedings of the National Academy of Sciences, Journal Year: 2023, Volume and Issue: 120(13)

Published: March 21, 2023

We survey a current, heated debate in the artificial intelligence (AI) research community on whether large pretrained language models can be said to understand language, and the physical and social situations language encodes, in any humanlike sense. We describe arguments that have been made for and against such understanding, and key questions for the broader sciences of intelligence that have arisen in light of these arguments. We contend that an extended science of intelligence can be developed that will provide insight into distinct modes of understanding, their strengths and limitations, and the challenge of integrating diverse forms of cognition.

Language: English

Citations

148

The application and challenges of ChatGPT in educational transformation: New demands for teachers' roles
Hao Yu

Heliyon, Journal Year: 2024, Volume and Issue: 10(2), P. e24289 - e24289

Published: Jan. 1, 2024

With the rapid development of information technology, artificial intelligence has demonstrated great potential in promoting educational transformation. In November 2022, the release of the AI product ChatGPT attracted widespread attention, particularly in the field of education, sparking heated discussions among scholars. As a language processing tool, ChatGPT can not only answer user questions but also complete user-specified tasks and even continuously optimize task performance. However, while possessing powerful features, it has some shortcomings that need improvement, such as the accuracy of its answers to questions, data pollution issues, ethical and safety concerns, and the risk of knowledge plagiarism. In the process of school education reform, the application of ChatGPT brings both opportunities and challenges. Moreover, ChatGPT's emergence offers teachers an opportunity to reflect on their professional value and sets higher demands for them.

Language: English

Citations

106

GPT-4 passes the bar exam
Daniel Katz, Michael James Bommarito, Shang Gao, et al.

Philosophical Transactions of the Royal Society A Mathematical Physical and Engineering Sciences, Journal Year: 2024, Volume and Issue: 382(2270)

Published: Feb. 26, 2024

In this paper, we experimentally evaluate the zero-shot performance of GPT-4 against prior generations of GPT on the entire uniform bar examination (UBE), including not only the multiple-choice multistate bar examination (MBE), but also the open-ended multistate essay exam (MEE) and multistate performance test (MPT) components. On the MBE, GPT-4 significantly outperforms both human test-takers and prior models, demonstrating a 26% increase over ChatGPT and beating humans in five of seven subject areas. On the MEE and MPT, which have not previously been evaluated by scholars, GPT-4 scores an average of 4.2/6.0 compared with much lower scores for ChatGPT. Graded across the UBE components, in the manner a human test-taker would be, GPT-4 scores approximately 297 points, in excess of the passing threshold for all UBE jurisdictions. These findings document not just the rapid and remarkable advance of large language model performance generally, but also the potential for such models to support the delivery of legal services in society. This article is part of the theme issue 'A complexity science approach to law and governance'.

Language: English

Citations

78

The emergence of economic rationality of GPT
Yiting Chen, Tracy Xiao Liu, You Shan, et al.

Proceedings of the National Academy of Sciences, Journal Year: 2023, Volume and Issue: 120(51)

Published: Dec. 12, 2023

As large language models (LLMs) like GPT become increasingly prevalent, it is essential that we assess their capabilities beyond language processing. This paper examines the economic rationality of GPT by instructing it to make budgetary decisions in four domains: risk, time, social, and food preferences. We measure economic rationality by assessing the consistency of GPT's decisions with utility maximization in classic revealed preference theory. We find that GPT's decisions are largely rational in each domain and demonstrate higher rationality scores than those of human subjects in a parallel experiment and in the literature. Moreover, the estimated preference parameters are slightly different from those of human subjects and exhibit a lower degree of heterogeneity. We also find that the rationality scores are robust to the degree of randomness and to demographic settings such as age and gender, but are sensitive to contexts based on the framing of the choice situations. These results suggest the potential of LLMs to make good decisions and the need to further understand their capabilities, limitations, and underlying mechanisms.

Language: English
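
The rationality measure summarized above comes from revealed preference theory. A generic way to test whether a set of observed budget choices is consistent with utility maximization is to check the Generalized Axiom of Revealed Preference (GARP); the sketch below is a minimal, hypothetical illustration of such a binary check, not the paper's exact scoring method (the study reports graded rationality scores rather than a pass/fail test).

```python
# Minimal sketch of a GARP consistency check on observed budget choices
# (generic revealed-preference test; not the paper's exact rationality score).
import numpy as np

def violates_garp(prices: np.ndarray, bundles: np.ndarray) -> bool:
    """prices, bundles: arrays of shape (n_obs, n_goods).

    Bundle i is directly revealed preferred to bundle j if p_i . x_i >= p_i . x_j.
    GARP is violated if i is (transitively) revealed preferred to j while
    j is strictly directly revealed preferred to i.
    """
    expenditure = prices @ bundles.T          # expenditure[i, j] = p_i . x_j
    own = np.diag(expenditure)                # own[i] = p_i . x_i
    direct = own[:, None] >= expenditure      # direct revealed preference
    strict = own[:, None] > expenditure       # strict direct preference

    # Transitive closure of the direct relation (Floyd-Warshall style).
    revealed = direct.copy()
    n = len(own)
    for k in range(n):
        revealed |= revealed[:, [k]] & revealed[[k], :]

    # Violation: x_i revealed preferred to x_j while x_j strictly preferred to x_i.
    return bool(np.any(revealed & strict.T))

# Toy example: two goods, two observations that form a preference cycle.
prices = np.array([[1.0, 2.0], [2.0, 1.0]])
bundles = np.array([[2.0, 2.0], [4.0, 0.0]])
print(violates_garp(prices, bundles))  # True: these choices violate GARP
```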

Citations

57

Evaluating large language models for use in healthcare: A framework for translational value assessment
Sandeep Reddy

Informatics in Medicine Unlocked, Journal Year: 2023, Volume and Issue: 41, P. 101304 - 101304

Published: Jan. 1, 2023

The recent focus on Large Language Models (LLMs) has yielded unprecedented discussion of their potential use in various domains, including healthcare. While showing considerable potential in performing human-capable tasks, LLMs have also demonstrated significant drawbacks, including generating misinformation, falsifying data, and contributing to plagiarism. These aspects are generally concerning but can be more severe in the context of healthcare. As LLMs are explored for utility in healthcare, including generating discharge summaries, interpreting medical records and providing medical advice, it is necessary to ensure safeguards around their use. Notably, there must be an evaluation process that assesses both natural language processing performance and translational value. Complementing this assessment, a governance layer is needed to ensure accountability and public confidence in such models. Such a framework is discussed and presented in this paper.

Language: English

Citations

47

Artificial intelligence and qualitative research: The promise and perils of large language model (LLM) ‘assistance’
John D. Roberts, Max Baker, Jane Andrew, et al.

Critical Perspectives on Accounting, Journal Year: 2024, Volume and Issue: 99, P. 102722 - 102722

Published: Feb. 22, 2024

New large language models (LLMs) like ChatGPT have the potential to change qualitative research by contributing to every stage of the process, from generating interview questions to structuring publications. However, it is far from clear whether such 'assistance' will enable the researcher or deskill and eventually displace them. This paper sets out to explore the implications for qualitative research of the recently emerged capabilities of LLMs: how they acquired their seemingly 'human-like' capacity to 'converse' with us humans, and in what ways these capabilities are deceptive and misleading. Building on a comparison of the different 'trainings' of humans and LLMs, the paper first traces the human-like qualities of LLM communication to the human proclivity to project communicative intent into and onto LLMs' purely imitative capacity to predict the structure of human communication. It then goes on to detail the ways in which such communication is misleading, in relation to the absolute 'certainty' with which LLMs 'converse', their intrinsic tendencies towards 'hallucination' and 'sycophancy', their narrow conception of 'artificial intelligence', their complete lack of ethical sensibility and responsibility, and finally the feared danger of an 'emergence' of 'human-competitive' or 'superhuman' capabilities. The paper concludes by noting the dangers of the widespread use of LLMs as 'mediators' of self-understanding and culture. A postscript offers a brief reflection on what only researchers can do.

Language: English

Citations

16

A Bibliometric Analysis of the Rise of ChatGPT in Medical Research
Nikki M. Barrington, Nithin Gupta, Basel Musmar, et al.

Medical Sciences, Journal Year: 2023, Volume and Issue: 11(3), P. 61 - 61

Published: Sept. 17, 2023

The rapid emergence of publicly accessible artificial intelligence platforms such as large language models (LLMs) has led to an equally rapid increase in articles exploring their potential benefits and risks. We performed a bibliometric analysis of the ChatGPT literature in medicine and science to better understand publication trends and knowledge gaps. Following title, abstract, and keyword searches of the PubMed, Embase, Scopus, and Web of Science databases for ChatGPT articles published in the medical field, articles were screened against inclusion and exclusion criteria. Data were extracted from the included articles, with citation counts obtained from PubMed and journal metrics obtained from Clarivate Journal Citation Reports. After screening, 267 articles were included in the study, most of which were editorials or correspondence, with an average of 7.5 +/- 18.4 citations per publication. Published articles on ChatGPT were authored largely in the United States, India, and China. The topics most commonly discussed were the use and accuracy of ChatGPT in research, education, and patient counseling. Among non-surgical specialties, radiology published the most ChatGPT-related articles, while plastic surgery published the most among surgical specialties. The average citation count of the top 20 most-cited articles was 60.1 +/- 35.3, and among the journals with the most ChatGPT publications there were an average of 10 +/- 3.7 publications each. Our results suggest that managing the inevitable ethical and safety issues that arise with the implementation of LLMs will require further research into the capabilities and accuracy of ChatGPT, in order to generate policies guiding its adoption in medicine and science.

Language: English

Citations

36