Harnessing AI for Understanding Scientific Literature: Innovations and Applications of Chat-Agent System in Battery Recycling Research DOI

Rongfan Liu, Zhi Zou, Sihui Chen et al.

Materials Today Energy, Journal Year: 2025, Volume and Issue: unknown, P. 101818 - 101818

Published: Jan. 1, 2025

Language: English

Large language models for reticular chemistry DOI
Zhiling Zheng, Nakul Rampal, Theo Jaffrelot Inizan et al.

Nature Reviews Materials, Journal Year: 2025, Volume and Issue: unknown

Published: Jan. 31, 2025

Language: English

Citations: 4

Data Management in a Collaborative Design Architecture, Part A: The Basic Concept of the Underlying Data Management System DOI

Jochen Haug, Kai Becker, Matthias Schuff et al.

AIAA SCITECH 2022 Forum, Journal Year: 2025, Volume and Issue: unknown

Published: Jan. 3, 2025

Language: English

Citations: 2

Crystal structure generation with autoregressive large language modeling DOI Creative Commons
Luis M. Antunes, Keith T. Butler, Ricardo Grau‐Crespo et al.

Nature Communications, Journal Year: 2024, Volume and Issue: 15(1)

Published: Dec. 6, 2024

The generation of plausible crystal structures is often the first step in predicting the structure and properties of a material from its chemical composition. However, most current methods for structure prediction are computationally expensive, slowing the pace of innovation. Seeding structure-prediction algorithms with quality generated candidates can overcome this major bottleneck. Here, we introduce CrystaLLM, a methodology for the versatile generation of crystal structures, based on autoregressive large language modeling (LLM) of the Crystallographic Information File (CIF) format. Trained on millions of CIF files, CrystaLLM focuses on modeling crystal structures through text. CrystaLLM can produce plausible crystal structures for a wide range of inorganic compounds unseen in training, as demonstrated by ab initio simulations. Our approach challenges conventional representations of crystals and demonstrates the potential of LLMs for learning effective models of the chemistry of materials, which will lead to accelerated discovery and innovation in materials science.

Language: English

Citations: 15

AtomGPT: Atomistic Generative Pretrained Transformer for Forward and Inverse Materials Design DOI
Kamal Choudhary

The Journal of Physical Chemistry Letters, Journal Year: 2024, Volume and Issue: 15(27), P. 6909 - 6917

Published: June 27, 2024

Large language models (LLMs) such as generative pretrained transformers (GPTs) have shown potential for various commercial applications, but their applicability to materials design remains underexplored. In this Letter, AtomGPT is introduced as a model specifically developed for materials design based on transformer architectures, demonstrating capabilities for both atomistic property prediction and structure generation. This study shows that a combination of chemical and structural text descriptions can efficiently predict material properties with accuracy comparable to graph neural network models, including formation energies, electronic bandgaps from two different methods, and superconducting transition temperatures. Furthermore, AtomGPT can generate atomic structures for tasks such as designing new superconductors, with the predictions validated through density functional theory calculations. This work paves the way for leveraging LLMs in forward and inverse materials design, offering an efficient approach to the discovery and optimization of materials.

Language: English

Citations: 14

Harnessing GPT-3.5 for text parsing in solid-state synthesis – case study of ternary chalcogenides DOI Creative Commons
Maung Thway, Kai Yuan Andre Low, Samyak Khetan et al.

Digital Discovery, Journal Year: 2024, Volume and Issue: 3(2), P. 328 - 336

Published: Jan. 1, 2024

Optimally doped single-phase compounds are necessary to advance state-of-the-art thermoelectric devices, which convert heat into electricity and vice versa, requiring solid-state synthesis of bulk materials.

Language: English

Citations: 12

Machine learning for analyses and automation of structural characterization of polymer materials DOI
Shizhao Lu, Arthi Jayaraman

Progress in Polymer Science, Journal Year: 2024, Volume and Issue: 153, P. 101828 - 101828

Published: May 3, 2024

Language: English

Citations: 12

Future-proofing geotechnics workflows: accelerating problem-solving with large language models DOI Creative Commons
Stephen Wu, Yu Otake, Daijiro Mizutani et al.

Georisk Assessment and Management of Risk for Engineered Systems and Geohazards, Journal Year: 2024, Volume and Issue: unknown, P. 1 - 18

Published: July 25, 2024

The integration of Large Language Models (LLMs), such as ChatGPT, into the workflows of geotechnical engineering has a high potential to transform how the discipline approaches problem-solving and decision-making. This paper investigates practical uses of LLMs in addressing geotechnical challenges, based on opinions from a diverse group, including students, researchers, and professionals from academia, industry, and government sectors, gathered at a workshop dedicated to this study. After introducing key concepts of LLMs, we present preliminary LLM solutions for four distinct problems as illustrative examples. In addition to the basic text generation ability, each problem is designed to cover different extended functionalities that cannot be achieved by conventional machine learning tools, such as multimodal modelling under a unified framework, programming, knowledge extraction, and embedding. We also address the potential pitfalls of implementing LLMs, particularly in achieving precision and accuracy in specialised tasks, and underscore the need for expert oversight. The findings demonstrate the effectiveness of LLMs in enhancing efficiency, data processing, and decision-making in geotechnical engineering, suggesting a paradigm shift towards a more integrated, data-driven field.

Language: English

Citations: 12

ChatGPT as Research Scientist: Probing GPT’s capabilities as a Research Librarian, Research Ethicist, Data Generator, and Data Predictor DOI
Steven A. Lehr, Aylin Caliskan, S P Liyanage et al.

Proceedings of the National Academy of Sciences, Journal Year: 2024, Volume and Issue: 121(35)

Published: Aug. 20, 2024

How good a research scientist is ChatGPT? We systematically probed the capabilities of GPT-3.5 and GPT-4 across four central components of the scientific process: as Research Librarian, Research Ethicist, Data Generator, and Novel Data Predictor, using psychological science as a testing field. In Study 1 (Research Librarian), unlike human researchers, the models hallucinated, authoritatively generating fictional references 36.0% and 5.4% of the time, respectively, although GPT-4 exhibited an evolving capacity to acknowledge its fictions. In Study 2 (Research Ethicist), GPT-4 (though not GPT-3.5) proved capable of detecting violations like p-hacking in study protocols, correcting 88.6% of blatantly presented issues and 72.6% of subtly presented issues. In Study 3 (Data Generator), both models consistently replicated patterns of cultural bias previously discovered in large language corpora, indicating that ChatGPT can simulate known results, an antecedent of usefulness for data generation and skills in hypothesis generation. Contrastingly, in Study 4 (Novel Data Predictor), neither model was successful at predicting new results absent from their training data, and neither appeared to leverage substantially more information when predicting more vs. less novel outcomes. Together, these results suggest that GPT is a flawed but rapidly improving librarian, a decent research ethicist already, capable of data generation in simple domains with known characteristics, but a poor empirical aid for future experimentation.

Language: English

Citations: 12

A Review of Large Language Models and Autonomous Agents in Chemistry DOI Creative Commons
Mayk Caldas Ramos, Christopher J. Collison, Andrew Dickson White et al.

Chemical Science, Journal Year: 2024, Volume and Issue: unknown

Published: Dec. 9, 2024

Large language models (LLMs) have emerged as powerful tools in chemistry, significantly impacting molecule design, property prediction, and synthesis optimization. This review highlights LLM capabilities in these domains and their potential to accelerate scientific discovery through automation. We also review LLM-based autonomous agents: LLMs equipped with a broader set of tools to interact with their surrounding environment. These agents perform diverse tasks such as paper scraping, interfacing with automated laboratories, and synthesis planning. As agents are an emerging topic, we extend the scope of our review beyond chemistry and discuss agents across other domains. The review covers the recent history, current capabilities, and design of LLMs and autonomous agents, addressing specific challenges, opportunities, and future directions in chemistry. Key challenges include data quality and integration, model interpretability, and the need for standard benchmarks, while future directions point towards more sophisticated multi-modal agents and enhanced collaboration between agents and experimental methods. Due to the quick pace of this field, a repository has been built to keep track of the latest studies: https://github.com/ur-whitelab/LLMs-in-science.

Language: English

Citations: 12

Beyond Traditional Assessment: Exploring the Impact of Large Language Models on Grading Practices DOI Open Access

Oluwole Fagbohun, Nwaamaka Pearl Iduwe, Mustapha Abdullahi et al.

Journal of Artificial Intelligence Machine Learning and Data Science, Journal Year: 2024, Volume and Issue: 2(1), P. 1 - 8

Published: Feb. 5, 2024

In this study, we examine the transformative role of large language models (LLMs) in redefining educational assessments. Traditional grading systems, characterized by their uniform and often manual approaches, face significant challenges in terms of scalability, consistency, and personalized feedback. The advent of LLMs heralds a new era of assessment, offering nuanced, scalable, and efficient solutions. This study explores the integration of LLMs into grading practices and examines their potential to revolutionize the assessment landscape. We begin by analyzing the limitations of traditional methods, emphasizing the need for more sophisticated and adaptable systems. The paper then introduces the concept of LLMs, outlining their advanced capabilities in natural language processing and machine learning, which are pivotal to understanding and evaluating student responses. We delve into the mechanisms by which LLMs process, analyze, and grade a wide range of responses, from short answers to complex essays, highlighting their ability to provide detailed feedback and insights beyond mere correctness. The core discussion revolves around real-world applications and case studies in which LLMs have been implemented in assessments. These include automated grading systems and adaptive testing platforms, showing their effectiveness in handling diverse and intricate responses. The outcomes of these implementations are analyzed, demonstrating gains in accuracy, fairness, and efficiency of grading practices. However, integrating LLMs into grading is challenging. This study critically examines issues such as biases in AI models, data privacy concerns, and the need to maintain ethical standards in grading. We propose strategies to mitigate these issues, emphasizing the importance of human oversight and continuous model refinement. The study offers a forward-looking perspective on the future of assessments that use LLMs. We envision a paradigm shift towards more personalized, fair, and efficient assessment methods, facilitated by ongoing advancements in LLM technologies, aligning with the broader goals of learning equity.

Language: English

Citations: 10