Discovering implicit role of tourists' perceived distance and emotions in decision-making: combining grounded theory and sentiment analysis DOI
Yewei Shang, Montserrat Pallarès-Barberà, Francesc Romagosa

et al.

Current Issues in Tourism, Journal Year: 2024, Volume and Issue: unknown, P. 1 - 20

Published: Oct. 15, 2024

Analysing the attitudes and emotions behind tourists' perceptions of distance provides powerful assistance for destination marketers and scholars. However, there is yet to be a universally adopted scale of perceived distance, and it is hard to effectively extract emotions from tourists' attitudes toward a destination. This paper identifies the critical dimensions of perceived distance voiced by 19 Chinese tourists through grounded theory analysis, an inductive, comparative, and interactive method that captures nuanced information. Advanced techniques of linguistic analysis provide an opportunity to extract emotional meaning from textual data through the Latent Dirichlet Allocation (LDA) algorithm. The results identify a set of appraisal dimensions as antecedents of emotions. Drawing on cognitive appraisal theory (CAT), different evaluations on these dimensions form multiple paths eliciting emotion change. The findings contrast with previous research on the distance decay model, which noted a single path involving tourism demand. We also find differences based on demographic segments. Social network analysis helps further examine the relationship between the dimensions. We conclude by discussing the study's implications for future studies and practice.
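
The abstract names LDA as the technique used to surface emotional meaning from tourist text. As a rough, non-authoritative illustration of that kind of topic extraction, the sketch below uses scikit-learn with toy review sentences; the authors' actual corpus, preprocessing, and parameters are not specified here and are assumptions.

```python
# Minimal sketch of LDA topic extraction from tourist review text,
# assuming scikit-learn; the paper's actual pipeline and data differ.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "the flight felt long but the destination seemed culturally close",
    "visa procedures made the trip feel distant and stressful",
    "familiar food and language made the city feel like home",
]

vectorizer = CountVectorizer(stop_words="english")
dtm = vectorizer.fit_transform(docs)          # document-term matrix

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(dtm)           # per-document topic weights

terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top_terms = [terms[i] for i in weights.argsort()[-5:][::-1]]
    print(f"topic {k}: {top_terms}")
```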

Language: English

A survey on large language model based autonomous agents DOI Creative Commons
Lei Wang, Chen Ma, Xueyang Feng

et al.

Frontiers of Computer Science, Journal Year: 2024, Volume and Issue: 18(6)

Published: March 22, 2024

Abstract Autonomous agents have long been a research focus in academic and industry communities. Previous research often focuses on training agents with limited knowledge within isolated environments, which diverges significantly from human learning processes and makes it hard for the agents to achieve human-like decisions. Recently, through the acquisition of vast amounts of web knowledge, large language models (LLMs) have shown the potential for human-level intelligence, leading to a surge in research on LLM-based autonomous agents. In this paper, we present a comprehensive survey of these studies, delivering a systematic review of the field from a holistic perspective. We first discuss the construction of LLM-based agents, proposing a unified framework that encompasses much of the previous work. Then, we provide an overview of the diverse applications of LLM-based autonomous agents in social science, natural science, and engineering. Finally, we delve into the evaluation strategies commonly used for these agents. Based on the previous studies, we also present several challenges and future directions for this field.
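
The survey's unified construction framework is typically summarized in terms of profile, memory, planning, and action modules. The sketch below is a schematic agent loop written under that reading; `call_llm`, the prompt wording, and the dummy tool dispatch are placeholders, not code from the paper.

```python
# Schematic LLM-agent loop with profile, memory, planning and action stages,
# loosely following the survey's unified framework. `call_llm` is a stand-in.
from dataclasses import dataclass, field

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call (e.g. an HTTP request to a model API)."""
    return "search: weather in Barcelona"

@dataclass
class Agent:
    profile: str                                 # role description injected into prompts
    memory: list[str] = field(default_factory=list)

    def plan(self, task: str) -> str:
        context = "\n".join(self.memory[-5:])    # short-term memory window
        prompt = f"{self.profile}\nMemory:\n{context}\nTask: {task}\nNext action:"
        return call_llm(prompt)

    def act(self, action: str) -> str:
        # Dispatch to tools/environment; here a dummy observation is returned.
        observation = f"executed <{action}>"
        self.memory.append(f"{action} -> {observation}")
        return observation

agent = Agent(profile="You are a helpful travel-planning agent.")
print(agent.act(agent.plan("Plan a weekend trip")))
```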

Language: English

Citations

215

Sequence modeling and design from molecular to genome scale with Evo DOI Creative Commons
Éric Nguyen, Michael Poli, Matthew G. Durrant

et al.

bioRxiv (Cold Spring Harbor Laboratory), Journal Year: 2024, Volume and Issue: unknown

Published: Feb. 27, 2024

The genome is a sequence that completely encodes the DNA, RNA, and proteins that orchestrate the function of a whole organism. Advances in machine learning combined with massive datasets of whole genomes could enable a biological foundation model that accelerates the mechanistic understanding and generative design of complex molecular interactions. We report Evo, a genomic foundation model that enables prediction and generation tasks from the molecular to the genome scale. Using an architecture based on advances in deep signal processing, we scale Evo to 7 billion parameters with a context length of 131 kilobases (kb) at single-nucleotide, byte resolution. Trained on whole prokaryotic genomes, Evo can generalize across the three fundamental modalities of the central dogma of molecular biology to perform zero-shot function prediction that is competitive with, or outperforms, leading domain-specific language models. Evo also excels at multi-element generation tasks, which we demonstrate by generating synthetic CRISPR-Cas molecular complexes and entire transposable systems for the first time. Using information learned over whole genomes, Evo can also predict gene essentiality at nucleotide resolution and can generate coding-rich sequences up to 650 kb in length, orders of magnitude longer than previous methods. Multi-modal and multi-scale learning with Evo provides a promising path toward improving our understanding and control of biology across multiple levels of complexity.
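
Zero-shot prediction with genomic language models is commonly done by scoring sequences with the model's own log-likelihood. The sketch below shows that generic pattern with the HuggingFace transformers API; the checkpoint name is a placeholder and the snippet is not taken from the Evo codebase, whose tokenizer and model classes may differ.

```python
# Generic zero-shot scoring of a nucleotide sequence with a causal language
# model via mean per-token log-likelihood. The checkpoint name is a placeholder.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "some-org/genomic-causal-lm"   # placeholder, not Evo's actual checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def sequence_log_likelihood(seq: str) -> float:
    """Mean log-likelihood of a DNA sequence under the language model."""
    ids = tokenizer(seq, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)        # cross-entropy over next tokens
    return -out.loss.item()                 # higher = more likely sequence

print(sequence_log_likelihood("ATGCGTACGTTAGC"))
```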

Language: English

Citations

52

Give us the Facts: Enhancing Large Language Models With Knowledge Graphs for Fact-Aware Language Modeling DOI
Linyao Yang, Hongyang Chen, Zhao Li

et al.

IEEE Transactions on Knowledge and Data Engineering, Journal Year: 2024, Volume and Issue: 36(7), P. 3091 - 3110

Published: Jan. 31, 2024

Recently, ChatGPT, a representative large language model (LLM), has gained considerable attention. Due to their powerful emergent abilities, recent LLMs are considered a possible alternative to structured knowledge bases like knowledge graphs (KGs). However, while proficient at learning probabilistic language patterns and engaging in conversations with humans, they, like previous smaller pre-trained language models (PLMs), still have difficulty recalling facts and generating knowledge-grounded contents. To overcome these limitations, researchers have proposed enhancing data-driven PLMs with knowledge-based KGs to incorporate explicit factual knowledge into PLMs, thus improving their performance on texts requiring factual knowledge and providing more informed responses to user queries. This paper reviews the studies on enhancing PLMs with KGs, detailing existing knowledge graph enhanced pre-trained language models (KGPLMs) as well as their applications. Inspired by existing work on KGPLMs, this paper proposes enhancing LLMs with KGs by developing knowledge graph-enhanced large language models (KGLLMs). KGLLM provides a solution to enhance LLMs' factual reasoning ability, opening up new avenues for LLM research.
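
One common knowledge-enhancement pattern in this line of work is injecting retrieved KG triples into the prompt at inference time. The sketch below shows that generic pattern only; the prompt wording, triples, and the placeholder model call are assumptions, not the paper's own system.

```python
# Minimal sketch of prompt-level knowledge injection: retrieved KG triples are
# serialized and prepended to the user question before querying an LLM.

def serialize_triples(triples: list[tuple[str, str, str]]) -> str:
    return "\n".join(f"({h}, {r}, {t})" for h, r, t in triples)

def build_fact_aware_prompt(question: str, triples: list[tuple[str, str, str]]) -> str:
    facts = serialize_triples(triples)
    return (
        "Answer using only the facts below.\n"
        f"Facts:\n{facts}\n\n"
        f"Question: {question}\nAnswer:"
    )

triples = [
    ("Marie Curie", "award", "Nobel Prize in Physics"),
    ("Marie Curie", "birth_year", "1867"),
]
prompt = build_fact_aware_prompt("When was Marie Curie born?", triples)
print(prompt)   # this prompt would then be passed to an LLM of choice
```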

Language: English

Citations

49

Equipping Llama with Google Query API for Improved Accuracy and Reduced Hallucination DOI Creative Commons

Young Hwan Bae, Hye Rin Kim, Jae‐Hoon Kim

et al.

Research Square (Research Square), Journal Year: 2024, Volume and Issue: unknown

Published: March 6, 2024

Abstract This study investigates the integration of the Llama 2 7b large language model (LLM) with the Google Query API to enhance its accuracy and reduce hallucination instances. By leveraging real-time internet data, we aimed to address the limitations of static training datasets and improve the model's performance across various natural language processing tasks. The methodology involved augmenting Llama 2 7b's architecture to incorporate dynamic data retrieval from the API, followed by an evaluation of the impact on accuracy and hallucination reduction using the BIG-Bench benchmark. The results indicate significant improvements in both accuracy and reliability, demonstrating the effectiveness of integrating LLMs with external data sources. This integration not only marks a substantial advancement in LLM capabilities but also raises important considerations regarding bias, privacy, and the ethical use of internet-sourced information. The study's findings contribute to the ongoing discourse on enhancing LLMs, suggesting a promising direction for future research and development in artificial intelligence.
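
The described integration amounts to retrieval-augmented prompting: fetch fresh web results for a query and condition the model on them. The sketch below illustrates that pattern under assumed names; `SEARCH_ENDPOINT`, `API_KEY`, the response schema, and the commented `generate` call are placeholders, not the authors' actual integration or Google's API contract.

```python
# Sketch of retrieval-augmented prompting: fetch web snippets for the user
# query, then prepend them to the prompt given to the local model.
# The endpoint, key, and response fields below are assumptions.
import requests

SEARCH_ENDPOINT = "https://example.com/search"   # placeholder search API
API_KEY = "..."                                  # placeholder credential

def fetch_snippets(query: str, k: int = 3) -> list[str]:
    resp = requests.get(SEARCH_ENDPOINT, params={"q": query, "key": API_KEY})
    resp.raise_for_status()
    return [item["snippet"] for item in resp.json()["items"][:k]]

def grounded_prompt(query: str) -> str:
    context = "\n".join(f"- {s}" for s in fetch_snippets(query))
    return (
        "Use the web results below; say 'unknown' if they do not answer.\n"
        f"Web results:\n{context}\n\nQuestion: {query}\nAnswer:"
    )

# A local Llama 2 7b wrapper would then consume the prompt, e.g.:
# answer = generate(grounded_prompt("Who won the 2023 Nobel Prize in Physics?"))
```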

Language: English

Citations

18

Retrieval augmented generation-driven information retrieval and question answering in construction management DOI
Chengke Wu, Wei Ding, Qiong Jin

et al.

Advanced Engineering Informatics, Journal Year: 2025, Volume and Issue: 65, P. 103158 - 103158

Published: Feb. 6, 2025

Language: English

Citations

2

Inductive reasoning in humans and large language models DOI Creative Commons
Simon Jerome Han, Keith Ransom, Andrew Perfors

et al.

Cognitive Systems Research, Journal Year: 2023, Volume and Issue: 83, P. 101155 - 101155

Published: Aug. 9, 2023

The impressive recent performance of large language models has led many to wonder to what extent they can serve as models of general intelligence or are similar to human cognition. We address this issue by applying GPT-3.5 and GPT-4 to a classic problem in inductive reasoning known as property induction. Over two experiments, we elicit judgments on a range of property induction tasks spanning multiple domains. Although GPT-3.5 struggles to capture many aspects of human behaviour, GPT-4 is much more successful: for the most part, its performance qualitatively matches that of humans, with the only notable exception being its failure to capture the phenomenon of premise non-monotonicity. Our work demonstrates that property induction allows for interesting comparisons between human and machine reasoning and provides datasets that can serve as benchmarks for future work in this vein.
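
Property-induction items follow a standard premise/conclusion format. The sketch below shows one plausible way such items can be turned into prompts and rated on a numeric scale; the prompt wording, the example items, and the commented `ask_model` call are illustrative assumptions, not the study's materials.

```python
# Sketch of eliciting argument-strength judgments for property-induction items.
# Each item lists premise categories with a blank property P and asks how likely
# the conclusion category is to share it.

def induction_prompt(premises: list[str], conclusion: str) -> str:
    premise_text = "\n".join(f"- {p} have property P." for p in premises)
    return (
        "Consider the following facts:\n"
        f"{premise_text}\n"
        f"On a scale from 0 to 10, how likely is it that {conclusion} "
        "also have property P? Reply with a single number."
    )

items = [
    (["Robins", "Sparrows"], "all birds"),
    (["Robins", "Sparrows", "Penguins"], "all birds"),  # premise non-monotonicity probe
]

for premises, conclusion in items:
    prompt = induction_prompt(premises, conclusion)
    print(prompt)
    # rating = float(ask_model(prompt))   # placeholder call to GPT-3.5 / GPT-4
```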

Language: English

Citations

23

Comparative Evaluation of Commercial Large Language Models on PromptBench: An English and Chinese Perspective DOI Creative Commons
Shiyu Wang, Qian Ouyang, Bing Wang

et al.

Research Square (Research Square), Journal Year: 2024, Volume and Issue: unknown

Published: Feb. 27, 2024

Abstract This study embarks on an exploration of the performance disparities observed between English and Chinese in large language models (LLMs), motivated by the growing need for multilingual capabilities in artificial intelligence systems. Utilizing a comprehensive methodology that includes quantitative analysis of model outputs and qualitative assessment of linguistic nuances, the research investigates the underlying reasons for these discrepancies. The findings reveal significant variations in the performance of LLMs across the two languages, with a pronounced challenge in accurately processing and generating text in Chinese. This gap underscores the limitations of current models in handling the complexities inherent in languages with distinct grammatical structures and cultural contexts. The implications of this research are far-reaching, suggesting a critical need for the development of more robust and inclusive models that can better accommodate linguistic diversity. This entails not only the enrichment of training datasets with a wider array of languages but also the refinement of model architectures to grasp the subtleties of different languages. Ultimately, the study contributes to the ongoing discourse on enhancing LLMs, aiming to pave the way for more equitable and effective language tools that cater to a global user base.
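
A minimal form of the quantitative comparison described is per-language accuracy over parallel prompt sets. The sketch below assumes paired English/Chinese items and a placeholder `ask_model` function; PromptBench's actual tasks and scoring rules are not reproduced here.

```python
# Sketch of a per-language accuracy comparison over parallel prompts.
# `ask_model` and the items are stand-ins, not PromptBench data.
from collections import defaultdict

def ask_model(prompt: str, lang: str) -> str:
    return "4"   # stand-in model answer

items = [
    {"en": "What is 2 + 2? Answer with a number.", "zh": "2 加 2 等于多少？请用数字回答。", "answer": "4"},
    {"en": "What is 3 + 5? Answer with a number.", "zh": "3 加 5 等于多少？请用数字回答。", "answer": "8"},
]

correct = defaultdict(int)
for item in items:
    for lang in ("en", "zh"):
        if ask_model(item[lang], lang).strip() == item["answer"]:
            correct[lang] += 1

for lang in ("en", "zh"):
    print(f"{lang} accuracy: {correct[lang] / len(items):.2f}")
```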

Language: English

Citations

15

(Ir)rationality and cognitive biases in large language models DOI Creative Commons
Olivia Macmillan-Scott, Mirco Musolesi

Royal Society Open Science, Journal Year: 2024, Volume and Issue: 11(6)

Published: June 1, 2024

Do large language models (LLMs) display rational reasoning? LLMs have been shown to contain human biases due to the data they have been trained on; whether this is reflected in their reasoning remains less clear. In this paper, we answer this question by evaluating seven language models using tasks from the cognitive psychology literature. We find that, like humans, LLMs display irrationality in these tasks. However, the way this irrationality is displayed does not reflect that shown by humans. When incorrect answers are given to these tasks, they are often incorrect in ways that differ from human-like biases. On top of this, the LLMs reveal an additional layer of irrationality in the significant inconsistency of their responses. Aside from the experimental results, this paper seeks to make a methodological contribution by showing how we can assess and compare the different capabilities of these types of models, in this case with respect to rational reasoning.
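
The methodological point is that a response is scored not just as correct or incorrect but also against the answer a human-biased reasoner would give. The sketch below illustrates that scoring scheme with a conjunction-fallacy style item; the wording, labels, and the example response are illustrative assumptions, not the paper's materials.

```python
# Sketch of scoring an LLM response as correct, human-like biased, or other,
# illustrated with a conjunction-fallacy style item.

TASK = {
    "question": (
        "Linda is 31, outspoken, and deeply concerned with social justice. "
        "Which is more probable? (a) Linda is a bank teller. "
        "(b) Linda is a bank teller and is active in the feminist movement."
    ),
    "correct": "a",        # a single event cannot be less probable than a conjunction
    "human_bias": "b",     # the typical human conjunction-fallacy answer
}

def classify_response(response: str, task: dict) -> str:
    text = response.strip().lower()
    choice = next((c for c in text if c in ("a", "b")), "")
    if choice == task["correct"]:
        return "correct"
    if choice == task["human_bias"]:
        return "human-like bias"
    return "other / inconsistent"

print(classify_response("(b) seems right", TASK))   # -> human-like bias
```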

Language: English

Citations

9

iDesignGPT: large language model agentic workflows boost engineering design DOI Creative Commons
Zhinan Zhang, Songkai Liu, Yanqing Shen

et al.

Research Square (Research Square), Journal Year: 2025, Volume and Issue: unknown

Published: Jan. 7, 2025

Abstract Engineering design, a cornerstone of technological innovation, faces persistent challenges from the rigidity of traditional methods and the insufficient responsiveness of emerging AI tools to fully address its inherently complex, dynamic, creativity-driven demands. Here we introduce iDesignGPT, a novel framework that integrates large language models with established design methodologies to enable dynamic multi-agent collaboration for problem refinement, information gathering, design space exploration, and iterative optimization. By incorporating metrics such as coverage, diversity, and novelty, iDesignGPT provides decision-enabling, data-driven insights for conceptual engineering design evaluation. Our results reveal that iDesignGPT surpasses benchmark models in generating innovative, modular, and rational solutions, particularly in exploratory, open-ended scenarios prioritizing creativity and adaptability. User studies involving both students and experienced engineers validate its ability to uncover hidden requirements, foster creativity, and enhance workflow transparency. Collectively, these findings position iDesignGPT as a scalable platform that lowers the expertise barrier, fosters interdisciplinary collaboration, and expands the transformative potential of AI-assisted design.
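
The abstract names coverage, diversity, and novelty as metrics over generated design concepts without defining them. The sketch below shows one plausible reading of a diversity score, mean pairwise cosine distance between concept embeddings; it is an assumption for illustration, and iDesignGPT's actual metric definitions may differ.

```python
# Sketch of a diversity metric over design-concept embeddings: mean pairwise
# cosine distance. Embeddings here are random stand-ins.
import numpy as np

rng = np.random.default_rng(0)
concept_embeddings = rng.normal(size=(5, 32))    # 5 generated design concepts

def diversity(embeddings: np.ndarray) -> float:
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = normed @ normed.T                     # pairwise cosine similarities
    n = len(embeddings)
    off_diag = sims[~np.eye(n, dtype=bool)]      # drop self-similarities
    return float(np.mean(1.0 - off_diag))        # mean pairwise cosine distance

print(f"diversity score: {diversity(concept_embeddings):.3f}")
```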

Language: English

Citations

0

Collaborative Growth: When Large Language Models Meet Sociolinguistics DOI Creative Commons
Dong Nguyen

Language and Linguistics Compass, Journal Year: 2025, Volume and Issue: 19(2)

Published: Feb. 3, 2025

ABSTRACT Large Language Models (LLMs) have dramatically transformed the AI landscape. They can produce remarkably fluent text and exhibit a range of natural language understanding and generation capabilities. This article explores how LLMs might be used for sociolinguistic research and, conversely, how sociolinguistics might contribute to the development of LLMs. It argues that both areas will benefit from thoughtful, engaging collaboration. Sociolinguists are not merely end users of LLMs; they have a crucial role to play in their development.

Language: English

Citations

0