Enhancing ID-based Recommendation with Large Language Models (Open Access)

Lei Chen, Chen Gao, Xiaoyi Du et al.

ACM Transactions on Information Systems, Journal Year: 2024, Volume and Issue: unknown

Published: Nov. 13, 2024

Large Language Models (LLMs) have recently garnered significant attention in various domains, including recommendation systems. Recent research leverages the capabilities of LLMs to improve the performance and user-modeling aspects of recommender systems. These studies primarily focus on utilizing LLMs to interpret the textual data found in recommendation tasks. However, in ID-based recommendation, textual data is absent and only ID data is available; the untapped potential of LLMs within this paradigm remains relatively unexplored. To this end, we introduce an approach called “LLM for ID-based Recommendation” (LLM4IDRec), which integrates LLMs while relying exclusively on ID data, diverging from the previous reliance on textual data. The basic idea of LLM4IDRec is to employ an LLM to augment the ID data: if the augmented data can improve recommendation performance, this demonstrates that the LLM can understand ID data effectively, opening a new way to integrate LLMs into recommendation. Specifically, we first define a prompt template to enhance the LLM's ability to comprehend the recommendation task. Next, while generating training data with this template, we develop two efficient methods that capture both the local and global structure of the interaction data. We feed the generated data into the LLM and employ LoRA for fine-tuning. Following the fine-tuning phase, we utilize the fine-tuned LLM to generate augmented ID data that aligns with users' preferences, and we design filtering strategies to eliminate invalid generations. Thirdly, we merge the augmented data with the original data, creating the final training set. Finally, this data is fed into existing ID-based recommendation models without any modifications to the models themselves. We evaluate the effectiveness of LLM4IDRec on three widely used datasets. Our results demonstrate a notable improvement: LLM4IDRec consistently outperforms baselines by solely augmenting the input data.
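The augmentation loop the abstract describes can be pictured in a few lines. The snippet below is a minimal sketch under assumptions, not the authors' code: the prompt wording, the `llm.generate` interface, and the validity filter (keep only catalog IDs the user has not yet seen) are hypothetical.

```python
# Minimal sketch of ID-data augmentation in the spirit of LLM4IDRec.
# Prompt wording, model interface, and filtering rule are illustrative
# assumptions, not the paper's implementation.

def build_prompt(user_id: int, item_ids: list[int]) -> str:
    """Serialize a user's interaction IDs into a prompt for the LLM."""
    history = ", ".join(str(i) for i in item_ids)
    return (f"User {user_id} has interacted with items: {history}. "
            f"Predict further item IDs this user would interact with.")

def filter_augmented(candidates: list[int], catalog: set[int],
                     seen: set[int]) -> list[int]:
    """Keep only IDs that exist in the catalog and are not already seen."""
    return [i for i in candidates if i in catalog and i not in seen]

def augment_dataset(interactions: dict[int, list[int]], llm,
                    catalog: set[int]) -> dict[int, list[int]]:
    """Merge LLM-generated interactions with the original ID data."""
    augmented = {}
    for user, items in interactions.items():
        prompt = build_prompt(user, items)
        generated = llm.generate(prompt)   # assumed: returns list[int]
        extra = filter_augmented(generated, catalog, set(items))
        augmented[user] = items + extra    # final merged training data
    return augmented
```

Any IDs the LLM hallucinates outside the catalog are dropped by the filter, which is the role the paper's filtering strategies play before the merged data is handed to an unmodified ID-based recommender.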

Language: English

On Generative Agents in Recommendation

An Zhang, Yuxin Chen, Leheng Sheng et al.

Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval, Journal Year: 2024, Volume and Issue: unknown, P. 1807 - 1817

Published: July 10, 2024

Language: English

Citations: 19

Tool learning with large language models: a survey

Changle Qu, Sunhao Dai, Xiaochi Wei et al.

Frontiers of Computer Science, Journal Year: 2025, Volume and Issue: 19(8)

Published: Jan. 13, 2025

Language: English

Citations: 3

LLaRA: Large Language-Recommendation Assistant

Jiayi Liao, Sihang Li, Zhengyi Yang et al.

Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval, Journal Year: 2024, Volume and Issue: unknown, P. 1785 - 1795

Published: July 10, 2024

Citations: 8

Reinforced Prompt Personalization for Recommendation with Large Language Models (Open Access)

Wenyu Mao, Jiancan Wu, Jiawei Chen et al.

ACM Transactions on Information Systems, Journal Year: 2025, Volume and Issue: unknown

Published: Feb. 4, 2025

Designing effective prompts can empower LLMs to understand user preferences and provide accurate recommendations by exploiting their intent-comprehension and knowledge-utilization capabilities. Nevertheless, recent studies predominantly concentrate on task-wise prompting, developing fixed prompt templates shared across all users in a given recommendation task (e.g., rating or ranking). Although convenient, task-wise prompting overlooks individual differences among users, leading to inaccurate analysis of user interests. In this work, we introduce the concept of instance-wise prompting, aiming at personalizing discrete prompts for individual users, and propose Reinforced Prompt Personalization (RPP) to realize it automatically. To improve efficiency and quality, RPP personalizes prompts at the sentence level rather than searching the vast vocabulary word-by-word. Specifically, RPP breaks the prompt down into four patterns, tailors each pattern with a multi-agent framework, and combines them. The personalized prompts then interact with the LLM (environment) iteratively to boost the LLM's recommending performance (reward). In addition to RPP, to improve the scalability of the action space, we propose RPP+, which dynamically refines the selected actions throughout the iterative process. Extensive experiments on various datasets demonstrate the superiority of RPP/RPP+ over traditional recommender models, few-shot methods, and other prompt-based methods, underscoring the significance of instance-wise prompting for recommendation with LLMs. Our code is available at https://github.com/maowenyu-11/RPP .
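As a rough picture of instance-wise, sentence-level prompt search, the sketch below assigns one simple epsilon-greedy agent per prompt pattern. The four pattern names, the candidate sentences, and the reward callback are hypothetical stand-ins; RPP's actual multi-agent reinforcement learning setup is richer than this bandit simplification.

```python
import random
from collections import defaultdict

# Hypothetical candidate sentences per pattern; RPP's real patterns
# and agents differ. Each pattern contributes one sentence to the prompt.
PATTERNS = {
    "role":      ["You are a movie critic.", "You are a helpful assistant."],
    "history":   ["Here is the user's watch history:", "The user recently watched:"],
    "reasoning": ["Think step by step.", "Rank by relevance to the history."],
    "format":    ["Answer with a ranked list.", "Output item titles only."],
}

class EpsilonGreedyAgent:
    """One agent per pattern: epsilon-greedy over candidate sentences."""
    def __init__(self, candidates, eps=0.1):
        self.candidates, self.eps = candidates, eps
        self.value = defaultdict(float)   # running mean reward per sentence
        self.count = defaultdict(int)

    def act(self):
        if random.random() < self.eps:
            return random.choice(self.candidates)
        return max(self.candidates, key=lambda c: self.value[c])

    def update(self, choice, reward):
        self.count[choice] += 1
        self.value[choice] += (reward - self.value[choice]) / self.count[choice]

def personalize_prompt(agents, evaluate):
    """Compose one sentence per pattern, query the LLM, learn from reward."""
    choices = {p: a.act() for p, a in agents.items()}
    prompt = " ".join(choices[p] for p in PATTERNS)
    reward = evaluate(prompt)   # assumed: ranking metric of the LLM's output
    for p, a in agents.items():
        a.update(choices[p], reward)
    return prompt

agents = {p: EpsilonGreedyAgent(c) for p, c in PATTERNS.items()}
```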

Language: English

Citations: 0

A Recommender System for Mining Personalized User Preferences

H. D. Li, Fei Chen, H. H. Wang et al.

Communications in Computer and Information Science, Journal Year: 2025, Volume and Issue: unknown, P. 16 - 33

Published: Jan. 1, 2025

Language: English

Citations: 0

Unleashing the Power of Large Language Model for Denoising Recommendation

Shuyao Wang, Zhi Zheng, Yongduo Sui et al.

Published: April 22, 2025

Recommender systems are crucial for personalizing user experiences but often depend on implicit feedback data, which can be noisy and misleading. Existing denoising studies incorporate auxiliary information or learn strategies from interaction data. However, they struggle with the inherent limitations of external knowledge as well as the non-universality of certain predefined assumptions, hindering accurate noise identification. Recently, large language models (LLMs) have gained attention for their extensive world knowledge and reasoning abilities, yet their potential for enhancing denoising in recommendations remains underexplored. In this paper, we introduce LLaRD, a framework leveraging LLMs to improve denoising in recommender systems, thereby boosting overall recommendation performance. Specifically, LLaRD generates denoising-related knowledge by first enriching semantic insights from observational data and inferring user-item preference knowledge. It then employs a novel Chain-of-Thought (CoT) technique over user-item graphs to reveal relational knowledge relevant to denoising. Finally, it applies the Information Bottleneck (IB) principle to align LLM-generated knowledge with the denoising targets, filtering out noise and irrelevant LLM knowledge. Empirical results demonstrate LLaRD's effectiveness in enhancing denoising and recommendation accuracy.
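The Information Bottleneck step can be read as learning a compressed representation Z of LLM-generated knowledge X that remains predictive of a denoising target Y. A common variational surrogate for this objective is sketched below in PyTorch; the encoder shapes, the two-class target (clean vs. noisy), and the beta weight are illustrative assumptions rather than LLaRD's actual architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class IBFilter(nn.Module):
    """Variational IB sketch: compress LLM knowledge X into Z while
    keeping Z predictive of a denoising label Y (clean vs. noisy)."""
    def __init__(self, x_dim: int, z_dim: int):
        super().__init__()
        self.mu = nn.Linear(x_dim, z_dim)       # posterior mean
        self.logvar = nn.Linear(x_dim, z_dim)   # posterior log-variance
        self.classifier = nn.Linear(z_dim, 2)   # predicts clean/noisy

    def forward(self, x):
        mu, logvar = self.mu(x), self.logvar(x)
        # Reparameterization trick: sample Z from the posterior.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.classifier(z), mu, logvar

def ib_loss(logits, y, mu, logvar, beta=1e-3):
    """-I(Z;Y) is approximated by cross-entropy on the target;
    I(Z;X) is bounded by the KL divergence to a standard normal prior."""
    ce = F.cross_entropy(logits, y)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return ce + beta * kl
```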

Language: English

Citations: 0

LLM is Knowledge Graph Reasoner: LLM's Intuition-Aware Knowledge Graph Reasoning for Cold-Start Sequential Recommendation

Keigo Sakurai, Ren Togo, Takahiro Ogawa et al.

Lecture Notes in Computer Science, Journal Year: 2025, Volume and Issue: unknown, P. 263 - 278

Published: Jan. 1, 2025

Language: English

Citations: 0

Multi-view Intent Learning and Alignment with Large Language Models for Session-based Recommendation

Shutong Qiao, Wei Zhou, Junhao Wen et al.

ACM Transactions on Information Systems, Journal Year: 2025, Volume and Issue: unknown

Published: April 8, 2025

Session-based recommendation (SBR) methods often rely on user behavior data alone and can struggle with the sparsity of session data, limiting performance. Researchers have identified that, beyond behavioral signals, the rich semantic information in item descriptions is crucial for capturing hidden user intent. While large language models (LLMs) offer new ways to leverage this semantic information, challenges such as user anonymity, the short-sequence nature of sessions, and high LLM training costs have hindered the development of a lightweight, efficient LLM-enhanced framework for SBR. To address these challenges, we propose LLM4SBR, an LLM-enhanced SBR framework that integrates signals from multiple views. This two-stage framework leverages the strengths of both LLMs and traditional SBR models while minimizing training costs. In the first stage, we use multi-view prompts to infer latent user intentions at the session level, supported by an intent localization module that alleviates LLM hallucinations. In the second stage, we align and unify these inferences with session representations, effectively merging insights from large and small models. Extensive experiments on two real datasets demonstrate that LLM4SBR can improve the performance of traditional SBR models. We release our codes along with baselines at https://github.com/tsinghua-fib-lab/LLM4SBR .
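The second-stage alignment can be pictured as pulling each session's representation from the lightweight SBR model toward the embedding of the intent the LLM inferred for that session. The InfoNCE-style loss below is one plausible choice of objective, assumed for illustration; the paper's actual alignment loss and encoders may differ.

```python
import torch
import torch.nn.functional as F

def alignment_loss(session_emb: torch.Tensor,
                   intent_emb: torch.Tensor,
                   temperature: float = 0.07) -> torch.Tensor:
    """InfoNCE-style alignment: each session representation (from the
    small SBR model) should match the LLM-inferred intent embedding of
    the same session and repel those of other sessions in the batch."""
    s = F.normalize(session_emb, dim=-1)   # (B, d) from the SBR model
    t = F.normalize(intent_emb, dim=-1)    # (B, d) from the LLM intent text
    logits = s @ t.T / temperature         # (B, B) similarity matrix
    labels = torch.arange(s.size(0), device=s.device)  # diagonal = positives
    return F.cross_entropy(logits, labels)
```

Because the LLM only runs once per session to produce intent text, a loss of this shape keeps inference-time cost in the small model, which matches the framework's stated goal of minimizing LLM training and serving costs.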

Language: English

Citations: 0
