Yahoo España Web Search

Search results

  1. 5 days ago · Keqin Bao, Jizhi Zhang, Yang Zhang, Wenjie Wang, Fuli Feng, Xiangnan He. Computer Science. RecSys 2023. TLDR: It is demonstrated that the proposed TALLRec framework can significantly enhance the recommendation capabilities of LLMs in the movie and book domains, even with a limited dataset of fewer than 100 samples.
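    The snippet describes TALLRec as a lightweight tuning framework that aligns an LLM with recommendation using very few examples. A minimal sketch of that general recipe (instruction-style yes/no examples plus a LoRA adapter so only a small number of parameters are trained) is shown below; the base model, prompt template, and hyperparameters are placeholders for illustration, not the paper's released setup.

```python
# Minimal sketch (not TALLRec's released code): instruction-tune a causal LM
# with a LoRA adapter on a handful of yes/no recommendation examples.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "facebook/opt-350m"  # placeholder; the paper builds on a larger LLM
tok = AutoTokenizer.from_pretrained(base)
model = get_peft_model(
    AutoModelForCausalLM.from_pretrained(base),
    LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
               target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM"),
)

def example(history, candidate, label):
    # Turn one interaction record into an instruction-style training pair.
    prompt = (f"The user liked: {', '.join(history)}.\n"
              f"Will the user like \"{candidate}\"? Answer Yes or No.\nAnswer:")
    ids = tok(prompt + " " + label, return_tensors="pt")
    ids["labels"] = ids["input_ids"].clone()
    return ids

optim = torch.optim.AdamW(model.parameters(), lr=1e-4)
batch = example(["The Matrix", "Inception"], "Interstellar", "Yes")
loss = model(**batch).loss        # standard causal-LM loss on prompt + answer
loss.backward(); optim.step()     # only the LoRA weights receive gradients
```

    Freezing the base model and updating only the adapter is what makes training on fewer than 100 samples plausible in the first place.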

  2. dblp.org › pid › 266 · dblp: Moxin Li

    4 days ago · Moxin Li, Wenjie Wang, Fuli Feng, Yixin Cao, Jizhi Zhang, Tat-Seng Chua: Robust Prompt Optimization for Large Language Models Against Distribution Shifts. EMNLP 2023: 1539-1554

  3. 22 hours ago · Andy Lau, Deanie Ip, Qin Hailu, Wang Fuli, Paul Chun. Cinema: A Simple Life. Book safely and securely with Tickets.co.uk, an official, recognised and trusted source.

  4. 3 days ago · Wang et al. (2024) use LLMs as data augmenters for conventional recommendation systems during training, improving model performance without additional serving cost. Different from prior work, we focus on directly incorporating LLM-generated content to break the feedback loop, aiming for more diverse and serendipitous recommendations while maintaining efficiency.
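    The key property named in this snippet is that the LLM is used only offline, so serving cost is unchanged. A hypothetical sketch of that augmentation pattern is given below; the prompt, the `augment_user` helper, and the placeholder model are assumptions for illustration, not the cited paper's method.

```python
# Hypothetical sketch: query an LLM offline to propose extra items a user
# would plausibly like, and append the synthetic interactions to the training
# set of a conventional recommender. Serving still uses only that recommender.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # placeholder LLM

def augment_user(user_id, liked_items, n_new=3):
    prompt = ("A user liked these movies: " + ", ".join(liked_items) +
              ". Three more movies they would probably like are:")
    text = generator(prompt, max_new_tokens=40,
                     num_return_sequences=1)[0]["generated_text"]
    # Naive parse of the continuation into candidate titles (illustrative only).
    suggestions = [t.strip() for t in text[len(prompt):].split(",") if t.strip()][:n_new]
    # Tag synthetic rows so they can be down-weighted during training.
    return [{"user": user_id, "item": s, "label": 1, "synthetic": True}
            for s in suggestions]

train_rows = [{"user": 7, "item": "Inception", "label": 1, "synthetic": False}]
train_rows += augment_user(7, ["Inception", "The Matrix"])
# train_rows now mixes real and LLM-generated interactions and feeds the usual
# recommender training loop; nothing changes at inference time.
```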

  5. 3 days ago · Keqin Bao, Jizhi Zhang, Yang Zhang, Wenjie Wang, Fuli Feng, and Xiangnan He. 2023. TALLRec: An effective and efficient tuning framework to align large language model with recommendation. arXiv ...

  6. 4 days ago · Cunxiang Wang, Fuli Luo, Yanyang Li, Runxin Xu, Fei Huang, Yue Zhang: Knowledgeable Salient Span Mask for Enhancing Language Models as Knowledge Base. 444-456. ... Hao Wang, Jing-Jing Zhu, Wei Wei, Heyan Huang, Xian-Ling Mao: FGCS: A Fine-Grained Scientific Information Extraction Dataset in Computer Science Domain. 653-665.

  7. 1 day ago · The Era of 1-bit LLMs: Training Tips, Code and FAQ. Shuming Ma, Hongyu Wang, Furu Wei (BitNet Team), https://aka.ms/GeneralAI. Abstract: We present details and tips for training 1-bit LLMs. We also provide additional experiments and results that were not reported and responses to questions regarding the "The-Era-of-1-bit-LLM" paper [MWM+24]. ...
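    The report above accompanies training code for 1-bit (ternary-weight) LLMs in the BitNet b1.58 line of work. A condensed sketch of the core layer, as commonly described for that approach, is below: weights are rounded to {-1, 0, +1} with an absmean scale, activations to 8 bits with an absmax scale, and a straight-through estimator lets gradients reach the full-precision latent weights. The released code also normalizes activations (e.g. RMSNorm) before quantization, which this sketch omits, and exact scaling details may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def activation_quant(x):
    # Per-token 8-bit quantization of activations (absmax scaling).
    scale = 127.0 / x.abs().max(dim=-1, keepdim=True).values.clamp_(min=1e-5)
    return (x * scale).round().clamp_(-128, 127) / scale

def weight_quant(w):
    # Ternary {-1, 0, +1} weights with a per-tensor absmean scale.
    scale = 1.0 / w.abs().mean().clamp_(min=1e-5)
    return (w * scale).round().clamp_(-1, 1) / scale

class BitLinear(nn.Linear):
    """Drop-in linear layer: full-precision latent weights are kept for the
    optimizer, quantized values are used in the forward pass, and the
    straight-through estimator passes gradients through the rounding."""
    def forward(self, x):
        w = self.weight
        x_q = x + (activation_quant(x) - x).detach()   # STE for activations
        w_q = w + (weight_quant(w) - w).detach()       # STE for weights
        return F.linear(x_q, w_q, self.bias)

layer = BitLinear(64, 32, bias=False)
y = layer(torch.randn(4, 64))   # forward pass uses ternary weights
y.sum().backward()              # gradients reach layer.weight via the STE
```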