Zixuan Zhang (@zhangzxuiuc)'s Twitter Profile
Zixuan Zhang

@zhangzxuiuc

NLP Researcher,
PhD Candidate @ CS UIUC (zhangzx-uiuc.github.io)

ID: 1493395075497398272

Joined: 15-02-2022 01:21:56

6 Tweets

66 Followers

61 Following

Zixuan Zhang (@zhangzxuiuc):

This is a brand-new framework for LLM knowledge editing (KE) that solves the critical problem of ambiguity in previous KE methods. A very exciting work!

Zixuan Zhang (@zhangzxuiuc):


Excited to share our new work at NAACL 2024! arxiv.org/pdf/2404.01652…
We study a critical generalization issue of retrieval-augmented generation (RAG) systems: how to maintain model performance as global knowledge shifts and the background corpus evolves.

Our key
Ke Yang (@empathyang):


Introducing AgentOccam: Automating Web Tasks with LLMs! AgentOccam showcases the impressive power of large language models (LLMs) on web tasks, without any in-context examples, new agent roles, online feedback, or search strategies.
Link: arxiv.org/abs/2410.13825
Jiaxin-Qin (@jr_qjx):


I am at #EMNLP2024!

I will present our work "Why Does New Knowledge Create Messy Ripple Effects in LLMs?" on Wednesday at 10:30am.

Thanks to all the collaborators: Heng Ji (@hengjinlp), Zixuan Zhang (@zhangzxUIUC), Chi Han (@Glaciohound), Manling Li (@ManlingLi_)

Looking forward to having a chat!

Paper Link: arxiv.org/pdf/2407.12828
Ke Yang (@empathyang):


Happy New Year everyone!
New preprint: TinyHelen's First Curriculum: Training and Evaluating Tiny Language Models in a Simpler Language Environment
We train and evaluate tiny language models (LMs) using a novel text dataset with systematically simplified vocabularies and