Haoyu Zhao
@thomaszhao1998
PhD student @Princeton, Research Intern @MSFTResearch. Recently interested in theorem proving.
ID: 3334088338
http://hyzhao.me
Joined: 19-06-2015 04:53:14
15 Tweets
53 Followers
50 Following
ICML Conference **paper alert** Fine-tuning an LLM on a task gives it a new skill. Our “Skill localization” paper shows this skill lives in < 0.01% of parameters — the rest can be reverted to pre-trained values. 1/6 With Nikunj Saunshi, Haoyu Zhao, Sanjeev Arora Link: arxiv.org/abs/2302.06600
Quanta Magazine featured our work on the emergence of skill compositionality (and its limitations) in LLMs among the CS breakthroughs of the year. tinyurl.com/5f5jvzy5. Work was done over 2023 at Google DeepMind and Princeton PLI. Key pieces: (i) mathematical framework for