
Rujun Han
@hanrujun
Research Scientist @Google working on RAG, LLM evaluation, post-training, and alignment. NLP+ML Ph.D. from @USC_ISI. Ex-@AWS AI, ex-@federalreserve.
ID: 1131657545872027649
https://rujunhan.github.io
Joined: 23-05-2019 20:25:58
39 Tweets
299 Followers
204 Following

Had a lot of fun working with Justin Chih-Yao Chen on reverse thinking. We show that training with backward questions and reasoning, using carefully designed objectives, makes LLMs better at a variety of reasoning tasks. Check out our paper: arxiv.org/abs/2411.19865.

Can many-shot ICL be cached and still be tailored per test sample? We make it possible. 💡 Excited to share that our paper, "Towards Compute-Optimal Many-Shot In-Context Learning," has been accepted to the Conference on Language Modeling! Paper: arxiv.org/pdf/2507.16217 #COLM2025 #LLMs #AI #ICL
