Rujun Han (@hanrujun)'s Twitter Profile
Rujun Han

@hanrujun

Research Scientist @Google working on RAG, LLM evaluation, post-training, and alignment. NLP+ML Ph.D. from @USC_ISI. Ex-@AWS AI, ex-@federalreserve.

ID: 1131657545872027649

Website: https://rujunhan.github.io
Joined: 23-05-2019 20:25:58

39 Tweets

299 Followers

204 Following

Rujun Han (@hanrujun):

Ready to take knowledge distillation for LLMs to the next level? Check out our Speculative Knowledge Distillation paper which leverages samples composed of the best student and teacher tokens to achieve SOTA results. Collaboration with our wonderful intern Wenda Xu!
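The tweet only gestures at how student and teacher tokens are combined. As a toy illustration (not the paper's actual algorithm), one speculative-decoding-style mixing rule could look like this, where `student_probs` and `teacher_probs` are hypothetical per-position token distributions and the acceptance `threshold` is invented for the sketch:

```python
def mix_student_teacher(student_probs, teacher_probs, threshold=0.5):
    """Toy sketch of composing a distillation sample from the 'best'
    student and teacher tokens. At each position the student proposes
    its top token; if the teacher assigns it at least `threshold`
    probability we keep it, otherwise we substitute the teacher's top
    token. All names and the acceptance rule are illustrative only.
    """
    mixed = []
    for s_dist, t_dist in zip(student_probs, teacher_probs):
        proposal = max(s_dist, key=s_dist.get)          # student's top token
        if t_dist.get(proposal, 0.0) >= threshold:      # teacher agrees enough
            mixed.append(proposal)
        else:
            mixed.append(max(t_dist, key=t_dist.get))   # fall back to teacher
    return mixed
```

For example, if the teacher strongly disagrees with the student at the second position, the teacher's token is used there instead.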

Rujun Han (@hanrujun):

Another #EMNLP2024 paper from the collaboration with my previous colleagues at AWS AI. Please join our oral presentation to learn more about how to mitigate the trade-off between LLM instruction following and grounding.

Violet Peng (@violetnpeng):

This Thanksgiving, I’m deeply grateful for the opportunity to run for NAACL Board Member alongside so many incredible candidates who share a passion for making a difference! I hope to earn your support as we work together to shape the future of our community! #NLProc #Gratitude

Rujun Han (@hanrujun):

Had a lot of fun working with Justin Chih-Yao Chen on reverse thinking. We show that training with backward questions and reasoning, using carefully designed objectives, makes LLMs better at a variety of reasoning tasks. Check out our paper: arxiv.org/abs/2411.19865.

Justin Chih-Yao Chen (@cyjustinchen):

Happy to share that RevThink has been accepted to #NAACL2025 main conference! 🎉We also release the code and data 👇🧵 RevThink shows that LLMs can also benefit from reverse thinking (like we often do) 👉13.53% gains on 12 datasets (including MATH, ARC, ANLI, etc) + sample

Justin Chih-Yao Chen (@cyjustinchen):

I will be presenting ✨Reverse Thinking Makes LLMs Stronger Reasoners✨ at #NAACL2025! We show that LLMs can also benefit from reverse thinking -- a technique we often use to reason from a problem to a solution:
- Improvements across 12 datasets
- Outperforms SFT with 10x more
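The "backward questions + reasoning with carefully designed objectives" idea can be sketched as a data-augmentation step that turns one annotated record into several training examples. Field names here are hypothetical placeholders, not the released code's schema:

```python
def build_revthink_examples(record):
    """Hedged sketch of reverse-thinking-style training data. From one
    annotated record, emit three objectives: answer the forward
    question, generate its backward counterpart, and answer that
    backward question. All dictionary keys are illustrative only."""
    return [
        # Objective 1: ordinary forward reasoning.
        {"input": record["question"],
         "target": record["forward_reasoning"]},
        # Objective 2: generate the backward question from the forward one.
        {"input": record["question"],
         "target": record["backward_question"]},
        # Objective 3: reason over the backward question.
        {"input": record["backward_question"],
         "target": record["backward_reasoning"]},
    ]
```

Each record thus contributes three supervised pairs, which is one plausible way to realize multi-objective training on backward questions and reasoning.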

Chen-Yu Lee (@chl260):

Thrilled to introduce "𝗗𝗲𝗲𝗽 𝗥𝗲𝘀𝗲𝗮𝗿𝗰𝗵𝗲𝗿 𝘄𝗶𝘁𝗵 𝗧𝗲𝘀𝘁-𝗧𝗶𝗺𝗲 𝗗𝗶𝗳𝗳𝘂𝘀𝗶𝗼𝗻," a new deep research agent designed to mimic the iterative nature of human research, complete with cycles of planning, drafting, and revision. 🚀🚀 arxiv.org/pdf/2507.16075

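As a minimal sketch of the plan/draft/revise cycle described above (the callables stand in for real search and LLM-revision components and are purely illustrative, not the agent's actual interface):

```python
def research_loop(question, retrieve, revise, steps=3):
    """Toy plan-draft-revise loop in the spirit of the deep research
    agent described above. Starting from an empty draft (the 'noise'
    in the diffusion analogy), each step retrieves evidence
    conditioned on the current draft and then revises the draft with
    it. `retrieve` and `revise` are hypothetical caller-supplied
    components."""
    draft = ""
    for _ in range(steps):
        evidence = retrieve(question, draft)   # targeted search given current draft
        draft = revise(draft, evidence)        # fold new evidence into the draft
    return draft
```

The key design point the tweet highlights is the iteration itself: each retrieval is conditioned on the evolving draft rather than on the original question alone.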
Shahriar Golchin (@shahriargolchin):

Can many-shot ICL be cached and still tailored per test sample? We make it possible. 💡 Excited to share that our paper, "Towards Compute-Optimal Many-Shot In-Context Learning," has been accepted to Conference on Language Modeling! Paper: arxiv.org/pdf/2507.16217 #COLM2025 #LLMs #AI #ICL

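One way to read "cached and still tailored": encode a shared many-shot prefix once and reuse it across test samples, appending only a short per-sample tail. This is an illustrative sketch of that idea, not the paper's method; the demonstration pool and prompt format are invented:

```python
from functools import lru_cache

# Hypothetical demonstration pool shared by every test sample.
SHARED_SHOTS = ("Q: 1+1? A: 2", "Q: 2+2? A: 4", "Q: 3+3? A: 6")

@lru_cache(maxsize=1)
def shared_prefix():
    """Stand-in for encoding (and KV-caching) the many-shot prefix
    exactly once, however many test samples follow."""
    return "\n".join(SHARED_SHOTS)

def build_prompt(test_question, tailored_shots):
    """Reuse the cached prefix, then append a few shots chosen for
    this specific test sample, so only the short tail changes per
    query."""
    tail = "\n".join(tailored_shots)
    return f"{shared_prefix()}\n{tail}\nQ: {test_question} A:"
```

With a real model the cached prefix would be a reused KV cache rather than a string, but the compute saving has the same shape: the long shared part is paid for once, the tailored part per query.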
Yumo Xu (@yumo_xu):

Excited to share our #ACL2025NLP paper, "𝐂𝐢𝐭𝐞𝐄𝐯𝐚𝐥: 𝐏𝐫𝐢𝐧𝐜𝐢𝐩𝐥𝐞-𝐃𝐫𝐢𝐯𝐞𝐧 𝐂𝐢𝐭𝐚𝐭𝐢𝐨𝐧 𝐄𝐯𝐚𝐥𝐮𝐚𝐭𝐢𝐨𝐧 𝐟𝐨𝐫 𝐒𝐨𝐮𝐫𝐜𝐞 𝐀𝐭𝐭𝐫𝐢𝐛𝐮𝐭𝐢𝐨𝐧"! 📜 If you’re working on RAG, Deep Research and Trustworthy AI, this is for you. Why? Citation quality is
