Jixuan Leng (@jixuanleng) 's Twitter Profile
Jixuan Leng

@jixuanleng

MSML at @mldcmu | Student Researcher @Google | Former B.S. in Computer Science @UofR

ID: 1238584265350115328

Website: http://jixuanleng.com | Joined: 13-03-2020 21:54:41

18 Tweets

25 Followers

67 Following

Xinyu Yang (@xinyu2ml) 's Twitter Profile Photo


📢 Announcing our new PEFT family S²FT @NeurIPS2024 ❗️❗️❗️
😀 Join us at our poster presentation in West Ballroom A-D, Booth #7101, on Friday, December 13, from 11 a.m. to 2 p.m. PST.
🚀 Making Sparse Fine-tuning Great Again

Compared to LoRA, S²FT offers several key advantages:
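The announcement contrasts S²FT with LoRA but the tweet is truncated before its list of advantages. As a rough illustration of the general idea behind sparse fine-tuning (my reading of the announcement, not the paper's actual method), here is a toy sketch: where LoRA adds a low-rank update to a frozen weight matrix, sparse fine-tuning instead updates only a small, fixed subset of the original parameters, with everything else frozen.

```python
import numpy as np

# Toy sketch of the sparse-fine-tuning idea (hypothetical, not the
# S²FT algorithm): apply a gradient update only to a sparse set of
# weight rows, leaving the rest of the pretrained matrix untouched.

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))        # pretrained weight matrix
grad = rng.normal(size=(8, 8))     # gradient from some loss

# Hypothetical choice: fine-tune only rows 1 and 4.
trainable_rows = [1, 4]
mask = np.zeros_like(W)
mask[trainable_rows] = 1.0

lr = 0.1
W_new = W - lr * grad * mask       # masked update: 2 of 8 rows change

changed = np.any(W_new != W, axis=1)
print(changed.sum())  # -> 2
```

Compared with LoRA's extra adapter matrices, this kind of update adds no new parameters at inference time: the tuned rows are merged into the original matrix by construction.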
Langlin Huang (@shrangoh) 's Twitter Profile Photo

New Research Released! 🚀 PosS: Position Specialist Generates Better Draft for Speculative Decoding
Is your LLM fast enough? PosS consistently improves over current speculative decoding methods by using position-specialized draft layers to generate high-quality drafts!
🔖Paper:
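For context on where position-specialized draft layers plug in, here is a toy sketch of the generic draft-then-verify loop used in speculative decoding (the surrounding framework, not the PosS method itself; both "models" below are hypothetical stand-ins): a cheap draft model proposes several tokens ahead, the target model checks them in order, and generation falls back to the target's token at the first disagreement.

```python
# Toy speculative-decoding step over integer "tokens".
# draft_model and target_model are hypothetical stand-ins for real LMs.

def draft_model(seq, k=4):
    # cheap drafter: guesses each next token as previous + 1
    out, cur = [], seq[-1]
    for _ in range(k):
        cur += 1
        out.append(cur)
    return out

def target_model(seq):
    # target: also predicts last + 1, except it emits 0 after a 5,
    # so the drafter will eventually be wrong
    return 0 if seq[-1] == 5 else seq[-1] + 1

def speculative_step(seq, k=4):
    """Accept draft tokens while the target agrees; on the first
    mismatch, keep the target's token instead and stop."""
    accepted = []
    for tok in draft_model(seq, k):
        expected = target_model(seq + accepted)
        if tok == expected:
            accepted.append(tok)
        else:
            accepted.append(expected)  # fall back to the target
            break
    return accepted

print(speculative_step([3]))  # drafts 4,5,6,7; target agrees on 4,5 -> [4, 5, 0]
```

The speedup comes from accepting several draft tokens per target-model call when the drafter is accurate; position-specialized draft layers, as the tweet describes them, aim to keep that draft quality high at later draft positions.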

ChengSong Huang (@chengsongh31219) 's Twitter Profile Photo


🚀🚀Excited to share our paper R-Zero: Self-Evolving Reasoning LLM from Zero Data!

How do you train an LLM without data?

R-Zero teaches Large Language Models to reason starting with nothing but a base model. 
No data required!!!
Paper: arxiv.org/abs/2508.05004
Code:
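The tweet only sketches the premise, so the following is a loose toy illustration of the "self-evolving from zero data" idea as I read it (hypothetical, not R-Zero's actual algorithm): the model proposes its own problems, samples several answers per problem, and keeps only problems where its answers agree, turning self-consistency into training data with no external dataset.

```python
import random

# Toy self-generated-data loop (hypothetical reading of the premise):
# propose problems, sample answers, keep majority-consistent ones.

random.seed(0)

def propose_problem():
    # the "model" invents its own task: a small addition problem
    a, b = random.randint(1, 9), random.randint(1, 9)
    return a, b

def noisy_solver(a, b):
    # stand-in for a base model: usually correct, sometimes off by one
    return a + b + (1 if random.random() < 0.3 else 0)

def self_generate_data(n_problems=20, samples=5, threshold=4):
    data = []
    for _ in range(n_problems):
        a, b = propose_problem()
        answers = [noisy_solver(a, b) for _ in range(samples)]
        majority = max(set(answers), key=answers.count)
        # keep only problems where the model's samples mostly agree
        if answers.count(majority) >= threshold:
            data.append(((a, b), majority))
    return data

data = self_generate_data()
print(len(data), "self-labeled examples kept")
```

The filtering step is the key design choice: no ground-truth labels exist, so agreement among the model's own samples is used as a (noisy) proxy for correctness.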