Lijie (Derrick) Yang @ICLR (@lijieyyang)'s Twitter Profile
Lijie (Derrick) Yang @ICLR

@lijieyyang

CS Undergrad @CarnegieMellon, incoming CS PhD @Princeton, doing research in ML and Systems

ID: 1613740746124988420

Link: https://derrickylj.github.io/
Joined: 13-01-2023 03:32:41

9 Tweets

45 Followers

177 Following

Zhihao Jia (@jiazhihao):

One of the best ways to reduce LLM latency is by fusing all computation and communication into a single GPU megakernel. But writing megakernels by hand is extremely hard.

🚀Introducing Mirage Persistent Kernel (MPK), a compiler that automatically transforms LLMs into optimized megakernels.
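The tweet stays high level, but the core idea is replacing many per-operator kernel launches with one resident kernel that drains a dependency-ordered task queue. A minimal sketch of that execution model in Python, with purely illustrative names (this is not MPK's actual API):

```python
# Purely illustrative sketch of the persistent-megakernel execution model
# (hypothetical names, not MPK's real API): one long-lived kernel keeps
# workers resident and feeds them a dependency-ordered task graph, instead
# of paying a launch + sync round-trip for every operator.
from collections import deque

def run_megakernel(tasks, deps):
    """tasks: {name: fn}; deps: {name: set of prerequisite task names}."""
    pending = {name: set(p) for name, p in deps.items()}
    ready = deque(name for name, p in pending.items() if not p)
    scheduled = set(ready)
    while ready:
        name = ready.popleft()   # a resident worker grabs the next ready task
        tasks[name]()            # compute or communication step, no relaunch
        for other, prereqs in pending.items():
            if name in prereqs:
                prereqs.discard(name)
                if not prereqs and other not in scheduled:
                    scheduled.add(other)   # dependent unblocked: enqueue once
                    ready.append(other)

run_megakernel(
    {"matmul": lambda: print("matmul"),
     "allreduce": lambda: print("allreduce"),
     "norm": lambda: print("norm")},
    {"matmul": set(), "allreduce": {"matmul"}, "norm": {"allreduce"}},
)
```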
Lijie (Derrick) Yang @ICLR (@lijieyyang):

Officially graduated from CMU School of Computer Science 🎓 (Allen Newell Award, Honorable Mention) and thrilled to be starting my PhD at Princeton University with Prof. Ravi Netravali 🚀!

Huge thanks to my advisor Mark Stehlik, research mentors Zhihao Jia and Tianqi Chen, and amazing CMU Catalyst collaborators!
Yixin Dong (@yi_xin_dong):

We’re excited to announce that XGrammar has partnered with Outlines! 🎉
XGrammar is now the grammar backend powering Outlines, enabling structured LLM generation with higher speed.

Check out Outlines — an amazing library for LLM structured text generation! 🚀
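For readers new to Outlines, a minimal sketch of the structured generation it provides (0.x-era API, which newer releases may have changed; the model name is illustrative, and the grammar backend, now XGrammar, is selected internally):

```python
# Minimal sketch of structured generation with Outlines (0.x-era API;
# details may differ across versions). The model name is illustrative.
from outlines import models, generate

model = models.transformers("microsoft/Phi-3-mini-4k-instruct")

# Constrain decoding to a regex: the grammar backend masks invalid tokens
# at each step, so the output is guaranteed to match the pattern.
generator = generate.regex(model, r"[0-9]{4}-[0-9]{2}-[0-9]{2}")
print(generator("When did the Apollo 11 moon landing happen? Answer with a date:"))
```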
Zhihao Jia (@jiazhihao):

The #MLSys2026 submission deadline is only 2 weeks away (Oct 30)! Submit your best work on ML systems — spanning hardware, compilers, software, models, agents, and eval. This year features both Research and Industry Tracks! Join us in Seattle next spring! mlsys.org

Tianqi Chen (@tqchenml):

📢Excited to introduce Apache TVM FFI, an open ABI and FFI for ML systems, enabling compilers, libraries, DSLs, and frameworks to naturally interop with each other. Ship one library across pytorch, jax, cupy etc and runnable across python, c++, rust tvm.apache.org/2025/10/21/tvm…
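The tweet doesn't show TVM FFI's API. As a rough illustration of the interop goal only, here is the DLPack zero-copy exchange protocol that this kind of cross-framework ABI typically builds on (assumption: TVM FFI's real mechanism differs in detail):

```python
# Illustrative sketch only: this is NOT the Apache TVM FFI API, just the
# DLPack zero-copy exchange that cross-framework interop commonly builds on.
import numpy as np
import torch

def scale_inplace(framework_tensor, factor):
    """One function body serving tensors from any DLPack-capable framework."""
    t = torch.from_dlpack(framework_tensor)  # zero-copy view, no data copy
    t.mul_(factor)

x = np.ones(4)
scale_inplace(x, 3.0)   # works on a NumPy array...
print(x)                # [3. 3. 3. 3.] -- the original buffer was updated

y = torch.ones(4)
scale_inplace(y, 2.0)   # ...and on a torch tensor alike
print(y)
```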
Princeton Computer Science (@princetoncs):

Congratulations to Tri Dao and Ellen Zhong on being named AI2050 Early Career Fellows by Schmidt Sciences!

The AI2050 fellowships fund researchers working to solve hard problems in AI and improve technology for the benefit of humanity by 2050.

bit.ly/3WFG6Ny
Lijie (Derrick) Yang @ICLR (@lijieyyang):

I will be in San Diego for #NeurIPS2025 from Dec 2 to 7! Feel free to reach out if you are interested in reasoning models, sparse attention, and efficient inference :)

Tri Dao (@tri_dao):

This is what we've been cooking for the last 9 months: make MoE training go ~2x faster with ~2x less memory!

Highlights:
- MoE typically takes the most time and memory in modern models. Turns out one can mathematically rewrite the MoE backward pass to reduce the activation memory
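The tweet doesn't spell out the rewrite, but the general recipe it points at, trading stored activations for recomputation in the backward pass, can be sketched with a custom autograd Function. Everything below is an illustrative stand-in using a tiny two-layer expert, not the actual optimized kernels:

```python
# Illustrative sketch of the general idea (not Tri Dao's actual kernels):
# instead of storing an expert's intermediate activations for backward,
# save only the inputs and recompute the intermediates when needed,
# cutting activation memory at the cost of one extra pass through the expert.
import torch

class RecomputedExpert(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, w1, w2):
        ctx.save_for_backward(x, w1, w2)      # save inputs, not intermediates
        return torch.relu(x @ w1) @ w2

    @staticmethod
    def backward(ctx, grad_out):
        x, w1, w2 = ctx.saved_tensors
        h = torch.relu(x @ w1)                # recompute the hidden activation
        grad_h = grad_out @ w2.T
        grad_h = grad_h * (h > 0)             # relu backward
        return grad_h @ w1.T, x.T @ grad_h, h.T @ grad_out

x = torch.randn(8, 16, requires_grad=True)
w1 = torch.randn(16, 64, requires_grad=True)
w2 = torch.randn(64, 16, requires_grad=True)
RecomputedExpert.apply(x, w1, w2).sum().backward()
print(x.grad.shape, w1.grad.shape, w2.grad.shape)
```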