Linqi (Alex) Zhou (@linqi_zhou) 's Twitter Profile
Linqi (Alex) Zhou

@linqi_zhou

Research Scientist @LumaLabsAI. Ph.D. Student at Stanford University (on leave). Prev co-founder @apparatelabs (acq.).

ID: 1157542255000838144

Link: http://alexzhou907.github.io · Joined: 03-08-2019 06:42:34

68 Tweets

718 Followers

284 Following

Yuhui Zhang (@zhang_yu_hui) 's Twitter Profile Photo

Check out our latest work w/ amazing collaborators from Berkeley AI Research! Fun fact: Lisa Dunlap and I met at Bay CV day. We had very similar ICLR23 papers using language to augment visual data and it turned out we were working on the same idea again, so we decided to collaborate :)

Luma AI (@lumalabsai) 's Twitter Profile Photo

Today, we release Inductive Moment Matching (IMM): a new pre-training paradigm breaking the algorithmic ceiling of diffusion models. Higher sample quality. 10x more efficient. Single-stage, single network, stable training. Read more: lumalabs.ai/news/imm

Jiaming Song (@baaadas) 's Twitter Profile Photo

As one of the people who popularized the field of diffusion models, I am excited to share something that might be the “beginning of the end” of it. IMM has a single stable training stage, a single objective, and a single network — all are what make diffusion so popular today.

Linqi (Alex) Zhou (@linqi_zhou) 's Twitter Profile Photo

Thanks Tanishq Mathew Abraham, Ph.D. for sharing our latest work. Our method surpasses diffusion and Flow Matching while being trained stably from scratch. Check out our blog post: lumalabs.ai/news/inductive…

amit (@gravicle) 's Twitter Profile Photo

"Pre-training as we know it will end, Data is not growing". Limited text data is blocking the path to useful general intelligence. At Luma we are building the mathematical foundations to solve this problem by making video, audio and language multimodal data useful for training.

Linqi (Alex) Zhou (@linqi_zhou) 's Twitter Profile Photo

Excited to announce that IMM is accepted as an oral at ICML. I'll be at CVPR as well, so if you'd like to chat about research, see you at the Luma AI open bar event.

Meihua Dang (@meihuadang) 's Twitter Profile Photo

#CVPR2025 "Personalized Preference Fine-tuning of Diffusion Models". We extend DPO to align text-to-image diffusion models with individual user preferences. At test time, it generalizes to unseen users from just a few examples — moving toward pluralistic alignment.

Allen Nie (🇺🇦☮️) (@allen_a_nie) 's Twitter Profile Photo

Decision-making with LLMs can be studied with RL! Can an agent solve a task with text feedback (OS terminal, compiler, a person) efficiently? How can we understand the difficulty? We propose a new notion of learning complexity to study learning with language feedback only. 🧵👇
