Wenlong Huang (@wenlong_huang) 's Twitter Profile
Wenlong Huang

@wenlong_huang

PhD Student @StanfordSVL @StanfordAILab. Previously @Berkeley_AI @GoogleDeepMind. Robotics, Foundation Models.

ID: 1125594354515529728

Link: http://wenlong.page
Joined: 07-05-2019 02:53:01

484 Tweets

3.3K Followers

1.1K Following

Mihir Prabhudesai (@mihirp98) 's Twitter Profile Photo

🚨 The era of infinite internet data is ending, so we ask:
👉 What’s the right generative modelling objective when data, not compute, is the bottleneck?
TL;DR:
▶️ Compute-constrained? Train autoregressive models.
▶️ Data-constrained? Train diffusion models.
Get ready for 🤿 1/n

Manling Li (@manlingli_) 's Twitter Profile Photo

Excited that Ruohan Zhang is joining Northwestern University Computer Science! If you are thinking about pursuing a PhD, definitely reach out to him! During my wonderful year at the Stanford AI Lab's Stanford Vision and Learning Lab, when I was completely new to robotics, he was the nicest person who was incredibly patient

Neil Nie (@neil_nie_) 's Twitter Profile Photo

Thank you for sharing our work, Y Combinator! Please check out vernerobotics.com to schedule a free pilot and see how our robots can transform your business! Thank you to my co-founder Aditya, and to my research advisors, colleagues, friends, and family for your support!

Manling Li (@manlingli_) 's Twitter Profile Photo

🏆 Thrilled to receive the ACL 2025 Inaugural Dissertation Award Honorable Mention. “Multimodality” has moved so incredibly fast that my PhD research already feels like it's from a different era. It makes me wonder how challenging and anxious it must be for today’s students to choose thesis

Bardienus Duisterhof (@bduisterhof) 's Twitter Profile Photo

Missed our #RSS workshop on structured world models for robot manipulation?🦾 Or want to rewatch 📷 your favorite talks? We released all recordings on YouTube 👇 youtube.com/playlist?list=…

Jiafei Duan (@djiafei) 's Twitter Profile Photo

Reasoning is central to purposeful action. Today we introduce MolmoAct — a fully open Action Reasoning Model (ARM) for robotics. Grounded in large-scale pre-training with action reasoning data, every predicted action is interpretable and user-steerable via visual trace. We are

Federico Baldassarre (@baldassarrefe) 's Twitter Profile Photo

Say hello to DINOv3 🦖🦖🦖 A major release that raises the bar for self-supervised vision foundation models. With stunning high-resolution dense features, it’s a game-changer for vision tasks! We scaled model size and training data, but here's what makes it special 👇

Fei-Fei Li (@drfeifei) 's Twitter Profile Photo

A picture is now worth more than a thousand words in genAI; it can be turned into a full 3D world! And you can stroll in this garden for as long as you like; it will still be there.

Stanford HAI (@stanfordhai) 's Twitter Profile Photo

Celebrating our exceptional women leaders! 👏 Congratulations to our Founding Co-Director Fei-Fei Li and Senior Fellow Yejin Choi on being recognized in this year’s #TIME100AI Shapers and Thinkers list! Read about their work: time.com/collections/ti… time.com/collections/ti…

Zhi Su (@zhisu22) 's Twitter Profile Photo

🏓🤖 Our humanoid robot can now rally over 100 consecutive shots against a human in real table tennis — fully autonomous, sub-second reaction, human-like strikes.

Wenlong Huang (@wenlong_huang) 's Twitter Profile Photo

One of the biggest lessons I learned from Fei-Fei Li: do research from and for the “North Star”—general-purpose robots that people want and need. Not just chasing cool demos, but building towards this lasting goal. And BEHAVIOR is the perfect testbed! The 1200-hour whole-body,

Nishanth Kumar (@nishanthkumar23) 's Twitter Profile Photo

World models hold a lot of promise for robotics, but they're data hungry and often struggle with long horizons. We learn models from a few (< 10) human demos that enable a robot to plan in completely novel scenes! Our key idea is to model *symbols* not pixels 👇

Hao-Shu Fang (@haoshu_fang) 's Twitter Profile Photo

How do we unlock the full dexterity of robot hands with data, even beyond what teleoperation can achieve? DEXOP captures natural human manipulation with full-hand tactile & proprio sensing, plus direct force feedback to users, without needing a robot👉dex-op.github.io

Ruohan Zhang (@ruohanzhang76) 's Twitter Profile Photo

Thanks to everyone’s interest in BEHAVIOR so far! We have received several questions, and I am trying to answer some of them here: 1. 📜How are tasks defined in BEHAVIOR? BEHAVIOR tasks are written in BDDL (BEHAVIOR Domain Definition Language). Unlike geometric, image/video, or

Fei-Fei Li (@drfeifei) 's Twitter Profile Photo

ha! here is something fun and totally random I've been pondering: as Oliver Sacks has beautifully written - "what is the space between two snowflakes?" Language can describe all the things, stuff, and people in intricate details. But what about the 'space', the 'nothingness' in

Yunzhu Li (@yunzhuliyz) 's Twitter Profile Photo

🎉 Excited to share that our review paper on learning-based dynamics models for robotic manipulation is finally out in Science Robotics! 🤖 Led by my former mentee Bo Ai, this paper is especially meaningful to me. It builds on the structure of my PhD thesis and

Haonan Chen (@haonanchen_) 's Twitter Profile Photo

What if robots could decide when to see and when to feel like humans? We built a system that lets them. Multi-Modal Policy Consensus learns to balance vision 👁️ and touch ✋. 🌐 Project: policyconsensus.github.io 1/N

Manling Li (@manlingli_) 's Twitter Profile Photo

VLAs, VLMs, LLMs, and Vision Foundation Models for Embodied Agents! There are just so many new updates in recent months! We have updated our tutorial, come and join us if you would like to discuss the latest advances! Room: 306B Time: 1pm-5pm Slides: …models-meet-embodied-agents.github.io
