Chuan Wen (@chuanwen15)'s Twitter Profile
Chuan Wen

@chuanwen15

PhD student @Tsinghua_IIIS. Visitor @berkeley_ai w/ @pabbeel.

ID: 1345672892395053057

Link: http://alvinwen428.github.io/ · Joined: 03-01-2021 10:06:39

150 Tweets

318 Followers

281 Following

Chuan Wen (@chuanwen15):

After a long journey, our FP3 paper has finally been accepted to ICRA 2026. We firmly believe that 3D observation is key to achieving generalist robots!💪

Ruiqian Nai (@ruiqiannai):

🤖 Can we demonstrate complex humanoid whole-body manipulation skills without a physical robot present? Introducing HuMI: a portable, robot-free interface for learning diverse humanoid manipulation tasks. 📄 arxiv.org/abs/2602.06643 🌐 …noid-manipulation-interface.github.io

Chuan Wen (@chuanwen15):

Excited to share our new work HuMI, a robot-free whole-body manipulation interface! Thanks to every collaborator for your hard work! 🎉🎉🎉

Nicholas Pfaff (@nicholasepfaff):

Meet SceneSmith: an agentic system that generates entire simulation-ready environments from a single text prompt. VLM agents collaborate to build scenes with dozens of objects per room, articulated furniture, and full physics properties. We believe environment generation is no…

Chuan Wen (@chuanwen15):

Excited to announce our new work HinFlow, accepted to #ICLR2026. With a growing number of recent works on high-level planners (e.g., video generation, flow prediction, PDDL), we introduce an online learning paradigm that grounds plans to actions via hindsight relabeling.

Ken Goldberg (@ken_goldberg):

The IEEE Transactions on Robot Learning (T-RL) will launch on March 30! Co-EiCs: Todd Murphey and Vincent Vanhoucke ieee-ras.org/publications/t…

Huihan Liu (@huihan_liu):

Catastrophic forgetting has long been a challenge in continual learning. However, our new study found that pretrained Vision-Language-Action (VLA) models are surprisingly resistant to forgetting! Zero forgetting, or even positive backward transfer, is possible with simple…

Yunzhu Li (@yunzhuliyz):

For a long time, I was skeptical about action-conditioned video prediction for robotics. Many models look impressive, but once you ask them to handle long-horizon manipulation with real physical interaction, things quickly fall apart (e.g., Genie is amazing but mostly focused on…

Chuan Wen (@chuanwen15):

Amazing work! Robotics may stay in the copilot stage for a long while, so an appealing question is how to make robot copilot systems easier and more intuitive to use.

Chuan Wen (@chuanwen15):

Good catch! We've actually found that some conclusions from image-based studies don't transfer to 3D policies. For instance, the common belief from UMI that relative action spaces help generalization doesn't seem to hold here. In both this paper and FP3, we see that 3D policies…