Zixuan Chen (@c___eric417)'s Twitter Profile
Zixuan Chen

@c___eric417

Incoming PhD at @UCSanDiego; Bachelor's Degree at @FudanUni

ID: 759926167281422336

Joined: 01-08-2016 01:38:11

53 Tweets

242 Followers

265 Following

RoboHub🤖 (@xrobohub):

Meet GMT: a new framework from Zixuan Chen's team that enables high-fidelity motion tracking on humanoid robots via a single policy trained on large, unstructured human motion datasets.

Xuxin Cheng (@xuxin_cheng):

Coordinating diverse, high-speed motions with a single control policy has been a long-standing challenge. Meet GMT—our universal tracker that keeps up with a whole spectrum of agile movements, all with one single policy.

Zixuan Chen (@c___eric417):

Thanks, Xiaolong, for summarizing our work. Engineering is quite important in robotics. For motion tracking, even though DeepMimic was proposed by Jason Peng years ago, there are still tons of things to do to make it work on a real robot.

Runpei Dong (@runpeidong):

Motion tracking is a hard problem, especially when you want to track a lot of motions with only a single policy. Good to know that the MoE-distilled student works so well, congrats Zixuan Chen on such exciting results!

Yanjie Ze (@zeyanjie):

Check out Zixuan's recent progress on general humanoid controllers! General humanoid controllers stand in contrast to systems that maintain multiple skill networks and call each skill separately. Once we have such general controllers, the humanoid intelligence problem can be simply formulated

Generalist (@generalistai_):

Today we're excited to share a glimpse of what we're building at Generalist. As a first step towards our mission of making general-purpose robots a reality, we're pushing the frontiers of what end-to-end AI models can achieve in the real world. Here's a preview of our early

Wenli Xiao (@_wenlixiao):

💡Wow—super dynamic motion controlled by a unified general policy! 🔗 gmt-humanoid.github.io
Feels like the recipe for training a general whole-body controller has almost converged: MoE oracle teacher → generalist student policy.
In our previous research:
- HOVER
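For readers unfamiliar with that recipe, the idea is teacher-student distillation: an oracle mixture-of-experts policy trained with privileged simulation state supervises a single deployable student policy. Below is a minimal PyTorch sketch; the module layout, dimensions, and the simple MSE distillation step are illustrative assumptions, not GMT's or HOVER's actual implementation.

import torch
import torch.nn as nn

class MoETeacher(nn.Module):
    """Oracle mixture-of-experts policy (layout assumed for illustration):
    a gating network softly combines per-expert action heads."""
    def __init__(self, priv_obs_dim, act_dim, num_experts=8, hidden=512):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(priv_obs_dim, hidden), nn.ELU(),
                          nn.Linear(hidden, act_dim))
            for _ in range(num_experts)
        )
        self.gate = nn.Sequential(nn.Linear(priv_obs_dim, num_experts),
                                  nn.Softmax(dim=-1))

    def forward(self, priv_obs):
        weights = self.gate(priv_obs)                                   # (B, E)
        actions = torch.stack([e(priv_obs) for e in self.experts], 1)   # (B, E, A)
        return (weights.unsqueeze(-1) * actions).sum(dim=1)             # (B, A)

class StudentPolicy(nn.Module):
    """Single generalist policy that only sees deployable observations."""
    def __init__(self, obs_dim, act_dim, hidden=512):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ELU(),
                                 nn.Linear(hidden, hidden), nn.ELU(),
                                 nn.Linear(hidden, act_dim))

    def forward(self, obs):
        return self.net(obs)

def distill_step(teacher, student, optimizer, priv_obs, obs):
    """One behavior-cloning step on states visited by the student (DAgger-style):
    regress the student's action onto the frozen teacher's action."""
    with torch.no_grad():
        target = teacher(priv_obs)
    loss = nn.functional.mse_loss(student(obs), target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

On-policy distillation (rolling out the student and querying the teacher at the states it actually visits) tends to matter here, since a purely offline clone drifts away from the teacher's state distribution.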

Haoru Xue (@haoruxue):

Impressive work. Lots of work this year shows that good engineering can really demystify WBC. There is no more excuse for crappy policies. Next steps: making WBC policies more accessible, and making them easier to interface with vision-language models.

Mazeyu Ji (@jimazeyu):

Humanoids have shown incredible capabilities in simulation. What’s missing in the real world is a unified policy that can generalize across all these motions. Now it’s here!!! Use it to power your own tasks and build the next generation of humanoid applications.

Guanya Shi (@guanyashi):

Very impressive! 2025 will be the year we go from single-motion agile WBC policies (e.g., ASAP) to versatile & agile & steerable "Behavioral Foundation Models" for humanoids! We will also likely see research combining such models with a VLA-style System 2 at the end of 2025!

Xuanbin Peng (@xuanbin_peng):

Single generalist policy for tracking diverse, agile humanoid motions! Check out our new paper, GMT—a universal motion tracking framework leveraging Adaptive Sampling and a Motion Mixture-of-Experts architecture to achieve seamless, high-fidelity motion tracking. Thrilled to be
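The "Adaptive Sampling" mentioned above refers to biasing which reference motions the policy trains on toward clips it currently tracks poorly, so training effort concentrates on hard, agile motions. A minimal sketch of one way to do that follows; the class name, EMA update, and weighting rule are my assumptions, not the paper's exact scheme.

import numpy as np

class AdaptiveMotionSampler:
    """Keep a running tracking-error estimate per motion clip and sample
    clips with probability proportional to that error, plus a floor so
    easy clips are never dropped from training entirely."""
    def __init__(self, num_clips, ema=0.9, floor=0.1):
        self.errors = np.ones(num_clips)   # optimistic init: every clip looks hard
        self.ema = ema
        self.floor = floor

    def update(self, clip_id, tracking_error):
        # Exponential moving average of the latest per-episode tracking error.
        self.errors[clip_id] = (self.ema * self.errors[clip_id]
                                + (1.0 - self.ema) * tracking_error)

    def sample(self, batch_size):
        weights = self.errors + self.floor * self.errors.mean()
        probs = weights / weights.sum()
        return np.random.choice(len(self.errors), size=batch_size, p=probs)

During RL training, each episode would report its mean tracking error via update(), and the next batch of reference clips is drawn with sample(), so the curriculum shifts automatically toward the motions the policy still struggles with.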

Runpei Dong (@runpeidong):

#RSS2025 Excited to be presenting our HumanUP tomorrow at the Humanoids Session (Sunday, June 22, 2025).
📺 Spotlight talk: 4:30pm–5:30pm, Bovard Auditorium
📜 Poster: 6:30pm–8:00pm, #3, Associates Park

Jianglong Ye (@jianglong_ye):

How to generate billion-scale manipulation demonstrations easily? Let us leverage generative models! 🤖✨ We introduce Dex1B, a framework that generates 1 BILLION diverse dexterous hand demonstrations for both grasping 🖐️ and articulation 💻 tasks using a simple C-VAE model.
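As a rough illustration of what "a simple C-VAE model" means here: a conditional VAE decodes a latent code plus a condition (e.g., an object feature) into a hand pose, so new demonstrations can be generated just by sampling the latent prior. The sketch below is generic; dimensions, names, and loss weighting are not taken from Dex1B.

import torch
import torch.nn as nn

class ConditionalVAE(nn.Module):
    """Minimal conditional VAE: encode (grasp, condition) -> latent,
    decode (latent, condition) -> grasp; generation samples the prior."""
    def __init__(self, grasp_dim=24, cond_dim=128, latent_dim=32, hidden=256):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(grasp_dim + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * latent_dim))            # outputs (mu, logvar)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, grasp_dim))
        self.latent_dim = latent_dim

    def forward(self, grasp, cond):
        mu, logvar = self.encoder(torch.cat([grasp, cond], dim=-1)).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization
        recon = self.decoder(torch.cat([z, cond], dim=-1))
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=-1).mean()
        return recon, kl

    @torch.no_grad()
    def generate(self, cond, num_samples):
        # cond: (1, cond_dim) feature of the target object / articulation task.
        z = torch.randn(num_samples, self.latent_dim)
        return self.decoder(torch.cat([z, cond.expand(num_samples, -1)], dim=-1))

Training would minimize reconstruction error on demonstrated grasps plus the KL term; scaling to a billion demonstrations is then mostly a matter of sampling, simulating, and filtering for physical validity.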

Haoru Xue (@haoruxue):

🚀 Introducing LeVERB, the first latent whole-body humanoid VLA (upper- & lower-body), trained on sim data and zero-shot deployed. Addressing interactive tasks: navigation, sitting, locomotion with verbal instruction. 🧵 ember-lab-berkeley.github.io/LeVERB-Website/

Ruihan Yang (@rchalyang):

How can we leverage diverse human videos to improve robot manipulation? Excited to introduce EgoVLA — a Vision-Language-Action model trained on egocentric human videos by explicitly modeling wrist & hand motion. We build a shared action space between humans and robots, enabling

Takara Truong (@takaratruong):

They say the best time to tweet about your research was 1 year ago; the second best time is now. With RAI (formerly the Boston Dynamics AI Institute), we present DiffuseCloC, the first guidable physics-based diffusion model. diffusecloc.github.io/website/

Qiayuan Liao (@qiayuanliao):

Want to achieve extreme performance in motion tracking—and go beyond it? Our preprint tech report is now online, with open-source code available!